Every time we open a browser, launch a game, or run a command in the terminal, we are creating a process. But what exactly is a process? Let's explore that in this article.
What is a Process?
A process is basically a program in execution. While a program is just a static piece of code stored on disk (like a .exe or .out file), a process is that same code brought to life – loaded into memory, running on the CPU, interacting with devices, and consuming resources.
A process is an instance of a running program. It consists of:
- Code (Text Segment) – The executable instructions.
- Data (Heap & Stack) – Variables, dynamic memory, and function calls.
- Process Control Block (PCB) – OS metadata (PID, state, registers).
Real-Life Analogy: Process as a Cooking Recipe Being Executed
- Recipe (Program): A recipe is like a program stored on disk—just instructions. It doesn't "do" anything until someone (the CPU) starts using it.
- Chef Starts Cooking (CPU Executes Program): When a chef starts following the recipe, it becomes a process—a running instance of a program.
- Ingredients & Tools (Resources like Memory, Files, I/O): Just as the chef needs pots, vegetables, and a stove, the process needs RAM, CPU time, file handles, and I/O.
- Instructions (Code Instructions): The recipe has to be followed in order, just like code is executed line by line.
- Multiple Chefs (Multiple Processes): You can have multiple chefs each cooking different recipes in the same kitchen. They need to coordinate to avoid conflicts over limited resources (stove burners = CPU cores, pots = files/memory).
Process in Memory
Whenever a user double-clicks an icon representing an executable file (a program), the program is loaded into memory and becomes a process. This running process in memory consists of several parts.
Memory Segments of a Process:
- Text Segment (Code Segment)
- Contains the compiled executable instructions of the program.
- It is typically read-only to prevent accidental modification.
- Shared among the processes running the same program to save memory.
- Data Segment:
- Initialized Data: Stores the global and static variables that are initialized with values.
- Uninitialized Data (BSS Segment): Stores global and static variables that are not initialized (defaults to zero).
- Heap:
- Used for dynamic memory allocation during runtime (e.g., via malloc in C/C++ or new in C++/Java).
- Grows upward towards higher memory addresses.
- Stack:
- Stores function call frames, including local variables, parameters, and return addresses.
- Grows downward toward lower memory addresses.
- Managed automatically by the compiler and OS.
Visual Layout:
```
+---------------------+ ← Higher memory address
|        Stack        | ← Function calls, local variables
|---------------------|
|         Heap        | ← Dynamically allocated memory
|                     |   (e.g., malloc, new)
|---------------------|
| Uninitialized Data  |   (.bss segment: global/static
|                     |   variables initialized to 0)
|---------------------|
|  Initialized Data   |   (.data segment: global/static
|                     |   variables with values)
|---------------------|
|     Code (Text)     | ← Program instructions (read-only)
+---------------------+ ← Lower memory address
```
Note:
- The heap and stack grow toward each other; if they are not managed correctly (e.g., due to unbounded recursion or memory leaks), they can collide, causing a crash.
- This layout may vary slightly depending on the architecture (x86/x64) and OS (Windows/Linux), but the general structure remains the same.
Process Creation
Request to Create a Process
- Triggered by:
- User action (e.g., double-clicking a program)
- A system call by another process (e.g., fork() in Unix/Linux)
- System startup tasks
- Batch job scheduler
- Assigning a Unique Process ID (PID)
- The OS assigns a unique identifier to track the process.
- Memory Allocation
- The OS allocates memory to hold:
- Code (text) segment
- Data segment
- Stack
- Heap
- Loading Program Code and Data
- The executable file is read from disk and loaded into the code and data segments of memory.
- Setting Up the Process Control Block (PCB)
- The PCB is a data structure used by the OS to manage the process. It includes:
- Process ID
- Process state (Ready, Running, Waiting, etc.)
- Program Counter (where to start execution)
- CPU registers
- Memory pointers
- I/O status
- Priority, scheduling info, etc.
- Initializing Stack and Heap
- The stack is set up for function calls and local variables.
- The heap is initialized for dynamic memory use.
- Adding to Scheduling Queue
- The process is placed into the Ready Queue, waiting to be scheduled for execution by the CPU.
- Execution Begins
- When the scheduler picks this process, its context is loaded, and it begins executing.
Example: Linux Process Creation with fork() and exec()
In Unix-like systems:
- fork() creates a copy of the current process.
- exec() replaces the current process image with a new program.
Together, they allow one process to create another and start a new program.
The Process Lifecycle | States
Just like a living being, a process goes through different states:
- New:
- The process is being created.
- The OS has allocated a PCB, but the process is not yet ready to execute.
- Ready:
- The process is ready to be assigned to the CPU.
- This state is reached after the process has been initialized and admitted for execution.
- Running:
- The process is actively being executed.
- A process enters this state when the scheduler assigns CPU time to it.
- Only one process can be in the running state on a single-core CPU, but multiple processes can run simultaneously on multi-core CPUs.
- Waiting (Blocked):
- The process is waiting for an event (like I/O).
- This state is achieved when the process requests a resource that is not currently available.
- Once the event completes, the process transitions back to the Ready state.
- Suspended (Swapped):
- The process is paused temporarily, either by the user or the operating system, often to free up resources.
- The process may be swapped out to disk, making room for other processes.
- Terminated:
- The process has finished execution.
- Its resources are released, and its PCB is deleted by the OS.
- This state is reached after the process finishes all its instructions or is explicitly terminated.
The operating system manages these transitions using data structures like the Process Control Block (PCB).
State Transitions
Processes transition between states based on specific events:
- New → Ready: Process creation is complete, and it is ready to run.
- Ready → Running: The scheduler assigns CPU time to the process.
- Running → Waiting: The process requests I/O or other resources.
- Running → Ready: The process is preempted (temporarily stopped) by the scheduler (e.g., its time slice ends).
- Waiting → Ready: The I/O operation is completed or the resource becomes available, and the process is ready to execute again.
- Running → Terminated: The process completes its execution or is killed.
- Ready/Waiting → Suspended: The process is paused while ready for execution/waiting for an event and swapped to disk.
- Suspended → Ready/Waiting: The process is brought back into memory when it is ready to run again or its awaited event is near completion.
What Is a Process Control Block (PCB)?
The Process Control Block is a crucial data structure that stores all the information about a process:
- Process ID
- Current state
- Program counter
- CPU registers
- Memory limits
- Accounting info
- I/O status
It is maintained by the operating system to keep track of every running process on the system. PCB acts like an identity card
for each process, containing all the information required by the OS to manage, schedule, and control that process.
Whenever the CPU switches from one process to another (called a context switch), it saves the current state in the PCB and restores the new process’s state from its PCB.
Real Life Analogy:
Consider a hospital scenario: each patient (process) has a medical file (PCB) that stores their identity, current condition, treatment plan, and history. When a doctor (operating system) wants to treat a patient, they simply look at the patient's file to find all the critical information. Similarly, when the OS needs to manage a process—be it rescheduling, switching, or terminating—it refers to the PCB.
Information Stored in a PCB
The Process Control Block (PCB) is a data structure maintained by the operating system for each process. It acts like a process’s ID card + record sheet, storing everything the OS needs to manage, schedule, and control the process.
| Category | Information Stored |
|---|---|
| 🔢 Process Identification | Process ID (PID); Parent Process ID (PPID) |
| ⚙️ Process State | Current state: New, Ready, Running, Waiting, Terminated |
| 📍 Program Counter | Address of the next instruction to be executed |
| 🧮 CPU Registers | Contents of all CPU registers (general-purpose, stack pointer, etc.) |
| 🧠 Memory Management Info | Pointers to memory segments (Code, Data, Stack, Heap); page tables or segment tables |
| ⏱️ Scheduling Info | Priority; scheduling queue pointers; CPU burst time |
| 📥 I/O Status Info | List of open files; devices allocated; pending I/O requests |
| 🛑 Accounting Info | CPU time used; time limits; process creation time; User ID |
| 🔒 Security Info | User ID, Group ID; access rights/privileges |
Process Identification (PID): Each process is assigned a unique ID number to differentiate it from others.
Process State: Indicates whether the process is ready, running, waiting, or terminated.
Program Counter: Stores the address of the next instruction to be executed, ensuring the process resumes exactly where it left off after any interruption.
CPU Registers: The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
Memory Management Information: Contains details about the process's memory allocation, such as base and limit registers, page tables, or segment tables. This ensures the process accesses only the memory allocated to it.
Accounting and Scheduling Information: Holds data like CPU usage time, process priorities, and how long the process has waited. The OS uses this to schedule tasks efficiently, like a hospital staff scheduling patient check-ups.
I/O Status Information: Tracks the status of the process's input/output requests—what files it has opened, which I/O devices it's using, and whether it's waiting for input or output operations to complete.
Importance of the PCB
The PCB is vital because it enables the operating system to manage processes effectively. Without the PCB, the OS would have no organized way to know what each process is doing, where it left off, and how to handle it next.
Key Reasons for PCB's Importance:
- Context Switching: When the OS needs to switch from one process to another (like a doctor moving from one patient to another), it saves the current process state into its PCB, and loads the next process state from its PCB, ensuring seamless execution.
- Scheduling Decisions: The OS examines the PCB to determine which process gets the CPU next. This helps ensure that CPU time is distributed fairly and efficiently.
- Error Handling and Control: If a process crashes or encounters an error, the OS refers to the PCB to terminate it safely and free the resources it was using.
- Resource Allocation: The PCB helps the OS track the resources allocated to each process, ensuring no two processes interfere with each other's memory or I/O devices.
Multiple Processes and Multitasking
Modern OSes support multiprogramming, which allows multiple processes to reside in memory and share CPU time. This enables:
- Concurrency – Many processes appearing to run at once
- Isolation – Each process runs in its own protected memory space
- Efficiency – The CPU is always busy
The OS scheduler decides which process gets the CPU, and when.
Processes vs Threads
While a process is an independent unit with its own memory, a thread is a lightweight subunit that shares memory and resources within the process. Threads are useful for parallelism and responsiveness.
Threads share the same memory space (code, data, files).
Each thread has its own stack and register state.
- Process = heavy, isolated
- Threads = light, cooperative
Why Use Threads?
✔ Faster than processes (lower creation/context-switching overhead).
✔ Efficient parallelism (multicore CPU utilization).
✔ Responsiveness (UI threads don’t block background tasks).
Context Switching
Context Switching is a fundamental concept in operating systems that allows the CPU to switch from one process to another, ensuring multitasking and efficient resource utilization. The CPU executes processes sequentially, but by switching between processes rapidly, it creates the illusion of parallel execution in single-core systems.
Real Life Analogy:
Imagine a teacher in a classroom who needs to check the assignments of multiple students. The teacher starts with one student's assignment, marks some of it, then moves on to another student's work, and so on. While the teacher switches between assignments, they keep a note of where they stopped for each student to resume later. Similarly, the CPU keeps track of process states using the Process Control Block (PCB) and resumes processes as needed.
Why is Context Switching Needed?
Because:
- The CPU can run only one process/thread at a time per core.
- The OS must share CPU time among processes (via scheduling).
- It allows suspended processes to resume later from where they left off.
Events During Context Switching
Context switching involves saving the state of the currently running process and loading the state of the next process to be executed. The events included are as follows:
- Interrupt or System Call: When a process is interrupted (e.g., due to I/O requests or timer interrupt) or a system call occurs, the operating system gains control of the CPU.
- Save the context of the currently running process:
- Register values
- Program counter
- Stack pointer
- Other CPU state info
- All of this is saved into the process's PCB (Process Control Block).
- Update the process state:
- The running process is moved to the Ready or Waiting queue.
- Load the context of the next scheduled process:
- The OS fetches the PCB of the next process.
- Loads its saved context (registers, program counter, etc.) into the CPU.
- Start executing the new process and the cycle continues as needed.
Diagram of Context Switching
The diagram below illustrates the flow of context switching between two processes, P0 and P1, with the OS managing the transitions:

Explanation:
- Process P0 is initially executing while P1 is idle.
- An interrupt or system call causes the OS to save P0’s state into PCB0 and load P1’s state from PCB1.
- Process P1 begins execution, and P0 enters the idle state.
- The same process occurs in reverse when the OS switches back to P0.
Key Components Involved in Context Switching
- Process Control Block (PCB): Stores all the critical information about a process, such as its program counter, CPU registers, and memory pointers.
- CPU Registers: The values of CPU registers are saved and restored during switching to ensure the process resumes correctly.
- Interrupts: Interrupts signal the OS to perform a context switch, such as I/O completion or a timer interrupt.
- Scheduler: Decides which process will execute next, based on scheduling algorithms like First-Come-First-Serve (FCFS) or Round Robin.
Advantages of Context Switching
- Efficient CPU Utilization: Ensures that the CPU is never idle and always working on a process.
- Multitasking: Allows multiple processes to progress simultaneously, improving user experience.
- Fairness: Distributes CPU time among processes, ensuring no single process monopolizes resources.
- Flexibility: Adapts to changing workloads and priorities through scheduling.
Disadvantages of Context Switching
- Overhead: Context switching consumes time and resources, as the CPU must save and load process states.
- Performance Impact: Frequent switching can reduce overall system performance.
- As it involves saving/restoring registers, cache reloads, memory mapping changes, etc.
- Too many switches mean the CPU spends its time switching instead of doing useful work (sometimes called context-switch thrashing).
- Complexity: Managing PCBs and scheduling algorithms adds complexity to the OS.