Processes and threads are the fundamental building blocks of an operating system. Switching from one process to another is central to any modern multiprocessing, multithreaded OS.

Whether it’s running multiple applications simultaneously or managing hundreds of simultaneous connections on a web server, concurrency is key to performance.

Processes: The Foundation of Execution

At its core, every operating system runs processes. Right after boot, the first thing the kernel starts is a process called the init process, which is the parent of every other process in the OS.

So what exactly is a process? To put it simply, a process is a program (GUI or CLI) in execution.

Process Structure

  • Text and Data: The static parts of the program loaded into memory; the text (code) segment is read-only.
  • Heap: Dynamically allocated memory that can grow or shrink at runtime.
  • Stack: Used for function calls and local variables; it grows and shrinks as needed and follows LIFO order.
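The heap/stack split can be seen even from a high-level language. As a rough sketch (with the assumption that CPython allocates objects on the heap and call frames on a stack, which is how it behaves in practice):

```python
# Rough sketch of stack vs. heap behavior, in Python terms: objects live
# on the heap, while each function call pushes a new frame onto the stack.
def outer():
    local = [0] * 1000   # the list's storage is heap-allocated...
    return inner(local)  # ...while each call pushes a new stack frame

def inner(data):
    # `data` refers to the same heap object created in outer's frame.
    return len(data)

print(outer())  # frames pop off LIFO as the calls return
```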

Process Control Block (PCB)

Every process has a PCB, which holds the process's metadata: the program counter, register contents, and memory-management information. The PCB is essential for context switching, where the OS saves one process's state and loads another's in order to switch between tasks.

Lifecycle of a Process:

From creation (New) through execution (Ready, Running) to completion (Terminated), a process moves through several states. At any given time, a process is either running, waiting for I/O, or ready to be scheduled by the OS.
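On a POSIX system this lifecycle can be sketched with `fork()` and `wait()`: the child is created (New), scheduled and run, and the parent blocks until the child terminates. A minimal sketch (assumes a Unix-like OS, since `os.fork` is not available on Windows):

```python
import os
import sys

# fork() creates a new process (New -> Ready); the scheduler runs it,
# and waitpid() lets the parent observe its termination (Terminated).
pid = os.fork()

if pid == 0:
    # Child process: does its work, then terminates.
    print(f"child {os.getpid()} finished")
    sys.exit(0)
else:
    # Parent: blocks (a waiting state) until the child terminates.
    _, status = os.waitpid(pid, 0)
    print(f"child exited with status {os.WEXITSTATUS(status)}")
```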

Threads: Lightweight Execution Units

While processes are the foundation of execution, threads are where true parallelism happens on a multiprocessing OS. A thread is a smaller unit of execution within a process. Multiple threads in the same process share one address space (memory), which makes them more efficient for tasks that need shared memory or resources.
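That shared address space is easy to demonstrate: two threads can write into the same object without any copying or message passing. A minimal sketch using Python's `threading` module:

```python
import threading

# Two threads appending to the same list, showing that threads within
# one process share the same address space.
results = []

def worker(name):
    # Both threads read and write the very same `results` object.
    results.append(name)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # both threads wrote into one shared list
```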

Benefits of Threads:

Threads are lightweight and allow for efficient parallelization, reduced memory usage, and faster communication between tasks.

Threads are created, managed, and, most importantly, synchronized through mutexes and shared variables. These mechanisms help mitigate the risk of race conditions and deadlocks between threads.

Concurrency

Concurrency is about handling multiple tasks, spread across multiple threads with different resources, all running together without one interfering with another.

To ensure smooth execution of processes and threads, several techniques can be used:

  • Mutual Exclusion (Mutexes): A mutex works like a lock and key: only one process can be inside the critical section at a time, and while it is there, no other process can enter until the lock is released.
  • Shared Variables: Variables placed in memory shared by the threads of a process; by reading and writing them, threads can signal whether it is safe to operate on a resource.
  • Deadlocks: A deadlock occurs when two or more threads or processes each wait for a resource currently held by the other, creating a circular dependency in which none of them can proceed.
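Mutual exclusion over a shared variable can be sketched with `threading.Lock`, Python's mutex. Without the lock, the read-modify-write of the counter could race; with it, only one thread is in the critical section at a time:

```python
import threading

# A mutex (threading.Lock) guards the critical section so that only one
# thread updates the shared counter at a time.
counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # acquire the mutex; other threads must wait
            counter += 1  # critical section: shared read-modify-write
        # lock is released automatically when the `with` block exits

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: no updates were lost
```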

Inter-Process Communication (IPC)

Sharing data between the threads of a single process is straightforward, since they already share memory. Processes, however, each have their own separate address space, so they must rely on the IPC mechanisms the OS provides.

  • Message Passing: Processes exchange messages via communication channels such as Unix sockets. It’s easy to implement and useful for simpler interactions.
  • Shared Memory: Multiple processes map a common region of memory into their address spaces. It’s faster once set up but requires synchronization to avoid conflicts.

