Imagine you’re in a bustling kitchen with multiple chefs, each juggling different tasks to create a perfect meal. Some chefs switch between tasks quickly, while others work side by side. Computers face a similar challenge when handling multiple operations. Let us explore the concepts of concurrency, parallelism, atomicity, and synchronization, and make these ideas as graspable as a well-run kitchen.
For the purposes of this article, assume there are no multithreaded optimizations within a process: each thread carries out a single task, such as adding melted butter to 'sauce à la hollandaise'. Just as a recipe has many steps, a program may have many threads. As we write programs that tackle ever more complex work, some tasks become so demanding that a computer cannot keep up with them one at a time. This is why we care about concurrency.
Two-phase locking (2PL): "Two-phase locking enforces a strict order on resource acquisition and release." [4]
If you know anything about computer hardware, you’ve likely heard of companies like Nvidia. Their specialized Graphics Processing Units (GPUs) are at the heart of modern computational power, particularly in powering artificial intelligence like ChatGPT and rendering AAA games. But how does this relate to our kitchen? Just as a chef uses various tools and ingredients to craft a dish, a computer uses hardware innovations and software techniques to perform multiple operations at once.
Figure 1 shows a user manually managing browser tabs while remaining oblivious to what is happening under the hood, which is what this article is about. Since the audience is high school students, I have abstracted away many prerequisites, such as caching, the role of memory, scheduling algorithms, stacks, heaps, and dynamic memory allocation, that matter for understanding concurrency in depth. Through animations and analogies, I have tried to simplify basic concurrency and its C2PL optimization.
Before we get in too deep, it’s important to clarify a common misconception even among CS graduates: concurrency and parallelism are not the same thing.
Concurrency is like a single chef rapidly switching between multiple dishes. The chef interleaves tasks so that all the dishes make progress, even if only one task is performed at a time. In computing, concurrency is the art of managing multiple tasks that can overlap in time.
In computing terms: a single processor core rapidly switches between tasks (context switching), giving each a slice of time so that all of them make progress even though only one runs at any given instant.
Parallelism happens when multiple chefs work simultaneously in the kitchen. In a computer, this is seen on multi-core processors or GPUs, where different cores handle different operations at the same time.
In computing terms: multiple cores each execute a different task at exactly the same moment, so the work is truly simultaneous rather than interleaved.
Understanding this difference is key to appreciating how computers juggle tasks efficiently.
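To make the distinction concrete, here is a minimal sketch in Python (my stand-in language for this article; the simulations themselves are built in JavaScript). The threading module interleaves tasks within one interpreter, while multiprocessing can run them on separate cores at the same time.

```python
import threading
import multiprocessing

def cook(dish):
    """Pretend to prepare a dish in three small steps."""
    for step in range(3):
        print(f"{dish}: step {step + 1}")

if __name__ == "__main__":
    # Concurrency: threads share one interpreter and take
    # turns, so their steps interleave in time.
    threads = [threading.Thread(target=cook, args=(d,))
               for d in ("soup", "salad")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Parallelism: separate processes can run on different
    # CPU cores at the same moment.
    procs = [multiprocessing.Process(target=cook, args=(d,))
             for d in ("roast", "dessert")]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```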
In the early days, computers were like kitchens with a single chef: one thread, one task at a time. Today's machines are much more powerful and complex:
Today's advanced hardware, such as Nvidia GPUs, features thousands of tiny cores that work in parallel. Each core is capable of handling separate tasks, allowing for true parallelism and speeding up complex operations like training deep neural networks or rendering video game graphics.
In Figure 2, we see an AMD Threadripper manage concurrent tasks. It has 96 cores, each capable of 2.4 billion cycles per second (2.4 GHz). This means it can perform a tremendous number of operations simultaneously.
While hardware gives us the muscle, software provides the finesse. Modern programming languages and operating systems support techniques like multithreading and asynchronous programming.
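As a small hedged example of the asynchronous style, here is a Python asyncio sketch; the dish names and delays are invented for illustration.

```python
import asyncio

async def simmer(dish, seconds):
    # "await" hands control back to the event loop, so other
    # tasks can run while this one waits.
    print(f"start {dish}")
    await asyncio.sleep(seconds)
    print(f"done  {dish}")

async def main():
    # Both dishes simmer during the same two seconds of
    # wall-clock time: a single chef switching between pots.
    await asyncio.gather(simmer("stock", 2), simmer("beans", 2))

asyncio.run(main())
```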
Atomicity ensures that operations happen either completely or not at all. Think of it as a chef executing a precise, uninterruptible step in a recipe. For example, when making a sauce, either all ingredients are mixed in or none are—there's no in-between state.
In computing, an atomic operation cannot be interrupted halfway through. It's all or nothing, which helps prevent data from being corrupted when multiple processes try to access it.
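To see why even a one-line update is not automatically atomic, consider what a statement like counter += 1 really does under the hood (a simplified view):

```python
counter = 0

# "counter += 1" looks like one step, but it is really three:
temp = counter   # 1. read the current value
temp = temp + 1  # 2. compute the new value
counter = temp   # 3. write the result back

# Another thread can be scheduled between any two of these
# steps; if it also updates counter, one update can be lost.
```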
Synchronization is equally important. It's like placing a "do not use" sign on a critical kitchen tool. Synchronization mechanisms (like locks) ensure that when one process is using a shared resource, no other process can interfere.
Without proper synchronization, we risk race conditions—where multiple processes access or modify data concurrently, leading to unpredictable outcomes.
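Here is a minimal demonstration of a race condition in Python. Whether updates are actually lost on your machine depends on the interpreter version and on timing, so treat it as an illustration rather than a guarantee.

```python
import threading

counter = 0

def increment_many(times):
    global counter
    for _ in range(times):
        counter += 1  # read-modify-write: not atomic

threads = [threading.Thread(target=increment_many, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 400000, but lost updates can make it smaller.
print(f"counter = {counter} (expected 400000)")
```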
This simulation helps us understand at a very basic level what concurrency does. Each thread performs tasks independently of the other thread. The slider controls the time each thread takes to complete one process cycle.
Instructions for using Simulation A: This animation demonstrates how multiple threads execute across multiple CPU cores.
"Concurrency control prevents interference among simultaneous operations in a shared environment." [5]
Even with the best software and hardware, chaos can ensue if multiple processes try to access the same resource simultaneously. This is known as a race condition. Race conditions are not only common but expected when multiple people develop software independently; fortunately, there is a way to solve this bug.
Imagine two chefs in a restaurant kitchen who both need the same batch of delicious Hollandaise sauce for their dishes. Chef A begins carefully balancing butter, egg yolks, and lemon juice. Before Chef A finishes, Chef B jumps in to use the sauce for their dish, unaware of Chef A’s progress, and alters it by adding extra butter. The result? A sauce with inconsistent taste and texture, much like how unsynchronized processes can produce unpredictable results in computing.
"Advanced concurrency control algorithms mitigate deadlocks and optimize performance."[3]
To avoid such mishaps, computers use synchronization mechanisms like locks and atomic operations.
Locks: Ensure that only one process can modify a shared resource at a time.
Atomic operations: Guarantee that critical steps are completed fully before another process can intervene, maintaining the consistency and integrity of the data (see the sketch below).
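Continuing the counter example from earlier, a lock makes the read-modify-write step effectively atomic. This is one sketch of the idea, not the only solution; thread-safe queues are a common alternative.

```python
import threading

counter = 0
lock = threading.Lock()

def increment_many(times):
    global counter
    for _ in range(times):
        with lock:        # only one thread may hold the lock
            counter += 1  # now effectively atomic

threads = [threading.Thread(target=increment_many, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"counter = {counter}")  # reliably 400000
```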
Simulation B illustrates some key problems in concurrency in a single timeline.
This animation uses a single timeline to show normal thread execution (steps 1–10), a race condition between Threads 1 & 2 (steps 11–20), and a deadlock between Threads 3 & 4 (steps 21–30).
Animation 1 Explanation:
The first animation is a dynamic simulation powered by GSAP (a JS library that is the industry standard for animation) that visually demonstrates how a concurrent system manages tasks using a two-phase locking (2PL) mechanism. In this animation, tasks are represented by "threads" that are created and queued, then move dynamically toward available processing cores. By the time you read this, the animation might have ended; please click the "Run Again" button to replay it. I made this a button because the animation is resource intensive. As each thread reaches its designated core area, it attempts to acquire a lock before entering a critical section, simulating exclusive access to a shared resource. Once a thread successfully enters the critical section, it updates a shared counter (simulating resource use) and eventually exits, releasing the lock for subsequent tasks. This depiction helps users understand key concepts like resource contention, atomic operations, and the orderly progression of tasks in a concurrent environment.
Simulation C, Animation 1: Demonstrates 2PL with 4 cores and 20 threads.
The second animation serves as a static visual framework that complements the dynamic simulation. It renders multiple parallel tracks and a clearly defined critical section on a canvas, symbolizing the spatial pathways along which tasks progress. These tracks represent the ordered routes that threads follow, emphasizing the underlying structure of process scheduling and resource allocation in a multi-threaded system. The critical section is highlighted to mark the area where resource conflicts might occur if proper synchronization is not maintained. This visualization reinforces the theory by providing a concrete, spatial illustration of how tasks are managed and how potential contention is organized within the system. The label inCS means a task is currently in the critical section.
SIMULATION C: ANIMATION 2
To manage the challenges of concurrent operations, especially in database systems, computer scientists developed protocols like Two-Phase Locking (2PL).
Imagine a chef setting aside ingredients and tools as the recipe calls for them. In the growing phase, a transaction (or chef) acquires locks on the resources it needs. During this phase, the chef may acquire more locks but is not allowed to release any, ensuring that every required element stays secured until the work that needs it is done.
Once the chef (or transaction) releases its first lock, it enters the shrinking phase: locks are released one by one, and no new locks may be acquired. This is akin to cleaning up after a meal; as ingredients are put away, no new ones are brought out. This disciplined approach helps maintain consistency and prevents conflicts.
2PL guarantees serializability, meaning that even though operations are concurrent, the end result is as if they had been executed sequentially. However, there’s a catch: if two transactions hold locks that each other needs, they might end up waiting indefinitely—a situation known as a deadlock.
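The following toy sketch shows the shape of 2PL in Python, with two locks standing in for database resources; it is a model of the protocol's two phases, not a real transaction manager, and the resource names are invented.

```python
import threading

pan = threading.Lock()    # shared resource A
whisk = threading.Lock()  # shared resource B

def transaction():
    # Growing phase: acquire locks as needed; release nothing.
    pan.acquire()
    whisk.acquire()

    # Critical work happens with all needed locks held.
    print("whisking the sauce in the pan")

    # Shrinking phase: release locks; acquire nothing new.
    whisk.release()
    pan.release()

# If a second transaction acquired `whisk` first and then
# waited for `pan`, both could wait forever: a deadlock.
transaction()
```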
To further guard against deadlocks, an enhanced version of 2PL called Conservative Two-Phase Locking (C2PL) was developed.
In C2PL, a transaction must request and acquire all the locks it will ever need before it starts executing. If even one required resource is locked by another process, the transaction will wait until every single resource is available. This “all-or-nothing” approach prevents the partial execution that can lead to deadlocks—where two or more transactions are stuck waiting for each other.
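Here is a hedged sketch of the C2PL idea in the same style: acquire every lock the transaction will need before doing any work. Python has no single call that grabs many locks at once, so acquiring them in one fixed global order stands in for "wait until everything is free"; the resource names are invented.

```python
import threading

butter = threading.Lock()
yolks = threading.Lock()
lemon = threading.Lock()

def conservative_transaction(needed):
    # C2PL: acquire all locks up front, before any work starts.
    # Using one fixed order for everyone avoids circular waits.
    for lock in sorted(needed, key=id):
        lock.acquire()
    try:
        print("all resources held: safe to execute")
    finally:
        # Release everything once the work is done.
        for lock in needed:
            lock.release()

conservative_transaction([butter, yolks, lemon])
```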
The primary advantage of C2PL is its ability to completely eliminate deadlocks. Since every transaction begins only when it has full control of the necessary resources, there’s no chance for the circular wait condition that typically causes deadlocks.
While C2PL creates a more stable and predictable environment, it can sometimes reduce overall system efficiency. In a busy system, waiting to acquire all locks before starting can slow down processes. This trade-off between safety (deadlock prevention) and performance (throughput) is a critical consideration for system designers. In systems where it is acceptable to compromise on speed but not on data integrity, C2PL is an excellent choice; for everyday workloads, other protocols are generally used.
In today’s digital world, the ability to manage multiple tasks safely and efficiently is paramount. Concurrency and parallelism empower computers to handle many operations, while atomicity and synchronization ensure these tasks are performed without stepping on each other’s toes. Two-Phase Locking (2PL) provides a structured way to maintain order, and its stricter counterpart, Conservative Two-Phase Locking (C2PL), offers a robust solution to avoid deadlocks.
I have tried to explain concurrency through the use of multiple animations, but it is important to understand that the animations are astronomically (a few billion times) slower than what is actually happening.
From the powerful GPUs accelerating AI to sophisticated databases preserving data integrity, these techniques form the backbone of modern computing. By understanding the balance between resource locking, atomicity, and synchronization, you gain valuable insights into how complex systems operate reliably—even when they’re doing many things at once.