Concurrency and Conservative Two-Phase Locking: Keeping Computers in Sync

Created by UC Choudhary ©

For the best experience, please view the animations in light mode.

1. Introduction [1]

Imagine you’re in a bustling kitchen with multiple chefs, each juggling different tasks to create a perfect meal. Some chefs switch between tasks quickly, while others work side by side. Computers face a similar challenge when handling multiple operations. Let us explore the concepts of concurrency, parallelism, atomicity, and synchronization, using analogies that make these ideas as graspable as a well-run kitchen.

For the purposes of this technical description, there are no multithreaded optimizations within a process: each thread carries out a single task, such as adding melted butter to 'sauce à la hollandaise'. Just as there are many steps to completing a recipe, a program may be split into multiple threads. As we ask computers to run ever more complex tasks in a single program, some tasks become so demanding that one processor cannot keep up. This is why we care about concurrency.

Two-phase locking (2PL): "Two-phase locking enforces a strict order on resource acquisition and release." [4]
Conservative two-phase locking (C2PL): A stricter variant of 2PL that ensures all required locks are acquired before execution begins. C2PL mechanisms "hold locks on all data items at the same time as in conservative two-phase locking" [0].

Figure 1: A modern computer, where multiple tabs are handled efficiently. From Stack Overflow

If you know anything about Computer Hardware, you’ve likely heard about companies like Nvidia. Their specialized Graphics Processing Units (GPUs) are at the heart of modern computational power, particularly in powering artificial intelligence systems like ChatGPT and rendering AAA games. But how does this relate to our kitchen? Just as a chef uses various tools and ingredients to craft a dish, a computer uses hardware innovations and software techniques to perform multiple operations at once.

Figure 1 shows how a user manually manages tabs while remaining oblivious to what is happening under the hood, which is what this article is about. Since the audience is high school students, I have abstracted away many prerequisites such as caching, the role of memory, scheduling algorithms, stacks, heaps, dynamic memory allocation, and other key concepts needed to understand concurrency in depth. Through animations and analogies, I have tried to explain basic concurrency and its C2PL optimization at a high level.


Back to Contents ->

2. Concurrency vs. Parallelism [5]

Why learn about Operating Systems while learning about Computers?
"Operating system concepts elucidate the principles of process scheduling and resource management."[6]

Before we get in too deep, it’s important to clarify a common misconception even among CS graduates: concurrency and parallelism are not the same thing.

2.1 Concurrency

Concurrency is like a single chef rapidly switching between multiple dishes. The chef interleaves tasks so that all the dishes make progress, even if only one task is performed at a time. In computing, concurrency is the art of managing multiple tasks that can overlap in time.

In computing terms:

  • Concurrency means managing multiple tasks that overlap in time
  • Only one task is actually being processed at any given moment
  • The computer switches between tasks so quickly it appears simultaneous
  • This is what happens when you have multiple apps open on a single-core computer (see the sketch below)
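To see this interleaving in code, here is a minimal Python sketch written for this article (the dish names and step counts are invented). One loop plays the role of the single chef, running one step of each dish at a time:

```python
# A minimal sketch of concurrency: one "chef" (the main loop)
# interleaves steps from several dishes, so every dish makes
# progress even though only one step runs at any given moment.

def make_dish(name, steps):
    """A generator: each yield hands control back to the chef."""
    for step in range(1, steps + 1):
        print(f"{name}: step {step} of {steps}")
        yield  # pause here; the chef resumes us on a later turn

# Two tasks that overlap in time but never run simultaneously.
dishes = [make_dish("Hollandaise", 3), make_dish("Salad", 2)]

while dishes:
    dish = dishes.pop(0)        # take the next task in line
    try:
        next(dish)              # run exactly one step of it
        dishes.append(dish)     # requeue it so it keeps progressing
    except StopIteration:
        pass                    # this dish is finished
```

Only one step ever executes at a given moment, yet both dishes finish: that is concurrency without parallelism.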

2.2 Parallelism

Parallelism happens when multiple chefs work simultaneously in the kitchen. In a computer, this is seen on multi-core processors or GPUs, where different cores handle different operations at the same time.

In computing terms:

  • Parallelism means multiple tasks are literally executing at the same time
  • This requires multiple processors or cores (like having multiple chefs)
  • Each core handles its own task independently
  • Modern computers with multiple cores can truly perform parallel operations (see the sketch below)
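For contrast, here is a minimal sketch of parallelism using Python's standard multiprocessing module (the chop function and ingredient names are invented for illustration); each worker process can run on its own core:

```python
# A minimal sketch of parallelism: a pool of worker processes
# (multiple "chefs") can literally run at the same time on
# different CPU cores.

from multiprocessing import Pool

def chop(ingredient):
    # Stand-in for CPU-heavy work done independently by each chef.
    total = sum(i * i for i in range(2_000_000))
    return ingredient

if __name__ == "__main__":
    with Pool(processes=4) as pool:  # four chefs working at once
        done = pool.map(chop, ["onion", "carrot", "celery", "leek"])
    for ingredient in done:
        print(f"{ingredient} chopped")
```

On a machine with at least four cores, the four chop calls genuinely overlap; on a single-core machine the same code would fall back to taking turns.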

Understanding this difference is key to appreciating how computers juggle tasks efficiently.


Back to Contents ->

3. Understanding Concurrency in Modern Computing [2]

3.1 The Hardware Behind the Magic

In the early days, computers were like kitchens with a single chef: one thread, one task at a time. Today's machines are much more powerful and complex:

  • CPU (Central Processing Unit): The main "brain" of your computer that handles most general tasks
  • Core: A processing unit within a CPU that can handle tasks independently. Modern CPUs have multiple cores (dual-core, quad-core, etc.)
  • GPU (Graphics Processing Unit): A specialized processor designed to handle graphics and certain types of calculations very efficiently

Today's advanced hardware, such as Nvidia GPUs, features thousands of tiny cores that work in parallel. Each core is capable of handling separate tasks, allowing for true parallelism and speeding up complex operations like training deep neural networks or rendering video game graphics.

In Figure 2, we see an AMD Threadripper managing concurrent tasks. It has 96 cores, each capable of 2.4 billion cycles per second, which means it can perform a tremendous number of operations simultaneously.

Figure 2: Graphical task manager showing AMD Threadripper stats
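As a quick aside, you can ask your own machine how many cores it reports using Python's standard library:

```python
# Report the number of logical cores the operating system exposes;
# this is the ceiling on how many tasks can truly run in parallel.
import os

print(f"This machine reports {os.cpu_count()} logical cores")
```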

3.2 Software’s Role: Atomicity and Synchronization

While hardware gives us the muscle, software provides the finesse. Modern programming languages and operating systems support techniques like multithreading and asynchronous programming.

What is Atomicity?

Atomicity ensures that operations happen either completely or not at all. Think of it as a chef executing a precise, uninterruptible step in a recipe. For example, when making a sauce, either all ingredients are mixed in or none are—there's no in-between state.

In computing, an atomic operation cannot be interrupted halfway through. It's all or nothing, which helps prevent data from being corrupted when multiple processes try to access it.
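Here is a small Python sketch of what goes wrong without atomicity (the counter and thread counts are arbitrary, chosen only to make the race likely). The innocent-looking counter += 1 is really three steps, a read, an add, and a write, so two threads can interleave between them and lose updates:

```python
# A sketch of a NON-atomic operation: "counter += 1" is really
# read, add, write. Two threads interleaving those steps can
# overwrite each other's work, losing increments.

import threading

counter = 0

def add_many():
    global counter
    for _ in range(100_000):
        counter += 1  # read-modify-write: interruptible mid-step

threads = [threading.Thread(target=add_many) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# May print less than 200000 on some Python versions, because
# increments from the two threads can interleave and get lost.
print(f"counter = {counter} (expected 200000)")
```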

What is Synchronization?

Synchronization is equally important. It's like placing a "do not use" sign on a critical kitchen tool. Synchronization mechanisms (like locks) ensure that when one process is using a shared resource, no other process can interfere.

Without proper synchronization, we risk race conditions—where multiple processes access or modify data concurrently, leading to unpredictable outcomes.
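Continuing the counter sketch from above, a lock is that "do not use" sign: wrapping the increment in Python's threading.Lock makes the critical section effectively atomic, so no updates are lost:

```python
# The same shared counter, now guarded by a lock: only one thread
# at a time may execute the critical section.

import threading

counter = 0
lock = threading.Lock()

def add_many():
    global counter
    for _ in range(100_000):
        with lock:        # acquire; released automatically on exit
            counter += 1  # now safe from interleaving

threads = [threading.Thread(target=add_many) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"counter = {counter} (always 200000)")
```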

Simulation A: Basic Concurrency

This simulation helps us understand at a very basic level what concurrency does. Each thread performs tasks independently of the other thread. The slider controls the time each thread takes to complete one process cycle.

Instructions for using Simulation A:
  • Move the slider to adjust the speed of thread execution
  • The simulation applies the new clock speed at the start of the next cycle
  • To change the number of cores, adjust the core count, then click the "Update Cores" button

Back to Contents ->

Very Basic Concurrency in Systems Programming

This animation demonstrates how multiple threads execute across multiple CPU cores.











SIMULATION A
Back to Animation Navbar ->

4. The Need for Concurrency Control [1]

"Concurrency control prevents interference among simultaneous operations in a shared environment." [5]

Even with the best software and hardware, chaos can ensue if multiple processes try to access the same resource simultaneously. This is known as a race condition. Race conditions are not only common but expected when many people develop software independently; fortunately, there are proven ways to prevent them.

4.1 A Kitchen Analogy for Race Conditions

Imagine two chefs in a restaurant kitchen both needing to prepare a different version of the delicious Hollandaise sauce. Chef A begins carefully balancing butter, egg yolks, and lemon juice. Before finishing, Chef B jumps in to use the sauce for their dish—unaware of Chef A’s progress—and alters it by adding extra butter. The result? A sauce with inconsistent taste and texture, much like how unsynchronized processes can produce unpredictable results in computing.

"Advanced concurrency control algorithms mitigate deadlocks and optimize performance."[3]

4.2 The Role of Synchronization and Atomicity

To avoid such mishaps, computers use synchronization mechanisms like locks and atomic operations.

  • Locks: Ensure that only one process can modify a shared resource at a time.
  • Atomic operations: Guarantee that critical steps are completed fully before another process can intervene, maintaining the consistency and integrity of the data.

Simulation B illustrates some key problems in concurrency in a single timeline.


Back to Contents ->

Possible Concurrency Problems

This animation uses a single timeline to show normal thread execution (steps 1–10), a race condition between Threads 1 & 2 (steps 11–20), and a deadlock between Threads 3 & 4 (steps 21–30).

Explanation:
Normal Execution (Steps 1–10): Each row is a thread; colored blocks show when the thread is active.
Race Condition (Steps 11–20): Threads 1 & 2 concurrently update a shared variable.
Deadlock (Steps 21–30): Threads 3 & 4 are waiting for each other's locks.

SIMULATION B
Back to Animation Navbar ->

Simulation C: Animation 1 Explanation

The first animation is a dynamic simulation powered by GSAP (a JavaScript library that is the industry standard for animation) that visually demonstrates how a concurrent system manages tasks using a two-phase locking (2PL) mechanism. Tasks are represented by "threads" that are created and queued, then move dynamically toward available processing cores. As each thread reaches its designated core area, it attempts to acquire a lock before entering a critical section, a process that simulates ensuring exclusive access to a shared resource. Once a thread successfully enters the critical section, it updates a shared counter (simulating resource use) and eventually exits, releasing the lock for subsequent tasks. By the time you read this, the animation may have ended; please click the run-again button to restart it. I made this a button because the animation is resource intensive. This depiction helps users understand key concepts like resource contention, atomic operations, and the orderly progression of tasks in a concurrent environment.


Back to Contents ->
Lock Status: Unlocked
Phase: Idle
Task Completion Log:

Legend

  • Lock Status: Locked - At least one task is running.
  • Lock Status: Unlocked - No tasks are running.
  • Phase: Idle - No tasks in the system.
  • Phase: Growing - Some tasks are running and tasks are queued.
  • Phase: At Capacity - All cores are busy.
  • Phase: Overloaded - More tasks than cores.
  • Phase: Shrinking - Tasks are completing and the system winds down.

Back to Animation Navbar ->

Simulation C, Animation 1: Demonstrates 2PL with 4 cores and 20 threads.

Simulation C: Animation 2 Explanation

The second animation serves as a static visual framework that complements the dynamic simulation. It renders multiple parallel tracks and a clearly defined critical section on a canvas, symbolizing the spatial pathways along which tasks progress. These tracks represent the ordered routes that threads follow, emphasizing the underlying structure of process scheduling and resource allocation in a multi-threaded system. The critical section is highlighted to mark the area where resource conflicts might occur if proper synchronization is not maintained. This visualization reinforces the theoretical concepts by providing a concrete, spatial illustration of how tasks are managed and how potential contention is organized within the system. The label "inCS" indicates that a task is currently in the critical section.


Back to Contents ->

SIMULATION C: ANIMATION 2

Back to Animation Navbar ->

5. Delving into Two-Phase Locking (2PL) [1]

To manage the challenges of concurrent operations, especially in database systems, computer scientists developed protocols like Two-Phase Locking (2PL).

5.1 The Growing Phase (Locking)

Imagine a chef gathering the ingredients and tools a recipe calls for, step by step. In the growing phase, a transaction (or chef) acquires locks on the resources it needs. During this phase, the chef can acquire more locks but isn't allowed to release any, so the transaction never gives up a resource it might still need.

5.2 The Shrinking Phase (Unlocking)

The shrinking phase begins the moment the chef releases the first lock: from then on, locks can only be released, never acquired. This is akin to cleaning up after a meal; as ingredients are put away, no new ones are brought out. This disciplined approach helps maintain consistency and prevents conflicts.
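Here is a minimal sketch of the two phases, using a hypothetical Transaction helper invented for this article (real database lock managers are far more sophisticated). The key rule is that every acquire must happen before the first release:

```python
# A sketch of 2PL's two phases. The Transaction class is
# hypothetical, for illustration only.

import threading

class Transaction:
    def __init__(self):
        self.held = []
        self.shrinking = False  # once True, no new locks allowed

    def acquire(self, lock):
        # Growing phase: may take locks, must not have released any.
        assert not self.shrinking, "2PL violated: acquire after release"
        lock.acquire()
        self.held.append(lock)

    def release_all(self):
        # Shrinking phase: release everything, acquire nothing new.
        self.shrinking = True
        while self.held:
            self.held.pop().release()

butter, eggs = threading.Lock(), threading.Lock()
txn = Transaction()
txn.acquire(butter)   # growing phase
txn.acquire(eggs)     # still growing
# ... work on the shared resources happens here ...
txn.release_all()     # shrinking phase begins; no more acquires
```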

Figure 3: A table from Wikipedia that compares various concurrency protocols. From Wikipedia

5.3 Benefits and Challenges

2PL guarantees serializability, meaning that even though operations are concurrent, the end result is as if they had been executed sequentially. However, there’s a catch: if two transactions hold locks that each other needs, they might end up waiting indefinitely—a situation known as a deadlock.
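The deadlock is easy to reproduce in a sketch: two threads each hold one lock and wait for the other's. The timeouts below are added purely so the demonstration terminates; a genuine deadlock waits forever:

```python
# Two "chefs" acquire the same two locks in opposite orders,
# producing a circular wait. Timeouts let the demo end cleanly.

import threading
import time

butter, eggs = threading.Lock(), threading.Lock()

def chef_a():
    with butter:
        time.sleep(0.1)                 # let chef_b grab eggs
        if not eggs.acquire(timeout=1):
            print("Chef A gave up waiting for eggs: deadlock")
        else:
            eggs.release()

def chef_b():
    with eggs:
        time.sleep(0.1)                 # let chef_a grab butter
        if not butter.acquire(timeout=1):
            print("Chef B gave up waiting for butter: deadlock")
        else:
            butter.release()

a = threading.Thread(target=chef_a)
b = threading.Thread(target=chef_b)
a.start()
b.start()
a.join()
b.join()
```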


Back to Contents ->

6. Conservative Two-Phase Locking (C2PL): A Deadlock-Free Approach [3]

To further guard against deadlocks, an enhanced version of 2PL called Conservative Two-Phase Locking (C2PL) was developed.

6.1 How It Works

In C2PL, a transaction must request and acquire all the locks it will ever need before it starts executing. If even one required resource is locked by another process, the transaction will wait until every single resource is available. This “all-or-nothing” approach prevents the partial execution that can lead to deadlocks—where two or more transactions are stuck waiting for each other.
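Here is a minimal sketch of that all-or-nothing acquisition, with a try-then-back-off retry loop invented for illustration (real systems typically route requests through a lock manager instead). Because a transaction never starts work while holding only some of its locks, a circular wait cannot form:

```python
# A sketch of C2PL: take every lock up front, or none at all.

import threading
import time

def acquire_all(locks):
    """Block until ALL locks are held; never keep a partial set."""
    while True:
        taken = []
        for lock in locks:
            if lock.acquire(blocking=False):  # try without waiting
                taken.append(lock)
            else:
                for held in taken:            # back off completely
                    held.release()
                time.sleep(0.01)              # pause, then retry
                break
        else:
            return                            # got every lock

butter, eggs, lemon = threading.Lock(), threading.Lock(), threading.Lock()
needed = [butter, eggs, lemon]

acquire_all(needed)   # all locks held before any work begins
# ... make the hollandaise with exclusive access to everything ...
for lock in needed:
    lock.release()    # shrinking phase
```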

6.2 Advantages

The primary advantage of C2PL is its ability to completely eliminate deadlocks. Since every transaction begins only when it has full control of the necessary resources, there’s no chance for the circular wait condition that typically causes deadlocks.

6.3 Trade-offs and Drawbacks

While C2PL creates a more stable and predictable environment, it can reduce overall system efficiency. In a busy system, waiting to acquire all locks before starting slows down processes. This trade-off between safety (deadlock prevention) and performance (throughput) is a critical consideration for system designers. In systems where it is acceptable to compromise on speed but not on data integrity, C2PL is a strong choice; for everyday workloads, other protocols are generally used.


Back to Contents ->

7. Conclusion [1]

In today’s digital world, the ability to manage multiple tasks safely and efficiently is paramount. Concurrency and parallelism empower computers to handle many operations, while atomicity and synchronization ensure these tasks are performed without stepping on each other’s toes. Two-Phase Locking (2PL) provides a structured way to maintain order, and its stricter counterpart, Conservative Two-Phase Locking (C2PL), offers a robust solution to avoid deadlocks.

I have tried to explain concurrency through the use of multiple animations, but it is important to understand that the animations are astronomically (a few billion times) slower than what is actually happening.

From the powerful GPUs accelerating AI to sophisticated databases preserving data integrity, these techniques form the backbone of modern computing. By understanding the balance between resource locking, atomicity, and synchronization, you gain valuable insights into how complex systems operate reliably—even when they’re doing many things at once.


Back to Contents ->

Core Citation

[0] Al-Jumah, N.B., et al. "Implementation and Modeling of Two-Phase Locking Concurrency Control—A Performance Study." Information and Software Technology, vol. 42, no. 4, 2000, pp. 257-73. https://doi.org/10.1016/S0950-5849(99)00062-2

Other Citations

[1] Bernstein, Philip A., Vassos Hadzilacos, and Nathan Goodman. Concurrency Control and Recovery in Database Systems. Addison-Wesley, 1987.
[2] Gray, Jim, and Andreas Reuter. Transaction Processing: Concepts and Techniques. Morgan Kaufmann, 1992.
[3] Batra, Neera, and A.K. Kapil. “Concurrency Control Algorithms and Its Variants: A Survey.” ResearchGate, Nov. 2010, doi:10.1063/1.3526261.
[4] “Two-phase locking.” Wikipedia, The Free Encyclopedia, 2 Feb. 2025, https://en.wikipedia.org/wiki/Two-phase_locking.
[5] “Concurrency control.” Wikipedia, The Free Encyclopedia, 2 Feb. 2025, https://en.wikipedia.org/wiki/Concurrency_control.
[6] Silberschatz, Abraham, Peter B. Galvin, and Greg Gagne. Operating System Concepts. 10th ed., Wiley, 2018.
[7] Tanenbaum, Andrew S., and Herbert Bos. Modern Operating Systems. 4th ed., Pearson, 2014.
[8] Zhu, Timothy. CMPSC 473. The Pennsylvania State University.
[9] Bryant, Randal E., and David R. O’Hallaron. Computer Systems: A Programmer’s Perspective. Carnegie Mellon University.
[10] Arpaci-Dusseau, Remzi H., and Andrea C. Arpaci-Dusseau. Operating Systems: Three Easy Pieces.

Back to Contents ->
...