Research on Controlling Concurrency in Real-Time Applications (Part II)

Gabrelsoft
Jun 25, 2023

Serializability

Serializability is the most widely used correctness criterion in concurrency control. Autonomy has considerable effects on global serializability: in principle, a system can schedule concurrent access to data items so that the resulting execution is serializable, but this requires knowledge of all currently active transactions and the ability to control access to data items, which is not normally possible with standard applications. Moreover, different applications adopt different local concurrency control schemes. The global system has enough information to provide concurrency control for global transactions, but it has no information about local transactions.

Therefore, it cannot guarantee full concurrency control. When a database participates in a real-time application, it provides distributed access to all the data stored in the DBMS. Several forms of autonomy affect global transactions executed at the nodes:

  • Design autonomy refers to the ability of a component DBMS to choose its own design criteria.
  • Communication autonomy refers to the ability of a component DBMS to decide whether to communicate with other component DBMSs.
  • Execution autonomy refers to the ability of a component DBMS to execute local operations without interference from external operations and decide the order in which to execute external operations.

DETAILS

The following outlines some ways of solving these problems:

1. Scheduling

2. Avoid Busy-waiting

3. Good Data Sharing Mechanism

4. Consider Time-Triggered Architectures

Scheduling —

Use a scheduling algorithm to assign priorities to tasks or threads based on their timing requirements. Assign higher priorities to critical tasks so that they receive timely access to shared resources and are not delayed by lower-priority tasks.

Some well-known scheduling algorithms are as follows: round-robin scheduling, priority-based scheduling, and first-come, first-served (FCFS).

However, every algorithm has its limitations, and round-robin is no exception. To reduce these limitations, many variations of the algorithm have been developed. For instance, dynamic time quantum adjustment can be implemented, where tasks with shorter burst times are allocated smaller time quanta and tasks with longer burst times receive larger ones. This adaptive approach can improve overall application performance and resource utilization.
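As an illustration, here is a minimal sketch of round-robin scheduling with a dynamic time quantum. The task fields, burst values, and the quantum rule (half the remaining burst, clamped to a small range) are assumptions chosen for demonstration, not a prescribed policy.

```cpp
#include <algorithm>
#include <deque>
#include <iostream>

// Illustrative task: an id plus remaining burst time, in abstract ticks.
struct Task {
    int id;
    int remaining;
};

int main() {
    std::deque<Task> ready_queue{{1, 3}, {2, 12}, {3, 6}};

    while (!ready_queue.empty()) {
        Task t = ready_queue.front();
        ready_queue.pop_front();

        // Dynamic quantum: shorter remaining bursts get a smaller slice,
        // longer ones a larger slice (here: half the burst, clamped to 2..8).
        int quantum = std::clamp(t.remaining / 2, 2, 8);
        int run = std::min(quantum, t.remaining);
        t.remaining -= run;
        std::cout << "task " << t.id << " ran for " << run << " ticks\n";

        // Unfinished tasks rotate to the back of the queue, round-robin style.
        if (t.remaining > 0) ready_queue.push_back(t);
    }
}
```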

Avoid Busy-Waiting —

Busy-waiting, best known as spin-waiting, occurs when a task continuously checks for a condition to become true. It wastes CPU cycles and can adversely impact the timing behavior of real-time tasks. It is typically used when a thread needs to synchronize or coordinate with another thread or process, waiting for a specific condition to be satisfied before proceeding. It is an active form of waiting: rather than yielding the CPU and allowing other tasks to execute, the thread keeps running while it waits. Although simple to implement, busy-waiting is characterized by the following:

  • Low Latency — It provides low latency in situations where the condition being waited for changes frequently or is expected to be resolved quickly. It eliminates the overhead associated with context switches and allows for immediate responsiveness once the condition becomes true.
  • High CPU Utilization — Busy waiting involves continuous CPU usage by the thread or process. Since it actively checks the condition in a tight loop, it consumes CPU cycles even when there is no progress or work to be done.
  • Continuous Looping — A thread or process repeatedly executes a loop, checking for a specific condition using conditional statements or polling mechanisms. This loop continues until the condition is met.
  • Lack of Yielding — Busy waiting does not yield or release the CPU to other tasks or processes. Instead, it continues to execute the loop until the condition is satisfied, potentially leading to inefficient resource utilization and reduced performance.
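To make these characteristics concrete, here is a minimal sketch of a spin loop in C++ (the flag name and memory orders are illustrative):

```cpp
#include <atomic>
#include <thread>

std::atomic<bool> ready{false};

void spin_waiter() {
    // Continuous looping with no yielding: the thread burns CPU cycles
    // re-checking the flag until the condition becomes true...
    while (!ready.load(std::memory_order_acquire)) {
        // tight loop: nothing else runs on this thread
    }
    // ...but once the flag flips, it proceeds with very low latency.
}

int main() {
    std::thread t(spin_waiter);
    ready.store(true, std::memory_order_release); // satisfy the condition
    t.join();
}
```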

To reduce the limitations of busy waiting, alternative synchronization techniques and waiting strategies are implemented. These include blocking or sleeping the thread, signaling and notification mechanisms, callbacks, and proper use of synchronization primitives like locks and condition variables. These alternatives allow for more efficient resource utilization, reduced computational overhead, and improved system responsiveness.

In general, busy waiting should be used judiciously, and only in situations where the condition being waited for changes rapidly and is expected to be resolved quickly. Careful consideration should be given to the trade-offs between latency, CPU utilization, and overall application performance when deciding to use busy waiting in a concurrent system. Otherwise, prefer blocking or event-driven mechanisms that allow tasks to suspend and wait for a condition to occur.
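As a contrast to the spin loop above, here is a minimal sketch of the blocking alternative using a condition variable; the waiting thread suspends and frees the CPU instead of polling:

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool ready = false; // the condition being waited for

void consumer() {
    std::unique_lock<std::mutex> lock(m);
    // Blocks (releasing the lock and the CPU) until notified and the
    // predicate holds; no cycles are wasted polling.
    cv.wait(lock, [] { return ready; });
    std::cout << "condition met, proceeding\n";
}

void producer() {
    {
        std::lock_guard<std::mutex> lock(m);
        ready = true; // make the condition true
    }
    cv.notify_one(); // wake the waiting thread
}

int main() {
    std::thread c(consumer), p(producer);
    c.join();
    p.join();
}
```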

Good Data-Sharing Mechanism —

Enforce rules for data sharing among concurrent entities, using techniques such as data locking, message passing, or shared memory with appropriate synchronization, to ensure data consistency and avoid race conditions. The following are some commonly used data-sharing mechanisms in concurrency control (a combined sketch follows the list):

  • Locks/Mutexes — A lock (or mutex) provides mutually exclusive access to a shared resource. Threads or processes must acquire the lock before accessing the shared data and release it once they are done. This mechanism prevents concurrent access to shared data, ensuring data integrity and avoiding race conditions.
  • Semaphores — A synchronization technique that allows a fixed number of threads or processes to access a shared resource simultaneously. Semaphores maintain a counter that tracks the number of available resources. Threads acquire and release the semaphore to request and relinquish access to the shared resource, respectively.
  • Atomic Operations — These provide indivisible access to shared variables or memory locations, guaranteeing that read-modify-write sequences complete without interference. They underpin lock-free algorithms and provide synchronization guarantees without explicit locking.
  • Condition Variables — These enable a thread to wait until a certain condition becomes true. Threads block on a condition variable, suspending their execution until another thread signals or broadcasts a change in the condition. Condition variables are often used in combination with locks to build more sophisticated synchronization patterns.
  • Barriers — Barriers synchronize a group of threads by making them wait until all threads in the group have reached a certain point in their execution. Once all threads have reached the barrier, they can proceed concurrently. Barriers are useful for coordinating parallel tasks that need to synchronize at specific points during their execution.
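The compact sketch below shows several of these mechanisms side by side using C++20 primitives (std::mutex, std::counting_semaphore, std::atomic, and std::barrier); the worker logic is purely illustrative.

```cpp
#include <atomic>
#include <barrier>
#include <iostream>
#include <mutex>
#include <semaphore>
#include <thread>
#include <vector>

std::mutex mtx;                      // lock/mutex: exclusive access
std::counting_semaphore<3> sem(3);   // semaphore: at most 3 concurrent holders
std::atomic<int> counter{0};         // atomic: indivisible read-modify-write
std::barrier<> sync_point(4);        // barrier: rendezvous for all 4 workers

void worker(int id) {
    sem.acquire();                   // request one of the 3 resource slots
    counter.fetch_add(1);            // lock-free atomic increment
    {
        std::scoped_lock lock(mtx);  // mutex guards the shared output stream
        std::cout << "worker " << id << " running\n";
    }
    sem.release();                   // give the slot back
    sync_point.arrive_and_wait();    // proceed only when all workers arrive
}

int main() {
    std::vector<std::jthread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(worker, i);
}
```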

It is important to choose the appropriate data-sharing mechanism based on the specific requirements and characteristics of the concurrent system. Considerations such as the nature of data access, the level of concurrency, and the need for synchronization and coordination will influence the choice of the data-sharing mechanism. Additionally, proper usage and understanding of the chosen mechanism are essential to ensure correct and efficient concurrent execution.

Consider Time-Triggered Architectures(TTA) —

Explore time-triggered architectures, in which the execution of tasks is synchronized with predefined time slots. This can eliminate the need for explicit synchronization mechanisms in some cases. The architecture ensures temporal isolation and allows for the predictable execution of tasks, making it suitable for time-critical applications. The following outlines some key aspects of TTA:

  • Synchronization Points — TTAs define specific synchronization points in the schedule where tasks or processes can communicate, exchange data, or synchronize their actions. These synchronization points ensure proper coordination and avoid race conditions or inconsistent behavior. Synchronization mechanisms such as locks, semaphores, or message passing are often used to facilitate inter-task communication and synchronization.
  • Predefined Schedule — The schedule is typically designed and fixed during system design. It specifies the timing and order of execution for each task or process in the system, determining when each task should start and how long it should run, and providing a synchronized and predictable execution flow.
  • Resource Sharing — TTAs require careful resource-sharing mechanisms to ensure that shared resources, such as memory or I/O devices, are accessed in a coordinated and controlled manner. Synchronization techniques like locks or semaphores are employed to manage concurrent access to shared resources, preventing data races and maintaining data integrity.
  • Priority-Based Scheduling — Priority-based scheduling is often used to determine the order of task execution in case of contention or overlapping time slots. Tasks with higher priorities are given precedence over lower-priority tasks, ensuring that critical tasks meet their timing requirements. Priority-based scheduling ensures that important tasks are not delayed or starved by lower-priority tasks.

TTAs provide a framework for managing concurrent tasks and processes in a predictable and coordinated manner. By adhering to a predefined schedule and employing synchronization mechanisms, they ensure temporal isolation, determinism, and controlled resource sharing while meeting timing requirements. This architecture is used in real-time systems where timing is critical, such as automotive and avionics applications.
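As a minimal sketch (assuming a 10 ms frame and illustrative task names), a time-triggered cyclic executive can be structured like this:

```cpp
#include <chrono>
#include <thread>

// Illustrative tasks; in a real system these would read sensors,
// compute control outputs, and drive actuators.
void sensor_task()   { /* read inputs */ }
void control_task()  { /* compute outputs */ }
void actuator_task() { /* write outputs */ }

int main() {
    using clock = std::chrono::steady_clock;
    constexpr auto frame = std::chrono::milliseconds(10); // predefined time slot
    auto next_frame = clock::now();

    for (int cycle = 0; cycle < 100; ++cycle) {
        // Tasks run in a fixed, predefined order within each frame, giving
        // temporal isolation without explicit locks between them.
        sensor_task();
        control_task();
        actuator_task();

        // Sleep until the start of the next frame; anchoring to absolute
        // time keeps the schedule from drifting.
        next_frame += frame;
        std::this_thread::sleep_until(next_frame);
    }
}
```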

TO BE CONTINUED…
