A deadlock is a
situation in which two or more competing actions are each waiting for the other
to finish, and thus neither ever does.
In a transactional
database, a deadlock happens when two processes, each within its own
transaction, update two rows of information in opposite orders. For example,
process A updates row 1 and then row 2 in the same timeframe in which process B
updates row 2 and then row 1. Process A cannot finish updating row 2 until
process B finishes, but process B cannot finish updating row 1 until process A
finishes. No matter how much time is allowed to pass, this situation will never
resolve itself, and because of this, database management systems will typically
kill the transaction of the process that has done the least amount of work.
In an operating
system, a deadlock is a situation that occurs when a process or thread enters
a waiting state because a requested resource is being held by another waiting
process, which in turn is waiting for another resource. If a process remains
indefinitely unable to change its state because the resources it has requested
are being used by another waiting process, then the system is said to be in a
deadlock.
Deadlock is a
common problem in multiprocessing systems, parallel computing and distributed
systems, where software and hardware locks are used to handle shared resources
and implement process synchronization.
In
telecommunication systems, deadlocks occur mainly due to lost or corrupt
signals instead of resource contention.
Examples
Any deadlock
situation can be compared to the classic "chicken or egg" problem. It
can also be considered a paradoxical "Catch-22" situation. A
real-world example would be an illogical statute passed by the Kansas
legislature in the early 20th century, which stated:
“ When two trains approach each other at a
crossing, both shall come to a full stop and neither shall start up again until
the other has gone. ”
A simple
computer-based example is as follows. Suppose a computer has three CD drives
and three processes. Each of the three processes holds one of the drives. If
each process now requests another drive, the three processes will be in a
deadlock. Each process will be waiting for the "CD drive released"
event, which can only be caused by one of the other waiting processes,
resulting in a circular chain.
Necessary conditions
A deadlock
situation can arise only if all of the following conditions hold simultaneously
in a system:
Mutual Exclusion: At least one resource
must be held in a non-sharable mode. Only one process can use the resource at
any given instant of time.
Hold and Wait or Resource Holding: A
process is currently holding at least one resource and requesting additional
resources which are being held by other processes.
No Preemption: The operating system must
not de-allocate resources once they have been allocated; they must be released
by the holding process voluntarily.
Circular Wait: A process must be waiting
for a resource which is being held by another process, which in turn is waiting
for the first process to release the resource. In general, there is a set of
waiting processes, P = {P1, P2, ..., PN}, such that P1 is waiting for a
resource held by P2, P2 is waiting for a resource held by P3 and so on until PN
is waiting for a resource held by P1.
These four
conditions are known as the Coffman conditions from their first description in
a 1971 article by Edward G. Coffman, Jr. If any one of these conditions is left
unfulfilled, a deadlock cannot occur.
Avoiding database
deadlocks
An effective way to
avoid database deadlocks is to follow this approach from the Oracle Locking
Survival Guide:
"Application developers can eliminate all
risk of enqueue deadlocks by ensuring that transactions requiring multiple
resources always lock them in the same order."
This single
sentence needs some unpacking to understand the recommended solution. First, it
highlights the fact that processes must be inside a transaction for deadlocks
to happen. Note that some database systems can be configured to cascade
deletes, which creates an implicit transaction that can then cause deadlocks.
Also, some DBMS vendors offer row-level locking, a type of record locking that
greatly reduces the chance of deadlocks compared with page-level locking, which
creates many times more locks. Second, "multiple resources" means more than one
row in one or more tables. An example of locking in the same order would be to
process all INSERTs first, all UPDATEs second, and all DELETEs last; within
each of these, to handle all parent-table changes before child-table changes;
and to process table changes in a consistent order, such as alphabetically or
ordered by an ID or account number. Third, eliminating all risk of deadlocks is
difficult to achieve, as the DBMS may have automatic lock-escalation features
that raise row-level locks into page locks, which can in turn be escalated to
table locks. Although the risk of experiencing a deadlock will not fall to
zero, since deadlocks tend to happen more on large, high-volume, complex
systems, it can be greatly reduced, and where required the software can be
enhanced to retry transactions when a deadlock is detected. Fourth, deadlocks
can result in data loss if the software is not developed to use transactions on
every interaction with the DBMS; such data loss is difficult to locate and
creates unexpected errors and problems.
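The retry strategy mentioned above can be sketched as a small wrapper. This is a hypothetical illustration: `DeadlockError` stands in for whatever exception a real DBMS driver raises when it kills a transaction as a deadlock victim, and the backoff parameters are arbitrary.

```python
import time

class DeadlockError(Exception):
    """Stand-in for the deadlock error a real DBMS driver would raise."""

def run_with_retry(transaction, attempts=3, base_delay=0.01):
    """Run a transaction, retrying with backoff if it is killed as a deadlock victim."""
    for attempt in range(attempts):
        try:
            return transaction()
        except DeadlockError:
            if attempt == attempts - 1:
                raise                           # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying

# Simulated transaction that is chosen as the deadlock victim twice, then succeeds.
calls = {"n": 0}
def flaky_transaction():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError()
    return "committed"

result = run_with_retry(flaky_transaction)
print(result)  # committed
```

Because the DBMS rolls back the victim transaction entirely, simply re-running it is safe as long as the transaction itself is written to be repeatable.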
Deadlocks are a
challenging problem to correct: they can result in data loss, are difficult to
isolate, create unexpected problems, and are time-consuming to fix. Modifying
every section of code in a large system that accesses the database so that it
always locks resources in the same order, when the existing order is
inconsistent, takes significant resources and testing to implement. That, and
the use of the strong word "dead" in front of "lock", are some of the reasons
why deadlocks have a "this is a big problem" reputation.
Deadlock handling
Most current
operating systems cannot prevent a deadlock from occurring. When a deadlock
occurs, different operating systems respond in different, non-standard ways.
Most approaches work by preventing one of the four Coffman conditions from
occurring, especially the fourth one. The major approaches are as follows.
Ignoring deadlock
In this approach,
it is assumed that a deadlock will never occur. This is also an application of
the Ostrich algorithm.[9][10] This approach was initially used by MINIX and
UNIX.[7] This is used when the time intervals between occurrences of deadlocks
are large and the data loss incurred each time is tolerable.
Detection
Under deadlock
detection, deadlocks are allowed to occur. The state of the system is then
examined to detect that a deadlock has occurred, and the deadlock is
subsequently corrected. An algorithm is employed that tracks resource
allocation and process states; it rolls back and restarts one or more of the
processes in order to remove the detected deadlock. Detecting a deadlock that
has already occurred is straightforward, since the resources each process has
locked or is currently requesting are known to the resource scheduler of the
operating system.
Deadlock detection
techniques include, but are not limited to, model checking. This approach
constructs a finite state-model on which it performs a progress analysis and
finds all possible terminal sets in the model. These then each represent a
deadlock.
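A common detection technique, simpler than full model checking, is to search the wait-for graph for a cycle. The sketch below assumes the simplified case where each process waits on at most one other process (a single outstanding request), so the graph is a map from each waiting process to the process it waits on; a cycle in this graph is exactly the circular-wait condition.

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as {process: process_it_waits_on}.

    A cycle means the processes on it are all waiting on each other, i.e.
    they are deadlocked.
    """
    for start in wait_for:
        seen = set()
        node = start
        while node in wait_for:      # follow the chain of waits
            if node in seen:
                return True          # revisited a process: circular wait
            seen.add(node)
            node = wait_for[node]
    return False

# The three-CD-drive example: each process waits on the next, closing a cycle.
print(has_deadlock({"P1": "P2", "P2": "P3", "P3": "P1"}))  # True
# Remove one edge and the chain terminates, so there is no deadlock.
print(has_deadlock({"P1": "P2", "P2": "P3"}))              # False
```

An operating system can run such a check periodically, or whenever a resource request cannot be granted immediately.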
After a deadlock is
detected, it can be corrected by using one of the following methods:
Process Termination: One or more processes
involved in the deadlock may be aborted. Aborting all processes involved in the
deadlock resolves it with certainty and speed, but at high expense, since
partial computations are lost. Alternatively, processes can be aborted one at a
time until the deadlock is resolved; this approach has high overhead because
after each abort an algorithm must determine whether the system is still in
deadlock. Several factors must be considered when choosing a candidate for
termination, such as the priority and age of the process.
Resource Preemption: Resources allocated to
various processes may be successively preempted and allocated to other
processes until the deadlock is broken.
Prevention
Deadlock prevention
works by preventing one of the four Coffman conditions from occurring.
Removing the mutual exclusion condition
means that no process will have exclusive access to a resource. This proves
impossible for resources that cannot be spooled. But even with spooled
resources, deadlock could still occur. Algorithms that avoid mutual exclusion
are called non-blocking synchronization algorithms.
The hold and wait or resource holding
conditions may be removed by requiring processes to request all the resources
they will need before starting up (or before embarking upon a particular set of
operations). This advance knowledge is frequently difficult to obtain and, in
any case, leads to an inefficient use of resources. Another way is to require
processes to request resources only when they hold none: they must first
release all their currently held resources before requesting all the resources
they will need from scratch. This too is often impractical, because resources
may be allocated and then remain unused for long periods, and because a process
requiring a popular resource may have to wait indefinitely, as such a resource
may always be allocated to some process, resulting in resource starvation.
(These algorithms, such as serializing tokens, are known as the all-or-none
algorithms.)
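An all-or-none acquisition can be sketched with non-blocking lock attempts: either every requested lock is taken, or everything already taken is released before returning. This is a minimal illustration of the idea, not a complete scheme (a real system would also need a policy for retrying after failure).

```python
import threading

def acquire_all(locks):
    """Try to take every lock at once; on any failure, release what was taken.

    This removes the hold-and-wait condition: a process either holds all the
    resources it asked for, or none of them.
    """
    taken = []
    for lock in locks:
        if lock.acquire(blocking=False):
            taken.append(lock)
        else:
            for held in taken:       # roll back: release everything taken so far
                held.release()
            return False
    return True

a, b = threading.Lock(), threading.Lock()
print(acquire_all([a, b]))   # True: both free, both now held
for lock in (a, b):
    lock.release()

b.acquire()                  # simulate another process holding b
print(acquire_all([a, b]))   # False: a was released again, nothing is held
print(a.locked())            # False
```

Since a process never waits while holding anything, no wait-for cycle can involve it, though repeated failed attempts can starve a process that needs many popular resources at once.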
The no preemption condition may also be
difficult or impossible to avoid, as a process must be able to hold a resource
for a certain amount of time, or the processing outcome may be inconsistent or
thrashing may occur. However, the inability to enforce preemption may interfere
with a priority algorithm. Preemption of a "locked out" resource
generally implies a rollback and is to be avoided, since it is very costly in
overhead. Algorithms that allow preemption include lock-free and wait-free
algorithms and optimistic concurrency control.
The final condition is the circular wait
condition. Approaches that avoid circular waits include disabling interrupts
during critical sections and using a hierarchy to determine a partial ordering
of resources. If no obvious hierarchy exists, even the memory address of
resources has been used to determine ordering and resources are requested in
the increasing order of the enumeration. Dijkstra's solution can also be
used.
Avoidance
Deadlock can be
avoided if certain information about processes is available to the operating
system before the allocation of resources, such as which resources a process
will consume in its lifetime. For every resource request, the system checks
whether granting the request would put the system into an unsafe state, meaning
a state that could result in deadlock; it then grants only those requests that
lead to safe states. In order for the system to be able to determine whether
the next state will be safe or unsafe, it must know in advance at any time the
resources currently available, the resources currently allocated to each
process, and the resources that will be required and released by those
processes in the future.
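The safety check behind this avoidance scheme (essentially Dijkstra's Banker's algorithm) can be sketched as follows. The state is safe if there is some order in which every process can obtain its maximum remaining need, finish, and return its resources; the matrices below are a classic textbook example, not data from this article.

```python
def is_safe(available, allocation, need):
    """Banker's-style safety check: can every process finish in some order?

    available:     free units of each resource type.
    allocation[i]: units currently held by process i.
    need[i]:       further units process i may still request.
    """
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and return its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# A classic textbook state: safe, because a full completion order exists.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))  # True
print(is_safe([0, 0, 0], allocation, need))  # False: no process can even start
```

Before granting a request, the system would tentatively apply it and run this check, refusing (or delaying) the request if the resulting state is unsafe.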
SUBMITTED BY - SUSHMA PANCHVAL


