Sunday, 23 February 2014

Deadlock



A deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does.

In a transactional database, a deadlock happens when two processes, each within its own transaction, update two rows of information in the opposite order. For example, process A updates row 1 and then row 2 in the same timeframe in which process B updates row 2 and then row 1. Process A cannot finish updating row 2 until process B releases it, and process B cannot finish updating row 1 until process A releases it. No matter how much time is allowed to pass, this situation will never resolve itself; because of this, database management systems will typically kill the transaction of the process that has done the least amount of work.

In an operating system, a deadlock is a situation which occurs when a process or thread enters a waiting state because a resource requested is being held by another waiting process, which in turn is waiting for another resource. If a process is unable to change its state indefinitely because the resources requested by it are being used by another waiting process, then the system is said to be in a deadlock.

Deadlock is a common problem in multiprocessing systems, parallel computing and distributed systems, where software and hardware locks are used to handle shared resources and implement process synchronization.

In telecommunication systems, deadlocks occur mainly due to lost or corrupt signals instead of resource contention.

Examples

Any deadlock situation can be compared to the classic "chicken or egg" problem. It can also be considered a paradoxical "Catch-22" situation. A real-world example is an illogical statute passed by the Kansas legislature in the early 20th century, which stated:
“When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone.”

A simple computer-based example is as follows. Suppose a computer has three CD drives and three processes. Each of the three processes holds one of the drives. If each process now requests another drive, the three processes will be in a deadlock. Each process will be waiting for the "CD drive released" event, which can only be caused by one of the other waiting processes. Thus, it results in a circular chain.
Necessary conditions

A deadlock situation can arise if all of the following conditions hold simultaneously in a system:

    Mutual Exclusion: At least one resource must be held in a non-sharable mode. Only one process can use the resource at any given instant of time.
    Hold and Wait or Resource Holding: A process is currently holding at least one resource and requesting additional resources which are being held by other processes.
    No Preemption: The operating system must not de-allocate resources once they have been allocated; they must be released by the holding process voluntarily.
    Circular Wait: A process must be waiting for a resource which is being held by another process, which in turn is waiting for the first process to release the resource. In general, there is a set of waiting processes, P = {P1, P2, ..., PN}, such that P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3 and so on until PN is waiting for a resource held by P1.

These four conditions are known as the Coffman conditions from their first description in a 1971 article by Edward G. Coffman, Jr. If any one of these conditions is not fulfilled, a deadlock cannot occur.
Avoiding database deadlocks

An effective way to avoid database deadlocks is to follow this approach from the Oracle Locking Survival Guide:

    "Application developers can eliminate all risk of enqueue deadlocks by ensuring that transactions requiring multiple resources always lock them in the same order."

This single sentence needs some unpacking to understand the recommended solution. First, it highlights the fact that processes must be inside a transaction for deadlocks to happen. Note that some database systems can be configured to cascade deletes, which creates an implicit transaction that can then cause deadlocks. Also, some DBMS vendors offer row-level locking, a type of record locking that greatly reduces the chance of deadlocks, as opposed to page-level locking, which creates many times more locks. Second, "multiple resources" means more than one row in one or more tables. An example of locking in the same order would be to process all INSERTs first, all UPDATEs second, and all DELETEs last; within each of these, handle all parent-table changes before child-table changes, and process table changes in a consistent order, such as alphabetically or ordered by an ID or account number. Third, eliminating all risk of deadlocks is difficult to achieve, because the DBMS has automatic lock escalation features that raise row-level locks into page locks, which can in turn be escalated to table locks. Although the chance of experiencing a deadlock will not drop to zero, since deadlocks tend to happen more on large, high-volume, complex systems, it can be greatly reduced, and when required the software can be enhanced to retry transactions when a deadlock is detected. Fourth, deadlocks can result in data loss if the software is not developed to use transactions on every interaction with a DBMS; such data loss is difficult to locate and creates unexpected errors and problems.

Deadlocks are a challenging problem to correct: they can result in data loss, are difficult to isolate, create unexpected problems, and are time-consuming to fix. Modifying every section of software code in a large system that accesses the database so that it always locks resources in the same order, when the existing order is inconsistent, takes significant resources and testing to implement. That, and the use of the strong word "dead" in front of "lock", are among the reasons why deadlocks have a "this is a big problem" reputation.
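The "lock in the same order" rule can be sketched in C. This is an illustrative sketch with hypothetical helper names, not any particular DBMS API: before a transaction touches its rows, it sorts the row IDs so that every transaction acquires the same pair of rows in the same order, making a circular wait impossible.

```c
#include <stdlib.h>

/* Comparison function for row IDs, ascending. */
static int cmp_ids(const void *a, const void *b)
{
    long x = *(const long *)a, y = *(const long *)b;
    return (x > y) - (x < y);
}

/* Sorts the row IDs in place; a real transaction would then invoke its
 * DBMS locking primitive on each ID in this (globally consistent) order. */
void lock_in_order(long *row_ids, int n)
{
    qsort(row_ids, n, sizeof row_ids[0], cmp_ids);
}
```

With every transaction calling `lock_in_order` first, two transactions touching rows 7 and 42 both lock 7 before 42, so neither can hold what the other is waiting for.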
Deadlock handling

Most current operating systems cannot prevent deadlocks. When a deadlock occurs, different operating systems respond to it in different, non-standard ways. Most approaches work by preventing one of the four Coffman conditions from occurring, especially the fourth one. The major approaches are as follows.
Ignoring deadlock

In this approach, it is assumed that a deadlock will never occur. This is also an application of the Ostrich algorithm. This approach was initially used by MINIX and UNIX. It is used when the time intervals between occurrences of deadlocks are large and the data loss incurred each time is tolerable.
Detection

Under deadlock detection, deadlocks are allowed to occur. The state of the system is then examined to detect that a deadlock has occurred, and it is subsequently corrected. An algorithm is employed that tracks resource allocation and process states; it rolls back and restarts one or more of the processes in order to remove the detected deadlock. Detecting a deadlock that has already occurred is easily possible, since the resources each process has locked and/or currently requested are known to the resource scheduler of the operating system.
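As an illustrative sketch of this idea (a simplified model, not any particular OS's implementation): if each process has at most one outstanding request, the scheduler's knowledge reduces to a waits-for array, and a deadlock is exactly a cycle reached by following it.

```c
/* waits_for[i] = index of the process that i is waiting on, or -1 if i
 * is not waiting. With out-degree at most 1, a deadlock is a cycle
 * found by simply following the pointers from each starting process. */
int has_deadlock(const int *waits_for, int n)
{
    for (int start = 0; start < n; start++) {
        char on_path[64] = {0};   /* supports up to 64 processes */
        int p = start;
        while (p != -1 && !on_path[p]) {
            on_path[p] = 1;       /* mark this process as on the chain */
            p = waits_for[p];     /* follow the waits-for edge */
        }
        if (p != -1)              /* revisited a process on the chain: cycle */
            return 1;
    }
    return 0;
}
```

For example, three processes waiting in a ring (0 waits on 1, 1 on 2, 2 on 0) are deadlocked, while a simple chain ending at a running process is not.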

Deadlock detection techniques include, but are not limited to, model checking. This approach constructs a finite-state model on which it performs a progress analysis and finds all possible terminal sets in the model. Each of these then represents a deadlock.

After a deadlock is detected, it can be corrected by using one of the following methods:

    Process Termination: One or more processes involved in the deadlock may be aborted. One option is to abort all processes involved in the deadlock. This ensures that the deadlock is resolved with certainty and speed, but the expense is high, as partial computations will be lost. Alternatively, processes can be aborted one at a time until the deadlock is resolved. This approach has high overhead, because after each abort an algorithm must determine whether the system is still deadlocked. Several factors must be considered when choosing a candidate for termination, such as the priority and age of the process.
    Resource Preemption: Resources allocated to various processes may be successively preempted and allocated to other processes until the deadlock is broken.

Prevention

Deadlock prevention works by preventing one of the four Coffman conditions from occurring.

    Removing the mutual exclusion condition means that no process will have exclusive access to a resource. This proves impossible for resources that cannot be spooled. But even with spooled resources, deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.
    The hold and wait or resource holding condition may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations). This advance knowledge is frequently difficult to obtain and, in any case, leads to inefficient use of resources. Another way is to require processes to request resources only when they hold none: first they must release all their currently held resources before requesting, from scratch, all the resources they will need. This too is often impractical, because resources may be allocated and remain unused for long periods. Also, a process requiring a popular resource may have to wait indefinitely, as such a resource may always be allocated to some process, resulting in resource starvation. (These algorithms, such as serializing tokens, are known as the all-or-none algorithms.)
    The no preemption condition may also be difficult or impossible to avoid as a process has to be able to have a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, inability to enforce preemption may interfere with a priority algorithm. Preemption of a "locked out" resource generally implies a rollback, and is to be avoided, since it is very costly in overhead. Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control.
    The final condition is the circular wait condition. Approaches that avoid circular waits include disabling interrupts during critical sections and using a hierarchy to determine a partial ordering of resources. If no obvious hierarchy exists, even the memory address of resources has been used to determine ordering and resources are requested in the increasing order of the enumeration. Dijkstra's solution can also be used.
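The hierarchy approach can be sketched with POSIX mutexes, using memory addresses as the ordering, as mentioned above. This is an illustrative sketch; comparing unrelated pointers this way is a widespread practical idiom rather than strictly portable C.

```c
#include <pthread.h>

/* Acquire two mutexes in address order: every thread that needs both
 * takes the lower-addressed one first, so no circular wait can form. */
void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    if (a > b) {                      /* normalize to address order */
        pthread_mutex_t *t = a;
        a = b;
        b = t;
    }
    pthread_mutex_lock(a);            /* lower address first */
    pthread_mutex_lock(b);
}

/* Release both mutexes (release order does not affect deadlock freedom). */
void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}
```

Two threads calling lock_pair(&m1, &m2) and lock_pair(&m2, &m1) end up acquiring the same mutex first, which breaks the circular wait condition.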

Avoidance

Deadlock can be avoided if certain information about processes is available to the operating system before allocation of resources, such as which resources a process will consume in its lifetime. For every resource request, the system checks whether granting the request would put the system in an unsafe state, meaning a state that could result in deadlock. The system then only grants requests that lead to safe states. In order for the system to be able to determine whether the next state will be safe or unsafe, it must know in advance, at any time, the resources currently available, the resources currently allocated to each process, and the resources that each process may request and release in the future.


 SUBMITTED BY - SUSHMA PANCHVAL

Friday, 21 February 2014

Banker's algorithm

Banker's algorithm:-  The Banker's algorithm is a resource allocation and deadlock avoidance algorithm developed by Edsger Dijkstra that tests for safety by simulating the allocation of predetermined maximum possible amounts of all resources, and then makes an "s-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether allocation should be allowed to continue.
The algorithm was developed in the design process for the THE operating system and originally described in EWD108. The name is by analogy with the way that bankers account for liquidity constraints.


1. Limitations:-
Like most deadlock-avoidance algorithms, the Banker's algorithm requires knowing resource usage in advance. Specifically, it needs to know how much of each resource a process could possibly request. In most systems, this information is unavailable, making it impossible to implement the Banker's algorithm. Also, it is unrealistic to assume that the number of processes is static, since in most systems the number of processes varies dynamically. Moreover, the requirement that a process will eventually release all its resources is sufficient for the correctness of the algorithm, but it is not sufficient for a practical system. Waiting for hours for resources to be released is usually not acceptable.


2. Algorithm:-
The algorithm works step by step. Its parts are as follows:
(1.) Resources:-
For the Banker's algorithm to work, it needs to know three things:
    How much of each resource each process could possibly request[CLAIMS]
    How much of each resource each process is currently holding[ALLOCATED]
    How much of each resource the system currently has available[AVAILABLE]

Claims: The Banker's Algorithm derives its name from the fact that this algorithm could be used in a banking system to ensure that the bank does not run out of resources, because the bank would never allocate its money in such a way that it can no longer satisfy the needs of all its customers.
Available: A vector of length m indicates the number of available resources of each type. If Available[j] = k, there are k instances of resource type Rj available.

    Allocation: An n×m matrix defines the number of resources of each type currently allocated to each process. If Allocation[i,j] = k, then process Pi is currently allocated k instances of resource type Rj.
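A hedged sketch of these data structures in C (the names and fixed sizes are illustrative assumptions, matching the example below with n = 3 processes and m = 4 resource types):

```c
#define N 3   /* n processes */
#define M 4   /* m resource types */

/* Bookkeeping the Banker's algorithm needs, as listed above. */
int available[M];       /* available[j]: free instances of resource Rj  */
int max_claim[N][M];    /* max_claim[i][j]: Pi's maximum claim on Rj    */
int allocation[N][M];   /* allocation[i][j]: instances of Rj held by Pi */

/* Need can be derived rather than stored: need = max - allocation. */
int need(int i, int j, int mx[N][M], int alloc[N][M])
{
    return mx[i][j] - alloc[i][j];
}
```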
Example:-
Assuming that the system distinguishes between four types of resources, the following is an example of how those resources could be distributed. Note that this example shows the system at an instant before a new request for resources arrives. Also, the types and number of resources are abstracted. Real systems, for example, would deal with much larger quantities of each resource.

Total resources in system:
A B C D
6 5 7 6
Available system resources are:
A B C D
3 1 1 2
Processes (currently allocated resources):
   A B C D
P1 1 2 2 1
P2 1 0 3 3
P3 1 2 1 0
Processes (maximum resources):
   A B C D
P1 3 3 2 2
P2 1 2 3 4
P3 1 3 5 0
Need = maximum resources - currently allocated resources
Processes (need):
   A B C D
P1 2 1 0 1
P2 0 2 0 1
P3 0 1 4 0

(2.) Safe and Unsafe States:-
A state (as in the example above) is considered safe if it is possible for all processes to finish executing (terminate). Since the system cannot know when a process will terminate, or how many resources it will have requested by then, the system assumes that all processes will eventually attempt to acquire their stated maximum resources and terminate soon afterward.

Given that assumption, the algorithm determines if a state is safe by trying to find a hypothetical set of requests by the processes that would allow each to acquire its maximum resources and then terminate. Any state where no such set exists is an unsafe state.

Example:-
We can show that the state given in the previous example is a safe state by showing that it is possible for each process to acquire its maximum resources and then terminate.
    P1 acquires 2 A, 1 B and 1 D more resources, achieving its maximum
        [available resource: <3 1 1 2> - <2 1 0 1> = <1 0 1 1>]
        The system now still has 1 A, no B, 1 C and 1 D resource available
    P1 terminates, returning 3 A, 3 B, 2 C and 2 D resources to the system
        [available resource: <1 0 1 1> + <3 3 2 2> = <4 3 3 3>]
        The system now has 4 A, 3 B, 3 C and 3 D resources available
    P2 acquires 2 B and 1 D extra resources, then terminates, returning all its resources
        [available resource: <4 3 3 3> - <0 2 0 1> + <1 2 3 4> = <5 3 6 6>]
        The system now has 5 A, 3 B, 6 C and 6 D resources
    P3 acquires 1 B and 4 C resources and terminates
        [available resource: <5 3 6 6> - <0 1 4 0> + <1 3 5 0> = <6 5 7 6>]
        The system now has all resources: 6 A, 5 B, 7 C and 6 D
    Because all processes were able to terminate, this state is safe.

(3.) Requests:-
When the system receives a request for resources, it runs the Banker's algorithm to determine whether it is safe to grant the request. The algorithm is fairly straightforward once the distinction between safe and unsafe states is understood:
    Can the request be granted?
        If not, the request is impossible and must either be denied or put on a waiting list
    Assume that the request is granted
    Is the new state safe?
        If so, grant the request
        If not, either deny the request or put it on a waiting list
Whether the system denies or postpones an impossible or unsafe request is a decision specific to the operating system.
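These steps can be sketched in C. This is an illustrative sketch under stated assumptions (fixed sizes and the matrices used in this post's examples), not a production implementation: verify the request against the remaining need and the pool, grant it tentatively, run the safety check, and roll back if the new state is unsafe.

```c
#define NP 3   /* processes      */
#define NR 4   /* resource types */

/* Returns 1 if the request is granted (and applied to avail/alloc),
 * 0 if it must be denied or put on a waiting list. */
int try_request(int pi, int req[NR], int avail[NR],
                int alloc[NP][NR], int mx[NP][NR])
{
    int done[NP] = {0}, finished = 0, progress = 1;
    int work[NR];
    int i, j;

    /* 1. The request may not exceed the remaining need or the pool. */
    for (j = 0; j < NR; j++)
        if (req[j] > mx[pi][j] - alloc[pi][j] || req[j] > avail[j])
            return 0;

    /* 2. Assume the request is granted. */
    for (j = 0; j < NR; j++) { avail[j] -= req[j]; alloc[pi][j] += req[j]; }

    /* 3. Safety check: can every process still finish in some order? */
    for (j = 0; j < NR; j++) work[j] = avail[j];
    while (progress) {
        progress = 0;
        for (i = 0; i < NP; i++) {
            int fits = 1;
            if (done[i]) continue;
            for (j = 0; j < NR; j++)
                if (mx[i][j] - alloc[i][j] > work[j]) { fits = 0; break; }
            if (fits) {   /* i can run to completion; reclaim its resources */
                for (j = 0; j < NR; j++) work[j] += alloc[i][j];
                done[i] = 1; finished++; progress = 1;
            }
        }
    }
    if (finished == NP) return 1;

    /* 4. Unsafe: undo the tentative grant. */
    for (j = 0; j < NR; j++) { avail[j] += req[j]; alloc[pi][j] -= req[j]; }
    return 0;
}
```

With the figures from this section's examples, a request by P3 for one unit of C leaves the system safe and is granted, while a request by P2 for one unit of B would leave it unsafe and is refused.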

Example:-
Starting in the state of the previous example, assume process P3 requests 1 unit of resource C.
    There is 1 unit of resource C available, so assume the request is granted. The new state of the system would be:
    Available system resources:
     A B C D
Free 3 1 0 2
    Processes (currently allocated resources):
     A B C D
P1   1 2 2 1
P2   1 0 3 3
P3   1 2 2 0
    Processes (maximum resources):
     A B C D
P1   3 3 2 2
P2   1 2 3 4
P3   1 3 5 0
    Determine if this new state is safe
        P1 can acquire 2 A, 1 B and 1 D resources and terminate
        Then, P2 can acquire 2 B and 1 D resources and terminate
        Finally, P3 can acquire 1 B and 3 C resources and terminate
        Therefore this new state is safe, and the request can be granted
On the other hand, assume process P2 requests 1 unit of resource B. Assuming the request is granted, the new state would be:
    Available system resources:
     A B C D
Free 3 0 1 2
    Processes (currently allocated resources):
     A B C D
P1   1 2 2 1
P2   1 1 3 3
P3   1 2 1 0
    Processes (maximum resources):
     A B C D
P1   3 3 2 2
P2   1 2 3 4
P3   1 3 5 0
    Is this state safe? Assume P1, P2, and P3 request more of resources B and C.
        P1 is unable to acquire enough B resources
        P2 is unable to acquire enough B resources
        P3 is unable to acquire enough B resources
        No process can acquire enough resources to terminate, so this state is not safe, and the request must be denied or postponed
/* PROGRAM TO IMPLEMENT BANKER'S ALGORITHM
 * -------------------------------------- */
#include <stdio.h>
int curr[5][5], maxclaim[5][5], avl[5];
int alloc[5] = {0,0,0,0,0};
int maxres[5], running[5], safe=0;
int count = 0, i, j, exec, r, p,k=1;
int main()
{
    printf("\nEnter the number of processes: ");
    scanf("%d",&p);
    for(i=0;i<p;i++)
    {
        running[i]=1;
        count++;
    }
    printf("\nEnter the number of resources: ");
    scanf("%d",&r);
    for(i=0;i<r;i++)
    {
        printf("\nEnter the resource for instance %d: ",k++);
        scanf("%d",&maxres[i]);
    }
    printf("\nEnter maximum resource table:\n");
    for(i=0;i<p;i++)
    {
        for(j=0;j<r;j++)
        {
            scanf("%d",&maxclaim[i][j]);
        }
    }
    printf("\nEnter allocated resource table:\n");
    for(i=0;i<p;i++)
    {
        for(j=0;j<r;j++)
        {
            scanf("%d",&curr[i][j]);
        }
    }
    printf("\nThe resource of instances: ");
    for(i=0;i<r;i++)
    {
        printf("\t%d",maxres[i]);
    }
    printf("\nThe allocated resource table:\n");
    for(i=0;i<p;i++)
    {
        for(j=0;j<r;j++)
        {
            printf("\t%d",curr[i][j]);
        }
        printf("\n");
    }
    printf("\nThe maximum resource table:\n");
    for(i=0;i<p;i++)
    {
        for(j=0;j<r;j++)
        {
            printf("\t%d",maxclaim[i][j]);
        }
        printf("\n");
    }
    for(i=0;i<p;i++)
    {
        for(j=0;j<r;j++)
        {
            alloc[j]+=curr[i][j];
        }
    }
    printf("\nAllocated resources:");
    for(i=0;i<r;i++)
    {
        printf("\t%d",alloc[i]);
    }
    for(i=0;i<r;i++)
    {
        avl[i]=maxres[i]-alloc[i];
    }
    printf("\nAvailable resources:");
    for(i=0;i<r;i++)
    {
        printf("\t%d",avl[i]);
    }
    printf("\n");
    //Main procedure goes below to check for unsafe state.
    while(count!=0)
    {
        safe=0;
        for(i=0;i<p;i++)
        {
            if(running[i])
            {
                exec=1;
                for(j=0;j<r;j++)
                {
                    if(maxclaim[i][j] - curr[i][j] > avl[j]){
                        exec=0;
                        break;
                    }
                }
                if(exec)
                {
                    printf("\nProcess%d is executing\n",i+1);
                    running[i]=0;
                    count--;
                    safe=1;
                    for(j=0;j<r;j++) {
                        avl[j]+=curr[i][j];
                    }
                    break;
                }
            }
        }
        if(!safe)
        {
            printf("\nThe processes are in an unsafe state.\n");
            break;
        }
        else
        {
            printf("\nThe system is still in a safe state.");
            printf("\nAvailable resources are now:");
            for(i=0;i<r;i++)
            {
                printf("\t%d",avl[i]);
            }
            printf("\n");
        }
    }
}

Disadvantages of the Banker's Algorithm:-

    It requires the number of processes to be fixed; no additional processes can start while it is executing.
    It requires that the number of resources remain fixed; no resource may go down for any reason without the possibility of deadlock occurring.
    It allows all requests to be granted in finite time, but one year is a finite amount of time.
    Similarly, all of the processes guarantee that the resources loaned to them will be repaid in a finite amount of time. While this prevents absolute starvation, some pretty hungry processes might develop.
    All processes must know and state their maximum resource need in advance.


SUBMITTED BY = MANDEEP KAUR
CLASS=BSC (CS) 2ND YEAR
ROLL NO. =2304


Wednesday, 19 February 2014

Magnetic Disk Drive

Basically, a floppy disk drive reads and writes data to a small, circular piece of metal-coated plastic similar to audio cassette tape.
  • Both use a thin plastic base material coated with iron oxide. This oxide is a ferromagnetic material, meaning that if you expose it to a magnetic field it is permanently magnetized by the field.
  • Both can record information instantly.
  • Both can be erased and reused many times.
  • Both are very inexpensive and easy to use.
A floppy disk, like a cassette tape, is made from a thin piece of plastic coated with a magnetic material on both sides. However, it is shaped like a disk rather than a long thin ribbon. The tracks are arranged in concentric rings so that the software can jump from file to file easily. The diskette spins like a record and the heads move to the correct track. The write head puts data on the diskette by magnetizing small, iron, bar-magnet particles embedded in the plastic surface. The magnetized particles have their north and south poles oriented in such a way that their pattern may be detected and read on a subsequent read operation. This type of storage was once common in all computers. However, due to the small amount of information that can be held on a magnetic diskette, technology has advanced and brought us optical disk drives. Plastics are used because they are inexpensive to make, and similar magnetic properties can be obtained by adding the oxide layer.
Magnetic storage

Longitudinal recording and perpendicular recording, two types of writing heads on a hard disk.

Magnetic storage (or magnetic recording) is the storage of data on a magnetized medium. Magnetic storage uses different patterns of magnetization in a magnetizable material to store data and is a form of non-volatile memory. The information is accessed using one or more read/write heads.
As of 2013, magnetic storage media, primarily hard disks, are widely used to store computer data as well as audio and video signals. In the field of computing, the term magnetic storage is preferred, and in the field of audio and video production, the term magnetic recording is more commonly used. The distinction is less technical and more a matter of preference. Other examples of magnetic storage media include floppy disks, magnetic recording tape, and magnetic stripes on credit cards.

History
Magnetic storage in the form of wire recording—audio recording on a wire—was publicized by Oberlin Smith in 1888. He filed a patent in September 1878 but did not pursue the idea, as his business was machine tools. The first publicly demonstrated (Paris Exposition of 1900) magnetic recorder was invented by Valdemar Poulsen in 1898. Poulsen's device recorded a signal on a wire wrapped around a drum. In 1928, Fritz Pfleumer developed the first magnetic tape recorder. Early magnetic storage devices were designed to record analog audio signals. Computers, and now most audio and video magnetic storage devices, record digital data.
In old computers, magnetic storage was also used for primary storage, in the form of magnetic drum memory, core memory, core rope memory, thin-film memory, twistor memory or bubble memory. In contrast with modern computers, magnetic tape was also often used for secondary storage.

Design
Hard drives use magnetic memory to store giga- and terabytes of data in computers.
Information is written to and read from the storage medium as it moves past devices called read-and-write heads that operate very close (often tens of nanometers) over the magnetic surface. The read-and-write head is used to detect and modify the magnetization of the material immediately under it. There are two magnetic polarities, each of which is used to represent either 0 or 1.
The magnetic surface is conceptually divided into many small sub-micrometer-sized magnetic regions, referred to as magnetic domains, (although these are not magnetic domains in a rigorous physical sense), each of which has a mostly uniform magnetization. Due to the polycrystalline nature of the magnetic material each of these magnetic regions is composed of a few hundred magnetic grains. Magnetic grains are typically 10 nm in size and each form a single true magnetic domain. Each magnetic region in total forms a magnetic dipole which generates a magnetic field. In older hard disk drive (HDD) designs the regions were oriented horizontally and parallel to the disk surface, but beginning about 2005, the orientation was changed to perpendicular to allow for closer magnetic domain spacing.
For reliable storage of data, the recording material needs to resist self-demagnetization, which occurs when the magnetic domains repel each other. Magnetic domains written too densely into a weakly magnetizable material will degrade over time due to rotation of the magnetic moment of one or more domains, which cancels out these forces. The domains rotate sideways to a halfway position that weakens the readability of the domain and relieves the magnetic stresses. Older hard disk drives used iron(III) oxide as the magnetic material, but current disks use a cobalt-based alloy.
A write head magnetizes a region by generating a strong local magnetic field, and a read head detects the magnetization of the regions. Early HDDs used an electromagnet both to magnetize the region and to then read its magnetic field by using electromagnetic induction. Later versions of inductive heads included metal in Gap (MIG) heads and thin film heads. As data density increased, read heads using magnetoresistance (MR) came into use; the electrical resistance of the head changed according to the strength of the magnetism from the platter. Later development made use of spintronics; in read heads, the magnetoresistive effect was much greater than in earlier types, and was dubbed "giant" magnetoresistance (GMR). In today's heads, the read and write elements are separate, but in close proximity, on the head portion of an actuator arm. The read element is typically magneto-resistive while the write element is typically thin-film inductive.
The heads are kept from contacting the platter surface by the air that is extremely close to the platter; that air moves at or near the platter speed. The record and playback head are mounted on a block called a slider, and the surface next to the platter is shaped to keep it just barely out of contact. This forms a type of air bearing.



Magnetic recording classes

Analog recording
Analog recording is based on the fact that the remnant magnetization of a given material depends on the magnitude of the applied field. The magnetic material is normally in the form of tape, with the tape in its blank form being initially demagnetized. When recording, the tape runs at a constant speed. The writing head magnetizes the tape with a current proportional to the signal, producing a magnetization distribution along the magnetic tape. Finally, the distribution of the magnetization can be read out, reproducing the original signal. The magnetic tape is typically made by embedding magnetic particles in a plastic binder on polyester film tape. The commonly used magnetic particles are iron oxide or chromium oxide and metal particles with a size of about 0.5 micrometers. Analog recording was very popular in audio and video recording. In the past 20 years, however, tape recording has been gradually replaced by digital recording.


Digital recording
Instead of creating a magnetization distribution in analog recording, digital recording only needs two stable magnetic states, which are the +Ms and -Ms on the hysteresis loop. Examples of digital recording are floppy disks and HDDs.
Magneto-optical recording
Magneto-optical recording writes and reads optically. When writing, the magnetic medium is heated locally by a laser, which induces a rapid decrease of the coercive field. Then, a small magnetic field can be used to switch the magnetization. The reading process is based on the magneto-optical Kerr effect. The magnetic medium is typically an amorphous R-FeCo thin film (R being a rare earth element). Magneto-optical recording is not very popular. One famous example is the MiniDisc developed by Sony.
Domain propagation memory
Domain propagation memory is also called bubble memory. The basic idea is to control domain wall motion in a magnetic medium that is free of microstructure. Bubble refers to a stable cylindrical domain. Data is then recorded by the presence/absence of a bubble domain.

Technical details
Access method
Magnetic storage media can be classified as either sequential access memory or random access memory although in some cases the distinction is not perfectly clear. The access time can be defined as the average time needed to gain access to stored records. In the case of magnetic wire, the read/write head only covers a very small part of the recording surface at any given time. Accessing different parts of the wire involves winding the wire forward or backward until the point of interest is found. The time to access this point depends on how far away it is from the starting point. The case of ferrite-core memory is the opposite. Every core location is immediately accessible at any given time.
Hard disks and modern linear serpentine tape drives do not precisely fit into either category. Both have many parallel tracks across the width of the media and the read/write heads take time to switch between tracks and to scan within tracks. Different spots on the storage media take different amounts of time to access. For a hard disk this time is typically less than 10 ms, but tapes might take as much as 100 s.

Current usage
As of 2011, common uses of magnetic storage media are for computer data mass storage on hard disks and the recording of analog audio and video works on analog tape. Since much of audio and video production is moving to digital systems, the usage of hard disks is expected to increase at the expense of analog tape. Digital tape and tape libraries are popular for the high-capacity data storage of archives and backups. Floppy disks see some marginal usage, particularly in dealing with older computer systems and software. Magnetic storage is also widely used in some specific applications, such as bank cheques (MICR) and credit/debit cards (mag stripes).
Future

A new type of magnetic storage, called magnetoresistive random-access memory or MRAM, is being produced that stores data in magnetic bits based on the tunnel magnetoresistance (TMR) effect. Its advantages are non-volatility, low power usage, and good shock robustness. The 1st generation was developed by Everspin Technologies and utilized field-induced writing. The 2nd generation is being developed through two approaches: Thermal Assisted Switching (TAS), currently being developed by Crocus Technology, and Spin Torque Transfer (STT), on which several companies, including Crocus and Hynix, are working.

Submitted by: Gaurav Kocher
Class: B.Sc. 1st year (CS)
Roll no.: 4263
Assignment of Computer Science

Deadlock

A deadlock is a situation in which a process has acquired some resources and is waiting for additional resources that are held by another process, which in turn is waiting for resources held by the first. Hence, neither process can proceed, and the OS cannot make progress on either of them.
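This can be made concrete with two threads that each take one lock and then request the other's. A minimal Python sketch (the lock names and timeout values are illustrative; timeouts are used so the program terminates instead of hanging forever, mimicking a system that aborts one of the two waiters):

```python
import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()
report = {}

def worker(name, first, second, timeout):
    first.acquire()
    time.sleep(0.3)                      # let both workers grab their first lock
    # Request the second lock with a timeout; blocking forever would deadlock.
    got_it = second.acquire(timeout=timeout)
    report[name] = got_it
    if got_it:
        second.release()
    first.release()

# A takes lock_a then wants lock_b; B takes lock_b then wants lock_a.
a = threading.Thread(target=worker, args=("A", lock_a, lock_b, 1.0))
b = threading.Thread(target=worker, args=("B", lock_b, lock_a, 5.0))
a.start(); b.start(); a.join(); b.join()
print(report)  # A times out and backs off (the "victim"), which lets B finish
```

With no timeouts, both acquire calls would block forever: the classic circular wait.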



Necessary and sufficient deadlock conditions:
Coffman (1971) identified four conditions that must hold simultaneously for a deadlock to occur.

1. Mutual Exclusion Condition
The resources involved are non-shareable.
Explanation: At least one resource must be held in a non-shareable mode; that is, only one process at a time may have exclusive control of the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.

2. Hold and Wait Condition
A process holds at least one resource while waiting for additional resources.
Explanation: There must exist a process that is holding a resource already allocated to it while waiting for additional resources that are currently being held by other processes.

3. No Preemption Condition
Resources already allocated to a process cannot be preempted.
Explanation: Resources cannot be forcibly removed from a process; a resource is released only voluntarily by the process holding it, after that process has finished using it.

4. Circular Wait Condition
The processes in the system form a circular list or chain where each process in the list is waiting for a resource held by the next process in the list.
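The circular wait condition can be checked mechanically: build a wait-for graph, with an edge from each waiting process to the process holding the resource it wants, and look for a cycle. A minimal sketch (the function and process names are illustrative):

```python
def has_circular_wait(wait_for):
    """wait_for maps each process to the processes it is waiting on."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True              # a back edge: circular wait found
        on_stack.discard(node)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

print(has_circular_wait({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_circular_wait({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```

Deadlock-detection algorithms in real systems amount to finding such a cycle in the resource-allocation graph.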
As an example, consider a traffic gridlock at an intersection: each car holds its position (a resource) while waiting for the car in front of it to move, forming a closed chain in which no car can advance.

Submitted by: Deeksha Pundir
Class: B.Sc. 2nd year (4th sem.)
Roll no.: 2321

Buddy memory allocation




The buddy memory allocation technique is a memory allocation algorithm that divides memory into partitions to satisfy a memory request as suitably as possible. The system repeatedly splits memory into halves to give a best fit. According to Donald Knuth, the buddy system was invented in 1963 by Harry Markowitz, who won the 1990 Nobel Memorial Prize in Economics, and was first described by Kenneth C. Knowlton (published 1965). Buddy memory allocation is relatively easy to implement and supports limited but efficient splitting and coalescing of memory blocks.

How it works

There are various forms of the buddy system, but binary buddies, in which each block is subdivided into two smaller blocks, are the simplest and most common variety. Every memory block in this system has an order, where the order is an integer ranging from 0 to a specified upper limit. The blocks in each order have sizes proportional to 2^order, so that each block is exactly twice the size of blocks that are one order lower. Power-of-two block sizes make address computation simple, because all buddies are aligned on memory address boundaries that are powers of two. When a larger block is split, it is divided into two smaller blocks, and each smaller block becomes a unique buddy to the other. A split block can only be merged with its unique buddy block, which then reforms the larger block they were split from.
Starting off, the size of the smallest possible block is determined, i.e. the smallest memory block that can be allocated. If no lower limit existed at all (e.g., bit-sized allocations were possible), there would be a lot of memory and computational overhead for the system to keep track of which parts of the memory are allocated and unallocated. However, a rather low limit may be desirable, so that the average memory waste per allocation (concerning allocations that are, in size, not multiples of the smallest block) is minimized. Typically the lower limit would be small enough to minimize the average wasted space per allocation, but large enough to avoid excessive overhead. The smallest block size is then taken as the size of an order-0 block, so that all higher orders are expressed as power-of-two multiples of this size.
The programmer then has to decide on, or to write code to obtain, the highest possible order that can fit in the remaining available memory space. Since the total available memory in a given computer system may not be a power-of-two multiple of the minimum block size, the largest block size may not span the entire memory of the system. For instance, if the system had 2000K of physical memory and the order-0 block size was 4K, the upper limit on the order would be 8, since an order-8 block (256 order-0 blocks, 1024K) is the biggest block that will fit in memory. Consequently it is impossible to allocate the entire physical memory in a single chunk; the remaining 976K of memory would have to be allocated in smaller blocks.
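The order calculation in this example can be reproduced with a few lines (a sketch; the function name is illustrative):

```python
def highest_order(total_memory_k, order0_block_k):
    """Largest order whose block still fits in total_memory_k."""
    order = 0
    while order0_block_k * 2 ** (order + 1) <= total_memory_k:
        order += 1
    return order

# 2000K of memory with 4K order-0 blocks: an order-8 block (1024K) fits,
# but an order-9 block (2048K) does not.
print(highest_order(2000, 4))  # 8
```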

In practice

The following is an example of what happens when a program makes requests for memory. Suppose that in this system the smallest possible block is 64 kilobytes in size and the upper limit for the order is 4, which results in a largest possible allocatable block of 2^4 times 64K = 1024K. The following shows a possible state of the system after various memory requests.
In the table below, each row shows the blocks spanning the 1024K region from left to right. "2^n" denotes a free block of order n (2^n x 64K in size); "X: 2^n" denotes a block of order n allocated to program X.

Step  | Memory layout (1024K total)
------+-------------------------------------------------
1     | 2^4
2.1   | 2^3 | 2^3
2.2   | 2^2 | 2^2 | 2^3
2.3   | 2^1 | 2^1 | 2^2 | 2^3
2.4   | 2^0 | 2^0 | 2^1 | 2^2 | 2^3
2.5   | A: 2^0 | 2^0 | 2^1 | 2^2 | 2^3
3     | A: 2^0 | 2^0 | B: 2^1 | 2^2 | 2^3
4     | A: 2^0 | C: 2^0 | B: 2^1 | 2^2 | 2^3
5.1   | A: 2^0 | C: 2^0 | B: 2^1 | 2^1 | 2^1 | 2^3
5.2   | A: 2^0 | C: 2^0 | B: 2^1 | D: 2^1 | 2^1 | 2^3
6     | A: 2^0 | C: 2^0 | 2^1 | D: 2^1 | 2^1 | 2^3
7.1   | A: 2^0 | C: 2^0 | 2^1 | 2^1 | 2^1 | 2^3
7.2   | A: 2^0 | C: 2^0 | 2^1 | 2^2 | 2^3
8     | 2^0 | C: 2^0 | 2^1 | 2^2 | 2^3
9.1   | 2^0 | 2^0 | 2^1 | 2^2 | 2^3
9.2   | 2^1 | 2^1 | 2^2 | 2^3
9.3   | 2^2 | 2^2 | 2^3
9.4   | 2^3 | 2^3
9.5   | 2^4
This allocation could have occurred in the following manner:
  1. The initial situation.
  2. Program A requests memory 34K, order 0.
    1. No order 0 blocks are available, so an order 4 block is split, creating two order 3 blocks.
    2. Still no order 0 blocks available, so the first order 3 block is split, creating two order 2 blocks.
    3. Still no order 0 blocks available, so the first order 2 block is split, creating two order 1 blocks.
    4. Still no order 0 blocks available, so the first order 1 block is split, creating two order 0 blocks.
    5. Now an order 0 block is available, so it is allocated to A.
  3. Program B requests memory 66K, order 1. An order 1 block is available, so it is allocated to B.
  4. Program C requests memory 35K, order 0. An order 0 block is available, so it is allocated to C.
  5. Program D requests memory 67K, order 1.
    1. No order 1 blocks are available, so an order 2 block is split, creating two order 1 blocks.
    2. Now an order 1 block is available, so it is allocated to D.
  6. Program B releases its memory, freeing one order 1 block.
  7. Program D releases its memory.
    1. One order 1 block is freed.
    2. Since the buddy block of the newly freed block is also free, the two are merged into one order 2 block.
  8. Program A releases its memory, freeing one order 0 block.
  9. Program C releases its memory.
    1. One order 0 block is freed.
    2. Since the buddy block of the newly freed block is also free, the two are merged into one order 1 block.
    3. Since the buddy block of the newly formed order 1 block is also free, the two are merged into one order 2 block.
    4. Since the buddy block of the newly formed order 2 block is also free, the two are merged into one order 3 block.
    5. Since the buddy block of the newly formed order 3 block is also free, the two are merged into one order 4 block.
As you can see, what happens when a memory request is made is as follows:
  • If memory is to be allocated
  1. Look for a memory slot of a suitable size (the smallest 2^k block that is larger than or equal to the requested size)
    1. If it is found, it is allocated to the program
    2. If not, it tries to make a suitable memory slot. The system does so by trying the following:
      1. Split a free memory slot larger than the requested memory size in half
      2. If the lower limit is reached, then allocate that amount of memory
      3. Go back to step 1 (look for a memory slot of a suitable size)
      4. Repeat this process until a suitable memory slot is found
  • If memory is to be freed
  1. Free the block of memory
  2. Look at its buddy block - is it free too?
  3. If it is, combine the two, and go back to step 2 and repeat this process until either the upper limit is reached (all memory is freed), or until a non-free neighbour block is encountered
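The allocation and freeing procedures above can be sketched as a small allocator. This is an illustrative Python sketch, not a production implementation; addresses are kilobyte offsets from the start of the managed region, and the class and method names are my own:

```python
class BuddyAllocator:
    """Binary buddy allocator over a power-of-two region."""

    def __init__(self, total_k, min_block_k):
        self.min_block = min_block_k
        self.max_order = (total_k // min_block_k).bit_length() - 1
        # free_lists[k] holds start offsets of free blocks of order k
        self.free_lists = {k: [] for k in range(self.max_order + 1)}
        self.free_lists[self.max_order].append(0)
        self.allocated = {}                      # start offset -> order

    def alloc(self, size_k):
        order = 0                                # smallest order whose block fits
        while self.min_block << order < size_k:
            order += 1
        k = order                                # find a free block of order >= order
        while k <= self.max_order and not self.free_lists[k]:
            k += 1
        if k > self.max_order:
            return None                          # out of memory
        start = self.free_lists[k].pop()
        while k > order:                         # split, keeping upper halves free
            k -= 1
            self.free_lists[k].append(start + (self.min_block << k))
        self.allocated[start] = order
        return start

    def free(self, start):
        order = self.allocated.pop(start)
        while order < self.max_order:            # merge with the buddy while free
            buddy = start ^ (self.min_block << order)  # buddy address via XOR
            if buddy not in self.free_lists[order]:
                break
            self.free_lists[order].remove(buddy)
            start = min(start, buddy)
            order += 1
        self.free_lists[order].append(start)

# Replay the worked example: 1024K region, 64K minimum blocks.
buddy = BuddyAllocator(1024, 64)
a = buddy.alloc(34)   # order 0 -> offset 0
b = buddy.alloc(66)   # order 1 -> offset 128
c = buddy.alloc(35)   # order 0 -> offset 64
d = buddy.alloc(67)   # order 1 -> offset 256
for block in (b, d, a, c):
    buddy.free(block)
print(buddy.free_lists[4])  # [0] -- everything merged back into one 1024K block
```

Running the same request sequence as the table reproduces its final state: after all four programs release their memory, the region coalesces back into a single order-4 block.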

Implementation and efficiency

In comparison to other, simpler techniques such as dynamic allocation, the buddy memory system has little external fragmentation and allows for compaction of memory with little overhead. Freeing memory is fast, since at most one merge is needed per order level, so the number of merges is bounded by the highest order. Typically the buddy memory allocation system is implemented with the use of a binary tree to represent used or unused split memory blocks. The "buddy" of each block can be found with an exclusive OR of the block's address and the block's size.
However, there still exists the problem of internal fragmentation — memory wasted because the memory requested is a little larger than a small block, but a lot smaller than a large block. Because of the way the buddy memory allocation technique works, a program that requests 66K of memory would be allocated 128K, which results in a waste of 62K of memory. This problem can be solved by slab allocation, which may be layered on top of the more coarse buddy allocator to provide more fine-grained allocation.
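The rounding that causes this internal fragmentation is easy to check (a sketch; the function name is illustrative):

```python
def buddy_block_size(request_k, min_block_k=64):
    """Smallest power-of-two block (at least min_block_k) that fits the request."""
    size = min_block_k
    while size < request_k:
        size *= 2
    return size

allocated = buddy_block_size(66)
print(allocated, allocated - 66)  # 128 62 -> 62K of internal fragmentation
```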
One version of the buddy allocation algorithm was described in detail by Donald Knuth in volume 1 of The Art of Computer Programming. The Linux kernel also uses the buddy system, with further modifications to minimize external fragmentation, along with various other allocators to manage the memory within blocks.

jemalloc is a modern memory allocator that employs, among other techniques, buddy allocation.



Submitted by: Kulvinder Kaur