Wednesday, 5 March 2014

OUTPUT DEVICES


An output device is an electromechanical device that accepts data from a computer and translates it into a form suitable for use by the outside world (users).
  Output devices can be broadly classified into the following two types:
Soft-Copy Output
Output which is not produced on paper is known as soft-copy output. It is temporary in nature. For example, output displayed on a terminal screen, or spoken by a voice response system, is soft-copy output.
     Advantages:
     1)Easily searchable.
     2)Can be easily updated and modified.
     3)Can be redistributed quickly via e-mail.
     4)Requires very little physical space; a CD can hold thousands of documents.

Hard-Copy Output
    Output which is produced on paper is known as hard-copy output. It is permanent in nature. For example, output produced by printers and plotters on paper is hard-copy output.
     Advantages:
     1)Can be used even when the computer is not operating.
     2)Any unauthorized change can be detected easily.
     3)Some users prefer to read information on paper.
     4)Can be redistributed to people who do not have a computer.

    Types Of Output
    Some types of output are text, graphics, tactile, audio and video. Text consists of characters, numbers and punctuation marks, or any other symbols requiring one byte of computer storage, that are used to create words, sentences and paragraphs. Graphics are digital representations of non-text information such as drawings, charts, photographs and animations. Tactile output, such as raised-line drawings, may be useful for some individuals who are blind. Audio is music, speech, or any other sound. Video consists of images played back at speeds that provide the appearance of full motion.

  These examples of output devices also include input/output devices. Printers and visual displays are the most common types of output device for interfacing with people, but voice output is becoming increasingly available.

        Examples:

     1)Speakers

    2)Headphones

    3)Screen

    4)Printer

    5)Plotter

    6)Projector

    7)Radio

 
Submitted by: Satwinder Kaur
Class: B.Sc. 1st (Comp. Sci.)
Roll no.: 4213

 

Introduction to Memory


Memory in a computer is required for the storage and subsequent retrieval of instructions and data. A computer system uses a variety of devices for storing the instructions and data that are required for its operation.

  “The storage devices, along with the algorithms or information on how to control and manage them, constitute the memory system of a computer.”
Memory system

   A memory system is conceptually a simple system, yet it exhibits a wide range of technologies and types. The basic objective of a computer system is to increase the speed of computation. Likewise, the basic objective of a memory system is to provide fast, uninterrupted access by the processor to the memory, so that the processor can operate at the speed it is expected to work at.
   A memory system can be considered to consist of three groups of memories. These are:

Secondary/auxiliary memory

Auxiliary memory, in fact, is much larger in size than main memory but is slower. It normally stores system programs, instructions and data files. Secondary memory can also be used as an overflow memory in case the main memory capacity has been exceeded. Secondary memories cannot be accessed directly by the processor: first the information in these memories is transferred to main memory, and then it can be accessed in the same way as information in main memory.
   Characteristic Terms of Memory Systems

    The following terms are most commonly used for describing the comparative behaviour of various memory devices and technologies:

 1)Storage location: The possible storage locations are:
    a)Internal storage: Storage which is needed all the time and is located inside the system; it is also called primary storage.

    b)External storage: Storage which is located outside the CPU but connected to it. It is also called secondary storage.

2)Storage capacity:

    It is the amount of data that can be stored in the storage unit. Storage capacity is usually expressed in terms of bytes.

    a)Bit: A binary digit, either 0 or 1, representing the passive or active state of a component in an electric circuit.

    b)Nibble: A group of 4 bits is called a nibble.

    c)Byte: A group of 8 bits is called a byte. A byte is the smallest unit which can represent a data item or a character.

    3)Unit of transfer:

   The unit of transfer can be a word or a block. For internal memory, the unit of transfer is generally equal to the word length. For external memory, data are often transferred in much larger units than a word, and these are referred to as blocks.

    4)Performance: The performance parameters are:

    a)Access time: Access time is defined as the time required to locate and retrieve a record.
    b)Access rate: Access rate is defined as the number of read/write operations carried out per second.

5)Access method:

   Access method refers to the way the memory can be addressed or the recorded information can be accessed.

6)Physical type:

    A variety of physical types of memory have been employed. The two most common types today are semiconductor and magnetic memory.

 7)Physical characteristics: Various physical characteristics are:

   1)Destructive read-out:

     In memories of this type, the contents are wiped out as they are read. This is called destructive read-out. Magnetic-core memories are such memories.

    2)Dynamic memories: Memories of this type have the property that their contents tend to decay over time. Some semiconductor memories come into this category. The solution is periodic refreshing.

    3)Volatile memories: These are memories in which information is lost when power is turned off. Semiconductor memories are of this type. One solution to the problem is the use of a UPS (uninterruptible power supply).

        Submitted by: Kamaljeet Kaur
        Class: B.Sc. (Comp. Sci.) 1st
        Roll no.: 4214

Sunday, 2 March 2014

Page Replacement

Page replacement: When the number of available real memory frames on the free list becomes low, a page stealer is invoked. A page stealer moves through the Page Frame Table (PFT), looking for pages to steal.
The PFT includes flags to signal which pages have been referenced and which have been modified. If the page stealer encounters a page that has been referenced, it does not steal that page, but instead, resets the reference flag for that page. The next time the clock hand (page stealer) passes that page and the reference bit is still off, that page is stolen. A page that was not referenced in the first pass is immediately stolen.
The modify flag indicates that the data on that page has been changed since it was brought into memory. When a page is to be stolen, if the modify flag is set, a page out call is made before stealing the page. Pages that are part of working segments are written to paging space; persistent segments are written to disk.
Figure 1. Page Replacement Example. The illustration consists of excerpts from three tables: the page frame table, with four columns containing the real address, the segment type, a reference flag and a modify flag; the free list table, containing the addresses of all free pages; and the resulting page frame table after all of the free addresses have been removed.

 In addition to page replacement, the algorithm keeps track of both new page faults (pages referenced for the first time) and repage faults (references to pages that have been paged out), using a history buffer that contains the IDs of the most recent page faults. It then tries to balance file (persistent data) page-outs with computational (working storage or program text) page-outs.
When a process exits, its working storage is released immediately and its associated memory frames are put back on the free list. However, any files that the process may have opened can stay in memory.
Page replacement is done directly within the scope of the thread if running on a uniprocessor. On a multiprocessor system, page replacement is done through the lrud kernel process, which is dispatched to a CPU when the minfree threshold has been reached. Starting with AIX® 4.3.3, the lrud kernel process is multithreaded with one thread per memory pool. Real memory is split into evenly sized memory pools based on the number of CPUs and the amount of RAM. The number of memory pools on a system can be determined by running the vmstat -v command.
You can use the vmo -r -o mempools=<number of memory pools> command to change the number of memory pools that will be configured at system boot. The values of the minfree and maxfree parameters in the vmo command output are the sums of the minfree and maxfree parameters for each memory pool.
1.  RAND (Random)
  • choose any page to replace at random
  • assumes the next page to be referenced is random
  • can test other algorithms against random page replacement
2. MIN (minimum) or OPT (optimal):
  • Belady's optimal algorithm gives the minimum number of page faults
  • replace the page that will be referenced furthest in the future or not at all
  • problem: we cannot implement it, because we cannot predict the future
  • this is the best case
  • can use it to compare other algorithms against
3. FIFO (First In, First Out):
  • select the page that has been in main memory the longest
  • use a queue (data structure)
  • problem: although a page has been present in memory for a long time, it may still be heavily used
  • Windows NT and Windows 2000 use this algorithm, as a local page replacement algorithm (described separately), with the pool method (described in more detail separately)
    • create a pool of the pages that have been marked for removal
    • manage the pool in the same way as the rest of the pages
    • if a new page is needed, take a page from the pool
    • if a page in the pool is referenced again before being replaced in memory, it is simply reactivated
    • this is relatively efficient
4. LRU (Least Recently Used):
  • choose the page that was last referenced the longest time ago
  • assumes recent behavior is a good predictor of the near future
  • can manage LRU with a list called the LRU stack or the paging stack (data structure)
  • In the LRU stack, the first entry describes the page referenced least recently and the last entry describes the page referenced most recently.
  • if a page is referenced, move it to the end of the list
  • problem: requires updating on every page referenced
  • too slow to be used in practice for managing the page table, but many systems use approximations to LRU
5. NRU (Not Recently Used):
  • as an approximation to LRU, select one of the pages that has not been used recently (as opposed to identifying exactly which one has not been used for the longest amount of time)
  • keep one bit called the "used bit" or "reference bit", where 1 => used recently and 0 => not used recently
  • variants of this scheme are used in many operating systems, including UNIX and Macintosh
  • Most variations use a scan pointer and go through the page frames one by one, in some order, looking for a page that has not been used recently.
                          
Submitted by: Diksha
Class: B.Sc. 2nd year (C.S.)
Roll no.: 2319
Topic: Page Replacement

Sunday, 23 February 2014

Deadlock



A deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does.

In a transactional database, a deadlock happens when two processes, each within its own transaction, update two rows of information but in the opposite order. For example, process A updates row 1 then row 2 in the exact timeframe that process B updates row 2 then row 1. Process A cannot finish updating row 2 until process B is finished, but process B cannot finish updating row 1 until process A is finished. No matter how much time is allowed to pass, this situation will never resolve itself, and because of this, database management systems will typically kill the transaction of the process that has done the least amount of work.

In an operating system, a deadlock is a situation which occurs when a process or thread enters a waiting state because a resource requested is being held by another waiting process, which in turn is waiting for another resource. If a process is unable to change its state indefinitely because the resources requested by it are being used by another waiting process, then the system is said to be in a deadlock.

Deadlock is a common problem in multiprocessing systems, parallel computing and distributed systems, where software and hardware locks are used to handle shared resources and implement process synchronization.

In telecommunication systems, deadlocks occur mainly due to lost or corrupt signals instead of resource contention.

Examples

Any deadlock situation can be compared to the classic "chicken or egg" problem. It can also be considered a paradoxical "Catch-22" situation. A real-world example is an illogical statute passed by the Kansas legislature in the early 20th century, which stated:
“When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone.”

A simple computer-based example is as follows. Suppose a computer has three CD drives and three processes. Each of the three processes holds one of the drives. If each process now requests another drive, the three processes will be in a deadlock. Each process will be waiting for the "CD drive released" event, which can be only caused by one of the other waiting processes. Thus, it results in a circular chain.
Necessary condition

A deadlock situation can arise if all of the following conditions hold simultaneously in a system:

    Mutual Exclusion: At least one resource must be held in a non-sharable mode. Only one process can use the resource at any given instant of time.
    Hold and Wait or Resource Holding: A process is currently holding at least one resource and requesting additional resources which are being held by other processes.
    No Preemption: The operating system must not de-allocate resources once they have been allocated; they must be released by the holding process voluntarily.
    Circular Wait: A process must be waiting for a resource which is being held by another process, which in turn is waiting for the first process to release the resource. In general, there is a set of waiting processes, P = {P1, P2, ..., PN}, such that P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3 and so on until PN is waiting for a resource held by P1.

These four conditions are known as the Coffman conditions from their first description in a 1971 article by Edward G. Coffman, Jr. Unfulfillment of any one of these conditions is enough to preclude a deadlock from occurring.
Avoiding database deadlocks

An effective way to avoid database deadlocks is to follow this approach from the Oracle Locking Survival Guide:

    “Application developers can eliminate all risk of enqueue deadlocks by ensuring that transactions requiring multiple resources always lock them in the same order.”

This single sentence needs some explanation. First, it highlights the fact that processes must be inside a transaction for deadlocks to happen. Note that some database systems can be configured to cascade deletes, which creates an implicit transaction that can then cause deadlocks. Also, some DBMS vendors offer row-level locking, a type of record locking that greatly reduces the chance of deadlocks, as opposed to page-level locking, which creates many times more locks.

Second, "multiple resources" means more than one row in one or more tables. An example of locking in the same order would be to process all INSERTs first, all UPDATEs second, and all DELETEs last; within each of these, handle all parent-table changes before child-table changes; and process table changes in the same order, such as alphabetically or ordered by an ID or account number.

Third, eliminating all risk of deadlocks is difficult to achieve, as the DBMS may have automatic lock-escalation features that raise row-level locks into page locks, which can in turn be escalated to table locks. Although the risk of experiencing a deadlock will not go to zero, since deadlocks tend to happen more on large, high-volume, complex systems, it can be greatly reduced, and when required the software can be enhanced to retry transactions when a deadlock is detected.

Fourth, deadlocks can result in data loss if the software is not developed to use transactions on every interaction with a DBMS; such data loss is difficult to locate and creates unexpected errors and problems.

Deadlocks are a challenging problem to correct, as they can result in data loss, are difficult to isolate, create unexpected problems, and are time-consuming to fix. Modifying every section of software code in a large system that accesses the database so that it always locks resources in the same order, when the order is inconsistent, takes significant resources and testing to implement. That, and the use of the strong word "dead" in front of "lock", are some of the reasons why deadlocks have a "this is a big problem" reputation.
Deadlock handling

Most current operating systems cannot prevent a deadlock from occurring. When a deadlock occurs, different operating systems respond to them in different non-standard manners. Most approaches work by preventing one of the four Coffman conditions from occurring, especially the fourth one. Major approaches are as follows.
Ignoring deadlock

In this approach, it is assumed that a deadlock will never occur. This is also an application of the Ostrich algorithm. This approach was initially used by MINIX and UNIX. It is used when the time intervals between occurrences of deadlocks are large and the data loss incurred each time is tolerable.
Detection

Under deadlock detection, deadlocks are allowed to occur. Then the state of the system is examined to detect that a deadlock has occurred, and it is subsequently corrected. An algorithm is employed that tracks resource allocation and process states, and rolls back and restarts one or more of the processes in order to remove the detected deadlock. Detecting a deadlock that has already occurred is easily possible, since the resources that each process has locked and/or currently requested are known to the resource scheduler of the operating system.

Deadlock detection techniques include, but are not limited to, model checking. This approach constructs a finite state-model on which it performs a progress analysis and finds all possible terminal sets in the model. These then each represent a deadlock.

After a deadlock is detected, it can be corrected by using one of the following methods:

    Process Termination: One or more processes involved in the deadlock may be aborted. We can choose to abort all processes involved in the deadlock; this ensures that the deadlock is resolved with certainty and speed, but the expense is high, as partial computations will be lost. Or we can choose to abort one process at a time until the deadlock is resolved; this approach has high overhead, because after each abort an algorithm must determine whether the system is still in deadlock. Several factors must be considered when choosing a candidate for termination, such as the priority and age of the process.
    Resource Preemption: Resources allocated to various processes may be successively preempted and allocated to other processes until the deadlock is broken.

Prevention

Deadlock prevention works by preventing one of the four Coffman conditions from occurring.

    Removing the mutual exclusion condition means that no process will have exclusive access to a resource. This proves impossible for resources that cannot be spooled. But even with spooled resources, deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.
    The hold and wait or resource holding conditions may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations). This advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to request resources only when they hold none: first they must release all their currently held resources before requesting all the resources they will need from scratch. This too is often impractical, because resources may be allocated and remain unused for long periods. Also, a process requiring a popular resource may have to wait indefinitely, as such a resource may always be allocated to some process, resulting in resource starvation. (These algorithms, such as serializing tokens, are known as the all-or-none algorithms.)
    The no preemption condition may also be difficult or impossible to avoid as a process has to be able to have a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, inability to enforce preemption may interfere with a priority algorithm. Preemption of a "locked out" resource generally implies a rollback, and is to be avoided, since it is very costly in overhead. Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control.
    The final condition is the circular wait condition. Approaches that avoid circular waits include disabling interrupts during critical sections and using a hierarchy to determine a partial ordering of resources. If no obvious hierarchy exists, even the memory address of resources has been used to determine ordering and resources are requested in the increasing order of the enumeration. Dijkstra's solution can also be used.

Avoidance

Deadlock can be avoided if certain information about processes is available to the operating system before allocation of resources, such as which resources a process will consume in its lifetime. For every resource request, the system checks whether granting the request would move the system into an unsafe state, meaning a state that could result in deadlock. The system then only grants requests that will lead to safe states. In order for the system to be able to determine whether the next state will be safe or unsafe, it must know in advance at any time the resources currently available, the resources currently allocated to each process, and the maximum resources each process could request in the future.


 SUBMITTED BY - SUSHMA PANCHVAL

Friday, 21 February 2014

Banker's algorithm

Banker's algorithm:-  The Banker's algorithm is a resource allocation and deadlock avoidance algorithm developed by Edsger Dijkstra that tests for safety by simulating the allocation of predetermined maximum possible amounts of all resources, and then makes an "s-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether allocation should be allowed to continue.
The algorithm was developed in the design process for the THE operating system and originally described  in EWD108. The name is by analogy with the way that bankers account for liquidity constraints.


1. Limitations:-
Specifically, the algorithm needs to know how much of each resource a process could possibly request. In most systems, this information is unavailable, making it impossible to implement the Banker's algorithm. Also, it is unrealistic to assume that the number of processes is static, since in most systems the number of processes varies dynamically. Moreover, the requirement that a process will eventually release all its resources is sufficient for the correctness of the algorithm; however, it is not sufficient for a practical system. Waiting for hours for resources to be released is usually not acceptable.


2. Algorithm:-
The Banker's algorithm is a step-by-step process. Its parts are as follows:
(1.) Resources:-
For the Banker's algorithm to work, it needs to know three things:
    How much of each resource each process could possibly request[CLAIMS]
    How much of each resource each process is currently holding[ALLOCATED]
    How much of each resource the system currently has available[AVAILABLE]

Claims: The Banker's Algorithm derives its name from the fact that this algorithm could be used in a banking system to ensure that the bank does not run out of resources, because the bank would never allocate its money in such a way that it can no longer satisfy the needs of all its customers.
Available: A vector of length m indicates the number of available resources of each type. If Available[j] = k, there are k instances of resource type Rj available.

    Allocation: An n×m matrix defines the number of resources of each type currently allocated to each process. If Allocation[i,j] = k, then process Pi is currently allocated k instances of resource type Rj.
    Max: An n×m matrix defines the maximum demand of each process. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
Example:-
Assuming that the system distinguishes between four types of resources, the following is an example of how those resources could be distributed. Note that this example shows the system at an instant before a new request for resources arrives. Also, the types and number of resources are abstracted. Real systems, for example, would deal with much larger quantities of each resource.

Total resources in system:
A B C D
6 5 7 6
Available system resources are:
A B C D
3 1 1 2
Processes (currently allocated resources):
   A B C D
P1 1 2 2 1
P2 1 0 3 3
P3 1 2 1 0
Processes (maximum resources):
   A B C D
P1 3 3 2 2
P2 1 2 3 4
P3 1 3 5 0
Need = maximum resources - currently allocated resources
Processes (need):
   A B C D
P1 2 1 0 1
P2 0 2 0 1
P3 0 1 4 0

(2.) Safe and Unsafe States:-
A state (as in the example above) is considered safe if it is possible for all processes to finish executing (terminate). Since the system cannot know when a process will terminate, or how many resources it will have requested by then, the system assumes that all processes will eventually attempt to acquire their stated maximum resources and terminate soon afterward.

Given that assumption, the algorithm determines if a state is safe by trying to find a hypothetical set of requests by the processes that would allow each to acquire its maximum resources and then terminate. Any state where no such set exists is an unsafe state.

Example:-
We can show that the state given in the previous example is a safe state by showing that it is possible for each process to acquire its maximum resources and then terminate.
  P1 acquires 2 A, 1 B and 1 D more resources, achieving its maximum
        [available resources: <3 1 1 2> - <2 1 0 1> = <1 0 1 1>]
        The system now still has 1 A, no B, 1 C and 1 D resource available
    The system then terminates P1, regaining all of its maximum resources
        [available resources: <1 0 1 1> + <3 3 2 2> = <4 3 3 3>]
        The system now has 4 A, 3 B, 3 C and 3 D resources available
    P2 acquires 0 A, 2 B, 0 C and 1 D more resources, then terminates
        [available resources: <4 3 3 3> - <0 2 0 1> + <1 2 3 4> = <5 3 6 6>]
        The system now has 5 A, 3 B, 6 C and 6 D resources available
    P3 acquires 0 A, 1 B and 4 C more resources, then terminates
        [available resources: <5 3 6 6> - <0 1 4 0> + <1 3 5 0> = <6 5 7 6>]
        The system now has all of its resources: 6 A, 5 B, 7 C and 6 D
    Because an order exists in which every process can terminate, this state is safe.

(3.) Requests:-
When the system receives a request for resources, it runs the Banker's algorithm to determine if it is safe to grant the request. The algorithm is fairly straightforward once the distinction between safe and unsafe states is understood.
Can the request be granted?
        If not, the request is impossible and must either be denied or put on a waiting list
    Assume that the request is granted
    Is the new state safe?
        If so grant the request
        If not, either deny the request or put it on a waiting list
Whether the system denies or postpones an impossible or unsafe request is a decision specific to the operating system.

Example:-
    Request 1: P1 requests one additional unit of resource A. If the request were granted, the new state of the system would be:
     A B C D
Free 2 1 1 2
    Processes (currently allocated resources):
     A B C D
P1   2 2 2 1
P2   1 0 3 3
P3   1 2 1 0
    Processes (maximum resources):
     A B C D
P1   3 3 2 2
P2   1 2 3 4
P3   1 3 5 0
    Determine if this new state is safe
        P1 can acquire 1 A, 1 B and 1 D resources and terminate
        Then, P2 can acquire 2 B and 1 D resources and terminate
        Finally, P3 can acquire 1 B and 4 C resources and terminate
    The new state is safe, so the request can be granted.
    Request 2: P2 requests one additional unit of resource B. Assuming the request were granted, the new state would be:
    Available system resources:
     A B C D
Free 3 0 1 2
    Processes (currently allocated resources):
     A B C D
P1   1 2 2 1
P2   1 1 3 3
P3   1 2 1 0
    Processes (maximum resources):
     A B C D
P1   3 3 2 2
P2   1 2 3 4
P3   1 3 5 0
    Is this state safe? Each of P1, P2 and P3 still needs at least one more unit of resource B, but none is available.
        P1 is unable to acquire enough B resources
        P2 is unable to acquire enough B resources
        P3 is unable to acquire enough B resources
    No process can acquire enough resources to terminate, so this state is not safe and the request must be denied (or postponed).
/* PROGRAM TO IMPLEMENT BANKER'S ALGORITHM
   --------------------------------------- */
#include <stdio.h>
int curr[5][5], maxclaim[5][5], avl[5];
int alloc[5] = {0,0,0,0,0};
int maxres[5], running[5], safe=0;
int count = 0, i, j, exec, r, p,k=1;
 int main()
{
    printf("\nEnter the number of processes: ");
    scanf("%d",&p);
    for(i=0;i<p;i++)
    {
        running[i]=1;
        count++;
    }
    printf("\nEnter the number of resources: ");
    scanf("%d",&r);
    for(i=0;i<r;i++)
    {
        printf("\nEnter the total instances of resource %d: ",k++);
        scanf("%d",&maxres[i]);
    }
    printf("\nEnter maximum resource table:\n");
    for(i=0;i<p;i++)
    {
        for(j=0;j<r;j++)
        {
            scanf("%d",&maxclaim[i][j]);
        }
    }
    printf("\nEnter allocated resource table:\n");
    for(i=0;i<p;i++)
    {
        for(j=0;j<r;j++)
        {
            scanf("%d",&curr[i][j]);
        }
    }
    printf("\nTotal instances of each resource: ");
    for(i=0;i<r;i++)
    {
        printf("\t%d",maxres[i]);
    }
    printf("\nThe allocated resource table:\n");
    for(i=0;i<p;i++)
    {
        for(j=0;j<r;j++)
        {
            printf("\t%d",curr[i][j]);
        }
        printf("\n");
    }
    printf("\nThe maximum resource table:\n");
    for(i=0;i<p;i++)
    {
        for(j=0;j<r;j++)
        {
            printf("\t%d",maxclaim[i][j]);
        }
        printf("\n");
    }
    for(i=0;i<p;i++)
    {
        for(j=0;j<r;j++)
        {
            alloc[j]+=curr[i][j];
        }
    }
    printf("\nAllocated resources:");
    for(i=0;i<r;i++)
    {
        printf("\t%d",alloc[i]);
    }
    for(i=0;i<r;i++)
    {
        avl[i]=maxres[i]-alloc[i];
    }
    printf("\nAvailable resources:");
    for(i=0;i<r;i++)
    {
        printf("\t%d",avl[i]);
    }
    printf("\n");
    //Main procedure goes below to check for unsafe state.
    while(count!=0)
    {
        safe=0;
        for(i=0;i<p;i++)
        {
            if(running[i])
            {
                exec=1;
                for(j=0;j<r;j++)
                {
                    if(maxclaim[i][j] - curr[i][j] > avl[j]){
                        exec=0;
                        break;
                    }
                }
                if(exec)
                {
                    printf("\nProcess%d is executing\n",i+1);
                    running[i]=0;
                    count--;
                    safe=1;
                    for(j=0;j<r;j++) {
                        avl[j]+=curr[i][j];
                    }
                    break;
                }
            }
        }
        if(!safe)
        {
            printf("\nThe processes are in an unsafe state.\n");
            break;
        }
    }
    if(count == 0)
    {
        /* Every process was able to run to completion; the order in
           which "Process N is executing" was printed above is a safe
           sequence. */
        printf("\nThe processes are in a safe state.\n");
    }
}

Disadvantages of the Banker's Algorithm:-

    It requires the number of processes to be fixed; no additional processes can start while it is executing.
    It requires that the number of resources remain fixed; no resource may go down for any reason without the possibility of deadlock occurring.
    It allows all requests to be granted in finite time, but one year is a finite amount of time.
    Similarly, all of the processes guarantee that the resources loaned to them will be repaid in a finite amount of time. While this prevents absolute starvation, some pretty hungry processes might develop.
    All processes must know and state their maximum resource needs in advance.


SUBMITTED BY: MANDEEP KAUR
CLASS: B.Sc. (CS) 2ND YEAR
ROLL NO.: 2304