Memory Allocation Methods/Schemes in an Operating System:


Definition and Explanation:

In order to execute a process, the program code has to be transferred from secondary storage to main memory. This activity is called process loading.


In some systems, the reverse operation is also used, i.e. the transfer of a process from memory to secondary storage. This is done to make room for and execute another process. The overall effect is a 'swap' of two processes; hence the activity is called swapping.


Memory Allocation Schemes:


1. Single process allocation scheme.


2. Fixed partition allocation scheme.


3. Variable partition allocation scheme.


In general, main memory is partitioned into two main parts: one is reserved for the operating system, and the other holds one or more user processes.


1. Single Process Allocation Scheme/Single Absolute Partition:


Memory management is simple in a computer that runs only one process at a time. The process to be executed is loaded into the free area of memory. In general, the remaining memory space will not be used.


In the single process allocation scheme, only one process is in memory at any time. Such an arrangement is clearly limited in capability and is found primarily in simple systems such as computer games. Early MS-DOS systems also operated in this way.


2. Multiple Fixed Partition Allocation Scheme:


The multiple fixed partition allocation scheme divides memory into a number of separate fixed-size areas (partitions), each of which can hold one process.


For example, the memory might consist of three partitions of size 200K, 300K, and 400K respectively, each holding one process. In a practical operating system, the number of partitions and their sizes would be chosen depending on the amount of memory available and the sizes of the processes to be run.


For example, at least one partition should be large enough to hold the largest process to be run. Each partition will typically contain some unused space, so the total unused space can be considerable.


The occurrence of wasted space in this way is called internal fragmentation. It is called 'internal' because it occurs within the space (partition) allocated to the process in question.
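As a rough illustration (a minimal sketch in Python, with made-up partition and process sizes), the internal fragmentation in a fixed partition scheme is simply the unused space summed over all occupied partitions:

```python
# Minimal sketch: internal fragmentation in a fixed-partition scheme.
# Partition and process sizes (in KB) are illustrative, not from a real system.
partitions = [200, 300, 400]          # fixed partition sizes
processes  = [150, 220, 310]          # process i is loaded into partition i

internal_fragmentation = sum(
    part - proc for part, proc in zip(partitions, processes)
)
print(internal_fragmentation)         # (200-150) + (300-220) + (400-310) = 220 KB wasted
```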


Typically, the partitions would be set up with a range of partition sizes so that a mixture of large and small processes could be accommodated. This was one of the first schemes employed in multiprogramming systems.


Advantages:



  • It is relatively simple to implement. 
  • Because the partition boundaries are fixed, it simplifies the memory protection mechanism.



Disadvantages:



  • The fixed partition sizes can prevent a process from running if no partition of sufficient size is available. 
  • Internal fragmentation wastes space that collectively could accommodate another process.



3. Multiple Variable Partition Allocation Scheme:


The obvious solution to the problems of the fixed partition scheme is to allow the partitions to be variable in size at load time, i.e. to allocate to each process exactly the amount of memory it requires. Processes are loaded into consecutive areas until memory is filled or the remaining space is too small to accommodate another process.


If a new process, say D, is loaded, it is placed adjacent to the most recently loaded process, say C, provided the remaining space is sufficient. When a process terminates, the space it occupies is freed and becomes available for loading a new process. However, this reveals a problem inherent in the technique: as processes terminate and space is freed, the free space appears as a series of 'holes' between the active memory areas.


The operating system must attempt to load an incoming process into a space large enough to accommodate it. It is possible that a new process cannot be started because none of the holes is large enough even though the total free space is more than the required size. Distribution of free memory in this way is called external fragmentation.
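A small numerical sketch (with assumed hole sizes) shows how the total free space can exceed a request even though no single hole can satisfy it:

```python
# Minimal sketch: external fragmentation in a variable-partition scheme.
# Hole sizes (in KB) are illustrative.
holes = [60, 50, 40]     # free holes scattered between active processes
request = 120            # size of the incoming process

total_free = sum(holes)                       # 150 KB free in total...
fits = any(hole >= request for hole in holes) # ...but no single hole can hold 120 KB
print(total_free, fits)                       # 150 False -> external fragmentation
```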


1. Coalescing Holes:


A process that is adjacent to one or more holes may terminate and free its space allocation. This results in two or three adjacent holes, which can then be viewed and utilized as a single larger hole. This effect is called coalescing of holes.


Any memory management scheme incurs some operating overhead. In the variable partition scheme, the system must keep track of the position and size of each hole, and must take account of coalescing. A common method is to use a linked list that records the start address and size of each hole.
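A minimal sketch of this bookkeeping is shown below; it uses an ordinary Python list of (start, size) pairs rather than a true linked list, and the addresses and sizes are assumed for illustration:

```python
# Minimal sketch: free-hole list with coalescing, kept sorted by start address.
# Each hole is a (start, size) pair; addresses/sizes (in KB) are illustrative.

def free_block(holes, start, size):
    """Add a freed block to the hole list, merging with adjacent holes."""
    holes.append((start, size))
    holes.sort()                      # keep holes ordered by start address
    merged = [holes[0]]
    for s, sz in holes[1:]:
        prev_s, prev_sz = merged[-1]
        if prev_s + prev_sz == s:     # hole is adjacent to the previous one
            merged[-1] = (prev_s, prev_sz + sz)   # coalesce into a single hole
        else:
            merged.append((s, sz))
    return merged

holes = [(0, 100), (300, 50)]         # existing holes
holes = free_block(holes, 100, 200)   # process occupying 100..299 terminates
print(holes)                          # [(0, 350)] -- three holes coalesced into one
```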


2. Storage Placement Strategies:


When a new process has to be loaded under a variable partition scheme, it is necessary to select the 'best' hole in which to place it. In other words, how do we satisfy a request for an allocation of size n from a list of free holes?


A number of policies have been devised and are described below:


First Fit: The incoming process is placed in the main memory in the first available hole that is big enough to hold the process.

Best Fit: The incoming process is placed in main memory in the smallest hole that is big enough to hold the process. In this policy, the system must search the entire list, unless the list is ordered by size. It produces the smallest leftover hole.

Worst Fit: The incoming process is placed in main memory in the largest available hole. The system must also search the entire list. It produces the largest leftover hole.

In practice, first fit and best fit generally prove to be the most effective.
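As a minimal sketch (with illustrative hole sizes, not a real allocator), the three policies can be expressed as simple searches over a list of hole sizes:

```python
# Minimal sketch of the three placement policies over a list of hole sizes (KB).
def first_fit(holes, n):
    """Return the index of the first hole large enough for a request of size n."""
    for i, hole in enumerate(holes):
        if hole >= n:
            return i
    return None

def best_fit(holes, n):
    """Return the index of the smallest hole that is large enough (smallest leftover)."""
    candidates = [(hole, i) for i, hole in enumerate(holes) if hole >= n]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, n):
    """Return the index of the largest hole (largest leftover)."""
    candidates = [(hole, i) for i, hole in enumerate(holes) if hole >= n]
    return max(candidates)[1] if candidates else None

holes = [130, 60, 200, 90]            # illustrative hole sizes
print(first_fit(holes, 80))           # 0 -> the 130 KB hole
print(best_fit(holes, 80))            # 3 -> the 90 KB hole (smallest leftover)
print(worst_fit(holes, 80))           # 2 -> the 200 KB hole (largest leftover)
```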


The variable partition scheme has been used successfully in many systems and is a significant improvement on the fixed partition scheme. Its principal defect is the problem of fragmentation, which reduces the effective capacity of memory.


3. Compaction:


The fragmentation problem encountered in the previous method can be resolved by physically moving processes about in memory so as to close up the holes, bringing the free space together into a single large block. This method is called compaction.
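A minimal sketch of the idea, assuming processes are simply slid towards the low end of memory so that the remaining space forms one hole:

```python
# Minimal sketch: compaction slides all processes to the bottom of memory,
# leaving a single free hole at the top. Sizes/addresses (in KB) are illustrative.
MEMORY_SIZE = 1000

def compact(processes):
    """processes: list of (name, start, size). Returns relocated list plus one hole."""
    next_free = 0
    relocated = []
    for name, _start, size in sorted(processes, key=lambda p: p[1]):
        relocated.append((name, next_free, size))   # move process down to next_free
        next_free += size
    hole = (next_free, MEMORY_SIZE - next_free)     # all free space is now one block
    return relocated, hole

procs = [("A", 0, 200), ("B", 350, 150), ("C", 700, 100)]
print(compact(procs))
# ([('A', 0, 200), ('B', 200, 150), ('C', 350, 100)], (450, 550))
```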


Compaction makes the total free space more usable for incoming processes, but a considerable overhead is required to re-shuffle the current processes: all processes must be suspended while the re-shuffle takes place. Such activity is not feasible in a real-time system. Moreover, the operating system must decide how often compaction will be carried out.


In practice, the compaction scheme is usually not used, since its overheads and added complexity largely cancel out its advantage over non-compacted schemes.
