In this section we discuss a memory management problem in which files with given sizes have to be placed onto discs with given capacities. The aim is to minimise the number of discs used. The problem is the same as the bin-packing problem, which can be found among the problems of the section on approximation algorithms in the book Introduction to Algorithms. Scheduling theory also uses this model in connection with minimising the number of processors. The input is the number of files $n$ and an array $t$ containing the sizes of the files to be stored, for whose elements $0 < t_i \le 1$ holds. The files have to be placed onto the discs, taking into consideration that files cannot be divided and the capacity of each disc is one unit.
The given problem is NP-complete, therefore different approximation algorithms are used in practice. The input data of these algorithms are $n$, the number of files, and $t$, a vector with the sizes of the files to be placed. The output data are the number of discs needed (discnumber) and the level array of the discs.
According to Linear Fit (LF), file $f_i$ is placed onto disc $D_i$, so every file gets a separate disc.
Both the running time and the space requirement of this algorithm are $\Theta(n)$. If, however, reading the sizes and printing the levels are carried out inside the loop, then the space requirement can be decreased to $O(1)$.
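As an illustration, LF can be sketched in Python. This is an illustrative sketch, not the book's pseudocode; the function name `linear_fit` is chosen here, and sizes are assumed to lie in $(0, 1]$ with unit-capacity discs.

```python
def linear_fit(sizes):
    """Linear Fit: file i is placed onto disc i, so each disc holds one file.

    `sizes` holds the file sizes, each in (0, 1]; every disc has unit capacity.
    Returns the level array of the discs; its length is the disc number.
    """
    return list(sizes)

# In a streaming setting the sizes can be read and the levels printed one by
# one inside the loop, which reduces the extra space requirement to O(1).
```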
Next Fit (NF) packs the files onto the disc next in line as long as they fit; when the next file does not fit, it continues with a new disc.
Both the running time and the space requirement of this algorithm are $\Theta(n)$. If, however, reading the sizes and writing the levels are carried out inside the loop, then the space requirement can be decreased to $O(1)$, but the running time remains $\Theta(n)$.
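A minimal Python sketch of NF (illustrative; it does not reproduce the book's original pseudocode, and assumes sizes in $(0, 1]$ and unit-capacity discs):

```python
def next_fit(sizes):
    """Next Fit: keep packing onto the current disc; open a new one
    when the next file does not fit."""
    levels = [0.0]                    # levels[j] is the sum of sizes on disc j
    for s in sizes:
        if levels[-1] + s > 1.0:      # the file does not fit on the current disc
            levels.append(s)          # open a new disc for it
        else:
            levels[-1] += s
    return levels
```

Only the level of the current disc is ever inspected, which is why an online variant needs just $O(1)$ extra space.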
First Fit (FF) packs each file onto the first disc onto which it fits.
The space requirement of this algorithm is $\Theta(n)$, while its time requirement is $O(n^2)$. If, for example, every file size is 1, then the running time of the algorithm is $\Theta(n^2)$.
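A straightforward Python sketch of FF (illustrative, with the same assumptions as above; this is the naive quadratic implementation, not an optimised one):

```python
def first_fit(sizes):
    """First Fit: place each file onto the lowest-indexed disc it fits on."""
    levels = []
    for s in sizes:
        for j, level in enumerate(levels):
            if level + s <= 1.0:      # first disc with enough remaining capacity
                levels[j] += s
                break
        else:                         # no opened disc has room: open a new one
            levels.append(s)
    return levels
```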
Best Fit (BF) places each file onto the first disc, among those onto which it fits, whose remaining capacity is the smallest.
The space requirement of this algorithm is $\Theta(n)$, while its time requirement is $O(n^2)$.
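A possible Python sketch of BF (illustrative; ties are broken in favour of the disc with the smaller index, as in the description above):

```python
def best_fit(sizes):
    """Best Fit: among the discs the file fits on, choose the fullest one,
    i.e. the one with the smallest remaining capacity (first such on a tie)."""
    levels = []
    for s in sizes:
        fitting = [j for j, level in enumerate(levels) if level + s <= 1.0]
        if fitting:
            j = max(fitting, key=lambda j: levels[j])  # fullest fitting disc
            levels[j] += s
        else:
            levels.append(s)          # no disc has room: open a new one
    return levels
```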
Pairwise Fit (PF) creates a pair from the first and the last element of the array of sizes and places the two files onto either one or two discs, according to the sum of the two sizes; then it continues with the next pair. Two auxiliary variables are used: bind is the index of the first element of the current pair, and eind is the index of the second element of the current pair.
The space requirement of this algorithm is $\Theta(n)$, while its time requirement is $\Theta(n)$. If, however, reading the sizes and writing the levels of the discs are carried out online, then the space requirement will only be $O(1)$.
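A possible Python sketch of PF, keeping the auxiliary index names bind and eind used above (illustrative, assuming sizes in $(0, 1]$):

```python
def pairwise_fit(sizes):
    """Pairwise Fit: pair the first and last unprocessed files; the pair
    shares one disc if the two sizes sum to at most one, else gets two."""
    levels = []
    bind, eind = 0, len(sizes) - 1    # indices of the current pair
    while bind < eind:
        a, b = sizes[bind], sizes[eind]
        if a + b <= 1.0:
            levels.append(a + b)      # one disc for the pair
        else:
            levels.extend([a, b])     # two discs for the pair
        bind, eind = bind + 1, eind - 1
    if bind == eind:                  # odd number of files: middle file alone
        levels.append(sizes[bind])
    return levels
```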
The following five algorithms consist of two parts: first they sort the files into decreasing order of size, and then they place the sorted files. Next Fit Decreasing (NFD) operates according to NF after sorting. Therefore, both its space and time requirements are made up of those of the applied sorting algorithm and of NF.
First Fit Decreasing (FFD) operates according to First Fit (FF) after sorting, therefore its space requirement is $\Theta(n)$ and its time requirement is $O(n^2)$.
Best Fit Decreasing (BFD) operates according to Best Fit (BF) after sorting, therefore its space requirement is $\Theta(n)$ and its time requirement is $O(n^2)$.
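The decreasing variants differ from their counterparts only in a preliminary sort. A sketch of this idea (illustrative; the wrapper name `decreasing` is chosen here, and a minimal First Fit is included so the example is self-contained):

```python
def first_fit(sizes):
    """First Fit: place each file onto the lowest-indexed disc it fits on."""
    levels = []
    for s in sizes:
        for j, level in enumerate(levels):
            if level + s <= 1.0:
                levels[j] += s
                break
        else:
            levels.append(s)
    return levels

def decreasing(fit, sizes):
    """Run a placing heuristic on the sizes sorted into decreasing order;
    e.g. decreasing(first_fit, t) corresponds to FFD."""
    return fit(sorted(sizes, reverse=True))
```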
Pairwise Fit Decreasing (PFD) creates pairs of the first and the last files one after another, and places each pair onto the same disc if possible (if the sum of their sizes is not bigger than one). If it is not possible, then it places the given pair onto two discs.
Quick Fit Decreasing (QFD) places the first file of the sorted list onto the next empty disc, and then adds to it, as long as possible, the biggest files that still fit (searched for from the end of the sorted array of sizes). The auxiliary variables used are: bind, the index of the first file to be examined, and eind, the index of the last file to be examined.
The space requirement of this program is $\Theta(n)$, and its running time in the worst case is $O(n^2)$, but in practice, in the case of sizes of uniform distribution, it is considerably smaller.
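One possible reading of QFD in Python (an illustrative sketch under the interpretation above: while even the smallest remaining file fits, the largest remaining file that fits is added; the helper name `quick_fit_decreasing` is chosen here):

```python
def quick_fit_decreasing(sizes):
    """Quick Fit Decreasing: open a disc with the largest unplaced file, then
    repeatedly add the largest remaining file that still fits."""
    remaining = sorted(sizes, reverse=True)   # decreasing order of size
    levels = []
    while remaining:
        level = remaining.pop(0)              # largest file opens a new disc
        # while even the smallest remaining file fits, add the largest that fits
        while remaining and level + remaining[-1] <= 1.0:
            i = next(k for k, s in enumerate(remaining) if level + s <= 1.0)
            level += remaining.pop(i)
        levels.append(level)
    return levels
```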
This algorithm places each file, independently of the others, onto each of the discs, so it produces $n^n$ placings, from which it chooses an optimal one. Since this algorithm produces all the different packings (supposing that two placings are the same if they allocate the same files to each of the discs), it certainly finds one of the optimal placings.
This algorithm produces all permutations of the files (the number of which is $n!$), and then it places the resulting lists using NF.
That this algorithm is optimal can be proved as follows. Consider an arbitrary file system and one of its optimal packings $P$. Produce a permutation $\pi$ of the files based on $P$ by listing the files placed onto the discs $D_1, D_2, \dots$ in this order. If the list given by permutation $\pi$ is placed by the NF algorithm, then we get either $P$ or another optimal placing (certain files might be placed onto discs with smaller indices).
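The argument above can be turned into a (very slow) exact algorithm; a self-contained sketch that tries NF on all orders of the files (the name `optimum_by_permutations` is chosen here, and it is feasible only for small $n$):

```python
from itertools import permutations

def next_fit(sizes):
    """Next Fit, used as the placing subroutine."""
    levels = [0.0]
    for s in sizes:
        if levels[-1] + s > 1.0:
            levels.append(s)
        else:
            levels[-1] += s
    return levels

def optimum_by_permutations(sizes):
    """Try NF on every permutation of the files and return the fewest discs.
    By the optimality argument above this equals OPT(t)."""
    return min(len(next_fit(list(p))) for p in permutations(sizes))
```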
This algorithm tries to decrease the time requirement of SP by placing 'large' files (whose size is bigger than 0.5) onto separate discs, and tries to place only the others (the 'small' ones) onto all the discs. Therefore, it produces only $n^{n_1}$ placings instead of $n^n$, where $n_1$ is the number of small files.
This algorithm also takes into consideration that two small files always fit onto a disc, while two large ones never do. Therefore, denoting the number of large files by $n_2$ and that of the small ones by $n_1$, at most $n_2 + \lceil n_1/2 \rceil$ discs are needed. So first we place the large files onto separate discs, and then the small ones onto discs of the number given above. If, for instance, a large proportion of the files is large, then the number of placings that have to be produced decreases considerably.
Under certain conditions list $L$ can be split into lists $L_1$ and $L_2$ so that $\mathrm{OPT}(L) = \mathrm{OPT}(L_1) + \mathrm{OPT}(L_2)$ (in general only $\mathrm{OPT}(L) \le \mathrm{OPT}(L_1) + \mathrm{OPT}(L_2)$ holds; in these cases the formula holds with equality). The advantage of such a split is that shorter lists can usually be packed optimally in a shorter time than the original list. For example, let us assume that the list contains two files $a$ and $b$ with $t_a + t_b = 1$. Let $L_1 = \langle a, b \rangle$ and let $L_2$ contain the remaining files. In this case $\mathrm{OPT}(L_1) = 1$ and $\mathrm{OPT}(L) = 1 + \mathrm{OPT}(L_2)$. To prove this, consider the two discs onto which the elements of list $L_1$ have been packed by an optimal algorithm. Since next to them there can only be files whose sizes sum to at most $(1 - t_a) + (1 - t_b)$, i.e., 1, those accompanying files fit together onto one disc, so there is an optimal packing in which $a$ and $b$ share a single disc. Examining an ordered list at both ends at the same time, we can sort out the pairs of files whose sizes sum to exactly one in $O(n)$ time. After that we order the remaining list; let the ordered list be $s$. If, for example, $s_1 + s_n > 1$, then the first file will be packed onto a separate disc by every placing, so $L_1 = \langle s_1 \rangle$ and $L_2 = \langle s_2, \dots, s_n \rangle$ is a good choice. If $s_1 + s_n \le 1$ holds for the ordered list, then let $s_j$ be the largest element of the list that can be added to $s_1$ without exceeding one. In this case $L_1 = \langle s_1, s_j \rangle$ is a suitable choice, and list $L_2$ is two elements shorter than list $L$. With the help of the last two operations the lists can often be shortened considerably (in favourable cases to such an extent that the optimal number of discs can easily be obtained for both lists). Naturally, the list remaining after shortening has to be processed as well, for example with one of the previous algorithms.
Algorithms based on upper and lower estimations operate as follows. Using one of the approximation algorithms, they produce an upper estimation A of $\mathrm{OPT}(t)$, and then they give a lower estimation for the value of $\mathrm{OPT}(t)$ as well. For this, among others, those properties of packings are suitable according to which two large files cannot be placed onto the same disc, and the sum of the sizes cannot be more than 1 on any of the discs. Therefore, both the number of the large files and the sum of the sizes of the files rounded up, and so also their maximum $\mathrm{MAX}(t)$, are suitable as lower estimations. If $A(t) = \mathrm{MAX}(t)$, then algorithm A has produced an optimal packing. Otherwise the computation can be continued with one of the time-consuming optimum-searching algorithms.
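The lower estimation $\mathrm{MAX}(t)$ described above can be sketched directly (illustrative; the helper name `lower_estimation` is chosen here):

```python
import math

def lower_estimation(sizes):
    """MAX(t): the larger of two simple lower bounds on OPT(t): the number of
    large files (size > 0.5) and the total size rounded up."""
    large = sum(1 for s in sizes if s > 0.5)   # no two large files share a disc
    volume = math.ceil(sum(sizes))             # each disc holds total size <= 1
    return max(large, volume)
```

If an approximation algorithm uses exactly `lower_estimation(t)` discs for a list, its packing is certainly optimal.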
If there are several algorithms known for a scheduling (or other) problem, then a simple way of comparing the algorithms is to examine whether the values of the input parameters can be chosen so that the chosen output value is more favourable in the case of one algorithm than in the case of the other.
In the case of the placing algorithms discussed above, the number of discs allocated to size array $t$ by algorithms A and B is denoted by $A(t)$ and $B(t)$, and we examine whether there are arrays $t'$ and $t''$ for which $A(t') < B(t')$ and $A(t'') > B(t'')$ hold. We answer this question for the ten approximation algorithms defined above and for the optimal one. It follows from the definition of the optimal algorithm that $\mathrm{OPT}(t) \le A(t)$ holds for each $t$ and each algorithm A. In the following, the elements of the arrays in the examples will be twentieths.
Consider the following seven lists $t_1, t_2, \dots, t_7$.
The packing results of these lists are summarised in Figure 17.18.
As shown in Figure 17.18, LF needs four discs for the first list, while the others need fewer than that. In addition, the row of list $t_1$ shows that FFD, BFD, PFD, QFD and OPT need fewer discs than NF, FF, BF, PF and NFD. Of course, there is no list for which any of the algorithms would use fewer discs than OPT. It is also obvious that there is no list for which LF would use fewer discs than any of the other ten algorithms.
These facts are shown in Figure 17.19. In the figure, symbols X in the main diagonal indicate that the algorithms are not compared to themselves. Dashes in the first column indicate that for the algorithm belonging to the given row there is no list which would be processed using more discs by this algorithm than by the algorithm belonging to the given column, i.e., LF. Dashes in the last column show that there is no list for which the optimal algorithm would use more discs than any of the examined algorithms. Finally, 1's indicate that for list $t_1$ the algorithm belonging to the row of the given cell needs more discs than the algorithm belonging to the column of the given cell.
Since the first row and the first column of the table are filled, we do not deal with algorithm LF any more.
For list $t_2$, NF, FF, BF and OPT use two discs, while the other six algorithms use three. Therefore we write 2's at the intersections of the columns of the 'winners' and the rows of the 'losers' (but we do not overwrite the 1's already given at the intersections of PF and OPT, and of NFD and OPT). Since both the row and the column of OPT have now been filled in, OPT is not dealt with any more in this section. The third list is disadvantageous for PF and PFD, therefore we write 3's in the empty cells of their rows. This list also shows that NF can be worse than FF, BF can be worse than FF, and BFD worse than FFD and QFD.
The fourth list can be processed optimally only by BF and BFD, i.e., using two discs. Therefore we can write 4's in the empty cells of the columns of these two algorithms. For the fifth list NFD, FFD, BFD and QFD use only two discs, while NF, FF, BF, PF and PFD use three. So we can fill the suitable cells with 5's. The 'losers' of list $t_6$ are NF and NFD, therefore we write 6's in the empty cells of their rows. PF performs better than FF when processing list $t_7$. The following theorem helps us fill in the rest of the cells.
Theorem. For every list $t$ it holds that $\mathrm{FF}(t) \le \mathrm{NF}(t)$.

Proof. We use induction on the length of the list. Let $t = \langle t_1, \dots, t_n \rangle$, and for $1 \le i \le n$ let $N_i$ and $F_i$ denote the number of discs used by NF and FF, respectively, after processing $t_i$. Let $l^N_i$ be the level of the last disc according to NF, that is, the sum of the sizes of the files placed onto the non-empty disc with the highest index when NF has just processed $t_i$; similarly, let $l^F_i$ be the level of the last disc according to FF. We prove the following invariant property for each $i$: either $F_i < N_i$, or $F_i = N_i$ and $l^F_i \le l^N_i$. If $i = 1$, then $F_1 = N_1 = 1$ and $l^F_1 = l^N_1 = t_1$, i.e., the second part of the invariant property holds. Suppose now that the property holds for $i - 1$. If the first part of the invariant property holds before packing $t_i$, then after packing it either the inequality $F_i < N_i$ stays true, or the numbers of discs become equal, and then $l^F_i \le l^N_i$ holds. If the numbers of discs were equal before packing $t_i$, then after placing it either the number of discs of FF is smaller, or the numbers of discs are equal and the level of the last disc of FF is at most as big as that of NF.
A similar statement can be proved for the pairs of algorithms NF-BF, NFD-FFD and NFD-BFD. Using an induction one could also prove that FFD and QFD need the same number of discs for every list. The previous statements are summarised in Figure 17.20.
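The theorem and its relatives can also be checked empirically. A small randomised test (illustrative; sizes are drawn as twentieths, as in the examples above, and both heuristics are redefined here so the block is self-contained):

```python
import random

def next_fit(sizes):
    levels = [0.0]
    for s in sizes:
        if levels[-1] + s > 1.0:
            levels.append(s)
        else:
            levels[-1] += s
    return levels

def first_fit(sizes):
    levels = []
    for s in sizes:
        for j, level in enumerate(levels):
            if level + s <= 1.0:
                levels[j] += s
                break
        else:
            levels.append(s)
    return levels

random.seed(42)
for _ in range(2000):
    t = [random.randint(1, 20) / 20 for _ in range(random.randint(1, 12))]
    assert len(first_fit(t)) <= len(next_fit(t))   # FF never uses more discs
```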
The relative efficiency of two algorithms A and B is often described by the ratio of the values of the chosen efficiency measure, in this case the relative number of discs $A(t)/B(t)$. Several different characteristics can be defined using this ratio. These can be divided into two groups: the first group contains the quantities describing the worst case, while the other contains those describing the usual case. Only the worst case is discussed here (the discussion of the usual case is generally much more difficult). Let $\mathcal{L}_n$ denote the set of lists of $n$ elements and let $\mathcal{L}$ be the set of all lists, i.e., $\mathcal{L} = \bigcup_{n \ge 1} \mathcal{L}_n$.
Let $\mathcal{A}$ be the set of algorithms determining the number of discs, that is, of algorithms assigning a nonnegative real number $A(t)$ to each list $t \in \mathcal{L}$, so implementing a mapping $\mathcal{L} \rightarrow \mathbb{R}_{\ge 0}$.
Let $\mathcal{O}$ be the set of the optimal algorithms, that is, of algorithms assigning the optimal number of discs to each list, and let OPT be an element of this set (i.e., an algorithm that for each list $t$ gives the number of discs sufficient and necessary to place the files belonging to $t$).
Let $\mathcal{B}$ be the set of the approximation algorithms, that is, of algorithms A for which $A(t) \ge \mathrm{OPT}(t)$ holds for each list $t$, and there is a list $t'$ for which $A(t') > \mathrm{OPT}(t')$.
Let $\mathcal{C}$ be the set of the estimation algorithms, that is, of algorithms A for which $A(t) \le \mathrm{OPT}(t)$ holds for each list $t$, and there is a list $t'$ for which $A(t') < \mathrm{OPT}(t')$. Let $\mathcal{L}^{(n)}$ denote the set of lists for which the optimal number of discs is $n$, i.e., $\mathcal{L}^{(n)} = \{ t \in \mathcal{L} : \mathrm{OPT}(t) = n \}$. In the following we discuss only algorithms contained in $\mathcal{B}$. We define the error function, the error (absolute error) and the asymptotic error of algorithms A and B as follows:
$$R_{A,B}(n) = \sup_{t \in \mathcal{L}^{(n)}} \frac{A(t)}{B(t)}, \qquad R_{A,B} = \sup_{n \ge 1} R_{A,B}(n), \qquad R^{\infty}_{A,B} = \limsup_{n \to \infty} R_{A,B}(n).$$
These quantities are especially interesting if $B \in \mathcal{O}$. In this case, to be as simple as possible, we omit B from the notation, and speak about the error function, the error and the asymptotic error of algorithm A, denoted by $R_A(n)$, $R_A$ and $R^{\infty}_A$. The characteristic values of the NF file placing algorithm are known.
It is known that $\mathrm{NF}(t) \le 2\,\mathrm{OPT}(t) - 1$ holds for every list $t$. Furthermore, if $k \ge 1$, then there are lists $t$ with $\mathrm{OPT}(t) = k$ for which $\mathrm{NF}(t) = 2k - 1$.
From this statement follow the error function, the absolute error and the asymptotic error of the NF placing algorithm: $R_{\mathrm{NF}}(n) = (2n-1)/n$, $R_{\mathrm{NF}} = 2$ and $R^{\infty}_{\mathrm{NF}} = 2$.
The following statement refers to the worst case of the FF and BF file packing algorithms.
Furthermore, for arbitrarily large values of $\mathrm{OPT}(t)$ there are lists $t'$ and $t''$ for which FF and BF, respectively, reach this bound.
For the algorithm FF the following stronger upper bound also holds.
From this statement follow the asymptotic error of FF and BF, and a good estimation of their error function.
If $n$ is divisible by 10, then the upper and lower limits in inequality (17.40) are equal, thus in this case $R_{\mathrm{FF}}(n) = 1.7$.
17.4-1 Prove, by an example, that the absolute error of the FF and BF algorithms is at least 1.7.
17.4-2 Implement the basic idea of the FF and BF algorithms so that the running time is $O(n \log n)$.
17.4-3 Complete Figure 17.20.
Smooth process selection for an empty partition
Modify the algorithm in a way that, instead of giving priority to processes with waiting time above the given threshold, it selects the process with the highest waiting time among the processes fitting into the partition. Prove the correctness of the algorithm and give an upper bound for the waiting time of a process.
Partition search algorithms with restricted scope
Restrict the algorithms to only search for their optimal partition among the next $k$ suitable ones following the last split partition, where $k$ is a fixed positive number. Which algorithms do we get in the $k = 1$ and $k = \infty$ cases? Simulate both the original and the new algorithms, and compare their performance regarding execution time, average number of waiting processes and memory fragmentation.
Avoiding page replacement anomaly
Classify the discussed page replacement algorithms based on whether they guarantee avoidance of the anomaly or not.
Optimal page replacement algorithm
Prove that for each demanding page replacement algorithm A, memory size $m$ and reference string $R$, $f_{\mathrm{OPT}}(m, R) \le f_A(m, R)$ holds, where $f_A(m, R)$ denotes the number of page replacements performed by algorithm A.
Plan (and implement) an algorithm with which it can occur that a given problem takes longer to solve on more processors than on fewer ones.
Error of file placing algorithms
Give upper and lower limits for the error of the BF, BFD, FF and FFD algorithms.
The definition of page replacement algorithms by a Mealy automaton is based on the summarising article by Denning [ 87 ], and on the textbooks by Ferenc Gécseg and István Peák [ 131 ] and by Hopcroft, Motwani and Ullman [ 166 ].
The anomaly experienced in practice when using the FIFO page replacement algorithm was first described by László Bélády [ 43 ] in 1966; after that, in a study written together with Shedler in 1969, he proved in a constructive way that the degree of the anomaly can approach two arbitrarily closely. The conjecture that it cannot actually reach two can be found in the same article.
In 2002, Péter Formai and Antal Iványi [ 115 ] showed that the ratio of the numbers of page replacements needed on a bigger and on a smaller computer can be arbitrarily large.
Analysis of the interleaved memory derives from the article [ 175 ].
The bound can be found in D. S. Johnson's PhD dissertation [ 184 ]; the precise Theorem 17.9 comes from [ 177 ]. The upper limit for FF and BF is a result of Johnson, Demers, Ullman, Garey and Graham [ 185 ], while the proof of the sharpness of the limit is due to [ 177 ] and [ 178 ]. The source of the upper limit for FFD and BFD is [ 185 ], and that of the limit for NFD is [ 27 ]. The proof of the NP-completeness of the file packing problem, tracing it back to the subset sum problem, can be found in the chapter on approximation algorithms of Introduction to Algorithms [ 73 ].