17.3. Anomalies

When the first page replacement algorithms were tested at the IBM T. J. Watson Research Center at the beginning of the 1960's, it caused great surprise that in certain cases increasing the size of the memory leads to an increase in the running time of programs. In computer systems, the phenomenon that using more resources leads to worse results is called an anomaly. Let us give three concrete examples. The first one is connected with the FIFO page replacement algorithm, the second one with the List-Scheduling algorithm used for processor scheduling, and the third one with parallel program execution in computers with interleaved memories.

Note that in two of the three examples a very rare phenomenon can be observed, namely that the degree of the anomaly can be arbitrarily large.

17.3.1. Page replacement

Let m, M and n be positive integers (m < M), k a non-negative integer, and Λ a finite alphabet. Λ^k is the set of the words over Λ with length k, and Λ^* the set of the words over Λ with finite length. Let m be the number of page frames in the main memory of a small, and M that of a big computer. The FIFO algorithm has already been defined in the previous section. Since in this subsection only the FIFO page replacement algorithm is discussed, the sign of the page replacement algorithm can be omitted from the notations.

Let us denote the number of page faults occurring while the reference string R is processed using m page frames by f(R, m). The event when m < M but f(R, m) < f(R, M) is called an anomaly. In this case the quotient f(R, M)/f(R, m) is the degree of the anomaly. The efficiency of the algorithm is measured by the paging speed, which is defined as

  P(R, m) = f(R, m)/n

for a finite reference string R ∈ Λ^n, while for an infinite reference string it is defined by

  P(R, m) = lim_{n→∞} f(R_n, m)/n ,

where R_n ∈ Λ^n denotes the prefix of R with length n.

Let, for example, m = 3 and let R = (1 2 3 4)* be an infinite, circular reference sequence. In this case FIFO replaces the pages in a circular manner, every reference causes a page fault, and so P(R, 3) = 1.

If we process the reference string R = 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, then we will get 9 page faults in the case of m = 3, and 10 ones in the case of M = 4, therefore the degree of the anomaly is 10/9.
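These fault counts are easy to verify by simulation; the following minimal sketch (the function name is our own choice) reproduces them.

def fifo_faults(refs, frames):
    """Count the page faults of FIFO on the reference string refs
    using the given number of page frames."""
    mem = []                        # front of the list = oldest page
    faults = 0
    for page in refs:
        if page not in mem:         # page fault
            faults += 1
            if len(mem) == frames:  # memory full: replace the oldest page
                mem.pop(0)
            mem.append(page)
    return faults

R = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(R, 3), fifo_faults(R, 4))   # prints: 9 10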

Bélády, Nelson and Shedler gave the following necessary and sufficient condition for the existence of the anomaly.

Theorem 17.1 There exists a reference sequence R for which the FIFO page replacement algorithm causes an anomaly if, and only if, m < M < 2m − 1.

The following has been proved as far as the degree of the anomaly is concerned.

Theorem 17.2 If m < M < 2m − 1, then for every ε > 0 there exists a reference sequence R for which

  f(R, M)/f(R, m) > 2 − ε .

Bélády, Nelson and Shedler formulated the following conjecture.

Conjecture 17.3 For every reference sequence R and memory sizes m < M

  f(R, M)/f(R, m) ≤ 2 .

This conjecture can be refuted e. g. by the following example. Let R = U V^k, where U and V are suitable finite reference sequences and V^k denotes k repetitions of V. If the sequence U is executed using a physical memory with m = 5 page frames, then there will be 29 page faults, and the processing results in the control state (7, 3, 6, 2, 5). After that every execution of reference sequence V causes 7 new page faults and results in the same control state.

If the reference string is executed using a main memory with M page frames, then we get 14 page faults for U, and after that every execution of reference sequence V causes 21 new page faults and results in the same control state.
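With k denoting the number of executions of V appended after U, the degree of the anomaly is therefore

  f(R, M)/f(R, m) = (14 + 21k)/(29 + 7k) → 3   (k → ∞) .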

Choosing k large enough the degree of the anomaly already exceeds 2; as we increase the value of k, the degree of the anomaly goes to three. Even more than that is true: according to the following theorem of Péter Fornai and Antal Iványi, the degree of the anomaly can be arbitrarily large.

Theorem 17.4 For any large number L it is possible to give parameters m, M and R so that the following holds:

  f(R, M)/f(R, m) > L .

17.3.2. Scheduling with lists

Suppose that we would like to execute n tasks on m processors. During the execution, the precedence order of the tasks has to be taken into consideration. The processors operate according to First Fit, and the execution is carried out according to a given list L. E. G. Coffman Jr. wrote in 1976 that increasing the number of processors, decreasing the execution times of the tasks, reducing the precedence restrictions, and altering the list can each cause an anomaly. Let the vector of the execution times of the tasks be denoted by t, the precedence relation by ≤, the list by L, and the maximal finishing time of all the tasks executed with a common list L on m equivalent processors by C_max.

The degree of the anomaly is measured by the ratio of the maximal finishing time obtained with the new parameters and the maximal finishing time obtained with the original parameters. First let us show four examples for the different types of the anomaly.

Example 17.4 Consider the following task system τ_1 and its scheduling obtained using list L on m = 3 equivalent processors. In this case C_max = 12 (see Figure 17.1), which can be easily proved to be the optimal value.

Figure 17.1. Task system τ_1 and its optimal schedule.

Example 17.5 Schedule the previous task system for m = 3 equivalent processors with list L'. In this case for the resulting scheduling we get C_max = 14 (see Figure 17.2).

Figure 17.2. Scheduling of the task system τ_1 with list L'.

Example 17.6 Schedule the task system τ_1 with list L for m' = 4 processors. It results in C_max = 15 (see Figure 17.3).

Figure 17.3. Scheduling of the task system τ_1 using list L on m' = 4 processors.

Example 17.7 Decrement the execution times by one in τ_1. Schedule the resulting task system τ_2 with list L for m = 3 processors. The result is C_max = 13 (see Figure 17.4).

Figure 17.4. Scheduling of τ_2 with list L on m = 3 processors.

Example 17.8 Reduce the precedence restrictions: omit the edges (T_4, T_5) and (T_4, T_6) from the graph. The result of scheduling the resulting task system τ_3 can be seen in Figure 17.5: C_max = 16.

Figure 17.5. Scheduling task system τ_3 on m = 3 processors.
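The figures carrying τ_1 are not reproduced here; the values quoted in Examples 17.4–17.8 are consistent with Graham's classical nine-task anomaly example, so the following minimal sketch (an assumption used only for illustration, with names of our own choosing) runs greedy list scheduling on that system and reproduces all five maximal finishing times.

import heapq

def list_schedule(durations, preds, order, m):
    """Greedy list scheduling: whenever a processor becomes idle, it takes
    the first unstarted task of the list `order` all of whose predecessors
    have finished; returns the maximal finishing time C_max."""
    finished, started, running = set(), set(), []
    free, t = m, 0
    while len(finished) < len(order):
        for task in order:                      # start ready tasks in list order
            if free == 0:
                break
            if task not in started and preds.get(task, set()) <= finished:
                heapq.heappush(running, (t + durations[task], task))
                started.add(task)
                free -= 1
        t, done = heapq.heappop(running)        # advance to the next completion
        finished.add(done)
        free += 1
        while running and running[0][0] == t:   # handle simultaneous completions
            finished.add(heapq.heappop(running)[1])
            free += 1
    return t

# Graham's classical example, assumed here to play the role of tau_1:
t1 = {1: 3, 2: 2, 3: 2, 4: 2, 5: 4, 6: 4, 7: 4, 8: 4, 9: 9}
prec = {9: {1}, 5: {4}, 6: {4}, 7: {4}, 8: {4}}   # T1 < T9, T4 < T5,...,T8
L = [1, 2, 3, 4, 5, 6, 7, 8, 9]
Lp = [1, 2, 4, 5, 6, 3, 9, 7, 8]                  # the altered list L'
t2 = {j: v - 1 for j, v in t1.items()}            # execution times decremented
prec3 = {9: {1}, 7: {4}, 8: {4}}                  # edges (T4,T5), (T4,T6) omitted

print(list_schedule(t1, prec, L, 3))     # 12 (Example 17.4)
print(list_schedule(t1, prec, Lp, 3))    # 14 (Example 17.5)
print(list_schedule(t1, prec, L, 4))     # 15 (Example 17.6)
print(list_schedule(t2, prec, L, 3))     # 13 (Example 17.7)
print(list_schedule(t1, prec3, L, 3))    # 16 (Example 17.8)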

The following example shows that the increase of the maximal finishing time in the worst case can be caused not only by a wrong choice of the list.

Example 17.9 Let task system τ and its optimal scheduling S_OPT on two processors be as shown by Figure 17.6.

Figure 17.6. Task system τ and its optimal scheduling S_OPT on two processors.

We can easily prove that if the execution times are decremented by one, then in the resulting task system τ' we cannot reach a better result than the scheduling shown in Figure 17.7, with any list.

Figure 17.7. Optimal list scheduling of task system τ'.


After these examples we give a relative limit reflecting the effects of the scheduling parameters. Suppose that for given task systems τ = (T, ≤, t) and τ' = (T, ≤', t') we have t' ≤ t and ≤' ⊆ ≤. Task system τ is scheduled with the help of list L on m equivalent processors, while τ' is scheduled with list L' on m' equivalent processors. For the resulting schedulings S and S' let C = C_max(S) and C' = C_max(S').

Theorem 17.5 (scheduling limit) With the above conditions

  C'/C ≤ 1 + (m − 1)/m' .   (17.8)

Proof. Consider the scheduling diagram D' belonging to the parameters with apostrophes (that is, to S'). Let the definition of two subsets A and B of the interval [0, C') be the following: A = {x : every processor is working at x}, B = [0, C') \ A. Note that both sets are unions of disjoint, half-open (closed from the left and open from the right) intervals.

Denote by s'_j and f'_j the starting and the finishing time of task T_j in S'. Let T_{j_1} be a task the execution of which ends in the instant of time C' according to D' (i. e., f'_{j_1} = C'). In this case there are two possibilities: the starting point s'_{j_1} of task T_{j_1} is either an inner point of B, or not.

  1. If s'_{j_1} is an inner point of B, then according to the definition of B there is a processor which does not work in an interval (s'_{j_1} − ε, s'_{j_1}) for some ε > 0. This can only occur if there is a task T_{j_2} for which T_{j_2} ≤' T_{j_1} and f'_{j_2} = s'_{j_1} (case a).

  2. If s'_{j_1} is not an inner point of B, then either s'_{j_1} = 0 (case b), or s'_{j_1} > 0. If B has got a smaller element than s'_{j_1} (case c), then let x_1 = sup{x ∈ B : x < s'_{j_1}}, else let x_1 = 0 (case d). In case c it follows from the construction of A and B that some processor is idle immediately before x_1, so there is a processor on which a task T_{j_2} can be found the execution of which is still in progress in this time interval, and for which T_{j_2} ≤' T_{j_1}.

Summarising the two cases we can say that either there is a task T_{j_2} with T_{j_2} ≤' T_{j_1} whose execution covers the idle instants immediately preceding s'_{j_1} (case a or c), or no instant of B precedes s'_{j_1} (case b or d).

Repeating this procedure we get a task chain for which it holds that every idle instant preceding the start of a chain element is covered by the execution of a further chain element. This proves that there are tasks for which

  T_{j_r} ≤' T_{j_{r−1}} ≤' ⋯ ≤' T_{j_1} ,

and in every instant of time belonging to B there is a processor which is working, and is executing one of the elements of the chain. It yields

  total idle time in D' ≤ (m' − 1)(t'_{j_1} + ⋯ + t'_{j_r}) ,   (17.9)

where the left hand side refers to all the empty periods in D' (in every idle instant at most m' − 1 processors can be empty, and all such instants are covered by the chain).

Based on (17.9) and t' ≤ t, therefore,

  m'C' = (t'_1 + ⋯ + t'_n) + total idle time ≤ (t_1 + ⋯ + t_n) + (m' − 1)(t_{j_1} + ⋯ + t_{j_r}) .   (17.10)

Since

  t_1 + ⋯ + t_n ≤ mC   (17.11)

(in S all the work is done by m processors within time C), and

  t_{j_1} + ⋯ + t_{j_r} ≤ C   (17.12)

(because of ≤' ⊆ ≤ the elements of the chain must be executed one after the other in S, too), based on (17.10), (17.11) and (17.12) we get

  m'C' ≤ mC + (m' − 1)C ,

implying C'/C ≤ (m + m' − 1)/m' = 1 + (m − 1)/m'.
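As a quick check of (17.8) against the examples above: altering L, t or ≤ leaves m' = m = 3, so the bound is 1 + 2/3, while increasing the number of processors gives m' = 4 and the bound 1 + 2/4; the ratios of Examples 17.5–17.8 to the optimum C = 12 indeed satisfy

  14/12 ≤ 5/3 ,  13/12 ≤ 5/3 ,  16/12 ≤ 5/3 ,  15/12 ≤ 3/2 .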

The following examples show us not only that the limit in the theorem is the best possible, but also that we can get the given limit (at least asymptotically) by altering any one of the parameters.

Example 17.10 In this example the list is changed, ≤ is empty, and m' = m is arbitrary. The execution times of the n tasks are chosen as shown in Figure 17.8.

If this task system τ_4 is scheduled for m processors with list L = (T_1, …, T_n), then we get the optimal scheduling S_7(τ_4) which can be seen in Figure 17.8.

Figure 17.8. Scheduling S_7(τ_4) belonging to list L = (T_1, …, T_n).


If we use the list L' instead of list L, then we get the scheduling S_8(τ_4) which can be seen in Figure 17.9.

Figure 17.9. Scheduling S_8(τ_4) belonging to list L'.


In this case the two maximal finishing times satisfy C'/C = 2 − 1/m = 1 + (m − 1)/m' (as m' = m); which means that altering the list results in the theorem holding with equality, i.e., the expression on the right hand side of the ≤ sign in (17.8) cannot be decreased.
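The concrete τ_4 is given only in Figure 17.8; purely as an illustration (a standard construction we assume here, not necessarily the book's τ_4), a family with exactly this behaviour consists of m(m − 1) unit-time tasks plus one task of length m, without precedence constraints. Using the list_schedule sketch given after Example 17.8:

m = 3
dur = {"X": m}                                    # one long task of length m
dur.update({"u%d" % i: 1 for i in range(m * (m - 1))})
units = ["u%d" % i for i in range(m * (m - 1))]   # m(m-1) unit-time tasks
L_opt = ["X"] + units        # long task first: C_max = m (optimal)
L_bad = units + ["X"]        # long task last:  C_max = 2m - 1
print(list_schedule(dur, {}, L_opt, m))   # 3
print(list_schedule(dur, {}, L_bad, m))   # 5; ratio 5/3 = 2 - 1/m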

Example 17.11 In this example we decrease the execution times. We use list L in both cases. Here as well as in the remaining part of the chapter ε denotes an arbitrarily small positive number. The original execution times are stored in vector t, the new ones in vector t'; the concrete values can be read off Figure 17.11.

The common precedence graph of task system τ_5 and of its modification τ_5' is shown in Figure 17.10, while the optimal scheduling S_9(τ_5) and the scheduling S_10(τ_5') can be seen in Figure 17.11. Here the ratio of the two maximal finishing times tends to 2 − 1/m = 1 + (m − 1)/m' (since m' = m) as ε → 0. This means that altering the execution times we can approach the limit in the theorem arbitrarily closely.

Figure 17.10. Identical graph of task systems τ_5 and τ_5'.

Figure 17.11. Schedulings S_9(τ_5) and S_10(τ_5').

Example 17.12 In this example we reduce the precedence restrictions. The precedence graph of task system τ_6 is shown in Figure 17.12.

Figure 17.12. Graph of the task system τ_6.


The execution times of the tasks, together with the optimal scheduling of τ_6 belonging to list L, can be seen in Figure 17.13.

Figure 17.13. Optimal scheduling S_11(τ_6).


Omitting all the precedence restrictions from τ_6 we get the task system τ_6'. Scheduling S_12(τ_6') is shown in Figure 17.14.

Figure 17.14. Scheduling S_12(τ_6').

Example 17.13 This time the number of the processors will be increased from m to m'. The graph of task system τ_7 is shown by Figure 17.15, and the running times can be read off the schedulings in Figure 17.16 and Figure 17.17.

Figure 17.15. Precedence graph of task system τ_7.


The optimal schedulings of the task system on m and on m' processors are shown by Figure 17.16 and Figure 17.17.

Figure 17.16. The optimal scheduling S_13(τ_7) (a = mm' − m' + 3, b = a + 1, c = mm' − m' + m + 1).


Figure 17.17. The optimal scheduling S_14(τ_7) (a = mm' − 2m' + m + 2, b = m + 1, c = 2m + 2, d = mm' − 2m' + 2m + 2, e = m + m' + 1, f = mm' − m' + m + 1).


Comparing the maximal finishing times C_max(S_13) and C_max(S_14) we get a ratio which gives, as the parameters grow, again the required asymptotic value 1 + (m − 1)/m'.

With the help of these examples we proved the following statement.

Theorem 17.6 (sharpness of the scheduling limit) The limit given for the relative speed in (17.8) is asymptotically sharp for the changing of any of the parameters m, t, ≤ and L.

17.3.3. Parallel processing with interleaved memory

We describe the parallel algorithm modelling the operation of computers with interleaved memory in a popular way: the sequences of dumplings model the reference strings, the giants the processors, and the bites the commands executed simultaneously. The dwarfs D_1, D_2, …, D_n cook dumplings of d different sorts. Every dwarf creates an infinite sequence of dumplings.

These sequences are usually given as random variables with possible values 1, 2, …, d. For the following analysis of the extreme cases deterministic sequences are used.

The dumpling-eating giants eat the dumplings; the units of the eating are the bites.

The appetite of the different giants is characterised by the parameter h: giant G_h is able to eat at most h dumplings of the same sort in one bite.

Giant G_h eats in the following way. He chooses for his first bite, from the beginning of the dumpling sequence of dwarf D_1, as many dumplings as possible (at most h of the same sort), and he adds to these dumplings as many as possible from the beginnings of the sequences of the dwarfs D_2, …, D_n (so that the whole bite contains at most h dumplings of each sort).

After assembling the first bite the giant eats it, then he assembles and eats the second, third, etc. bites, always continuing the sequences where the previous bite ended.

Example 17.14 To illustrate the model let us consider an example. We have two dwarfs (D_1 and D_2) and the giant G_2. The dumpling sequences are

  R_1 = 1, 2, 1, 2, 1, 2, 3, 3, 3, 2, 1, 3, 2, 1, 3, 2, 1, … ,  R_2 = 2, 4, 4, 4, 4, … ,

or in a shorter form

  R_1 = 1212 1233 (321321)* ,  R_2 = 2 (44)* ,

where the star (*) denotes a subsequence repeated infinitely many times.

For his first bite G_2 chooses from the first sequence the first four dumplings 1212 (because the fifth dumpling would be the third one of the sort 1) and no dumpling from the second sequence (because its first element is 2, and two dumplings of this sort are chosen already). The second bite contains the subsequence 1233 from the first sequence, and the dumplings 244 from the second one. The further bites are identical: 321321 from the first sequence and 44 from the second one. In a short form the bites are as follows:

  1212 ‖ 1233 244 ‖ 321321 44 ‖ 321321 44 ‖ ⋯

(bites are separated by double lines).
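The greedy bite-assembly rule is easy to simulate. The following minimal sketch (function and variable names are our own choice) reproduces the bites above on finite prefixes of the two infinite sequences.

from collections import Counter

def bites(sequences, h, n_bites):
    """Assemble the first n_bites bites of giant G_h: from each dwarf's
    sequence in turn, take the longest prefix that keeps every sort's
    count in the current bite at most h."""
    pos = [0] * len(sequences)            # next uneaten dumpling per dwarf
    result = []
    for _ in range(n_bites):
        bite, count = [], Counter()
        for j, seq in enumerate(sequences):
            while pos[j] < len(seq) and count[seq[pos[j]]] < h:
                count[seq[pos[j]]] += 1
                bite.append(seq[pos[j]])
                pos[j] += 1
        result.append(bite)
    return result

R1 = [1, 2, 1, 2, 1, 2, 3, 3] + [3, 2, 1, 3, 2, 1] * 5   # 1212 1233 (321321)*
R2 = [2] + [4, 4] * 8                                    # 2 (44)*
for bite in bites([R1, R2], h=2, n_bites=4):
    print(bite)   # [1,2,1,2], [1,2,3,3,2,4,4], then [3,2,1,3,2,1,4,4] twice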

For given dumpling sequences and a given giant G_h let b_i denote the number of dumplings in the i-th bite. According to the eating rules h ≤ b_i ≤ dh holds for every i: the giant stops taking dumplings only when some sort already occurs h times in the bite, and the bite may contain at most h dumplings of each of the d sorts.

Considering the elements of the dumpling sequences as random variables with possible values 1, 2, …, d and given distribution, we define the dumpling-eating speed (concerning the given sequences) of G_h as the average number of dumplings in one bite for a long time, more precisely as

  v = lim_{k→∞} E(b_1 + b_2 + ⋯ + b_k)/k ,   (17.16)

where E(X) denotes the expected value of the random variable X.

One can see that the defined limit always exists.
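For instance, for the deterministic sequences of Example 17.14 the bite sizes of G_2 are 4, 7, 8, 8, …, so

  v = lim_{k→∞} (4 + 7 + 8(k − 2))/k = 8 ,

which here equals the maximal possible value dh = 4 · 2.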

17.3.3.1. Maximal and minimal speed-ratio

Let us consider the case when we have at least one dumpling sequence, at least one sort of dumplings, and two different giants G_s and G_b (a small and a big one, with parameters h_s and h_b), that is, let n ≥ 1, d ≥ 1 and 1 ≤ h_s < h_b. Let the sequences be deterministic.

Since for every bite-size b_i of G_h the bounds h ≤ b_i ≤ dh hold, the same bounds are right for every average value (b_1 + ⋯ + b_k)/k and for every expected value E(b_i), too. From this it follows that the limits v_s and v_b defined in (17.16) also must lie between these bounds, that is

  h_s ≤ v_s ≤ dh_s  and  h_b ≤ v_b ≤ dh_b .   (17.17)

Choosing the maximal value of v_s and the minimal value of v_b, and vice versa, we get the following trivial upper and lower bounds for the speed ratio v_s/v_b:

  h_s/(dh_b) ≤ v_s/v_b ≤ dh_s/h_b .   (17.18)

Now we show that in many cases these trivial bounds cannot be improved (and so the dumpling-eating speed of a small giant can be arbitrarily many times bigger than that of a big giant).

Theorem 17.7 If n ≥ 2, d ≥ 2 and 1 ≤ h_s < h_b, then for every ε > 0 there exist dumpling sequences for which

  v_s/v_b < h_s/(dh_b) + ε ,

further dumpling sequences for which

  v_s/v_b > dh_s/h_b − ε .

Proof. To see the sharpness of the lower limit in the inequality (17.18) giving the natural bounds, let us consider sequences constructed so that G_b can always assemble full bites of dh_b dumplings, while G_s, apart from a small fraction of its bites, is blocked already after h_s dumplings. For such sequences we have v_b = dh_b, and the speed of G_s can be pushed arbitrarily close to h_s, and therefore v_s/v_b approaches h_s/(dh_b) arbitrarily closely.

In order to derive the exact upper bound, we consider sequences with the opposite behaviour: on them G_s always assembles full bites, from which v_s = dh_s follows, while the bites of G_b contain, apart from a fraction of the bites whose contribution vanishes in the limit (17.16), only h_b dumplings. Therefore v_b can be pushed arbitrarily close to h_b, and so v_s/v_b approaches dh_s/h_b arbitrarily closely.

17.3.4. Avoiding the anomaly

We usually try to avoid anomalies.

For example, at page replacement a sufficient condition of avoiding the anomaly is that the replacement algorithm has the stack property: if the same reference string is run on computers with memory sizes m and M (m < M), then after every reference it holds that the bigger memory contains all the pages that the smaller one does. For the examined scheduling problem it is enough not to require that the scheduling algorithm uses a list.
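As an illustration (a minimal sketch; the code is our own, not the book's), the following check shows that LRU, a classical stack algorithm, satisfies this inclusion after every reference on the earlier reference string, while FIFO violates it.

def memory_states(algo, refs, frames):
    """Yield the set of pages in memory after each reference."""
    mem = []                              # front = replacement victim
    for page in refs:
        if page in mem:
            if algo == "LRU":             # refresh recency on a hit
                mem.remove(page)
                mem.append(page)
        else:                             # page fault
            if len(mem) == frames:
                mem.pop(0)
            mem.append(page)
        yield set(mem)

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
for algo in ("LRU", "FIFO"):
    ok = all(small <= big for small, big in
             zip(memory_states(algo, refs, 3), memory_states(algo, refs, 4)))
    print(algo, "inclusion holds" if ok else "inclusion violated")
# LRU inclusion holds, FIFO inclusion violated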

Exercises

17.3-1 Give parameters m, M and a reference string R so that the FIFO algorithm causes at least three more page faults with a main memory of size M than with one of size m (m < M).

17.3-2 Give such parameters that, using scheduling with a list, increasing the number of processors increases the maximal finishing time to at least one and a half times its original value.

17.3-3 Give parameters with which the dumpling-eating speed of a small giant is twice as big as that of a big giant.