In this section we deal with pushdown automata and the class of languages accepted by them, the context-free languages.
As we have seen in Section 1.1, a context-free grammar is one with productions of the form $A \to \alpha$, $A \in N$, $\alpha \in (N \cup T)^*$. The production $S \to \varepsilon$ is also permitted if $S$ does not appear in the right-hand side of any production. The language $L(G)$ is the context-free language generated by the grammar $G$.
We have seen that finite automata accept the class of regular languages. Now we get to know a new kind of automaton, the so-called pushdown automaton, which accepts context-free languages. Pushdown automata differ from finite automata mainly in two respects: they have the possibility to change state without reading any input symbol (i.e. to read the empty word), and they possess a stack memory, which uses the so-called stack symbols (see Fig. 1.31).
The pushdown automaton gets a word as input and starts to work from an initial state, having in the stack a special symbol, the initial stack symbol. While working, the pushdown automaton changes its state based on the current state, the next input symbol (or the empty word) and the top symbol of the stack, and replaces the top symbol of the stack with a (possibly empty) word.
There are two types of acceptance. The pushdown automaton accepts a word by final state if after reading it the automaton enters a final state. The pushdown automaton accepts a word by empty stack if after reading it the automaton empties its stack. We will show that these two modes of acceptance are equivalent.
A nondeterministic pushdown automaton is a system $V = (Q, \Sigma, W, E, q_0, z_0, F)$, where
$Q$ is the finite, non-empty set of states,
$\Sigma$ is the input alphabet,
$W$ is the stack alphabet,
$E \subseteq Q \times (\Sigma \cup \{\varepsilon\}) \times W \times W^* \times Q$ is the set of transitions or edges,
$q_0 \in Q$ is the initial state,
$z_0 \in W$ is the start symbol of the stack,
$F \subseteq Q$ is the set of final states.
A transition $(p, a, z, w, q)$ means that if the pushdown automaton is in state $p$, reads the letter $a$ from the input tape (instead of an input letter we can also consider the empty word $\varepsilon$), and the top symbol in the stack is $z$, then the pushdown automaton enters state $q$ and replaces $z$ in the stack by the word $w$. The word $w$ is written in the stack in natural order (the letters of $w$ are put in the stack one by one, from left to right). Instead of the transition $(p, a, z, w, q)$ we will use the more suggestive notation $(p, a, z) \to (q, w)$.
Here, as in the case of finite automata, we can define a transition function
$\delta : Q \times (\Sigma \cup \{\varepsilon\}) \times W \to \mathcal{P}(W^* \times Q),$
which associates with the current state, the input letter (or $\varepsilon$) and the top letter of the stack a set of pairs of the form $(w, q)$, where $w$ is the word written in the stack and $q$ the new state.
Because the pushdown automaton is nondeterministic, the values of the transition function are sets:
$\delta(q, a, z) = \{(w_1, p_1), \ldots, (w_k, p_k)\}$ (if the pushdown automaton reads the input letter $a$ and moves to the right on the input tape), or
$\delta(q, \varepsilon, z) = \{(w_1, p_1), \ldots, (w_k, p_k)\}$ (without moving on the input tape).
A pushdown automaton is deterministic if for any $q \in Q$ and $z \in W$ we have
$|\delta(q, a, z)| \le 1$ for all $a \in \Sigma \cup \{\varepsilon\}$, and
if $\delta(q, \varepsilon, z) \ne \emptyset$, then $\delta(q, a, z) = \emptyset$ for all $a \in \Sigma$.
We can associate to any pushdown automaton a transition table, exactly as in the case of finite automata. The rows of this table are indexed by the elements of $Q$, the columns by the elements of $\Sigma \cup \{\varepsilon\}$ and $W$ (to each $a \in \Sigma \cup \{\varepsilon\}$ and $z \in W$ there corresponds a column). At the intersection of the row corresponding to state $q$ and the column corresponding to $a$ and $z$ we put the pairs $(w, p)$ for which $(w, p) \in \delta(q, a, z)$. The transition graph, in which the edge corresponding to the transition $(p, a, z) \to (q, w)$ goes from $p$ to $q$ and is labelled accordingly, can also be defined.
The transition function:
The transition table:
Because for this transition function every non-empty set contains only one element, in the above table each cell contains only one element and the set notation is not used. Generally, if a set has more than one element, then its elements are written one under the other. The transition graph of this pushdown automaton is in Fig. 1.32.
The current state, the unread part of the input word and the contents of the stack together constitute a configuration of the pushdown automaton, i.e. for each $q \in Q$, $u \in \Sigma^*$ and $v \in W^*$ the triplet $(q, u, v)$ can be a configuration. If $a \in \Sigma$, $u \in \Sigma^*$, $z \in W$ and $v, w \in W^*$, then the pushdown automaton can change its configuration in two ways:
$(q, au, vz) \vdash (p, u, vw)$, if $(w, p) \in \delta(q, a, z)$;
$(q, u, vz) \vdash (p, u, vw)$, if $(w, p) \in \delta(q, \varepsilon, z)$.
The reflexive and transitive closure of the relation $\vdash$ will be denoted by $\vdash^*$. Instead of $\vdash^*$, sometimes the transitive closure $\vdash^+$ is considered.
How does such a pushdown automaton work? Starting from the initial configuration we consider all possible next configurations, then the next configurations of those, and so on, as long as this is possible.
A pushdown automaton $V$ accepts a word $u$ by final state if there exists a sequence of configurations of $V$ for which the following are true:
the first element of the sequence is $(q_0, u, z_0)$,
there is a step $\vdash$ going from each element of the sequence to the next element, except for the case when the sequence has only one element,
the last element of the sequence is $(p, \varepsilon, w)$, where $p \in F$ and $w \in W^*$.
Therefore the pushdown automaton $V$ accepts the word $u$ by final state if and only if $(q_0, u, z_0) \vdash^* (p, \varepsilon, w)$ for some $p \in F$ and $w \in W^*$. The set of words accepted by final state by the pushdown automaton $V$ is called the language accepted by $V$ by final state and is denoted by $L(V)$.
Similarly, a pushdown automaton $V$ accepts a word $u$ by empty stack if there exists a sequence of configurations for which the following are true:
the first element of the sequence is $(q_0, u, z_0)$,
there is a step $\vdash$ going from each element of the sequence to the next element,
the last element of the sequence is $(p, \varepsilon, \varepsilon)$, where $p$ is an arbitrary state.
Therefore the pushdown automaton accepts a word $u$ by empty stack if $(q_0, u, z_0) \vdash^* (p, \varepsilon, \varepsilon)$ for some state $p$. The set of words accepted by empty stack by the pushdown automaton $V$ is called the language accepted by empty stack by $V$ and is denoted by $L_\varepsilon(V)$.
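Both modes of acceptance can be checked mechanically by a breadth-first search over configurations. The following sketch is an illustration, not part of the text: the transition encoding, the automaton for $\{0^n1^n \mid n \ge 1\}$ and the step limit are assumptions made here.

```python
from collections import deque

def accepts(transitions, q0, z0, finals, word, mode="final", limit=10000):
    """Simulate a nondeterministic pushdown automaton by breadth-first
    search over configurations (state, unread input, stack).  The stack
    is a string whose last character is the top symbol, matching the
    convention that a pushed word is written letter by letter from left
    to right.  `transitions` maps (state, letter or '', top symbol) to a
    set of (new state, pushed word) pairs.  `mode` selects acceptance by
    final state or by empty stack; `limit` bounds the search so that
    epsilon-loops cannot run forever.
    """
    start = (q0, word, z0)
    seen, queue = {start}, deque([start])
    while queue and limit > 0:
        limit -= 1
        q, u, v = queue.popleft()
        if u == "" and ((mode == "final" and q in finals) or
                        (mode == "empty" and v == "")):
            return True
        if v == "":
            continue                      # empty stack: no move possible
        top, rest = v[-1], v[:-1]
        moves = []
        if u:                             # read one input letter
            moves += [(p, w, u[1:]) for (p, w) in
                      transitions.get((q, u[0], top), ())]
        # epsilon-move: change configuration without reading any input
        moves += [(p, w, u) for (p, w) in
                  transitions.get((q, "", top), ())]
        for p, w, u2 in moves:
            conf = (p, u2, rest + w)
            if conf not in seen:
                seen.add(conf)
                queue.append(conf)
    return False

# A pushdown automaton for {0^n 1^n : n >= 1}, accepting by empty stack
# (a made-up example, not the automaton of the text): each 0 pushes a
# counter symbol x, each 1 pops one, and a final epsilon-move removes z.
T = {("p", "0", "z"): {("p", "zx")},
     ("p", "0", "x"): {("p", "xx")},
     ("p", "1", "x"): {("q", "")},
     ("q", "1", "x"): {("q", "")},
     ("q", "",  "z"): {("q", "")}}
```

Acceptance by final state works the same way with `mode="final"` and a non-empty set of final states.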
Example 1.26 The pushdown automaton of Example 1.25 accepts the language by final state. Consider the computations for the following two words.
Word is accepted by the considered pushdown automaton because
and because the state reached is a final state, the pushdown automaton accepts this word. Moreover, because the stack is also empty, it accepts this word by empty stack as well.
Because the initial state is also a final state, the empty word is accepted by final state, but not by empty stack.
To show that a word is not accepted, we need to examine all possibilities. It is easy to see that in our case there is only a single one:
, but no further step is possible, so the word is not accepted.
Figure 1.33. Transition graph of the Example 1.27
The corresponding transition graph can be seen in Fig. 1.33. The pushdown automaton accepts the language by empty stack. Because the automaton is nondeterministic, all the configurations obtained from the initial configuration can be illustrated by a computation tree. For example, the computation tree associated with the initial configuration for the word 1001 can be seen in Fig. 1.34. From this computation tree we can observe that, because a configuration of the form $(q, \varepsilon, \varepsilon)$ is a leaf of the tree, the pushdown automaton accepts the word 1001 by empty stack. The computation tree in Fig. 1.35 shows that the pushdown automaton does not accept the other word considered: the configurations in the leaves cannot be continued and none of them has the form $(q, \varepsilon, \varepsilon)$.
Figure 1.34. Computation tree to show acceptance of the word 1001 (see Example 1.27).
Figure 1.35. Computation tree to show that the pushdown automaton in Example 1.27 does not accept word .
a) Let $V$ be a pushdown automaton which accepts the language $L$ by empty stack. Define the pushdown automaton $V'$, which has, in addition to the states and stack symbols of $V$, a new initial state, a new final state, a new initial stack symbol and the corresponding new $\varepsilon$-transitions.
How $V'$ works: with an $\varepsilon$-move it first goes to the initial state of $V$, writing $z_0$ (the initial stack symbol of $V$) in the stack beside its own initial stack symbol. After this it works as $V$. If for a given word $V$ empties its stack, then $V'$ still has its own initial stack symbol in the stack, which can be deleted by an $\varepsilon$-move while a final state is reached. $V'$ can reach a final state only if $V$ has emptied its stack.
b) Let $V$ be a pushdown automaton which accepts the language $L$ by final state. Define the pushdown automaton $V'$, which has, in addition, a new initial state, a new initial stack symbol and the corresponding new $\varepsilon$-transitions.
How $V'$ works: with an $\varepsilon$-move it writes the initial stack symbol of $V$ in the stack above its own new initial stack symbol, then works as $V$, i.e. reaches a final state for each accepted word. After this $V'$ empties the stack by $\varepsilon$-moves. $V'$ can empty the stack only if $V$ reaches a final state.
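Construction a) can be written out directly: given the transitions of $V$, add the two kinds of $\varepsilon$-moves described above. A minimal sketch follows; the state names `qs`, `qf` and the bottom marker `#` are placeholder names invented for this illustration, and transitions map (state, letter or '', top symbol) to a set of (new state, pushed word) pairs, the stack being a string with its top symbol last.

```python
def empty_stack_to_final_state(transitions, states, q0, z0):
    """From a PDA accepting by empty stack, build one accepting by final
    state.  A new initial state 'qs' first pushes the old initial stack
    symbol above the new bottom marker '#'; whenever the old automaton
    has emptied its own stack, only '#' remains, and an epsilon-move
    leads to the new final state 'qf'.
    """
    new = {key: set(val) for key, val in transitions.items()}
    # start: enter the old initial state with stack '#' + z0 (top = z0)
    new[("qs", "", "#")] = {(q0, "#" + z0)}
    # '#' on top means the old stack is empty: accept
    for q in states:
        new.setdefault((q, "", "#"), set()).add(("qf", "#"))
    return new, "qs", "#", {"qf"}
```

Construction b) is symmetric: push a bottom marker, run $V$, and from each final state empty the whole stack with $\varepsilon$-moves.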
The next two theorems prove that the class of languages accepted by nondeterministic pushdown automata is exactly the class of context-free languages.
We outline the proof only. Let $G = (N, T, P, S)$ be a context-free grammar. Define the pushdown automaton $V = (\{q\}, T, N \cup T, E, q, S, \emptyset)$, where the set $E$ of transitions is obtained as follows:
If there is in the set $P$ of productions of $G$ a production of the form $A \to \alpha$, then put in $E$ the transition $(q, \varepsilon, A) \to (q, \alpha^{-1})$, where $\alpha^{-1}$ denotes the mirror image of $\alpha$,
For any letter $a \in T$ put in $E$ the transition $(q, a, a) \to (q, \varepsilon)$. If there is a production $A \to \alpha$ in $G$, the pushdown automaton puts in the stack the mirror of $\alpha$ with an $\varepsilon$-move. If the input letter coincides with the letter on the top of the stack, then the automaton deletes it from the stack. If on the top of the stack there is a nonterminal $A$, then the mirror of the right-hand side of a production which has $A$ in its left-hand side will be put in the stack. If, after reading all letters of the input word, the stack is empty, then the pushdown automaton has recognized the input word.
The following algorithm builds for a context-free grammar $G = (N, T, P, S)$ the pushdown automaton $V$ which accepts by empty stack the language generated by $G$.

FROM-CFG-TO-PUSHDOWN-AUTOMATON($G$)
1  FOR all productions $A \to \alpha$
2    DO put in $E$ the transition $(q, \varepsilon, A) \to (q, \alpha^{-1})$
3  FOR all terminals $a \in T$
4    DO put in $E$ the transition $(q, a, a) \to (q, \varepsilon)$
5  RETURN $V$
If the grammar $G$ has $n$ productions and $m$ terminals, then the number of steps of the algorithm is $O(n + m)$.
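The construction can be sketched in a few lines. In this illustration the encodings are assumptions: a production is a (left, right) pair of strings over one-character symbols, the single state is called `q`, and stacks grow to the right, so pushing the mirror of $\alpha$ leaves the first letter of $\alpha$ on top.

```python
from collections import deque

def cfg_to_pda(productions, terminals):
    """For every production A -> alpha add the transition
    (q, epsilon, A) -> (q, mirror of alpha); for every terminal a add
    (q, a, a) -> (q, epsilon)."""
    E = {}
    for left, right in productions:              # e.g. ("S", "aSb")
        E.setdefault(("q", "", left), set()).add(("q", right[::-1]))
    for a in terminals:
        E.setdefault(("q", a, a), set()).add(("q", ""))
    return E

def accepts_empty_stack(E, word, z0, limit=100000):
    """Check acceptance by empty stack with a breadth-first search over
    configurations (unread input, stack); the top symbol is the last
    character of the stack string."""
    start = (word, z0)
    seen, queue = {start}, deque([start])
    while queue and limit > 0:
        limit -= 1
        u, v = queue.popleft()
        if u == "" and v == "":
            return True
        if v == "":
            continue
        top, rest = v[-1], v[:-1]
        succ = [(u, w) for (_, w) in E.get(("q", "", top), ())]
        if u:
            succ += [(u[1:], w) for (_, w) in E.get(("q", u[0], top), ())]
        for u2, w in succ:
            conf = (u2, rest + w)
            if conf not in seen:
                seen.add(conf)
                queue.append(conf)
    return False

# The grammar S -> aSb | ab generates {a^n b^n : n >= 1}.
E = cfg_to_pda([("S", "aSb"), ("S", "ab")], "ab")
```

The automaton expands a nonterminal on the stack top with an $\varepsilon$-move and matches terminals against the input, exactly as described above.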
Let us see how the pushdown automaton accepts a word which in grammar $G$ can be derived in the following way:
where productions and were used. The word is accepted by empty stack (see Fig. 1.36).
Figure 1.36. Recognising a word by empty stack (see Example 1.28).
Instead of a proof we give a method to obtain the grammar $G$. Let $V$ be the nondeterministic pushdown automaton in question.
Then $G = (N, T, P, S)$, where $N = \{S\} \cup \{[p, z, q] \mid p, q \in Q, z \in W\}$.
Productions in $P$ will be obtained as follows.
For all states $p$ put in $P$ the production $S \to [q_0, z_0, p]$.
If $(p, a, z) \to (q, z_k z_{k-1} \ldots z_1)$, where $z_i \in W$ ($1 \le i \le k$) and $a \in \Sigma \cup \{\varepsilon\}$, put in $P$ for all possible states $p_1, p_2, \ldots, p_k$ the productions
$[p, z, p_k] \to a[q, z_1, p_1][p_1, z_2, p_2] \cdots [p_{k-1}, z_k, p_k]$.
If $(p, a, z) \to (q, \varepsilon)$, where $a \in \Sigma \cup \{\varepsilon\}$, put in $P$ the production
$[p, z, q] \to a$.
The context-free grammar defined in this way is an extended one, to which an equivalent ordinary context-free grammar can be associated. The proof of the theorem is based on the fact that to every sequence of configurations by which the pushdown automaton accepts a word we can associate a derivation in grammar $G$. This derivation generates just the word in question, because of the productions of the form $S \to [q_0, z_0, p]$, which were defined for all possible states $p$. In Example 1.27 we show how a derivation can be associated with a sequence of configurations. The pushdown automaton defined in the example recognizes the word 00 by the sequence of configurations
This sequence is based on the transitions
To these transitions, by the definition of grammar , the following productions can be associated
(1) for all states ,
Furthermore, for each state productions were defined.
By the existence of this production there exists the derivation , where can be chosen arbitrarily. Let us choose in production (1) above the state to be equal to . Then there exists also the derivation
where can be chosen arbitrarily. If , then the derivation
will result. Now let be equal to ; then
which proves that the word 00 can be derived using the above grammar.
The next algorithm builds for a pushdown automaton $V$ a context-free grammar $G$ which generates the language accepted by the pushdown automaton by empty stack.
FROM-PUSHDOWN-AUTOMATON-TO-CFG($V$)
1  FOR all states $p$
2    DO put in $P$ the production $S \to [q_0, z_0, p]$
3  FOR all transitions $(p, a, z) \to (q, z_k \ldots z_1)$, where $z_i \in W$ ($1 \le i \le k$)
4    DO FOR all states $p_1, p_2, \ldots, p_k$
5      DO put in $P$ the productions $[p, z, p_k] \to a[q, z_1, p_1][p_1, z_2, p_2] \cdots [p_{k-1}, z_k, p_k]$
6  FOR all transitions $(p, a, z) \to (q, \varepsilon)$
7    DO put in $P$ the production $[p, z, q] \to a$
If the automaton has $n$ states and a transition writes at most $k$ symbols in the stack, then step 5 defines up to $n^k$ productions for a single transition, so in the worst case the number of steps of the algorithm grows exponentially in $k$. Finally, without proof, we mention that the class of languages accepted by deterministic pushdown automata is a proper subset of the class of languages accepted by nondeterministic pushdown automata. This points to the fact that pushdown automata behave differently from finite automata.
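The triple construction can be sketched as follows. The encoding is an assumption of this illustration: transitions map (state, letter or '', top symbol) to sets of (state, pushed word) with the top of the stack being the last pushed letter, nonterminals are Python tuples $(p, z, q)$, and the derivation check is a simple brute-force leftmost search. The $\{0^n1^n\}$ automaton used at the end is also a made-up example, not the automaton of the text.

```python
from collections import deque
from itertools import product

def pda_to_cfg(transitions, states, q0, z0):
    """Productions of the grammar of the theorem: S -> (q0, z0, p) for
    every state p; for a transition writing w, the top of the stack (the
    last letter of w) must be consumed first, hence w is reversed when
    the chain of triples is built."""
    prods = [(("S",), [(q0, z0, p)]) for p in states]
    for (p, a, z), targets in transitions.items():
        for q, w in targets:
            prefix = [a] if a else []
            if w == "":
                prods.append(((p, z, q), prefix))
                continue
            syms = w[::-1]
            for chain in product(states, repeat=len(syms)):
                body, src = list(prefix), q
                for s, nxt in zip(syms, chain):
                    body.append((src, s, nxt))
                    src = nxt
                prods.append(((p, z, chain[-1]), body))
    return prods

def derivable(prods, word, limit=50000):
    """Breadth-first leftmost derivations from the start symbol, pruning
    sentential forms that already contain more terminals than the target
    word.  Terminals are strings, nonterminals are tuples."""
    by_head = {}
    for head, body in prods:
        by_head.setdefault(head, []).append(body)
    start = ((("S",)),)
    seen, queue = {start}, deque([start])
    while queue and limit > 0:
        limit -= 1
        form = queue.popleft()
        i = next((j for j, s in enumerate(form)
                  if not isinstance(s, str)), None)
        if i is None:
            if "".join(form) == word:
                return True
            continue
        for body in by_head.get(form[i], []):
            nf = form[:i] + tuple(body) + form[i + 1:]
            if (sum(1 for s in nf if isinstance(s, str)) <= len(word)
                    and nf not in seen):
                seen.add(nf)
                queue.append(nf)
    return False

# The {0^n 1^n : n >= 1} automaton: 0 pushes x, 1 pops x, epsilon pops z.
T = {("p", "0", "z"): {("p", "zx")},
     ("p", "0", "x"): {("p", "xx")},
     ("p", "1", "x"): {("q", "")},
     ("q", "1", "x"): {("q", "")},
     ("q", "",  "z"): {("q", "")}}
G = pda_to_cfg(T, ["p", "q"], "p", "z")
```

The generated grammar derives exactly the words the automaton accepts by empty stack, which the brute-force check confirms on small inputs.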
Example 1.29 As an example, consider the pushdown automaton of Example 1.28: . The grammar $G$ is:
where, for all possible states, instead of the triple notation we use a shorter one. The transitions:
Based on these, the following productions are defined:
It is easy to see that can be eliminated, and the productions will be:
and these productions can be replaced by:
Consider a context-free grammar $G = (N, T, P, S)$. A derivation tree of $G$ is a finite, ordered, labelled tree whose root is labelled by the start symbol $S$, every interior vertex is labelled by a nonterminal and every leaf by a terminal. If an interior vertex labelled by a nonterminal $A$ has $k$ descendents, then in $P$ there exists a production $A \to a_1a_2 \ldots a_k$ such that the descendents are labelled by the letters $a_1, a_2, \ldots, a_k$. The result of a derivation tree is the word over $T$ which can be obtained by reading the labels of the leaves from left to right. A derivation tree is also called a syntax tree.
Consider the context-free grammar . It generates the language . A derivation of the word is:
This derivation can be seen in Fig. 1.37; its result is .
To every derivation we can associate a syntax tree. Conversely, to a syntax tree more than one derivation may be associated. For example, to the syntax tree in Fig. 1.37 the derivation
can also be associated.
In this grammar word has two different leftmost derivations:
The above grammar is ambiguous, because the word has two different leftmost derivations. In general, a grammar is ambiguous if some word of its language has two different leftmost derivations. A language can be generated by more than one grammar, and among them there can be both ambiguous and unambiguous ones. A context-free language is inherently ambiguous if there is no unambiguous grammar which generates it.
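Ambiguity of a small grammar can be demonstrated by counting parse trees, which are in one-to-one correspondence with leftmost derivations. A sketch for the classic ambiguous grammar $S \to S{+}S \mid a$ follows; this grammar and the function name are assumptions of the illustration, not the grammar of the text.

```python
from functools import lru_cache

def count_trees(s):
    """Number of parse trees of the word s in the grammar S -> S+S | a,
    counted by choosing the '+' used at the root and recursing on the
    two sides."""
    @lru_cache(maxsize=None)
    def c(lo, hi):
        n = 1 if s[lo:hi] == "a" else 0
        for m in range(lo + 1, hi - 1):
            if s[m] == "+":
                n += c(lo, m) * c(m + 1, hi)
        return n
    return c(0, len(s))
```

Here `count_trees("a+a")` is 1, but `count_trees("a+a+a")` is 2, so this grammar is ambiguous: the word a+a+a has two different leftmost derivations.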
Grammar is ambiguous because
Grammar is unambiguous.
It can be proved that .
As for regular languages, there exists a pumping lemma for context-free languages too.
Theorem 1.29 (pumping lemma) For any context-free language $L$ there exists a natural number $n$ (which depends only on $L$), such that every word $z$ of the language longer than $n$ can be written in the form $uvwxy$ and the following are true:
(1) $|w| \ge 1$,
(2) $|vx| \ge 1$,
(3) $|vwx| \le n$,
(4) $uv^iwx^iy$ is also in $L$ for all $i \ge 0$.
Proof. Let $G = (N, T, P, S)$ be a grammar without unit productions (and, except possibly for $S \to \varepsilon$, without $\varepsilon$-productions) which generates language $L$. Let $k$ be the number of nonterminals, and let $m$ be the maximum of the lengths of right-hand sides of productions, i.e. $m = \max\{|\alpha| \mid A \to \alpha \in P\}$. Let $n = m^{k+1}$ and let $z \in L$ be such that $|z| > n$. Then there exists a derivation tree $T$ with result $z$. Let $h$ be the height of $T$ (the maximum of the path lengths from the root to a leaf). Because in $T$ all interior vertices have at most $m$ descendents, $T$ has at most $m^h$ leaves, i.e. $|z| \le m^h$. On the other hand, because of $m^h \ge |z| > n = m^{k+1}$, we get $h > k+1$. From this it follows that in the derivation tree $T$ there is a path from the root to a leaf containing more than $k+2$ vertices. Consider such a path of maximal length. Because the number of nonterminals in $G$ is $k$ and on this path the vertices different from the leaf are labelled with nonterminals, by the pigeonhole principle there must be a nonterminal which occurs at least twice among the $k+1$ such vertices closest to the leaf.
Let us denote by $A$ the nonterminal which repeats first on this path when it is traversed from the leaf towards the root. Denote by $T_1$ the subtree whose root is the occurrence of $A$ farther from the leaf, and by $T_2$ the subtree whose root is the occurrence of $A$ closer to the leaf. Let $w$ be the result of the tree $T_2$. Then the result of $T_1$ is of the form $vwx$, while that of $T$ is $uvwxy$. Derivation tree $T$ with this decomposition of $z$ can be seen in Fig. 1.38. We show that this decomposition of $z$ satisfies conditions (1)–(4) of the lemma. Because in $G$ there are no $\varepsilon$-productions (except maybe $S \to \varepsilon$), we have $|w| \ge 1$. Furthermore, because each interior vertex of the derivation tree with a nonterminal descendent has at least two descendents (namely there are no unit productions), also the root of $T_1$ has, hence $|vx| \ge 1$. Because $A$ is the first repeated nonterminal counted from the leaf, the height of $T_1$ is at most $k+1$, and from this $|vwx| \le m^{k+1} = n$ results.
After eliminating from $T$ all vertices of $T_1$ except its root, the result of the obtained tree is $uAy$, i.e. $S \stackrel{*}{\Longrightarrow} uAy$.
Similarly, after eliminating $T_2$ from $T_1$ we get $A \stackrel{*}{\Longrightarrow} vAx$, and finally, by the definition of $T_2$, we get $A \stackrel{*}{\Longrightarrow} w$. Then $S \stackrel{*}{\Longrightarrow} uAy \stackrel{*}{\Longrightarrow} uvAxy \stackrel{*}{\Longrightarrow} \cdots \stackrel{*}{\Longrightarrow} uv^iAx^iy \stackrel{*}{\Longrightarrow} uv^iwx^iy$ for all $i \ge 1$, and also $S \stackrel{*}{\Longrightarrow} uAy \stackrel{*}{\Longrightarrow} uwy$. Therefore, for all $i \ge 0$ we have $S \stackrel{*}{\Longrightarrow} uv^iwx^iy$, i.e. $uv^iwx^iy \in L$ for all $i \ge 0$.
Now we present two consequences of the lemma.
Proof. This consequence states that there exists a context-sensitive language which is not context-free. To prove this it is sufficient to find a context-sensitive language for which the lemma is not true. Let this language be $L = \{a^nb^nc^n \mid n \ge 1\}$.
To show that this language is context-sensitive it is enough to give a suitable grammar. The grammars in Example 1.2 are extended context-sensitive grammars, and we know that to each extended grammar of a given type an equivalent grammar of the same type can be associated.
Let $n$ be the natural number associated with $L$ by the lemma, and consider the word $z = a^nb^nc^n$. Because $|z| = 3n > n$, if $L$ were context-free then $z$ could be decomposed as $z = uvwxy$ such that conditions (1)–(4) hold. We show that this leads to a contradiction.
Firstly, we show that the words $v$ and $x$ can each contain only one type of letter. Indeed, if either $v$ or $x$ contained more than one type of letter, then in the word $uv^2wx^2y$ the letters would no longer be in the order $a \ldots ab \ldots bc \ldots c$, so $uv^2wx^2y \notin L$, which contradicts condition (4) of the lemma.
If both $v$ and $x$ contain at most one type of letter, then in the word $uv^2wx^2y$ the numbers of occurrences of the different letters cannot all be equal: since $|vx| \ge 1$, pumping increases the number of at most two of the three letters, so $uv^2wx^2y \notin L$. This also contradicts condition (4) of the lemma. Therefore $L$ is not context-free.
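The argument can be replayed mechanically: for a concrete word one can enumerate every decomposition $uvwxy$ and check that none satisfies all conditions of the lemma. A brute-force sketch follows; the function names and the choice $n = 4$ are assumptions of this illustration, and checking $i \in \{0, 2\}$ is enough to refute condition (4).

```python
def in_abc(s):
    """Membership in {a^n b^n c^n : n >= 1}."""
    n = len(s) // 3
    return n >= 1 and s == "a" * n + "b" * n + "c" * n

def pumpable(z, n, member):
    """True if some decomposition z = uvwxy has |w| >= 1, |vx| >= 1,
    |vwx| <= n and u v^i w x^i y in the language for i = 0 and i = 2."""
    cuts = range(len(z) + 1)
    for i in cuts:
        for j in cuts:
            for k in cuts:
                for l in cuts:
                    if not (i <= j <= k <= l):
                        continue
                    u, v, w, x, y = z[:i], z[i:j], z[j:k], z[k:l], z[l:]
                    if (len(w) >= 1 and len(v) + len(x) >= 1
                            and len(v + w + x) <= n
                            and member(u + w + y)
                            and member(u + v * 2 + w + x * 2 + y)):
                        return True
    return False
```

For $z = a^4b^4c^4$ no decomposition survives, while for the context-free language $\{a^nb^n \mid n \ge 1\}$ the word $a^4b^4$ is pumpable, as the lemma promises.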
Proof. Consider the languages $L_1 = \{a^nb^nc^m \mid n \ge 1, m \ge 1\}$ and $L_2 = \{a^nb^mc^m \mid n \ge 1, m \ge 1\}$.
Languages $L_1$ and $L_2$ are context-free. But
$L_1 \cap L_2 = \{a^nb^nc^n \mid n \ge 1\}$
is not context-free (see the proof of Consequence 1.30).
In the case of arbitrary grammars the normal form was defined (see Section 1.1) as grammars with no terminals in the left-hand side of productions. The normal forms in the case of context-free languages contain further restrictions on the right-hand sides of productions. Two normal forms (Chomsky and Greibach) will be discussed.
A context-free grammar is in Chomsky normal form if all of its productions have the form $A \to a$ or $A \to BC$, where $A, B, C \in N$ and $a \in T$. To each $\varepsilon$-free context-free language an equivalent grammar in Chomsky normal form can be associated. The next algorithm transforms an $\varepsilon$-free context-free grammar $G = (N, T, P, S)$ into a grammar $G' = (N', T, P', S)$ which is in Chomsky normal form.
CHOMSKY-NORMAL-FORM($G$)
1  $N' \leftarrow N$
2  eliminate the unit productions, and let $P'$ be the new set of productions (see algorithm ELIMINATE-UNIT-PRODUCTIONS in Section 1.1)
3  in $P'$ replace in each production with at least two letters in the right-hand side every terminal $a$ by a new nonterminal $A$, add this nonterminal to $N'$ and add the production $A \to a$ to $P'$
4  replace every production $A \to A_1A_2 \ldots A_k$, where $k > 2$ and $A_i \in N'$, by the productions $A \to A_1B_1$, $B_1 \to A_2B_2$, $\ldots$, $B_{k-2} \to A_{k-1}A_k$, where $B_1, B_2, \ldots, B_{k-2}$ are new nonterminals, and add them to $N'$
5  RETURN $G' = (N', T, P', S)$
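Steps 3–4 can be sketched in Python. The encodings are assumptions of this illustration: productions are (left, right) pairs of strings over one-character symbols, lowercase letters are terminals, fresh nonterminal names come from a fixed pool, and the input grammar is assumed $\varepsilon$-free with unit productions already eliminated.

```python
def chomsky_normal_form(prods):
    """Replace terminals inside long right-hand sides by new nonterminals
    (step 3), then split right-hand sides longer than two (step 4)."""
    fresh = iter("XYZUVW")               # pool of fresh nonterminal names
    term_nt, out = {}, []
    for left, right in prods:
        if len(right) >= 2:              # step 3 applies only here
            new_right = []
            for s in right:
                if s.islower():          # terminal inside a long rhs
                    if s not in term_nt:
                        term_nt[s] = next(fresh)
                        out.append((term_nt[s], s))
                    s = term_nt[s]
                new_right.append(s)
            right = "".join(new_right)
        out.append((left, right))
    final = []                           # step 4: cut long productions
    for left, right in out:
        while len(right) > 2:
            c = next(fresh)
            final.append((left, right[0] + c))
            left, right = c, right[1:]
        final.append((left, right))
    return final

# S -> aSb | ab becomes X -> a, Y -> b, S -> XZ, Z -> SY, S -> XY.
g = chomsky_normal_form([("S", "aSb"), ("S", "ab")])
```

Every resulting production has either a single terminal or exactly two nonterminals on its right-hand side.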
Step 2: After eliminating the unit production the productions are:
Step 3: We introduce three new nonterminals because of the three terminals appearing in the productions. Let these be . Then the productions are:
Step 4: One new nonterminal (let it be ) must be introduced because of the single production with three letters in the right-hand side. Therefore , and the productions in are:
All these productions are in required form.
A context-free grammar is in Greibach normal form if all of its productions have the form $A \to a\alpha$, where $A \in N$, $a \in T$ and $\alpha \in N^*$. To each $\varepsilon$-free context-free grammar an equivalent grammar in Greibach normal form can be given. We give an algorithm which transforms a context-free grammar in Chomsky normal form into a grammar in Greibach normal form.
First, we give an order of the nonterminals: $A_1, A_2, \ldots, A_n$, where $A_1$ is the start symbol. The algorithm will use new nonterminals $B_1, B_2, \ldots$.
GREIBACH-NORMAL-FORM($G$)
 1  $N' \leftarrow N$
 2  $P' \leftarrow P$
 3  FOR $i \leftarrow 2$ TO $n$
 4    DO FOR $j \leftarrow 1$ TO $i - 1$
 5      DO for all productions $A_i \to A_j\alpha$ and $A_j \to \beta$ put in $P'$ the productions $A_i \to \beta\alpha$, delete from $P'$ the productions $A_i \to A_j\alpha$
 6    IF there is a production $A_i \to A_i\alpha$
 7      THEN put in $N'$ the new nonterminal $B_i$, for all productions $A_i \to A_i\alpha$ put in $P'$ the productions $B_i \to \alpha B_i$ and $B_i \to \alpha$, delete from $P'$ the production $A_i \to A_i\alpha$, and for all productions $A_i \to \beta$ (where $A_i$ is not the first letter of $\beta$) put in $P'$ the production $A_i \to \beta B_i$
 8  FOR $i \leftarrow n - 1$ DOWNTO 1
 9    DO for all productions $A_i \to A_j\alpha$ (where $j > i$) and $A_j \to a\beta$
10      put in $P'$ the production $A_i \to a\beta\alpha$ and delete from $P'$ the productions $A_i \to A_j\alpha$
11  FOR $i \leftarrow 1$ TO $n$
12    DO for all productions $B_i \to A_j\alpha$ and $A_j \to a\beta$
13      put in $P'$ the production $B_i \to a\beta\alpha$ and delete from $P'$ the productions $B_i \to A_j\alpha$
14  RETURN $G' = (N', T, P', S)$
The algorithm first transforms the productions of the form $A_i \to A_j\alpha$ in such a way that $i < j$, or the production has the form $A_i \to a\alpha$, where this latter is already in Greibach normal form. After this, introducing new nonterminals $B_i$, it eliminates the left-recursive productions $A_i \to A_i\alpha$, and by substitutions all productions of the form $A_i \to A_j\alpha$ and $B_i \to A_j\alpha$ are transformed into Greibach normal form.
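The key step, the elimination of left-recursive productions $A_i \to A_i\alpha$ in lines 6–7, can be sketched on its own. The encoding of productions as (head, body) pairs with tuple bodies and the name of the new nonterminal are assumptions of this illustration.

```python
def remove_left_recursion(nt, prods, new_nt):
    """Replace the left-recursive productions nt -> nt alpha by
    new_nt -> alpha new_nt | alpha, and extend every remaining
    nt -> beta with the variant nt -> beta new_nt."""
    rec = [body[1:] for head, body in prods
           if head == nt and body[:1] == (nt,)]
    non = [body for head, body in prods
           if head == nt and body[:1] != (nt,)]
    out = [(h, b) for h, b in prods if h != nt]
    for beta in non:
        out.append((nt, beta))
        out.append((nt, beta + (new_nt,)))
    for alpha in rec:
        out.append((new_nt, alpha + (new_nt,)))
        out.append((new_nt, alpha))
    return out

# A -> Aa | b  becomes  A -> b | bB, B -> aB | a  (same language b a^n).
g = remove_left_recursion("A", [("A", ("A", "a")), ("A", ("b",))], "B")
```

The transformation preserves the generated language while making sure no production of the given nonterminal starts with that nonterminal itself.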
We transform the following grammar into Greibach normal form.
Steps of the algorithm:
3–5: Production must be transformed. For this production is appropriate. Put in the set of productions and eliminate .
The productions will be:
6–7: Elimination of production will be made using productions:
Then, after steps 6–7, the productions will be:
8–10: We make substitutions in the productions with in the left-hand side. The result is:
11–13: Similarly for the productions with in the left-hand side:
After the elimination in steps 8–13 of productions in which substitutions were made, the following productions, which are now in Greibach normal form, result:
can be generated by grammar
First, we eliminate the single unit production, and after this we give an equivalent grammar in Chomsky normal form, which will then be transformed into Greibach normal form.
Productions after the elimination of production :
We introduce productions , and replace terminals by the corresponding nonterminals:
After introducing two new nonterminals (, ):
This is now in Chomsky normal form. Rename the nonterminals $A_1, A_2, \ldots$ as in the algorithm. Then, after applying the replacements
replaced by , replaced by , replaced by , replaced by , replaced by , replaced by , replaced by ,
our grammar will have the productions:
In steps 3–5 of the algorithm the following new productions occur:
Steps 6–7 are skipped, because there are no left-recursive productions. In steps 8–10, after the appropriate substitutions, we have:
1.3-1 Give pushdown automata to accept the following languages:
1.3-2 Give a context-free grammar which generates the language , and transform it into Chomsky and Greibach normal forms. Give a pushdown automaton which accepts .
1.3-3 What languages are generated by the following context-free grammars?
1.3-4 Give a context-free grammar which generates the words with an equal number of letters $a$ and $b$.
1.3-5 Prove, using the pumping lemma, that a language whose words contain an equal number of letters $a$, $b$ and $c$ cannot be context-free.
1.3-6 Let the grammar , where
Show that the word if a then if a then c else c has two different leftmost derivations.
1.3-7 Prove that if is context-free, then is also context-free.
A grammar which has productions only of the form $A \to uBv$ or $A \to u$, where $A, B \in N$ and $u, v \in T^*$, is called a linear grammar. If in a linear grammar all productions are of the form $A \to Bu$ or $A \to u$, then it is called a left-linear grammar. Prove that the language generated by a left-linear grammar is regular.
An $\varepsilon$-free context-free grammar is called an operator grammar if in the right-hand sides of productions there are no two successive nonterminals. Show that for every $\varepsilon$-free context-free grammar an equivalent operator grammar can be constructed.
Complement of context-free languages
Prove that the class of context-free languages is not closed under complement.
In the definition of finite automata, instead of the transition function we have used the transition graph, which in many cases helps us to give simpler proofs.
There are many classical books on automata and formal languages. We mention among them the following: two books of Aho and Ullman [ 5 ], [ 6 ] from 1972 and 1973, the book of Gécseg and Peák [ 87 ] from 1972, two books of Salomaa [ 221 ], [ 222 ] from 1969 and 1973, the book of Hopcroft and Ullman [ 118 ] from 1979, the book of Harrison [ 108 ] from 1978, and the book of Manna [ 174 ], which in 1981 was also published in Hungarian. We mention also the book of Sipser [ 242 ] from 1997 and the monograph of Rozenberg and Salomaa [ 220 ]. In the book of Lothaire (the common pen name of a group of French authors) [ 166 ] on the combinatorics of words we can read about other types of automata. The paper of Giammarresi and Montalbano [ 89 ] generalises the notion of finite automata. A newer monograph is that of Hopcroft, Motwani and Ullman [ 117 ]. In German we recommend the textbook of Asteroth and Baier [ 14 ]. The concise description of the transformation into Greibach normal form is based on this latter book.
A practical introduction to formal languages is written by Webber [ 270 ].
At the end of the next chapter, on compilers, other books on the subject are mentioned.