Parsing English with L-Systems


I. General Problem

"I think that I shall never see a poem as lovely as a tree." - Joyce Kilmer

The purpose of this paper is to determine whether what Joyce Kilmer didn't know is that poems, and other English compositions, actually have deep-rooted similarities to trees. This will be done by approaching the problem of parsing English sentences with grammars that were originally developed to study biological systems.

With current approaches, parsing a sentence is the first step toward having a computer understand what that sentence is saying. Most parsing methods use context free grammars as the formal construct for creating the parse tree. In this paper we will look at the possibility of using L-Systems in the place of context free grammars. Because L-Systems were developed to aid the study of plant growth and development, their application to the field of computational linguistics appears to have not been well explored.

This paper will begin with a look at various types of L-Systems and what types of languages they generate. It will then briefly compare these constructs to the more widely used members of the Chomsky hierarchy. Lastly, we will look specifically at what types of L-Systems are appropriate for parsing and play with a few small grammars to see how this parsing works.




II. L-Systems

In 1968, Aristid Lindenmayer published a paper describing a mathematical model of plant development. This first paper dealt with the plant topology and the relationships between adjacent cells. Since that time, this model of formal grammars has been given the name L-Systems, after their creator, and expanded to deal with many different types of systems. In this section of the paper, we will discuss the features of many of these systems in their normal context of developmental biology.

While formal grammars produce strings, biological systems are definitely not just strings. For this reason, different tools have been created to give visual representations to the strings these grammars generate. One tool that we shall use in this section is the turtle interpretation. In the turtle interpretation, different characters are read as instructions to move a pointer with the ability to indicate when a line should be drawn from the start point to the end point. Near the end of this section, there is an applet that you can use to create some of the different L-Systems discussed here and view them with turtle graphics.



II.A. 0L-Systems

The simplest type of L-System is the 0L-System. These systems model organisms where no communication takes place between cells. For the purposes of this paper, we will not be using a very formal treatment; if the reader is interested in a more formal presentation of many of the results from this section of the paper, he or she should look in Herman and Rozenberg (1975). Informally, we can define a 0L-System as being composed of three elements: an alphabet, a set of productions, and an axiom. The alphabet, Σ, is the set of characters that appear in that grammar. The axiom is a string of characters from the alphabet that represents the starting state of the system; we will refer to it as ω. The productions are a set of pairs, P, where the first element is an element of the alphabet and the second element is a string over the alphabet or the empty string. We will denote productions as a → α, where a ∈ Σ and α ∈ Σ*. We say that β = a1a2…an directly derives γ = α1α2…αn if (ai → αi) ∈ P for 1 ≤ i ≤ n. To be well defined, it is required that there exists a production a → α for every a ∈ Σ. For simplicity in this paper, we will assume that for any a for which this is not the case there is an implicit identity production a → a. This definition deviates from the Chomsky grammars that we will discuss later in two significant ways. The first is that productions are applied to all symbols in parallel instead of being applied to a single symbol at a time, and the second is that there is no distinction between terminal and non-terminal characters.

There are two specific subsets of 0L-Systems that are frequently discussed. Deterministic 0L-Systems, D0L, are ones in which for every a ∈ Σ there exists only one α ∈ Σ* such that a → α. In a deterministic system, there is only one path of production from the given axiom. The other special case of 0L-System is the set of propagating systems, P0L. In a propagating system there are no productions of the form a → ε, where ε is the empty string. The result of this is that if β derives γ, then γ must have at least as many characters in it as β does. The intersection of these two subsets is referred to as PD0L-Systems. One other subset that is used frequently in proofs of properties of 0L-Systems is the unary, or U0L, system, in which the alphabet has only one symbol. These three specific subsets can also be applied to the systems that will be discussed in the following sections.
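To make the parallel-rewriting idea concrete, here is a minimal Python sketch of a D0L derivation (my own illustration, not the applet's code); the rules A → AB, B → A are Lindenmayer's original algae model:

```python
def step(s, rules):
    # Rewrite every symbol in parallel; symbols with no
    # production are copied through (the implicit a -> a).
    return "".join(rules.get(c, c) for c in s)

def derive(axiom, rules, n):
    # Apply n parallel derivation steps to the axiom.
    s = axiom
    for _ in range(n):
        s = step(s, rules)
    return s

# Lindenmayer's algae system: A -> AB, B -> A
algae = {"A": "AB", "B": "A"}
```

Here derive("A", algae, 3) gives ABAAB, and the string lengths follow the Fibonacci numbers, a first hint that even very simple 0L rules encode non-trivial growth patterns.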

One last concept that needs to be covered before proceeding is that of the language of a grammar. For a 0L-System, the language is the set of all derivations of that grammar. This includes the axiom of the system and any strings that can be derived from it through any number of steps. To define the "power" of different types of systems we will use the concept of families of languages. Given a specific type of grammar, the family of languages associated with it is the set of the languages of every grammar of that type.

While we are primarily interested in the use of L-Systems for modeling natural languages here, no treatment of them can be truly considered complete if it doesn't mention the fact that they can be used to draw pretty pictures. This ability is illustrated later in section II.D. with the generator applet, but here we will first discuss the turtle implementation used in that applet. The basic idea is that you have a "turtle" with a position and an orientation. This turtle has the ability to move forward drawing a line, jump forward without drawing the line, or turn either to the left or right. This system has also been extended to three dimensions to produce realistic trees, but we will not be discussing that here. For L-Systems, the turtle decides what to do based upon the characters in a string. The standard characters used are as follows:

Character Read    Action Performed
F                 Move forward drawing a line.
f                 Move forward without drawing the line.
+                 Turn right a specified number of degrees.
-                 Turn left a specified number of degrees.

For example, if the turning angle is specified to be 90 degrees then the string F+F+F+F draws a square.
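A turtle interpreter for these four characters takes only a few lines. The sketch below (my own, not the applet's implementation) records the points the turtle visits, treating + as a right turn:

```python
import math

def turtle_points(s, step=1.0, angle=90.0):
    # Interpret F/f as moves and +/- as right/left turns,
    # returning the list of positions the turtle visits.
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for c in s:
        if c in "Ff":  # a full version would track pen state for 'f'
            x += step * math.cos(math.radians(heading))
            y += step * math.sin(math.radians(heading))
            points.append((x, y))
        elif c == "+":
            heading -= angle  # turn right
        elif c == "-":
            heading += angle  # turn left
    return points
```

With the 90-degree angle, turtle_points("F+F+F+F") ends back at the origin, confirming that the string traces a closed square.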

With this system, we can do some rather interesting graphics. It turns out that 0L-Systems are very good at constructing fractals. A number of examples of these are given in the generator applet, including the Koch Snowflake. Generally this fractal is defined by an initiator and a generator as shown in figure 1. With a 0L-System, though, we can define this structure with the axiom F--F--F and the single production F → F+F--F+F with a turning angle of 60 degrees (again we assume that + → + and - → - to make the system complete). Basically, here the axiom is the initiator, and the production is the generator. This method can be used to produce any line fractal that is normally defined with an initiator and a generator. The applet has several other examples of these as well.

Figure 1 - This figure shows the initiator and generator for the Koch Snowflake with the result of applying the generator four times. If this procedure is repeated infinitely, it produces a figure that has an infinite perimeter bounding a finite area.

While this type of turtle graphics is powerful and can make lots of pretty pictures, it doesn't do all that well for drawing trees. This is a result of the fact that trees have branching structures, and the turtle graphics as defined here don't handle those very well. To facilitate branching, we need to add a stack to our turtle implementation and define two more characters to use this stack. The elements of the stack are position and orientation data for the turtle, and we will use the '[' and ']' symbols to denote pushing to and popping from the stack. With these characters added, a small tree with two side branches might be represented by F[+F]F[-F]F.
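Extending the turtle sketch with a stack for '[' and ']' is equally short. This version (again a sketch, with the turtle assumed to start pointing up) records drawn segments so that branches can be rendered:

```python
import math

def branch_segments(s, step=1.0, angle=45.0):
    # '[' saves the turtle state, ']' restores it; each F
    # contributes one drawn segment.
    x, y, heading = 0.0, 0.0, 90.0  # start pointing up
    stack, segments = [], []
    for c in s:
        if c == "F":
            nx = x + step * math.cos(math.radians(heading))
            ny = y + step * math.sin(math.radians(heading))
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif c == "+":
            heading -= angle
        elif c == "-":
            heading += angle
        elif c == "[":
            stack.append((x, y, heading))
        elif c == "]":
            x, y, heading = stack.pop()
    return segments
```

For F[+F]F[-F]F the trunk segments resume exactly where they left off, because each ']' restores the position saved by the matching '['.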

In theory, if 0L-Systems were found to model language well, it could imply a certain underlying structure which is somewhat fractal in nature. Exactly what such a finding would mean to the field of linguistics is unclear to me. However, of more significance is the fact that such a result might have implications to the way that the human brain itself works. Maybe such self-similar recurrence relations are important in the way our brains store, process, and retrieve information.



II.B. T0L-Systems

A variation on 0L-Systems are table 0L-Systems, or T0L-Systems. In these systems the single set of productions is replaced by a set of sets of productions. For each step in a derivation, one set of productions is chosen from which the individual productions can be used. In this type of system we say that β = a1a2…an directly derives γ = α1α2…αn if (ai → αi) ∈ P for 1 ≤ i ≤ n, for some set of productions P in the table. Obviously the family of T0L languages is a superset of the family of 0L languages, because any 0L-System can be expressed as a T0L-System with a single set of productions.

T0L-Systems were developed to model the effects of environment on biological systems. The tables represent different models of growth associated with different environmental states. For example, in a model of algae growth one table might represent the behavior of the algae under strong light while another represents what would happen in the dark. For a tree, the tables could be used to distinguish between different seasons.
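A table step is the same parallel rewrite with the table chosen per step. The light/dark tables below are invented for illustration, not taken from a real algae model:

```python
def t0l_step(s, table):
    # One derivation step using a single production table;
    # symbols without a production are copied through.
    return "".join(table.get(c, c) for c in s)

# two hypothetical environment tables
light = {"A": "AB", "B": "A"}  # cells divide under strong light
dark = {"A": "A", "B": "B"}    # growth pauses in the dark
```

A derivation then interleaves tables as the "environment" dictates, e.g. t0l_step(t0l_step("A", light), dark).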

I mention this type of system here simply for completeness. Thus far I have not come up with a way in which these can be applied to parsing to any great benefit. However, it is possible that, by distinguishing between the types of phrases being parsed, a T0L-System could help incorporate semantics into the parse.



II.C. IL-Systems

The third, and last, major type of L-System that will be discussed here is the interacting L-System, or IL-System. These systems integrate context sensitivity into the scheme by allowing the productions to specify strings of characters that must appear to the right or left of the character in question. In a general sense, we can define a <k,l> system as one in which productions are of the form w1 < a > w3 → w4, where w1 ∈ Σ^k, a ∈ Σ, w3 ∈ Σ^l, and w4 ∈ Σ*. For this production to be applied, not only must the character a be present, but it must have the strings w1 and w3 to its left and right respectively.

This addition to the basic 0L-System was made to enable the grammars to model communication between cells in a plant. One reason for wanting to do this is to control growth rates. If you go back to the example of the Koch Snowflake given in section II.A., the length of the string grows exponentially with iteration. If we just consider the occurrences of the character F, then the length of the string goes as 4^n, where n is the number of iterations. Without interactions, string length must either be constant or grow/decay exponentially. For models of plants this is an obvious problem, because no plant can actually sustain exponential growth for any period of time. Even with <1,1> interactions, the growth rate cannot be suppressed to anything below logarithmic (Prusinkiewicz and Lindenmayer, 1990).

To demonstrate how interactions can be used to produce different growth rates, we will look at an example of a system that grows as the square root of the number of iterations. This example appears in the generator applet, and it is highly recommended that the reader use it to see exactly how the interactions produce this result. The system is a <1,1> IL-System. In the notation of Prusinkiewicz and Lindenmayer (1990), from which this example is taken, this is a 2L-System, as it uses context from both the left and right. The set of productions used in this system is:

U < F > F → U

U < F > X → DF

F < F > D → D

X < F > D → U

U → F

D → F

and the axiom of the system is XUFX. Essentially, what is happening here is that a message is being passed up and down the "stem" of the plant, and every time it reaches the top the plant grows a little.
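A sketch of a <1,1> derivation step makes it easy to watch this message passing. Context-sensitive productions are tried first, then context-free ones, then the identity default; treating end-of-string as "no neighbor" is a simplification that happens to suit this example:

```python
def step_2l(s, ctx_rules, free_rules):
    # ctx_rules maps (left, symbol, right) to a replacement;
    # free_rules maps a symbol alone to a replacement.
    out = []
    for i, c in enumerate(s):
        left = s[i - 1] if i > 0 else None
        right = s[i + 1] if i + 1 < len(s) else None
        if (left, c, right) in ctx_rules:
            out.append(ctx_rules[(left, c, right)])
        elif c in free_rules:
            out.append(free_rules[c])
        else:
            out.append(c)  # identity default
    return "".join(out)

# the square-root-growth system from the text
ctx_rules = {("U", "F", "F"): "U", ("U", "F", "X"): "DF",
             ("F", "F", "D"): "D", ("X", "F", "D"): "U"}
free_rules = {"U": "F", "D": "F"}
```

Starting from XUFX, the string length increases only at steps 1, 4, 9, and so on, so the length after n steps grows as the square root of n.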

Growth rates could be important to parsing as well. Generally speaking, sentences are rather short. If a grammar displays exponential growth, then the number of steps in the parsing process will be small. Because only a fixed amount of information is included in each step, this may limit the amount of information we are able to extract from the parsing process. By using a grammar with a slower growth rate, we might be able to construct parses that extract more information about the sentence.



II.D. The Generator Applet

To get a better feel for how L-Systems work, I have written an applet that can be used to generate strings for a specific grammar. The applet loosely implements 2L-Systems (sensitive to context out one character in either direction). It displays the results of iterating the system as both a string and a turtle graphic using the model discussed above. In addition to these two displays the bottom of the applet has three buttons and two text fields. The first button iterates the system. The second button resets the string to the axiom of the current system. The third button allows you to alter the system itself. The two text fields give parameters for the turtle interpretation of the string. The first specifies the length of the segments, and the second is the angle, in degrees, through which the turtle turns when it receives a '+' or '-' character.

When you click on the third button, labeled System, it brings up a window that has two drop down menus at the top and two buttons at the bottom. The drop down menus list predefined grammars. The top one of these has grammars that are applicable for turtle interpretation. The second one contains grammars of English that can be used for generation, but which don't lend themselves to turtle interpretation. The first button brings up a different window in which you can edit the system directly. The second button should be clicked when you have the grammar you want to iterate.

At the top of the edit window is a text field where you can enter the axiom for the system. Next to this is a check box where you can tell the system to parse the productions as words instead of single characters. Below these is a large text area where you can type in the productions for the system. The syntax for these productions is as follows: left < a > right : probability : result. The probability is required, because we are doing generation. If there is more than one production that could match a character, then the probabilities on those productions determine the odds of each one being selected. The left and/or right conditions can be omitted to give productions that look like a : probability : result, left < a : probability : result, or a > right : probability : result. If there is not a production for a given case, or if the probabilities for a case do not add up to one, then a production is assumed that just copies that character through to the next iteration. The done button at the bottom of the window can be clicked when all adjustments to the grammar have been made.
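The selection rule described above (probabilities filling up to 1, with the identity covering any remainder) can be sketched as follows; this is a reconstruction of the described behavior, not the applet's actual source:

```python
import random

def stochastic_step(s, rules, rng=None):
    # rules: {symbol: [(probability, replacement), ...]}.
    # If no production matches, or the probabilities sum to less
    # than 1 and the draw falls in the gap, the symbol is copied
    # through unchanged, as the applet's rule specifies.
    rng = rng or random.Random()
    out = []
    for c in s:
        draw, cumulative, replacement = rng.random(), 0.0, c
        for p, result in rules.get(c, []):
            cumulative += p
            if draw < cumulative:
                replacement = result
                break
        out.append(replacement)
    return "".join(out)
```

With a single probability-1 production the step is deterministic; with, say, two 0.5 productions for the same symbol, the derivation branches randomly.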

Applet 1 - This applet allows the user to iterate predefined or custom grammars and view turtle interpretations of them.

It should be noted that the treatment of context sensitivity in this applet is not complete. In a more complete implementation of a 2L-System, one should be able to specify a set of characters that will be ignored when searching for adjacent characters. For branching, the context sensitivity should also take into account the positions of brackets and only consider those characters that are at the same level enclosed in the same brackets.



II.E. Extended Systems

I mentioned above that there were two primary differences between 0L-Systems and the grammars of the Chomsky hierarchy, in particular context-free grammars. In an extended system, one of these differences is essentially removed. To extend a system we add another set of characters, D, to its definition. This set is some subset of the alphabet of the system, D ⊆ Σ. Essentially, the elements of this set act as terminal characters, but that isn't actually how they are implemented. Instead, in an extended system, what is changed is the definition of which strings are actually part of the language of the system. The generation of strings is carried out just the same as in the normal system, but the language of the system only contains those strings which are composed of characters from D. That is to say that, if G is the original grammar and EG is the extended grammar, then L(EG) = L(G) ∩ D*.

The only real difference between a context-free grammar and an E0L-System is the parallelism of the derivations in the E0L-System. Though this might seem to be a limitation (as I initially believed), it turns out to make the E0L-Systems more computationally powerful than context-free grammars.

The applets in this paper do not implement extended systems, mainly because they aren't really required for the objectives of those applets. In the generation applet the user can take note of whether or not a string contains characters that aren't accepted in the language. In the parsing applet that comes later in this paper, it is expected that the user input will be a string that is part of the language and contains only elements from D. Of course, it can also be constructive to enter strings that contain other characters to see how they are handled by the L-System.



II.F. Parametric L-Systems

The last addition to L-Systems that we will consider in this paper is the addition of parameters. In a parametric L-System, every character in the string can have a set of parameters associated with it. The original reason this type of addition was considered was to allow for further control of growth rates. Remember that with a 2L-System the slowest attainable non-zero growth rate is logarithmic. In many cases, though, biological forms tend to exhibit asymptotic growth. This can be achieved with parametric systems by making the turtle interpretation take the parameters of the character into account. For example, a parameter on an F or f character could indicate the length of the segment to draw, while one on a + or - would indicate the number of degrees through which the turtle should turn.

In a parametric system the productions are also enhanced with the inclusion of arithmetic conditionals on the parameters. For example, a production in a parametric 2L-System might look like this: a(x) < b(y) > c(z) : x+y=z : d(y*y)e(y). In this case, the production is only used if the character b occurs between the characters a and c, each carrying a single parameter, and the parameters on a and b sum to the parameter on c. The conditional of a parametric system can generally also contain boolean operators as well as arithmetic ones, so that complex logic statements can be constructed.
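A parametric matching step can be prototyped by carrying (symbol, value) pairs and letting each production supply a predicate and a rewrite. The encoding below is my own sketch, shown on the paper's a(x) < b(y) > c(z) : x+y=z : d(y*y)e(y) example:

```python
def parametric_step(symbols, rules):
    # symbols: list of (name, value) pairs.
    # rules: list of (matcher, rewrite); the matcher sees the left
    # neighbor, the current symbol, and the right neighbor.
    out = []
    for i, cur in enumerate(symbols):
        left = symbols[i - 1] if i > 0 else None
        right = symbols[i + 1] if i + 1 < len(symbols) else None
        for matches, rewrite in rules:
            if matches(left, cur, right):
                out.extend(rewrite(cur))
                break
        else:
            out.append(cur)  # identity default
    return out

# a(x) < b(y) > c(z) : x + y == z : d(y*y) e(y)
rule = (
    lambda l, cur, r: (l is not None and r is not None
                       and l[0] == "a" and cur[0] == "b" and r[0] == "c"
                       and l[1] + cur[1] == r[1]),
    lambda cur: [("d", cur[1] * cur[1]), ("e", cur[1])],
)
```

When the condition fails, the symbol is simply copied through, mirroring the identity default of the unparameterized systems.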

Though they won't be used in this paper, I introduce the concept of parametric systems because it seems to me that they may be a natural way to include more semantic analysis in the parsing process. At the simplest level a parameter specifying number could be placed on the elements of the grammar. However, since almost anything can be encoded as a number or set of numbers, and the parameters allow for complex logic relationships, it should be possible to use them to do full semantic parsing.




III. Comparison to Chomsky Hierarchy

Before launching into a discussion of using L-Systems to parse English, we should briefly visit the standard Chomsky hierarchy to see how L-Systems compare to current methods. Hopefully this comparison can provide a reason for using them to do the parsing in the first place.

The Chomsky hierarchy is named after Noam Chomsky, whose pioneering work in formal linguistics (Chomsky, 1957) began a revolution in the field. This work also did much to promote the use of formal grammars in the study of linguistics, and their prominence in modern transformational grammars can largely be traced back to it. One might even say that the developments this work spawned are the root of the development of L-Systems and other developmental grammars.



III. A. Definition of Chomsky Forms

Here again we will be following the more rigorous, if somewhat brief, treatment of Herman and Rozenberg (1975). We define a grammar as a 4-tuple, G = <VN, VT, P, S>, where VN is a set of non-terminal characters, VT is a set of terminal characters, P is a set of productions, and S is the start symbol. If we define V as VN ∪ VT, then the productions are of the form α → β, where α ∈ V+ and β ∈ V*. (Note to Dr. Martin: I was surprised by the fact that α is defined as it is, because I was expecting it to contain only non-terminals. Is this a typo in my reference, or do RE grammars allow terminals in the left hand side of a production?) If ρ, σ ∈ V*, then we say that ρ directly derives σ if ρ = χαδ, σ = χβδ, and α → β ∈ P. The language of a grammar G, L(G), is the set of all derivations of S that are elements of VT*.

Given this definition of a grammar, we can define the Chomsky grammars as follows. A grammar is context sensitive if for every production α → β in that grammar |α| ≤ |β|, where |α| denotes the length of the string α. A grammar is said to be context free if for every production α → β in the grammar, α is a single element of VN. Lastly, a grammar is said to be regular if every production in P is of the form A → aB or A → a, where A, B ∈ VN and a ∈ VT. Additionally, any of these grammars can contain the production S → ε as long as S does not appear on the right hand side of any of the productions in the grammar.

Given these definitions of grammars we can define different languages in the following way. The language of any grammar G is said to be a recursively enumerable language. It is also a context sensitive language if the grammar is context sensitive, and likewise for context free and regular languages. In the following discussion, we will abbreviate these forms as RE, CS, CF, and RG. We will denote the family of languages of the various grammar types as F(YZ), where YZ is our shorthand notation for a grammar type.



III. B. Relationships

As was mentioned in the beginning of the paper, most parsing of English is currently done with context free grammars. In this section we will compare the generative powers of the Chomsky grammars to the various L-Systems introduced above. The inclusion relationships between these forms are quickly summarized in Figure 2 below. In this figure an arrow from one family to another indicates inclusion of the first in the second. If there is no arrow between two families of languages, it means that they are incomparable but not disjoint. Height up the graph indicates relative generative power.

Figure 2 - This figure from Herman and Rozenberg (1975) shows the inclusion relationships between the Chomsky language families and those of various L-Systems. Notice that the family of E0L languages is between those of the CF and CS Chomsky forms, while EIL-Systems are equivalent to recursively enumerable languages, indicating that they have the full computing power of a Turing Machine.

The main features of the figure that the reader should note are the positions of the families of 0L, E0L, IL, and EIL languages with respect to the Chomsky families of languages. All of the non-extended forms of L-Systems are incomparable with regular and context free grammars. The simplest proof of this is by noting that the empty language is not a member of these families of languages, because in all cases the axiom of the system is an element of the language. Though this proves that they are incomparable, there are other more substantive examples of this as well.

As we will discuss more in the next section, the extended systems are of particular interest for parsing. It is interesting to note the positions of the E0L and EIL-Systems in this figure. First, the family of E0L languages is a superset of the context free languages and a subset of the context sensitive languages. Because of this intermediate position, it could provide a way to incorporate parts of a language that go beyond context free grammars without bringing the full power of context sensitive grammars to bear. The interesting point to note about the family of EIL languages is that it is equivalent to the family of recursively enumerable languages. This implies that it has the full computational power of a Turing Machine. For simple parsing it would be overkill to use so powerful a system.

It was mentioned earlier that the parallelism of L-Systems gives them extra generating power, even though at face value it seems like a constraint on the system. To illustrate how this works, we will look at an E0L-System that is not context free. In fact, it is probably the simplest such example there is, a textbook case of a language that context free grammars can't produce: a^n b^n a^n. This language is generated by the following E0L-System.

Σ = {S, A, B, A', B', F, a, b}

D = {a, b}, ω = S

S → ABA

A → a

A → AA'

B → b

B → BB'

A' → a

A' → A'

B' → b

B' → B'

a → F

F → F

b → F

The behavior of this system is illustrated by two different parse trees in figure 3. The parse tree on the left produces a string that is part of the language of the system; the one on the right doesn't. Notice that, because all of the symbols are advanced in parallel, if the characters a and b are produced at different depths, no valid string is ever produced.

Figure 3 - This figure shows two different derivations from an E0L-System with the language a^n b^n a^n. The tree on the left produces a string which is accepted into the language, while the one on the right doesn't. This demonstrates the additional power provided by parallelism.
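This effect is easy to verify by brute force: enumerate every parallel derivation for a few steps and keep only the strings over D. In the sketch below, A' and B' are renamed P and Q so that every symbol is a single character:

```python
from itertools import product

# the E0L system above, with A' and B' renamed P and Q
rules = {
    "S": ["ABA"],
    "A": ["a", "AP"], "P": ["a", "P"],
    "B": ["b", "BQ"], "Q": ["b", "Q"],
    "a": ["F"], "b": ["F"], "F": ["F"],
}
ACCEPT = {"a", "b"}  # the set D of the extended system

def all_steps(s):
    # every string derivable from s in one parallel step
    return {"".join(choice) for choice in product(*(rules[c] for c in s))}

def language_up_to(depth):
    # strings over D reachable from the axiom S in at most depth steps
    level, found = {"S"}, set()
    for _ in range(depth):
        level = {t for s in level for t in all_steps(s)}
        found |= {s for s in level if set(s) <= ACCEPT}
    return found
```

Three steps yield exactly {aba, aabbaa}: any derivation that produces an a or b too early turns it into F at the next step, and F never goes away.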

One should notice that parametric systems are not present in this figure. Though no reference was found stating this, I would assume that, because of the ability to write complex arithmetic and logic functions into the productions of parametric systems, they have the computing ability of a Turing Machine. Actually, though a parametric 0L-System should have the computing power of a Turing Machine, it cannot be equivalent to the family of RE languages. This is because, as noted above for the other non-extended L-Systems, its family does not include the empty language. The computing power of parametric systems is contained more in the values of the parameters than in the final production strings, though it is possible that F(Parametric 0L) = F(RE) − {∅}.

Note to Dr. Martin: There was one thing in the Herman and Rozenberg book that confused me a bit, and I think I know what the problem is, but I still feel the question should be asked. They make the statement that for any grammar, G, as defined above for the Chomsky grammars, there is an associated <0,1> IL-System, H, such that L(G) = L(H) ∩ VT*. However, this is equivalent to the statement that for any grammar there is an equivalent Extended <1,0> IL-System. At the same time, though, they go to great lengths to show that the families of <p,0>, <0,p>, and <k,l> (k, l > 0) systems are not equivalent. In fact, the <0,p> and <p,0> systems are disjoint, though they are both subsets of <k,l> for k + l ≥ p + 1. The conclusion I reach from this is that the family of <1,1> systems is a proper superset of the family of all languages produced by the defined Chomsky type grammars. I guess I was assuming that that set of languages was equal to the family of RE languages, but that can't be the case, because then the family of <1,1> systems would be larger than that, which would imply greater computational power than a Turing Machine. The question then becomes, what is the definition of an RE grammar (if such a thing exists), since it has to be more encompassing than the definition given above.




IV. Modifications for Parsing

Now that we have taken a look at L-Systems and compared them to the standard grammars used for parsing, we can turn to the problem of parsing with L-Systems. There were quite a few such systems discussed, so the first task is to decide which one or ones are best suited for this application. Because we only want the language of the grammar to contain valid English sentences, the system that we choose has to be an extended form of some L-System. The simplest way to illustrate this is that if it isn't, then the axiom of the system will be in the language. Since it is unlikely that we will find a working system where the axiom and all strings derived from it are valid English sentences, we must assume that an extended system is required, where only proper English words are in the accepted set of symbols.

There were still three types of extended systems shown in Figure 2, and still more that weren't shown. As mentioned in the previous section, we should not need a full-blown Turing Machine for parsing, and any sentences that would require such a machine go far beyond the scope of this paper. It was also mentioned in the second section of the paper that tables don't appear to be of much use for parsing. For these reasons we will choose the class of E0L-Systems to construct our parsing grammars. If case sensitivity were included, we might also make these systems parametric, but that won't be done here.

Even with an E0L-System there are modifications that must be made to the description above in order to make the problem tractable. First of all, the systems above had alphabets that consisted of single characters. For parsing, it is more appropriate to have the symbols in the language consist of complete words as they are the atomic units of the parse.

Somewhat ironically, the other change that needs to be made is a change back to the more formal definition of an L-System. We mentioned earlier that, for simplicity, we would assume productions of the form a → a in cases where that is the only production from a (essentially, when a acts like a terminal). For parsing, because we will choose a bottom-up parsing method, this becomes less practical, and these types of productions need to once again be included explicitly. For parsing we don't want to make assumptions about the grammar, and default behaviors for bottom-up tracing are more complex than they were in the top-down generation scheme. This having been said, we will turn to parsing schemes.




V. Top-down versus Bottom-up

The two primary classes of parsing algorithms are top-down and bottom-up. The names describe how they work pretty well if one has the right mental image. The image that one should have is of a parse tree, though it is oriented more like a root system. The start symbol or axiom of the system is at the top. Below that is the product of applying the grammar to that string once. Generally, lines are drawn between each character and the characters that it becomes. Repeating this process produces a structure that resembles a root system.

In top-down parsing, the parser begins at the top with the start symbol and works its way down, creating the possible new layers as it goes. In bottom-up parsing, one begins with the sentence and works up, grouping words together into phrases until reaching the axiom or start symbol. There are two reasons that the top-down approach does not work with L-Systems; they are the same things that make L-Systems different from Chomsky type grammars: parallelism and a lack of terminal characters. Generally, in a top-down approach, the algorithm will work down the leftmost (or rightmost) unfinished branch until it can't go any further. As soon as it reaches a terminal symbol on a path, it can stop and move to the next unfinished path. Because of the parallel nature of L-Systems, it isn't possible to only follow the leftmost branch (though this will work for some grammars). This means that the tree must be expanded in a breadth-first manner, which is much more memory intensive. Even if this weren't the case, the fact that there are no terminals in L-Systems would cause the normal approach to fall into an infinite loop. Even if we are dealing with an E0L-System, reaching a character that is part of the accept set does not imply the branch is done, because it can still be involved in productions.

Luckily, neither parallelism nor the lack of terminals much affects the workings of bottom-up parsing. The lack of terminals is not a problem because the parser begins with the string, which is assumed to be composed of valid characters or words. If anything, the parallelism makes the parsing easier because it removes the ambiguity of when words should be joined into phrases: every symbol must have a production applied to it at every step in the parse. The one difficulty that bottom-up parsers face is with erasing productions. For this reason our applet is actually limited to PE0L-Systems. Any erasing productions that are added to the system will be ignored by the parser.
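A single bottom-up step can be sketched as follows. Because every symbol takes a production at every step, reversing one derivation step means splitting the string so that every piece is the right-hand side of some production. The tiny grammar here is illustrative only, and the helper is a simplified sketch (it ignores probabilities and, like the applet, assumes no erasing productions).

```python
# One reverse parallel-rewriting step for a P0L-System: split the string
# into right-hand sides, replacing each piece by the symbol it came from.
def reduce_once(string, productions):
    """Return all strings reachable by one bottom-up step.
    productions: list of (left_symbol, right_hand_side) pairs."""
    results = set()
    def walk(rest, built):
        if not rest:
            results.add(built)
            return
        for left, rhs in productions:
            if rest.startswith(rhs):
                walk(rest[len(rhs):], built + left)
    walk(string, "")
    return results

prods = [("S", "ab"), ("a", "a"), ("b", "b")]
print(reduce_once("abab", prods))
# -> {'SS', 'Sab', 'abS', 'abab'}
```

Repeating this step until only the axiom remains (and remembering which reductions were made) yields the parse trees the applet displays.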

It is interesting to note that more formal work has been done on the problem of parsing various types of L-Systems. In fact, it has been proven that E0L-Systems can be parsed in O(n^4) time, while the problem of parsing T0L and ET0L-Systems is NP-complete (Opatrny and Culik, 1976).




VI. Bottom-up Parsing Code

The parsing applet below has a rather simple interface. At the top of the applet is a text field where the user can input the string to be parsed and a button that tells the applet to parse the current string. At the bottom is a button which, when clicked, brings up the same grammar-selecting window as was used in the first applet in Section II.D. For this applet, though, the second drop-down menu on this window is most applicable, as it contains various example English grammars. Between these two is a graphical display of all of the parses that were found for the given sentence. All that is drawn here are the lines or edges of the tree. To get a more complete view, just click on the one you are interested in and a window showing it with the grammar elements will come up. This can be done for several parses at the same time to compare them side by side.

Since the grammars can be viewed in their entirety in the applet, I won't write them out here. Instead, the following paragraphs give brief descriptions of each and possibly an interesting example or two. As with the generator applet, the user does have the ability to edit the productions in this applet, so more words or states can be added. Remember, though, that if a word is added, then in most cases a production of some form must be added where that word is on the left side of the production.

The first grammar would be used on a sentence that has had the parts of speech marked in it. The tags that it accepts as parts of speech are Det, Adj, Noun, Verb, Prep, NounVerb, and NounVerbAdj. The last two can be used to represent words that fill multiple parts of speech. The grammar itself is largely taken from Chapter 9 of the course book and is nowhere near a complete grammar of English. However, having looked at Quirk, et al. (1972), it would take a very long time to parse any sentence if the grammar were complete. As it stands, with this grammar the parsing process can take a while. An example of this grammar might be the parts of speech for the sentence, "The dog on the hill with the collar wants a bone." For the given set of valid words this would become "Det NounVerb Prep Det Noun Prep Det Noun Verb Det Noun". When typing in expressions like this, the user should note that capitalization matters in the applet. This sentence has two possible parses under this grammar. The first one is what we probably think of as correct, with a dog wearing a collar sitting on a hill. The second parse has the dog sitting on the hill, but now the hill is wearing the collar.
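The pre-tagging step this grammar expects can be sketched as a simple lexicon lookup. The lexicon below is invented for this one example sentence; it is not the applet's built-in word list.

```python
# A hypothetical sketch of turning a plain sentence into the tag string
# the first grammar parses. "dog" is tagged NounVerb because it can act
# as either a noun or a verb.
lexicon = {
    "the": "Det", "a": "Det",
    "dog": "NounVerb",
    "hill": "Noun", "collar": "Noun", "bone": "Noun",
    "on": "Prep", "with": "Prep",
    "wants": "Verb",
}

def tag(sentence):
    return " ".join(lexicon[w] for w in sentence.lower().split())

print(tag("The dog on the hill with the collar wants a bone"))
# -> Det NounVerb Prep Det Noun Prep Det Noun Verb Det Noun
```

The resulting tag string is exactly what would be typed into the applet (remembering that capitalization of the tags matters there).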

One fact to note about this grammar is that while this is an L-System, it does not explicitly take advantage of that fact and as such could just as easily be written as a context free grammar. In order to take advantage of the fact that we are using an L-System, one of the accepted states would need to have a transition to a non-accepted state instead of just going to itself. This is analogous to the example E0L-System given previously for generating aabncn.

The second grammar is the same as the first one except that a number of words have been put in so that a small number of sentences can be entered in plain English (without capitalization). Of course, the choice of words was made to create interesting parses. For that reason three words have been included that are both nouns and verbs, and one word has been included that operates as a noun, a verb, or an adjective. Because the number of words is relatively large they aren't listed here. They can be easily viewed in the applet. It is also a rather simple procedure to add new words to this grammar. Note that for parsing the probabilities don't matter, so you need not take time to balance all the probabilities every time you add a new production.

An interesting sentence from the choice of words available here is, "the mean man lived in the mean loft". The parsing of this sentence produces four possible parse trees because mean is both a noun and an adjective and those parts of speech can be interchanged without any effect in many sentences. However, the meanings of the noun and adjective uses of the word mean are very different, and hence only one of the four possible parse trees actually makes much sense. It is the one where a rude man lives in an average loft. The verb form of mean is also included in this grammar, but I couldn't think of a sentence in which all three forms were used that needed only the words I had added to the grammar.

The third grammar is just like the first except that here we have included a production of the <1,0> IL form. This alteration was made to the production that takes a verb phrase to a verb followed by a full sentence. Now for this production to be used, the symbol to the left of the verb phrase must be a noun phrase. Though I don't believe this rule is actually valid for English, it still demonstrates the possible use of EIL-Systems for parsing. Since the applet has the ability to handle this, I figured it should be tried.
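The context check behind a <1,0> IL production can be sketched as follows. The function and symbol names here are illustrative, not the applet's internals; the point is only that whether a production may fire at a position now depends on the neighboring symbol.

```python
# A sketch of checking a <1,0> IL production: one symbol of left
# context, no right context. The production may fire at position i only
# if the symbol there matches its predecessor AND the symbol to its
# left matches the required context.
def applies(symbols, i, left_context, predecessor):
    """True if the context-sensitive production may fire at position i."""
    if symbols[i] != predecessor:
        return False
    return i > 0 and symbols[i - 1] == left_context

phrase = ["NP", "VP", "NP"]
print(applies(phrase, 1, "NP", "VP"))   # -> True  (NP sits left of VP)
print(applies(phrase, 0, "NP", "VP"))   # -> False (no left neighbor)
```

In the bottom-up parser this check is made before a reduction using such a production is accepted, which is what restricts the verb-phrase rule above to positions preceded by a noun phrase.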

An example for this would be "Verb Det Noun Verb". Running this with the first grammar produces one parse while the second grammar produces no parses. However, if it is modified by adding another "Noun" to the beginning, then both grammars produce the same parse tree.

Applet 2 - This applet is designed to parse strings from the language of a grammar. It has the ability to parse <1,1> PIL-Systems. It has three predesigned grammars of English that the user can edit. Warning: parsing long sentences can take a while.




VII. Concluding Remarks

In the course of this paper we have examined a variety of types of L-Systems and how they compare to the Chomsky hierarchy. We have also discussed how they might be used in parsing and done a few small tests to show that parsing with these constructs is possible. Unfortunately, there are few hard conclusions that can be drawn at this point. It definitely appears that E0L-Systems can be used for parsing. However, it is not clear whether, when we do so, we use any of the abilities of E0L-Systems that go beyond what can be done with context free grammars. I do feel, though, that the possibility exists that L-Systems might be a better choice than what is currently being used. However, it could be that the mode of thinking required to take advantage of this is very different from what is currently used with context free grammars. Only more work and thought on the matter will allow us to address these questions.



Bibliography




This document was created with StarOffice Personal Edition. It didn't put this here; I did, because I think it's a remarkably good office suite considering that it is free for a number of platforms.