The main aim of automata is to calculate automata that will solve the word problem for a specified rewriting system. Unless you tell it otherwise, or the nature of the rewriting system precludes it, automata will try to solve the word problem both by finding a confluent rewriting system and by finding an automatic structure. It will run forever until it succeeds in doing at least one of these, or you interrupt it, or it gives up because limits you have placed on it have been exceeded, or insufficient resources are available for it to continue. In all the latter cases you will be left with automata that can probably reduce words correctly up to somewhere near the length of the equations MAF was investigating when it stopped. From now on these will always be referred to as provisional automata. They can be distinguished from definitely correct automata by the suffix, which is always .psuffix where .suffix would be the normal suffix for the relevant automaton type.
Although automata has quite a few command line options, you often won't need to bother with any of them. The first requirement for successful use of the program is that you should be able to create input files.
When using MAF to analyse a group or monoid, the best method of creating an input rewriting system, and the one always employed by the author, is to copy an existing one and then edit it, using some editor that is happy to save files in plain text format. Let us briefly look at another example and then discuss it more fully than we did in the introduction. (A more formal, though incomplete, definition of the syntax of these files than is given here can be found in the file fsa_format, which comes from KBMAG).
#Free nilpotent group of rank 2 and class 2
_RWS := rec
(
  isRWS := true,
  generatorOrder := [c,C,b,B,a,A],
  ordering := "recursive",
  inverses := [C,c,B,b,A,a],
  equations :=
  [
    [b*a,a*b*c],
    [c*a,a*c],
    [c*b,b*c]
  ]
);
The first line is a comment, and is ignored by programs. (Comments are preceded by the # symbol and last to the end of the line.) The next line, _RWS := rec, begins the definition of a GAP record, with the name of the record being defined appearing on the left. To comply with the KBMAG GAP interface, we name our rewriting system _RWS. A record definition consists of a number of comma-separated field definitions, which are enclosed by matching opening and closing parentheses ( and ), and which describe the record in detail. In the present case the first field definition states that this is a definition of a rewriting system. This line must always be present. Note the use of := rather than = in field definitions, and remember that each field within the object is separated from the next by a comma (but there is no comma after the last field definition). It is easy to forget these points, and neither MAF nor KBMAG is at all forgiving of syntax errors. Note also the use of square brackets in several places. Such brackets always enclose a list of similar items, possibly another list, as in the case of the equations field. A list can have gaps, indicated by there being no text other than spaces between two or more adjacent commas. A list can also be empty, which is indicated by there being nothing at all between the opening [ and the closing ].
First comes a list of generators. These must generate the structure as a monoid, even if it is a group; this means inverses should be included in the generator list. The field is named generatorOrder to emphasize the fact that the order is relevant - it will affect the ordering of words in the alphabet, and hence which words are "reduced". The names of the generators should be alphanumeric strings. Case is significant. It is recommended that you use single letters, and use case-change for inversion, as in this example. Another policy is to use a common prefix followed by numbers; for example, a file output by GAP might have its generators named G.1, G.2, ..., G.n for some n ≥ 0. It is also permissible to name a generator name^-1, where name is the name of another generator, and the two are mutually inverse. Any other use of ^-1 in a name is forbidden.
The ordering field specifies which ordering on words in the input alphabet is to be used. Although there is a default ordering, which is "shortlex", this field is required by the GAP interface, so it is recommended that it should always be included. Most of the time you probably will want to use "shortlex" ordering. The ordering options are discussed in more detail in Word Ordering Methods. Note that the double quotes are required around the value given for this field. The ordering field must come before the list of equations.
The inverses field supplies the list of two-sided inverses of the generators, in the same order as the generators. This field must be present, but, in general, not all generators need have inverses, so the list could be empty, or contain gaps. For example, if only the first and third generators have inverses, and these are named i1 and i2, then the list would be written as [i1,,i2]. However, if generator A is listed as the inverse of a, then a must also be listed as the inverse of A (any mistakes here will result in generators you intended to be distinct elements being equal). One sided inverses must not be specified here, but can be specified using equations. Note that you need to change the order of the inverses if you change the order of the generators.
Finally, there comes the equations field. This consists of a list of lists. Each inner list constitutes a defining relation for the monoid, and should have two entries, which are words in the generators, and are the left and right hand sides of the relation. The empty word is denoted by IdWord. A word may contain brackets nested to any level, and positive powers. So, for example a*(b*(A*c)^4)^3*c^12 is a valid word in the generators in the example above. Negative powers are permitted on generators for which an inverse has been specified. MAF also permits negative powers on expressions in most contexts, but KBMAG does not. Neither KBMAG nor MAF accepts expressions of the form a^b as an abbreviation for b^-1*a*b. It is not necessary, but not harmful, to include defining relations of type [a*A,IdWord] in the list of equations, where the generators a and A have already been specified as mutually inverse in the "inverses" field; this has not been done in the example above. (If you do this then MAF will issue a diagnostic noting that an axiom is redundant.) In MAF, you do not need to worry at all about which of two terms should be the LHS and which the RHS. Nor do you need to worry about "balancing" the equation (which means replacing a generator on the left or right by its inverse on the right or left, with the aim of equalising so far as possible the number of generators on each side). In KBMAG this might make a difference, at least in the case of a recursive ordering being used, but it is very unlikely to do so in MAF.
There are a number of other optional fields, which do not occur in this example, and which provide further detailed instructions for automata or more particularly KBMAG's kbprog. Such optional fields are listed in the Program Usage section.
MAF supports all the methods supported by KBMAG, and some others, and the code is constructed so as to make it easy to add more if required. With some orderings, such as shortlex, automatic structures are assumed to be required; for certain other orderings they are only generated if explicitly requested. You can use command line options to over-ride the default behaviour.
In MAF the ordering method is always set in the input file, and, unlike KBMAG, the use of command line options to change the ordering is not supported. This is mainly because a user could generate the automata for a different ordering and then inadvertently use them as though they applied to the ordering specified in the input file.
The following options are currently supported as values for the ordering field:
This ordering, "weighted lex ordering", requires a second field, weight, to appear in the input file, containing a list of positive integers, one for each generator, like this: weight := [2,1,6,3,4],
The length of the list of weights must be equal to the number of generators. The assignment of the weight field must come after the generatorOrder field and before the equations field. Using weight values above 65535 could lead to erroneous results, because integer overflows will be possible on very long words.
The weight of a word is computed by adding up the weights of the generators in the word. Heavier words come later in the ordering than all lighter words. Amongst words of equal weight lexicographic ordering is used to decide which word comes first.
This ordering, "weighted shortlex ordering", also requires the weight field, and is defined almost identically to "wtlex" order. The only difference is that in the definition above "Amongst words of equal weight lexicographic ordering is used to decide which word comes first" is replaced by "Amongst words of equal weight shortlex ordering is used to decide which word comes first".
This ordering is very similar to "wtlex", and often may not produce anything different from it. It will give very different results in the following type of situation: a,b,c all appear as generators in that order, a and b have weight 1, c has weight 2, and a*b and c are the same element. In "wtlex" ordering a*b is less than c, so c is eliminated. In "wtshortlex" ordering a*b is greater than c, so c is not eliminated.
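The situation just described can be sketched in a few lines of code. This is a hypothetical Python illustration, not MAF's implementation; the function and variable names are invented, and the weights are those of the example above (a and b of weight 1, c of weight 2, with a*b and c representing the same element):

```python
# Illustrative sketch of "wtlex" vs "wtshortlex" comparison (not MAF code).
# Generator order a < b < c; weights a=1, b=1, c=2.
GEN_RANK = {"a": 0, "b": 1, "c": 2}
WEIGHT = {"a": 1, "b": 1, "c": 2}

def weight(word):
    return sum(WEIGHT[g] for g in word)

def wtlex_less(u, v):
    # Lighter words come first; equal weight falls back to plain lex.
    if weight(u) != weight(v):
        return weight(u) < weight(v)
    return [GEN_RANK[g] for g in u] < [GEN_RANK[g] for g in v]

def wtshortlex_less(u, v):
    # Lighter words come first; equal weight falls back to shortlex.
    if weight(u) != weight(v):
        return weight(u) < weight(v)
    if len(u) != len(v):
        return len(u) < len(v)
    return [GEN_RANK[g] for g in u] < [GEN_RANK[g] for g in v]

print(wtlex_less("ab", "c"))       # True: a*b < c, so c is eliminated
print(wtshortlex_less("ab", "c"))  # False: c < a*b, so c survives
```

Both words have weight 2, so only the tie-break differs: "wtlex" compares lexicographically (a*b wins), while "wtshortlex" compares by length first (c wins).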
Use a recursive ordering on strings. There are various ways to define this. Perhaps the quickest is as follows:
Let u and v be strings in the generators.
If one of u and v, say v, is empty, then u ≥ v.
Otherwise, let u=u′a and v=v′b, where a and b are generators.
Then u > v if and only if at least one of the following holds:
In a confluent rewriting system using recursive ordering, the reduced words will have as few of the later generators in the ordering as possible, and these will come as early in the word as possible. If b is at a higher level than a then a*b comes later in the ordering than any word of the form b*a^n. More problematically (from the point of view of achieving success with MAF), if c > b > a in the ordering, then a*c comes later in the ordering than c*w where w is any word at all containing just a and b.
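The comparison can be sketched as follows. This is a hedged Python illustration of the usual recursive (path) comparison on strings, as used by KBMAG, not MAF's actual code (which lives in alphabet.cpp); the names are invented, and rank[g] stands for the position of g in generatorOrder, with later generators higher:

```python
# Illustrative sketch of (left-)recursive ordering, assuming the usual
# KBMAG-style formulation: u = u'a, v = v'b, and u > v if either
# a = b and u' > v', or a > b and u > v', or a < b and u' > v.

def rec_greater(u, v, rank):
    """True if u > v in recursive ordering."""
    if not v:
        return bool(u)            # any nonempty word beats the empty word
    if not u:
        return False
    a, b = u[-1], v[-1]           # last letters: u = u'a, v = v'b
    if rank[a] == rank[b]:
        return rec_greater(u[:-1], v[:-1], rank)   # a = b: compare u', v'
    if rank[a] > rank[b]:
        return rec_greater(u, v[:-1], rank)        # a > b: compare u, v'
    return rec_greater(u[:-1], v, rank)            # a < b: compare u', v

rank = {"a": 0, "b": 1, "c": 2}
# a*b comes later than b*a^n for any n:
print(rec_greater("ab", "baaaa", rank))   # True
# a*c comes later than c*w for any word w in a and b:
print(rec_greater("ac", "cbbab", rank))   # True
```

The two printed checks confirm the claims made above about words of the form b*a^n and c*w.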
This ordering, "right-recursive", is very similar to recursive ordering. The only difference from the definition above is that instead of:
"Otherwise, let u=u′a and v=v′b,..." we have
"Otherwise, let u=au′ and v=bv′, ..."
As with recursive ordering, the reduced words will have as few of the later generators in the ordering as possible, but in this ordering they will come as late in the word as possible. "rt_recursive" order is slightly more like "shortlex" ordering, in that it will prefer a*b to b*a. In fact for some Coxeter groups the two orderings give the same results.
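The right-recursive variant can be sketched by moving the comparison to the left-hand end of each word. Again this is a hypothetical Python illustration with invented names, not MAF code:

```python
# Illustrative sketch of right-recursive ordering: the same recursion as
# recursive ordering, but reading from the left (u = au', v = bv').

def rt_rec_greater(u, v, rank):
    """True if u > v in right-recursive ordering."""
    if not v:
        return bool(u)
    if not u:
        return False
    a, b = u[0], v[0]             # first letters: u = au', v = bv'
    if rank[a] == rank[b]:
        return rt_rec_greater(u[1:], v[1:], rank)
    if rank[a] > rank[b]:
        return rt_rec_greater(u, v[1:], rank)
    return rt_rec_greater(u[1:], v, rank)

rank = {"a": 0, "b": 1}
# "rt_recursive" prefers a*b, like shortlex: b*a is the greater word.
print(rt_rec_greater("ba", "ab", rank))   # True
```

Under left-recursive ordering the same pair compares the other way round, which is why reduced words push the later generators towards the end of the word here rather than the start.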
"Wreath Product" ordering is rather complex, as can be seen if you examine the code for comparing two words using this ordering in the source file alphabet.cpp. In this ordering each generator must be assigned a level via the "level" field. In order to compare two words we proceed as follows:
Recursive ordering is the special case of wreath product ordering in which the level of generator number i is i. On the other hand, if all the generators have the same level, then this ordering is the same as shortlex. So depending on the levels given to the generators "wreathprod" ordering is capable of behaving both like "shortlex" ordering and like "recursive" ordering. (This is part of the reason for its use in coset systems, which are discussed in MAF for Subgroups and Cosets.) Note that we could have used some other way of comparing the maximal subwords in step 3 above rather than shortlex. For example, a "wtlex" comparison. Also, in step 4 we might choose to look to the right of the maximal subwords rather than the left. So this ordering could be more accurately described as "left wreath product order over left shortlex".
Here is an example of how the level field might look in an input file: level := [4,3,2,1],
This obviously assigns levels 4,3,2 and 1 to the generators. The length of the list of levels must be equal to the number of generators. The assignment of the level field must come after the generatorOrder field and before the list of equations. The smallest permitted value for the level of any generator is 1.
If all the generators are at different levels, then the ordering is equivalent to recursive ordering with the generators re-ordered according to increasing level. Performance will be marginally better if you actually change the input file to use that ordering.
Good results are often obtained with this ordering if each generator has the same level as its inverse, but increasing levels are used otherwise. Sometimes a symmetry between one or more generators may suggest other ways of grouping generators together at the same level. Another possibility, of interest in the case where all the generators are torsion elements, is to assign, for each mutually inverse pair of generators, one generator to level 1, and the other to level 2.
The remaining word ordering methods are new to MAF. The author does not know if they have been described and named elsewhere. Several of them are only included because the author used them for testing purposes, or was interested in experimenting with them.
Words are ordered so that all words of a particular length come before any longer word. Amongst words of equal length, recursive ordering is used, with words earlier in the recursive ordering coming first.
The author has not found this ordering to be useful as yet. For finite groups, this ordering and the next usually give rise to the least compact rewriting systems and automatic structures of any of the supported orderings.
Words are ordered so that all words of a particular length come before any longer word. Amongst words of equal length, right recursive ordering is used, with words earlier in the right recursive ordering coming first.
The author has not found this ordering to be useful as yet.
Words are ordered so that all words of a particular length come before any longer word. Amongst words of equal length, weighted lex (wtlex) ordering is used, with words earlier in the weighted lex ordering coming first. This ordering therefore uses the "weight" field.
The author has not found this ordering to be especially useful as yet, though it is an ordering for which one can often construct an automatic structure.
Words are ordered so that all words of a particular length come before any longer word. Amongst words of equal length, right lexicographical ordering is used, with words earlier in the ordering coming first. Right lexicographical ordering is like ordinary lexicographical ordering but with the words read from right to left rather than left to right. In other words, it is not the first different generator that is decisive between two words of the same length, but the last.
This ordering will usually give similar results to ordinary shortlex, but might be more convenient in some cases, for example if you want to use the word-acceptor automaton to control a drawing application, and the words represent some sequence of transformations performed left-first rather than right-first. (So that the action of a word a*b*c on some point x might be represented x*a*b*c rather than a*b*c*x.) It is perfectly possible for a rewriting system to be automatic using "short_rtlex" order when it is not automatic using "shortlex" order.
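A short sketch makes the difference from shortlex concrete. This is a hypothetical Python illustration (MAF uses highly optimised code for this ordering; the names here are invented):

```python
# Illustrative sketch of "short_rtlex": length first, then lexicographic
# comparison reading the words right to left instead of left to right.

def short_rtlex_less(u, v, rank):
    if len(u) != len(v):
        return len(u) < len(v)
    return ([rank[g] for g in reversed(u)] <
            [rank[g] for g in reversed(v)])

rank = {"a": 0, "b": 1}
# Under shortlex a*b < b*a; here the LAST letter is decisive, so b*a < a*b:
print(short_rtlex_less("ba", "ab", rank))   # True
```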
(fptp stands for first past the post). Words are ordered so that all words of a particular length come before any longer word. Words of equal length are ordered by considering the left-most occurrence of the latest generator that occurs in either word, ignoring all positions where the generators are equal. Whichever word has this generator is the greater.
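The tie-break just described can be sketched as follows. This is a hedged Python illustration of the rule as stated above, with invented names, not MAF's code:

```python
# Illustrative sketch of "short_fptp" comparison between equal-length words:
# among the positions where the words differ, find the latest generator
# occurring in either word; whichever word holds its leftmost occurrence
# at a differing position is the greater.

def fptp_greater(u, v, rank):
    if len(u) != len(v):
        return len(u) > len(v)
    diffs = [i for i in range(len(u)) if u[i] != v[i]]
    if not diffs:
        return False              # identical words
    top = max(max(rank[u[i]], rank[v[i]]) for i in diffs)
    for i in diffs:
        if rank[u[i]] == top:
            return True           # u holds the winning generator first
        if rank[v[i]] == top:
            return False          # v holds it first

rank = {"a": 0, "b": 1, "c": 2}
# Shortlex would put a*c before b*a (first letters a < b), but here the
# latest generator c is decisive, and a*c is the word that holds it:
print(fptp_greater("ac", "ba", rank))   # True
```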
The remaining orderings all use the level field, and are more interesting.
This ordering is very similar in its definition to wreathprod ordering, except that at step 4 we look to the right of the maximal subwords, rather than the left. This order could therefore be described as "right wreath product order over left shortlex". This order is actually a better choice for coset systems than "wreathprod" ordering, because it works correctly when the rewriting system for the group does not use shortlex order.
Interestingly, this order often gives rise to more compact automata than ordinary "wreathprod" ordering.
This ordering uses the level field, but can be considered to be the limiting case of weighted lex ordering in which the weights of two generators, if different, differ by an infinite ratio. When comparing two words, MAF first compares the number of generators in each word at the highest level; if these counts differ, the word with fewer generators at this level comes earlier. The words are then compared at the next level, and so on. If the words are still equal at every level (so that they are certainly the same length), the word which comes first in lexicographical ordering comes earlier.
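The level-by-level count comparison can be sketched as follows. This is a hypothetical Python illustration of the rule described above, with invented names, not MAF code:

```python
# Illustrative sketch of the "grouped" comparison: count generators at
# each level, from the highest level down; fewer high-level generators
# means the word comes earlier. Ties fall through to plain lex.

def grouped_less(u, v, level, rank):
    for lv in sorted(set(level.values()), reverse=True):
        cu = sum(1 for g in u if level[g] == lv)
        cv = sum(1 for g in v if level[g] == lv)
        if cu != cv:
            return cu < cv
    # counts agree at every level, so the words have the same length
    return [rank[g] for g in u] < [rank[g] for g in v]

level = {"a": 1, "b": 1, "c": 2}
rank = {"a": 0, "b": 1, "c": 2}
# A long word in a and b still comes before any word containing c:
print(grouped_less("ababab", "ca", level, rank))   # True
```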
This ordering and the next two have some nice properties, which are discussed in the next subsection.
This ordering uses the level field, and is intermediate between weighted lex ordering and a wreath ordering. The words are first compared by ignoring all the letters except those at the highest level. If they are different, the word whose remaining letters come first in shortlex ordering comes first. If they are equal we then compare the words by ignoring all the letters except those at the next highest level (now also ignoring the letters at the very highest level). We keep doing this until we have compared the words at every level. If they are still equal after all that, then the word which comes first in lexicographical ordering comes first.
Note that this ordering can only differ from grouped ordering if at least two generators are at the same level.
This ordering is similar to a recursive ordering in that it will make the word arbitrarily longer just to improve the word lexicographically at a higher level, but unlike it in that equations such as a*c=c*w are impossible unless w contains at most one a, and no generators of the same or higher level than a.
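The level-by-level subword comparison described above can be sketched like this. Again a hedged Python illustration with invented names, not MAF's implementation:

```python
# Illustrative sketch of the "ranked" comparison: at each level, from the
# highest down, compare the subwords formed by the letters at that level
# using shortlex; the first level at which they differ decides. If every
# level subword agrees, plain lex on the whole words breaks the tie.

def ranked_less(u, v, level, rank):
    for lv in sorted(set(level.values()), reverse=True):
        pu = [g for g in u if level[g] == lv]   # level-lv subword of u
        pv = [g for g in v if level[g] == lv]
        if pu != pv:
            if len(pu) != len(pv):
                return len(pu) < len(pv)
            return [rank[g] for g in pu] < [rank[g] for g in pv]
    return [rank[g] for g in u] < [rank[g] for g in v]

level = {"a": 1, "b": 1, "c": 2}
rank = {"a": 0, "b": 1, "c": 2}
# The level-2 subwords of a*c*b and b*a*c are both just "c", so the
# comparison falls through to the level-1 subwords "ab" and "ba":
print(ranked_less("acb", "bac", level, rank))   # True
```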
This ordering is very similar to the last, but in the event of a draw at a higher level the higher level letters are still taken into account when the generators from the next level down are compared. Therefore this ordering is slightly more like rt_wreathprod ordering than "ranked" is.
Both the grouped and ranked type orders tend to produce rewriting systems with fewer equations than shortlex, but more than "recursive" or "wreathprod" type orders.
MAF does not currently support building word acceptors from word difference machines for "ranked" or "nestedrank" order.
The author hoped that these three orderings would prove genuinely useful for rewriting systems. Unfortunately, although they tend to produce confluent systems with far fewer equations than shortlex systems, and are less prone than recursive orderings to pathological behaviour, they also seem to be less likely to collapse quickly, and to be much larger before the collapse. The author has also observed that recursive and wreathprod rewriting systems seem to be more likely to collapse quickly than rt_recursive and rt_wreathprod ones.
short_accented_lex is almost identical to shortlex. However, if at the first place where the words differ the letters are from the same level, the decision on the order of the words is only provisional and can be over-ridden by the first position in which the generators are from different levels. This ordering is similar to how words are ordered in French and other European languages with accented characters, hence the name.
In short_multiaccentedlex, if at a later position in the word the order has still not been decided by the occurrence of generators from different levels, and again there are two different generators from the same level, and both generators are at a higher level than has been used to make a provisional decision before, then the provisional decision is over-ridden.
short_fptp, short_accentedlex and short_multiaccentedlex are primarily there to provide the best possible test case for building non-shortlex automatic structures. All these orderings can be recognised by an FSA, so that they are equally suited to the construction of an automatic structure as shortlex. The author has not found any examples which are automatic using one of these orders but not shortlex, but it is quite possible that this might happen, although since the FSAs for these orderings have more states than the shortlex automaton it is probably an unusual occurrence. Construction of word-acceptors for these orderings is slightly more difficult than with shortlex or short_rtlex, because both the latter two can use highly optimised code for building the word acceptor, whereas the former all need to use the generic non shortlex code.
All the word orderings discussed, apart from those whose names begin with the word "short", have the property that the "reduced" word for an element may not be a shortest equal word, and that therefore "reduction" can increase the length of a word. In the case of weighted lex ordering, the potential increase is not usually too severe, but with some of the others, and especially with the two recursive orderings and wreath product orderings, the increase in length is potentially unlimited. Even where the equations are all quite short and the increase in length is limited, reduction can become very difficult, or even impossible within reasonable time and space constraints. Both MAF and KBMAG can run into severe difficulties with these orderings. A sign that MAF is in trouble is that the depth of its "rewriting machine" becomes extremely high and stays high, and that the message "Performing a difficult reduction" appears frequently. MAF tries hard not to get into this kind of state. Although it has to allow the creation of equations that increase the length of words, during Knuth-Bendix completion it attempts to give preference to equations that do not do so, and tries to eliminate perverse equations. Also it simply aborts any reduction that takes too long, saves the equation it was working on in its current state if necessary, and only comes back to it later on if it still seems to be needed.
Both recursive ordering and right-recursive ordering are rather remarkable. Certainly there are input files with which automata can do nothing useful when the ordering is set to "shortlex", but for which a confluent rewriting system is found within seconds when one or other of these orderings is used. Also the systems that are found when one of these orderings is used are usually much smaller, in terms of the number of equations, than the shortlex one in the case where both exist, which means they are potentially useful for large finite groups as well as for infinite ones that are not shortlex confluent. On the other hand the equations that emerge are frequently severely unbalanced, with the RHS much longer than the LHS. This means that "reduction" becomes a severely misleading term.
If you use one of these orderings, and MAF cannot cope, it is worth trying to run automata again with the -work_order n option or the -strategy n and -tactics n options. This makes it possible to set flags that alter MAF's algorithms in various ways. This may be enough for the problem to be avoided.
If using a recursive type of word ordering, make sure you order the generators sensibly. Using one of the recursive orderings frequently results in the elimination of one or more generators - if you specify the order incorrectly you will not eliminate the generators you want to eliminate. Generators you want to keep should come first.
If you want to use a recursive ordering, but the input file is actually confluent when using shortlex ordering, it may be better to create an input file from the .kbprog file for the shortlex ordering. It sometimes happens that MAF is unable to create a confluent rewriting system from the original axiom set, but can do so from the shortlex confluent rewriting system.
You can use recursive ordering to change generators. For example if a group is generated by a,b it is also generated by a,c where c=a*b. The author used this approach to find some two-generator presentations of symmetric groups included in the examples.
To summarise the capabilities of automata we can say that automata is capable of finding, where they exist, any or all of the following:
In general, simply starting automata with the name of the input file containing the rewriting system with the details of the presentation and ordering, will, given sufficient patience on the part of the user, and resources on part of the computer, result in the eventual discovery of whichever of the "correct" automata happen to exist, or approximations to them which are probably correct for words up to any desired length.
It may very occasionally happen that giving automata a command line option will allow it to succeed when otherwise it would not have done, but this is quite unusual. The intention is that you should be able to run it against a presentation about which you know nothing, and that MAF should be able to discover what kind of an object the presentation describes. On the other hand there are several options that can often make automata complete more quickly when you know, or strongly suspect, something about the presentation, or are only interested in looking for some of the automata; in some cases much more quickly. This extra information can be given to automata via command line options, or in a few cases by setting optional fields in the input file (though this latter facility is a much less important feature than in KBMAG). The most useful options, which are all new to MAF and do not come from KBMAG, are described in Useful Command Line Options.
The following table summarises the names of all the files that automata may generate, given an input file named group, what the files are, and the circumstances in which they are produced. In the "When produced" column, the formula "if the presentation is proved automatic" is used several times. This means that MAF has been able to generate an automatic structure, and has at least been able to verify that the multiplier is closed. It can happen that MAF is able to prove that such a structure exists, but that it is beyond MAF's capacity to actually generate all of it. Similarly, the formulas "proved confluent" and "proved shortlex confluent" mean that MAF has succeeded in actually finding such a system. It is at least theoretically possible that MAF might be able to prove that such a system exists, but that it is too big to find. However, in practice, if MAF ever does discover this, then the system is usually so close to confluence that the confluent system will be found very soon afterwards. As advertised in the introduction we are here using the term "presentation" to mean "rewriting system".
For the sake of completeness the various different suffixes used for automata generated for automatic coset systems are also shown here. However they are described more fully in the equivalent of this table that appears in MAF for Subgroups and Cosets. Subgroup and coset automata can be recognised in the table below by the fact that the filename is shown as cosname rather than group. Files called group.cosstring.suffix or group.string_cos.suffix are also coset automata or files, and are coset equivalents to files described here.
|Suffix||File Description||When produced|
|group.wa||This is the word-acceptor for the group.||If presentation is either proved confluent or proved automatic|
|group.minred||This is the FSA that accepts minimally reducible words. automata will calculate this to prove that there is no confluent rewriting system, or as an additional check that it has been found correctly.||If presentation is proved either confluent or automatic|
|group.diff2||This is the FSA that contains all the word differences needed for the general multiplier, and all possible transitions between them, not just transitions that actually occur in some reduction. In MAF this FSA is guaranteed to be correct if automata completes successfully. In KBMAG it often contains spurious word differences. See also group.pdiff2.||If presentation is proved automatic.|
|group.diff1||This FSA is produced by KBMAG. MAF does not produce this FSA, unless you are running one of the KBMAG emulation programs. automata does generate an equivalent file, if it is interrupted, called group.pdiff1||kbprog -wd and autgroup will produce this file, both in KBMAG and MAF.|
|group.diff1c||This is the FSA that contains all the word differences and transitions that arise from the (possibly infinite) minimal confluent rewriting system.||If presentation is proved automatic or confluent.|
|group.diff2c||This is the FSA that contains all the word differences needed for the (possibly infinite) directly reducing confluent rewriting system, and just the transitions that occur in these equations. If the ordering used is a "short" order then it also contains all the transitions needed for the general multiplier. An FSA with this name is also produced by KBMAG's gpminkb, but is a different FSA that is useless for any purpose other than rebuilding the general multiplier (and which has the TRIM flag set erroneously).||If presentation is proved automatic.|
|group.gm||The general multiplier. This is the FSA which accepts all those pairs of reduced words (u,v), at an accept state labelled with all those words x (with x either IdWord or a generator), for which ux==v is a true statement. MAF makes much more use of this FSA than KBMAG. For example, in the absence of a confluent rewriting system, it is its preferred method of performing word reduction.||If presentation is proved automatic. If a presentation is proved confluent and finite, the group is automatic, but it can happen that the multiplier is too large to construct. In this one case MAF does not generate the general multiplier that is known to exist. If a group is proved confluent and infinite, then the group is quite likely not to be automatic (at least for this ordering). MAF will perform a certain number of iterations to try to prove it automatic, but if the number of word differences continues to increase it will give up because it probably is not. There are many other multiplier automata that can be produced, all with the general characteristic of having accept states which are labelled by the difference between the left and right words, but with other sets of words chosen, and the criteria for which pairs of words are accepted varying. The programs gpmult and gpmimult can be used to generate such automata.|
|group.fastreduce||This is the reduction FSA for a confluent fast rewriting system. It contains the correct reduction for all words for which both the word itself and its correct reduction appear in the minimal rewriting system. If feasible it contains the correct reduction for all words in L2 that give rise to equations. This FSA can be much larger than group.reduce, and, in the case where the accepted language of group.wa is infinite, it may not exist at all. In such cases the group.fastreduce FSA contains as many reductions as can be included without increasing the number of states of the FSA.||When MAF has proved a rewriting system confluent.|
|group.maxkb||This is the FSA which reaches its single accept state for just those word pairs (u,v) in which u is not accepted by group.wa (though every prefix of u is), v is accepted by group.wa, u==v is a true equation, and u,v have no common prefix.||If presentation is proved automatic.|
|These are "MIDFA" automata produced during the construction of an automatic coset system||All these automata are produced when automata is processing a coset system. See MAF for Subgroups and Cosets for details.|
|group.minkb||This is the FSA which reaches its single accept state, which is also its initial state, for just those word pairs (u,v) in which u is accepted by group.minred and v by group.wa, and for which u==v is a true equation.||If presentation is proved shortlex confluent or shortlex automatic.|
|group.pdiff1||Provisional primary difference machine. This contains all the word differences and transitions between them that were thought to be needed for the equations in the minimal rewriting system at the time MAF terminated. This FSA is correct in the sense that any equation it can prove is true, and that it can prove all the primary equations that were known when MAF was interrupted, but incorrect in the sense that it may contain more than one state for the same group element, and there may be true equations it cannot prove. There may even be true equations that the provisional complete difference machine can prove that this cannot. In fact this is more often than not the case.||If MAF is interrupted while trying to prove a group presentation shortlex automatic.|
|group.pdiff2||Provisional complete difference machine. It contains all the word differences and transitions between them that were known at the time. This FSA is correct in the sense that any equation it can prove is true, and that it could reduce all words at least as well as the provisional reduction FSA, but incorrect in the sense that it may contain more than one state for the same group element, and there may be true equations it cannot prove. KBMAG uses the diff2 extension for this FSA. MAF reserves the diff2 extension for an FSA that has been proved correct.||If MAF is interrupted while trying to prove a group presentation shortlex automatic.|
|group.preduce||This is the provisional reduction FSA for a rewriting system.||When MAF was interrupted while trying to prove a minimal rewriting system confluent.|
|group.reduce||This is the reduction FSA for a confluent minimal rewriting system.||When MAF has proved a minimal rewriting system confluent.|
Several of the automata above can be used for reducing words to their unique minimal representative in the group or monoid described in the input file under the ordering specified there. This is henceforth called the "reduced word" for the element. This solves the word problem for the presentation for this ordering. Others can be used for counting and enumerating the language of reduced words (and hence the elements of the group).
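The effect such a reduction automaton achieves can be illustrated with a naive rewriting loop. The sketch below is not MAF's algorithm (MAF uses index automata and word-difference machines); it simply applies a set of rewriting rules for the free abelian group on a and b (with inverses A and B) until no rule applies. The rule set shown is assumed, for the purposes of illustration, to be confluent for a shortlex ordering of the generators [a,A,b,B].

```python
# A deliberately naive word reducer: repeatedly apply the first rule whose
# LHS occurs in the word, until no LHS occurs.  With a confluent rule set
# the result is the unique reduced word, whatever the order of application.
RULES = [
    ("aA", ""), ("Aa", ""), ("bB", ""), ("Bb", ""),          # free cancellation
    ("ba", "ab"), ("bA", "Ab"), ("Ba", "aB"), ("BA", "AB"),  # commutation
]

def reduce_word(word):
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)
                changed = True
                break
    return word
```

For example reduce_word("baB") gives "a". Because the rule set is confluent, two words represent the same group element exactly when they reduce to the same word, which is what it means to solve the word problem.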
There is also a program gpgeowa that attempts to construct various "geodesic" FSAs for a group for which MAF has already found a shortlex automatic structure. A word is "geodesic" if it has the same length as the normal form (in the shortlex ordering) for the group element it represents. automata itself does not try to generate these FSAs, because they only exist for groups which are "word-hyperbolic", which many automatic groups are not. Even for finite groups, which certainly are word-hyperbolic, these FSAs can be extremely large, and can take an extremely long time to generate. To answer questions about geodesic words it will often be better just to enumerate the words from the free group on the same alphabet, and to use reduction to discover the ones that are geodesics. If you do run gpgeowa and it succeeds, then it effectively verifies the word-hyperbolicity of the group. It will compute an automaton that accepts all geodesic words in the group generators and a 2-variable automaton that accepts all pairs of geodesic words which represent the same group element.
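In a free group, a word-hyperbolic group whose reduction rules are just the free cancellations, the notion can be made concrete: a word is geodesic exactly when its length equals the length of its normal form. The sketch below is illustrative only and assumes nothing about gpgeowa's actual algorithm.

```python
# Free group on a,b: the confluent rules are just the free cancellations,
# and the normal form of a word is its free reduction.
CANCEL = [("aA", ""), ("Aa", ""), ("bB", ""), ("Bb", "")]

def free_reduce(word):
    changed = True
    while changed:
        changed = False
        for lhs, rhs in CANCEL:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)
                changed = True
                break
    return word

def is_geodesic(word):
    # A word is geodesic if no shorter word represents the same element,
    # i.e. if it is no longer than its normal form.
    return len(word) == len(free_reduce(word))
```

Here "abAB" is geodesic (it is already freely reduced), while "abBA" is not, since it reduces to the empty word. In a free group each element has a unique geodesic representative; in general hyperbolic groups an element can have many.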
This section summarises the more useful command line options you can give automata, so that they stand out from the crowd of options provided for compatibility with KBMAG, and options that are only occasionally useful. For very simple examples, such as hyperbolic triangle groups, or their Von Dyck equivalent, MAF will usually be so fast that any kind of option is unnecessary.
This command line option is only needed when the presentation could, based on the information in the input file, be for a shortlex automatic group. It tells MAF that you do not want such a structure to be created, and that therefore you only require a confluent reduction machine. In most cases this won't make too much difference, but for largish finite groups, and for pathological presentations, it can make MAF run much more quickly. Also for some presentations (notably the examples/coxeter/a15 example presentation), although an automatic structure can be created, it takes rather a long time to validate it (though not intolerably long). You should probably always use this option if what you are trying to do is to prove the non-existence of a non-trivial group, because in this case you hope your presentation will be trivial. For example, with the -nowd option MAF proves the difficult examples/trivial/degen4c example trivial in well under one minute on most machines, but run with no special options, it is likely to take two minutes or longer. This is because it keeps many more long equations in its internal data structures in this case, and a lot of house-keeping is required on them. With -nowd most of these long equations are "pooled", which basically means they are treated with low priority, and MAF manages to prove the group trivial long before it ever gets round to looking at most of these equations ever again. If you are mainly interested in finite groups, and are not interested in their automatic structure, then you may prefer to use kbprog rather than automata since that has the -nowd option on by default. The main difference between the two is that automata -nowd will still calculate some additional automata whereas kbprog will not.
This option is quite similar in effect to -nowd, and will often have a similar effect on performance. However, it merely tells MAF that you care more about confluence than about automatic structure, and the automatic structure will still be generated if possible. But MAF will make much less effort than it normally would to find word differences correctly, and will concentrate more on the Knuth Bendix completion procedure. Note that if you supply this option, or indeed -nowd, for a presentation that is actually not confluent, but is automatic, then MAF will run for ever, and will never try to build the automatic structure, because it will wait for ever for the rewriting system to become confluent.
This command line option is similar to the -confluent command line option, but weaker. Now MAF will still do everything it normally would in terms of finding word differences, and won't assume it can find a confluent machine, and so will eventually decide to build the automatic structure if it appears that there will be one. The one change to processing is that equations with new word differences, which are normally given special treatment, are now just treated like ordinary equations. What this means is that the tree of equations will grow in a more steady and uniform manner than it normally would, because MAF will not be probing into the overlaps of long equations with word differences. This option is appropriate in the case where you expect the presentation in the input file to be confluent, but are not quite sure that it will be.
The three options described so far have all applied to the case where presentations are expected to be confluent. Usually it does not matter too much if you don't tell MAF to expect confluence. However it will be slower than it otherwise would be, particularly if, as sometimes happens, it does not realise the system is becoming confluent, and tries to build the automatic-structure too soon. In the case of the examples/coxeter/e8 presentation for example, MAF will complete in one or two seconds with the -nowd option, in about three seconds with -confluent, but will take around twelve seconds with no command line option.
The next two options apply in the opposite case, where you expect the presentation not to be confluent, and are only interested in getting an automatic structure. So these options should not be used when a monoid rather than a group presentation is being analysed, or when non-shortlex orderings are used. Indeed MAF will simply ignore the options or refuse to honour them, in such cases.
This option should be used with caution, and never with a presentation you know is for a finite group, nor with a pathological presentation. This option tells MAF to suppress the Knuth-Bendix phase altogether and to immediately build an automatic structure, using just the word-differences it has been able to deduce from a very cursory examination of the axioms. With this option the automata built will almost certainly be incorrect on the first iteration, and possibly for a few iterations after that. However if the group is genuinely automatic, success is eventually inevitable, and for examples of only moderate difficulty, such as picard and cox3334c, this may very well give the fastest results. This option is to be recommended for Triangle and Von Dyck type groups, but not for most Coxeter groups. Even for more difficult examples it is usually beneficial, though for a few examples it may be catastrophic. Another possible reason for using this option is where you have already tried, and failed, to build an automatic structure using automata without the -nokb option. If MAF had created a very large number of equations, then it may not have had enough memory available to be able to work efficiently. When you use the -nokb option the number of equations will be kept to a minimum. So there might be cases where the resulting saving in memory would allow an automatic structure to be found. Generally, in this case you would also use the -resume option at the same time, so that more word-differences would be known initially.
This option is like a milder form of -nokb and causes MAF to take a less cautious view than usual about whether or not to try to build an automatic structure. When it works it will save you a few seconds, or perhaps tens of seconds. On the rare occasions that it goes wrong it may cost you many minutes. In fact it works quite well, or at any rate is not horribly counter-productive, with most of the example presentations (even the confluent ones), but it should not be used for groups you know in advance are finite, since these definitely do have a confluent rewriting system, and MAF will insist on finding this. However, it is often worth using this option when you are confident a presentation is for an infinite group, especially if it is not an especially difficult example. (If a presentation is difficult then the -fast option usually won't make much difference, because MAF will not want to leave the Knuth Bendix phase anyway.)
The next group of options are mainly relevant when you are using a wreath product type word ordering (including recursive orderings), and are unlikely to be useful with shortlex ordering.
These options are primarily intended for use when you want MAF to find a confluent rewriting system. They are unlikely to be useful when you are trying to find an automatic structure. The -strategy and -tactics options are "bitmask" values which cause MAF to vary its processing in various ways. The other options are mostly enumerations. All these options will mainly be of use if you are not using "shortlex" ordering. With recursive orderings, equations can become very long, or MAF may take an unexpectedly long time to complete for no obvious reason. It sometimes happens that by varying things slightly MAF can complete. So, if you find MAF runs into difficulty with a presentation you want to work with, and especially if you keep seeing the "performing a difficult reduction" status message, you may find it worth trying the effect of these options. However, it may well be worth trying a different word ordering method, or ordering the generators differently first.
The exact meaning of these options can be found by studying the source code, and will be subject to change in future. The values are mostly ordered so that lower values are more likely to be helpful, or neutral, and higher values are more likely to be harmful.
The following combinations have been found to be helpful for some example presentations:
-strategy 0. Turning off the 1 strategy bit forces MAF to repeatedly consider each overlap. Usually, once an overlap has been processed MAF will only consider it again if the RHS of one of the relevant equations changes, or if the overlap is longer than a certain limit. If the 1 bit is disabled, MAF will try each overlap on each expansion cycle. This is sometimes helpful, because it can happen that the equation deducible from an overlap changes due to changes elsewhere in the rewriting system, even when the original equations that allowed it to be deduced have not been eliminated.
-strategy 3. Normally MAF finds overlaps by looking forwards in the tree of equations from the trailing subwords of a subset of the equations it has found so far. This process is very quick, but it means that, to a certain extent, the set of overlaps is examined in an overly selective manner, because the left equation in the overlaps considered always comes from this subset. If strategy bit 2 is turned on MAF also looks for overlaps in which the right equation comes from this subset, but the left does not. This process is slower. MAF may decide to turn this bit off after a while.
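The notion of an overlap can be illustrated directly. Below is a textbook-style sketch of forming the critical pair that arises when a suffix of one rule's LHS coincides with a prefix of another rule's LHS; it is not MAF's implementation, which works on its tree of equations.

```python
def critical_pairs(rule1, rule2):
    """Critical pairs from overlaps where a proper suffix of rule1's LHS
    equals a proper prefix of rule2's LHS.  The overlap word can be
    rewritten in two different ways; Knuth-Bendix completion must be able
    to reduce both results to a common word."""
    (l1, r1), (l2, r2) = rule1, rule2
    pairs = []
    for k in range(1, min(len(l1), len(l2))):
        if l1[-k:] == l2[:k]:
            # the overlap word is l1 + l2[k:]
            pairs.append((r1 + l2[k:],       # rewrite the l1 occurrence
                          l1[:-k] + r2))     # rewrite the l2 occurrence
    return pairs
```

For instance, with the rules "ba" -> "ab" and "aA" -> "" (the empty word), the overlap word "baA" yields the critical pair ("abA", "b"); here "abA" reduces to "b" using the other commutation and cancellation rules, so this pair is resolvable.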
-special_overlaps 0 is quite often helpful (the default value of this option is 1). This disables part of the processing that MAF usually performs for new equations. For some systems, especially those where the axioms are of very different lengths, this processing is sometimes unhelpful.
The -secondary n option sets MAF's policy towards secondary equations. An equation is secondary if a subword of the LHS is reducible (the subword must necessarily include the last symbol of the LHS, since otherwise the LHS would have a reducible proper prefix). MAF will normally choose a value for this option according to properties of the word ordering. -secondary 0 causes MAF to eliminate any secondary equation that is created as soon as possible. -secondary 1 stops MAF from creating these equations, but allows an equation that becomes secondary after it is created to remain. With -secondary 2 and -secondary 3 secondary equations are deliberately created (for instance during conjugation). With -secondary 3 additional processing is done to make sure that a new secondary equation will never be skipped by a prefix change, which means its "special overlaps" will be discovered. It is very difficult to give guidance about this option. The higher values of the option will usually improve stability, and usually improve performance, but may increase memory usage significantly. There are input files which work much better with -secondary 0, and input files which work much better with -secondary 3.
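The definition of a secondary equation can be captured in a few lines. The check below is purely illustrative (it is not MAF's internal test, and ignores the extra bookkeeping MAF does): an equation is secondary when some other rule's LHS occurs inside its LHS as a proper subword.

```python
def is_secondary(lhs, other_lhss):
    # Illustrative only: an equation is secondary if a proper subword of
    # its LHS is itself reducible, i.e. contains the LHS of some other
    # equation.  For the equation to exist at all its LHS must have an
    # irreducible prefix, so such a subword includes the last symbol.
    return any(other != lhs and other in lhs for other in other_lhss)
```

For example, if "aa" is the LHS of some rule, then an equation with LHS "baa" is secondary, whereas one with LHS "ab" is not.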
The next option is to do with proving that an automatic structure is correct.
This option causes MAF to perform additional checks on the multiplier after an automatic structure has been found for a group. These are the checks that are performed by gpaxioms in KBMAG. These checks can take a very long time to complete, often much longer than finding the automatic structure in the first place. The author has been unable to find a single example (for a group rewriting system) where a multiplier that has passed the normal validation that MAF performs fails these checks. Therefore, in the case of groups, these checks are not performed by automata by default. (For KBMAG compatibility they are performed by the MAF implementation of autgroup).
MAF prefers, whenever possible, to compute the word-acceptor for a group from a confluent rewriting system, rather than from a word-difference machine. It is worth having a word-acceptor even when a confluent rewriting system exists, because although both structures can be used to enumerate the reduced words, the former will usually have many fewer states, and take up dramatically less memory. For largish finite groups it is often very difficult, if possible at all, to construct a word acceptor from the word-difference machine, because there are too many word-differences, whereas the computation of the word-acceptor from the rewriting system is very rapid. Usually, though not always, MAF refrains from trying to build an automatic structure for a rewriting system that is going to become confluent until confluence has been reached. If you specify -fast or -nokb for such a system then it is much less likely to be able to do this, and the consequence may be that MAF will spend a long time building the word-acceptor, and may have difficulty doing so. Then, if the word-acceptor is correct, MAF will see that the group is finite and resume Knuth-Bendix processing anyway. In this case time, possibly a long time, has been wasted. And, if the word-acceptor is wrong and MAF does not know the group is finite, even more time will be wasted, because in such cases it will probably spend a very long time building and then correcting a multiplier.
Actually, in a few cases, MAF may fail to wait for a presentation that will prove confluent to become so, and it is for these cases that the -confluent or -nowd command line options will prove especially useful.
There is a collection of examples of automatic groups, of widely varying difficulty, in the directory examples/automatic. In all cases, however, the automatic structure has been calculated and verified as correct. These can usefully be used as test examples for gaining experience in the use of the programs.
In this section we mention some of these examples; some of them have been included to demonstrate particular features.
The degen examples are all of the trivial group. Note, in particular, degen4a, degen4b and degen4c. These are the first three of an infinite sequence of increasingly complicated presentations of the trivial group, due to B.H. Neumann. automata will run very quickly on all of these presentations, whereas KBMAG can only do the first two within a reasonable time. The -confluent, or the -nowd command line options speed up the calculation considerably in the case of degen4c, the primary reason for this being that automata puts far more equations in its "pool" rather than its reduction FSA, and this drastically reduces the size of the tree. Some of the saving in time is also due to the fact that word-difference calculations are no longer required.
The example ab2 is the free abelian group on two generators. It terminates with a confluent set for the given ordering of the generators, [a,A,b,B], but does not terminate with the ordering [a,b,A,B]. In general it is almost always best to order the alphabet so that generators and their inverses appear next to one another if they are different.
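An input file for ab2 along the lines of the nilpotent example quoted earlier might look as follows. This is a reconstruction in the same GAP record format, not necessarily the exact file shipped with MAF, and assumes the default shortlex ordering is wanted.

```
#Free abelian group of rank 2
_RWS := rec
(
  isRWS := true,
  generatorOrder := [a,A,b,B],
  ordering := "shortlex",
  inverses := [A,a,B,b],
  equations :=
  [
    [b*a,a*b]
  ]
);
```

With the generator order [a,A,b,B] shown, Knuth-Bendix completion terminates; changing generatorOrder to [a,b,A,B] gives an input file for the same group on which it does not.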
Several of the examples are of finite groups. Others are monoid presentations, where generators are not supplied with inverses. Try f25monoid, which is the presentation of the Fibonacci group F(2,5), but as a monoid. In fact, the structure is almost identical to the group in this example. The group is cyclic of order 11. The monoid has order 12, the extra element being either the a^11 word (which is not recognised as being equal to IdWord), or IdWord itself. The corresponding semigroup (without the empty IdWord) is isomorphic to the group.
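The claim that F(2,5) is cyclic of order 11 can be made plausible with a small computation. The Fibonacci relations a_i a_{i+1} = a_{i+2} (indices mod 5) are satisfied by an assignment of the generators to elements of Z/11 written additively; the particular assignment below was chosen by hand, and exhibiting such a homomorphism onto Z/11 does not by itself prove the order is exactly 11 (that needs coset enumeration or Knuth-Bendix, as performed by MAF).

```python
# F(2,5) has generators a1..a5 and relations a_i * a_(i+1) = a_(i+2),
# with indices taken mod 5.  In the cyclic group Z/11, written
# additively, the assignment below satisfies every relation, giving a
# homomorphism from F(2,5) onto Z/11 (onto, since 1 generates Z/11).
images = [1, 4, 5, 9, 3]          # images of a1..a5 in Z/11

def relations_hold(images, modulus=11):
    n = len(images)
    return all((images[i] + images[(i + 1) % n]) % modulus
               == images[(i + 2) % n] for i in range(n))
```

For instance 5 + 9 = 14 = 3 (mod 11) confirms the relation a3 a4 = a5 under this assignment.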
In the examples nilp2, nonhopf, heinnilp and verifynilp, the recursive ordering is essential. All these examples are from KBMAG. The last two of these are examples of the use of Knuth-Bendix to prove that a presentation defines a nilpotent group. The verifynilp example shows up a difference in the behaviour of MAF and KBMAG. In KBMAG the maxstoredlen parameter (or equivalently the -mrl option) is used to make KBPROG complete more quickly. MAF has to ignore this setting, because in every one of the examples included with KBMAG that use such a setting, the KBMAG value of the setting prevents MAF from ever finding the correct confluent rewriting system, whereas when the setting is ignored MAF very quickly completes successfully! MAF does allow for such a setting however, in case it should ever prove useful: MAF's equivalent of KBMAG's maxstoredlen is maf_maxstoredlen. However, the author has yet to find a single example where the use of this setting is at all beneficial. In most cases it just prevents MAF from ever finding a confluent set.
The examples f27monoid_recursive and f27monoid_shortlex are both monoid presentations corresponding to the Fibonacci group F(2,7), which has order 29. As is the case with F(2,5), the monoid has one extra element, which in this case is a^29. This example is another interesting example of the differences between MAF and KBMAG. KBMAG cannot easily solve this example when shortlex ordering is used, but with MAF it completes in around 10 seconds. KBMAG uses the recursive ordering and the "maxstoredlen" parameter again. In this case the limits found in the file have no hope of working with MAF, because several of the equations in the confluent set exceed the rhs limit. With no limit MAF completes almost instantly.
There is a program called gpaxioms which is used to verify that an automatic structure is correct, or at least, that it defines a group multiplication that satisfies the original axioms. MAF does not perform axiom checking by default when it creates an automatic structure, at least not in the case of groups. This is because the axiom check can take a very long time, and has never been known to fail. Generally speaking, even if MAF had produced erroneous results, gpaxioms would be unlikely to reveal anything, because the almost certain effect of an error is that the automata output by MAF would be for a quotient group (most likely the trivial group), and in this case the checks made by gpaxioms would all succeed anyway. MAF does perform the axiom checks by default when analysing an automatic coset system, because in this case the multiplier can definitely pass even when the structure is not correct. You can use the -validate command line option to force MAF to check the axioms.
In fact, if you already have an FSA which you believe solves the word problem for a rewriting system, and you suspect MAF does not process this rewriting system correctly, you can tell MAF to check each equation it produces against this FSA by using the -check option. In the unlikely event that an incorrect equation is produced, MAF will terminate with an internal error. If you then send both the input file and the validating FSA to the author he will do his best to investigate whether MAF is producing incorrect results, or whether in fact the error is in the supposedly correct FSA you have used for validation.
Unlike KBMAG, MAF does not usually check the inverse relations when checking axioms. This is because MAF cannot produce a multiplier which would fail these checks.