This page contains detailed usage information for the MAF components that take their input from an "input file" (rewriting system). Most of these utilities also require one or more of the automata usually created by automata to be present.
The utility fsafl, which can be used to create an automaton that accepts any desired finite language on some alphabet, ought in theory to be listed in this section, because it takes some of its input from a rewriting system. However, as its name suggests, the utility is intended to be an FSA utility, and is effectively one, so it is described in the FSA utilities usage section.
gpaxioms [loglevel] [-midfa] [-serial | -hybrid | -parallel] [-check_inverses] groupname [-cos [subsuffix]]
Validate a previously computed automatic structure for a group or coset system by checking that the multiplication defined by the structure satisfies the group axioms. The -midfa option is only relevant to a coset system; it causes the validation to be performed using the MIDFA multiplier rather than the determinised multiplier. gpaxioms supports three different techniques for building the composite multipliers that the axiom check relies upon. The -hybrid option is the default, as it usually works well. The -parallel option is sometimes much faster, but may require much more memory, and is also often slower. The -serial option uses the least memory; it is usually the slowest method of performing the check, but may be better if the alphabet is large.
MAF does not usually check the relators implied by the inverses field of the input file, i.e. the relators of the form g*g^-1, when it is checking axioms, because MAF cannot possibly produce a multiplier which would fail these checks. To make MAF perform these checks, specify the -check_inverses option.
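gpaxioms works with composite multipliers; the sketch below illustrates only the underlying requirement, namely that every defining relator, and every g*g^-1 relator, multiplies out to the identity. It is a toy in Python using simple word reduction over a confluent rewriting system for S3 (an assumed example), not MAF's algorithm:

```python
# Toy axiom check by word reduction (illustration only, not MAF's method).
# Confluent shortlex rewriting system for S3 = <a,b | a^2, b^2, (ab)^3>:
RULES = [("aa", ""), ("bb", ""), ("bab", "aba")]

def reduce_word(w):
    # repeatedly apply the leftmost applicable rule until none applies
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            i = w.find(lhs)
            if i != -1:
                w = w[:i] + rhs + w[i + len(lhs):]
                changed = True
    return w

# every relator must reduce to the identity (the empty word)
relators = ["aa", "bb", "ababab"]       # the defining relators
inverse_relators = ["aa", "bb"]         # g*g^-1 relators (a, b are involutions)
assert all(reduce_word(r) == "" for r in relators + inverse_relators)
```
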
gpcclass [loglevel] [format] [reduction_method] [-conjugator] groupname [-cos [subsuffix]]
Compute the conjugacy classes of the finite group described by the input file groupname, or, if -cos is specified, the conjugacy classes modulo the finite index subgroup groupname.subsuffix. The latter usage does not make much sense unless the subgroup is normal, but gpcclass does not require this.
An automaton that can perform word reduction for the relevant rewriting system must previously have been computed.
The computed FSA has the same states and transitions as the coset table computed by gpcosets. The accepted language consists of the words that are equal, as elements, to the minimal representative of their conjugacy class. Every state is labelled by its conjugacy class representative. The FSA is output to groupname.cclasses. A file groupname.cc_statistics, containing a summary of the conjugacy information, is also generated.
If the -conjugator option is specified an additional 2-variable FSA is computed and output to groupname.conjugator. This accepts the word pair (u,v) at a state labelled with w if u and v are accepted words and v is the least accepted word such that v^-1*u*v reduces to the conjugacy class representative w. This FSA is not computed by default because it can be very large.
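The computation can be illustrated in miniature. The sketch below (a brute-force toy with permutations standing in for words, not gpcclass's machinery) finds the conjugacy classes of S3, taking the least element of each class as its representative:

```python
# Brute-force conjugacy classes of S3 (illustration only).
from itertools import permutations

def compose(p, q):               # apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))    # S3 as permutations of {0,1,2}

def conjugacy_classes(G):
    seen, classes = set(), []
    for g in sorted(G):             # visit in order, so reps are minimal
        if g in seen:
            continue
        cls = {compose(inverse(x), compose(g, x)) for x in G}   # x^-1 g x
        seen |= cls
        classes.append((g, cls))    # (least representative, whole class)
    return classes

classes = conjugacy_classes(G)
assert sorted(len(c) for _, c in classes) == [1, 2, 3]   # class sizes in S3
```
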
gpcosets [loglevel] [format] [reduction_method] groupname [-cos [subsuffix]]
Compute the coset table for the subgroup of finite index of groupname specified in the substructure file groupname.subsuffix. If -cos is omitted the subgroup is the trivial subgroup. gpcosets will exit immediately with a diagnostic if asked to create an infinite coset table, or one that is too large to fit in memory. A method of performing word reduction and a word acceptor are required, so the relevant input file should previously have been processed with automata. The coset table is standardised as though the word-ordering were shortlex (because it has the BFS property). However, the coset representatives can be computed, because each coset is labelled with the final generator appearing in the reduced word for that coset, and these labels are correct for the word-ordering used by the coset system.
gptcenum can also create the coset table.
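At its core a coset table just records the action of each generator on the right cosets of the subgroup. The sketch below (a toy with permutations, not gpcosets's machinery) builds the table for the index 3 subgroup generated by a transposition in S3:

```python
# Toy coset table: rows are right cosets of H in S3, columns are generators,
# entries give the index of the coset reached by right multiplication.
from itertools import permutations

def compose(p, q):               # apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

G = list(permutations(range(3)))            # S3
a, b = (1, 0, 2), (0, 2, 1)                 # two transpositions generating S3
H = {(0, 1, 2), a}                          # subgroup <a>, index 3

# enumerate the right cosets Hg
cosets = []
for g in G:
    Hg = frozenset(compose(h, g) for h in H)
    if Hg not in cosets:
        cosets.append(Hg)

# table[i][j]: index of the coset (coset i) * (generator j)
table = [[cosets.index(frozenset(compose(x, gen) for x in c))
          for gen in (a, b)]
         for c in cosets]
assert len(cosets) == 3                     # the index of H in S3
```

Each column of the table is a permutation of the coset indices, since right multiplication by a generator permutes the cosets.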
gpdifflabs [loglevel] [format] [reduction_method] groupname -i | input_file [-o | output_file]
gpdifflabs can be used to label all the states of multipliers with the corresponding word-differences. An automaton able to perform word reduction must previously have been computed. Either stdin (if you specify -i), or input_file (otherwise), should contain a 2-variable FSA. gpdifflabs computes an FSA with the same accepted language, but with states labelled by the difference between the left and right words for each state. The input file will usually be a multiplier FSA for some group element, but might be a multiplier from a coset system for the group. If you attempt to label a 2-variable FSA whose states are not associated with a unique word-difference, and the group is infinite, then gpdifflabs will probably run until memory is exhausted. gpdifflabs is one of the rare RWS utilities that does not work directly with coset systems: you can use it to label the states of a multiplier FSA from a coset system, but you must use a reduction automaton from the underlying group to do this, since word-differences are group elements, not cosets. For example, to label a coset multiplier groupname.cos.migm you might use the following command:
gpdifflabs groupname groupname.cos.migm
Since most FSA that you might want to run this program against using groupname as the rewriting system will also have a name starting with groupname, gpdifflabs allows you to specify just the suffix part of the input file name. So the command above could be replaced by:
gpdifflabs groupname cos.migm
The default output file is input_file.difflabs, so both the command lines above would create groupname.cos.migm.difflabs.
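The word-difference attached to a state can be described concretely: after reading i letters of a word pair (u,v), the difference is the reduced form of u_i^-1*v_i, where u_i and v_i are the prefixes read so far. The toy below computes these labels for S3 (an assumed example: generators are involutions there, so the inverse of a word is its reversal; this is an illustration, not gpdifflabs's implementation):

```python
# Word-difference labels for a word pair (illustration only).
# S3 = <a,b | a^2, b^2, (ab)^3>, confluent shortlex rules:
RULES = [("aa", ""), ("bb", ""), ("bab", "aba")]

def reduce_word(w):
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            i = w.find(lhs)
            if i != -1:
                w = w[:i] + rhs + w[i + len(lhs):]
                changed = True
    return w

def word_differences(u, v):
    # difference after i letters is reduce(u_i^-1 * v_i); since all
    # generators here are involutions, u_i^-1 is just u_i reversed
    n = max(len(u), len(v))
    return [reduce_word(u[:i][::-1] + v[:i]) for i in range(n + 1)]

# "bab" and "aba" are the same element, so the final difference is trivial
diffs = word_differences("bab", "aba")
assert diffs[-1] == ""
```
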
gpgeowa [loglevel] [format] [reduction_method] [-near | -n] groupname
This program attempts to construct a geodesic word-acceptor for a group after an automatic structure using a geodesic word-ordering has been computed. The geodesic word-acceptor accepts a word if it is a shortest word for the corresponding group element (rather than only the (necessarily shortest) word that is minimal under the word-ordering). A group for which such a word-acceptor can be found, and for which this word-acceptor is part of an automatic structure, is said to be strongly geodesically automatic. Cannon showed that word-hyperbolic groups have this property, and a paper of Panos Papasoglu proved that groups with this property are also word-hyperbolic. Word-hyperbolic groups are shortlex automatic for any choice of generating set. In fact gpgeowa also computes a two-variable automaton that accepts a pair of words if and only if they are equal geodesics: it is easy to see that if we can construct this automaton we can also construct the desired automatic structure by composition with the multiplier from the automatic structure previously computed.
Successful completion of this program against a suitable input file can therefore prove that a group is word-hyperbolic, although it does not provide an estimate of the hyperbolic constant (whatever that may be).
If the group is not word-hyperbolic, then gpgeowa will not terminate, and will have to be stopped by pressing <Ctrl+C>; otherwise it will continue to run forever, consuming ever greater amounts of memory, until it runs out of address space and crashes. This happens even when a geodesic word-acceptor exists that is not part of an automatic structure. It will happen for many infinite groups: for example, for any group with a subgroup that is a free abelian group of rank 2.
The input file groupname must contain a rewriting system for a group and use a geodesic word-ordering, and the corresponding automatic structure must have been successfully computed for this file (usually with automata).
Input is from groupname.wa, groupname.diff2, and from one of the automata capable of performing word reduction. Output (when successful) is to the three files groupname.geowa, which contains the geodesic word-acceptor, groupname.geopairs, which contains a two-variable automaton whose language consists of all pairs of equal geodesic words in the group generators, and groupname.geodiff, which contains a two-variable word-difference machine, consisting of the word-differences and transitions necessary for constructing groupname.geopairs.
If the -near option is specified, then two additional 2-variable automata are output, to groupname.near_geopairs and groupname.near_geodiff. The first of these accepts a pair of words (u,v) if and only if u and v are geodesic and u*x=v for x either the identity or some generator. It is in fact the geodesic general multiplier, except that the states are labelled only with words rather than lists of words, and the accepting labels omit any reducible generators. The second automaton is the word-difference machine of this multiplier.
MAF uses a different algorithm from KBMAG to calculate geodesic automata.
automata does not try to generate these automata itself, because they only exist for groups which are word-hyperbolic, which many automatic groups are not. Even for finite groups, which certainly are word-hyperbolic, these automata can be very large, and can take an extremely long time to generate. To answer questions about geodesic words it will sometimes be better just to build an automaton that always exists (even for groups that are not automatic): the "outer geodesic word-acceptor". This is a word-acceptor, not part of an automatic structure, which certainly accepts all geodesic words, but which usually also accepts some words that are not geodesic. This word-acceptor can be constructed using the gpwa utility with -outer_geodesic. You can then enumerate the accepted words and use word reduction to discover which of them really are geodesic.
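That last suggestion can be sketched directly. With a shortlex (hence geodesic) ordering, a word is geodesic exactly when it has the same length as its reduced normal form, so enumeration plus reduction finds the geodesic language. A toy for S3 (an assumed example, not MAF's implementation):

```python
# Enumerate words and keep the geodesics (illustration only).
from itertools import product

# confluent shortlex system for S3 = <a,b | a^2, b^2, (ab)^3>
RULES = [("aa", ""), ("bb", ""), ("bab", "aba")]

def reduce_word(w):
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            i = w.find(lhs)
            if i != -1:
                w = w[:i] + rhs + w[i + len(lhs):]
                changed = True
    return w

# a word is geodesic iff reduction does not shorten it
geodesics = [w for n in range(4)
             for w in map("".join, product("ab", repeat=n))
             if len(reduce_word(w)) == len(w)]
# the longest element of S3 has two geodesic spellings, aba and bab
assert set(geodesics) == {"", "a", "b", "ab", "ba", "aba", "bab"}
```
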
gpminkb [loglevel] [format] groupname [-cos [subsuffix]]
This program calculates some automata derived from an automatic structure. The automata constructed are groupname.diff1c, groupname.diff2c, groupname.minred, groupname.minkb and groupname.maxkb (which is new to MAF). Refer to Output files (groups and monoids) for details of these automata. If the -cos option is specified then the first two files have suffix .midiff1c and .midiff2c instead. Refer to Output files (coset systems) instead, for details of the automata gpminkb creates in this case.
If you use automata these automata are computed automatically when an automatic structure is computed, but if you use the KBMAG compatibility programs autgroup or autcos to generate an automatic structure, then you may need to invoke this utility separately.
gpmorphism [loglevel] [reduction_method] [-check_index] sourcerws targetrws [output_file]
gpmorphism tries to compute mappings from the generators of sourcerws to words in the generators of targetrws, in which the axioms of sourcerws are satisfied. An automaton that solves the word problem and permits word reduction for targetrws must previously have been computed. If successful the computed mapping defines a homomorphism between the groups or monoids corresponding to sourcerws and targetrws.
If the -check_index option is specified gpmorphism performs a coset enumeration on the subgroup generated by the images of the generators, so that it can check whether the mapping is an epimorphism.
Output is sent to stdout by default, but you can specify an output file if desired. The output is a GAP list of lists; each list consists of the images of the generators of sourcerws. If the -check_index option is used, then there is an additional level of nesting, and each list of generator images is followed by the index. It is possible that the coset enumeration will fail, and in this case the index is shown as 0. You can then try computing the index of the subgroup using either a coset system or gptcenum.
gpmorphism will not list every possible morphism: it will exclude any morphism where the first generator that is not mapped to the identity is mapped to a word which is obviously not a conjugacy class representative; if gpcclass has been run against targetrws then the image of the first generator will be a conjugacy class representative. Ideally gpmorphism would only list one representative amongst morphisms which are equivalent modulo an automorphism of targetrws, or at least modulo an inner automorphism: the one in which every generator image is as early as possible in the word-ordering of targetrws. It would not be easy to implement this. However, gpmorphism will exclude any morphism in which an obvious conjugation shows that the images are not minimal.
gpmorphism computes the total length of the images of the group generators (i.e. it only counts the length for one out of each pair of mutually inverse monoid generators). It first looks for morphisms where this length is 0 (i.e. the trivial morphism), then 1, 2, and so on. This ensures that if targetrws is infinite any particular morphism will still be found at some point.
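The core test gpmorphism applies to each candidate mapping is simple: a map on generators extends to a homomorphism exactly when the image of every relator of the source presentation is the identity in the target. A toy, with the target S3 given as permutations (an illustration, not gpmorphism's implementation):

```python
# Checking whether a generator mapping defines a homomorphism (toy example).
def compose(p, q):               # apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

IDENT = (0, 1, 2)
a, b = (1, 0, 2), (0, 2, 1)      # generators of the target S3

def image(word, images):
    # image of a word under the candidate map generator -> permutation
    result = IDENT
    for g in word:
        result = compose(result, images[g])
    return result

# Source group: <x | x^2> (cyclic of order 2).
relators = ["xx"]
# x -> a satisfies x^2 = 1, so this map is a homomorphism:
assert all(image(r, {"x": a}) == IDENT for r in relators)
# x -> a*b is a 3-cycle, so x^2 = 1 fails and the map is rejected:
ab = compose(a, b)
assert image("xx", {"x": ab}) != IDENT
```
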
gpmult [loglevel] [format] [-cos] [-migm] [-pres | -2 | -sub | -word expression] groupname [subsuffix]
This program assumes that the automatic structure has already been created for the input file, i.e. that groupname.gm has been created successfully, or groupname.cossuffix.gm if you are using the -cos option. gpmult can compute various additional multiplier automata which may sometimes be useful:
If no special command line options are specified gpmult computes the individual multiplier automata for the monoid generators of the group whose rewriting system is in the file groupname, using suffixes .m1 to .mn, where n is one more than the number of generators listed in the generatorOrder field of groupname. The final multiplier is the equality multiplier, which accepts (u,v) if and only if u = v and u is accepted by the word-acceptor.
If the -2 option is specified then instead MAF computes the general multiplier for words of length 1 and 2, groupname.gm2. This automaton is not needed in MAF, but if it is available certain other multipliers may be computed more quickly. It may take a very long time to generate this multiplier. In this usage gpmult is equivalent to KBMAG's gpgenmult2, or to gpmigenmult2 if you also use the -cos option and -migm options. In fact MAF versions of those utilities are also provided, as also is gpmimult, but there is no reason to use them and they are not described in detail in this section.
If the -sub option is specified, then gpmult reads the substructure file groupname.subsuffix. It then generates the general multiplier in the group, for the subgroup generators, their inverses, and the identity element, and outputs it to groupname.subsuffix.gmg. This option cannot be used when the -cos option is specified, because gpmult only allows one subgroup suffix to be specified. However, if you want to generate coset multipliers for a specific collection of words you can do so by specifying the coset system directly and using the -sub option as well. For example the command line gpmult -migm groupname.cos -sub sub1 would build the multipliers for the words specified as subgroup generators in groupname.sub1 relative to the subgroup specified in groupname.sub.
When the -cos option is used gpmult will, by default, compute the new multipliers using the determinised general multiplier for the coset system. If instead you would like a MIDFA multiplier then specify the -migm option.
gporder [loglevel] [-steps] [reduction_method] rwsname [-cos [subsuffix]] [-i | -read filename | word]
This program can be used to find the order of torsion elements. If the -cos option is used then the order modulo the subgroup is found, i.e. the least power of the word which is in the subgroup. It is assumed that automata has previously been run, and that it has output at least one FSA which makes at least provisional word reduction possible. If not, or if you specify that gporder should use an automaton that is not available, then gporder will exit with an error message and error level.
gporder reduces successive powers of the word(s) until it reaches the identity word. If the group is infinite and a word-acceptor is available, MAF will also check, for each power it computes, whether the word-acceptor will accept that word infinitely often, which proves the element is not a torsion element. If the maximum word length is exceeded before either of these things happens then the program will crash. The program does not yet attempt to use information about the order of a finite group to work more efficiently, and the method used to detect that an element is not a torsion element is not yet as good as it could be.
If an input file containing a GAP list of words is supplied gporder will calculate the orders of all the words in the list and output a list consisting of the words and the corresponding order.
If the -i option is used gporder runs in an interactive mode, and will report the order of words as you enter them. Each word should be terminated by a comma; pressing the <Enter> or carriage return key has no effect otherwise, since MAF will assume that the word whose order is to be found continues on the next line. To leave the program enter a ; or press <Ctrl+C>.
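The underlying computation is easy to sketch: form successive powers, reducing each, until the identity appears. In the toy below, permutation multiplication in S3 stands in for word reduction (an illustration, not gporder's implementation):

```python
# Toy order computation: power up until the identity is reached.
def compose(p, q):               # apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

def order(p, limit=100):
    ident = tuple(range(len(p)))
    power, n = p, 1
    while power != ident:
        power = compose(power, p)
        n += 1
        if n > limit:            # give up: p may not be a torsion element
            return None
    return n

assert order((1, 0, 2)) == 2     # a transposition has order 2
assert order((1, 2, 0)) == 3     # a 3-cycle has order 3
```
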
gpsublowindex [loglevel] [-start n] -stop n [-named] [-simplify] [-rws | -pres] groupname [subsuffix]
gpsublowindex is a utility which can be used to find low index subgroups of a group, and to generate substructure files for them. It finds conjugacy class representatives of the subgroups which have an index between 2, or the value specified for -start, and the value specified for -stop, and generates substructure files for them. Each substructure file will be for a simple coset system unless the -named option is used. If the -simplify option is used MAF will eliminate as many of the specified generators as it can before generating the substructure file. If the -rws option is used a new input file is generated, which presents the specified subgroup. The -pres option is similar, but generates GAP source code. The new substructure files will be output to groupname.sublow_in1_n2, where n1 is the index of the subgroup and n2 is a sequence number. If a subsuffix is specified the files will be output to groupname.subsuffix_in1_n2 instead.
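For a very small group the search can be done by brute force, which makes the goal concrete. The toy below finds the subgroups of S3 with index between 2 and 3 by closing subsets under multiplication (gpsublowindex of course works from the presentation, not from a list of elements):

```python
# Brute-force low-index subgroup search in S3 (illustration only).
from itertools import combinations, permutations

def compose(p, q):               # apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

G = list(permutations(range(3)))         # S3, order 6

def closure(gens):
    # subgroup generated by gens: in a finite group, closing under
    # products suffices (inverses are positive powers)
    H = {tuple(range(3))} | set(gens)
    while True:
        new = {compose(x, y) for x in H for y in H} - H
        if not new:
            return frozenset(H)
        H |= new

# every subgroup of S3 is generated by at most 2 elements
subgroups = {closure(c) for r in range(3) for c in combinations(G, r)}
low_index = [H for H in subgroups if 2 <= len(G) // len(H) <= 3]
# S3 has one subgroup of index 2 (A3) and three conjugate subgroups of index 3
assert sorted(len(G) // len(H) for H in low_index) == [2, 3, 3, 3]
```

Note that the three index 3 subgroups are conjugate, so gpsublowindex would report a single representative for them.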
gpsubmake [loglevel] -derived | -named | -power n groupname [subsuffix]
gpsubmake is a utility which can be used to generate certain substructure files for an input file describing a group.
If the -derived option is specified, a substructure file for the derived subgroup is generated. The coset system will be a normal closure coset system. It is perfectly possible that MAF will not be able to process this successfully, since the derived subgroup may not be finitely generated. The new substructure file will be output to groupname.sub_derived unless a value is specified for subsuffix.
If the -named option is specified, then either groupname.sub, if no value is specified for subsuffix, or groupname.subsuffix otherwise, must be the substructure file for a simple coset system that has already been successfully processed by automata. A new substructure file for the same subgroup is created, using named generators. The new generators will be taken from groupname.subsuffix.rws.simplify if this exists, or groupname.subsuffix.rws otherwise. The new substructure file is given the name groupname.subsuffix_named.
If the -power n option is specified then n should be a small integer. MAF attempts to find a finite collection of words of the form w^n that generate a finite index normal subgroup which appears to contain all nth powers. The substructure file is for a simple coset system. The new substructure file will be output to groupname.sub_power_n unless a value is specified for subsuffix.
gpsubpres [loglevel] [reduction_method] [-rs | -rws] [-schreier] [-no_composite] [-kb] [-pres [-keep]] groupname [subsuffix]
gpsubpres computes a presentation of the subgroup described in groupname.subsuffix (or groupname.sub if subsuffix is not specified). The presentation will normally be output as a new input file named groupname.subsuffix.rws, but if the -pres option is used it is output as GAP source code to groupname.subsuffix.pres. MAF cannot itself process files in this alternative format.
gpsubpres can use three methods to compute a presentation:
If the -pres option is used the generators are given new names, even if the presentation is on named subgroup generators, unless you also specify -keep, in which case MAF will attempt to keep as many generator names the same as possible. Since GAP uses group rather than monoid generators, at least one of each pair of mutually inverse generators will disappear.
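One ingredient of subgroup presentation methods is the notion of Schreier generators: given a transversal of H in G, the elements t*g*rep(t*g)^-1, for t in the transversal and g a generator of G, all lie in H and generate it. A toy illustration, with permutations of S3 standing in for words (an illustration of the idea, not gpsubpres's implementation):

```python
# Schreier generators for a subgroup of S3 (illustration only).
from itertools import permutations

def compose(p, q):               # apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))         # S3
a, b = (1, 0, 2), (0, 2, 1)              # generators of S3
H = {(0, 1, 2), a}                       # the subgroup <a>

# choose a right transversal: one representative per right coset Hg
transversal, seen = [], set()
for g in G:
    coset = frozenset(compose(h, g) for h in H)
    if coset not in seen:
        seen.add(coset)
        transversal.append(g)

def rep(x):                              # transversal element of the coset Hx
    return next(t for t in transversal if compose(x, inverse(t)) in H)

# Schreier generators t * g * rep(t*g)^-1 all lie in H and generate it
schreier = {compose(compose(t, g), inverse(rep(compose(t, g))))
            for t in transversal for g in (a, b)}
assert schreier == H        # here they give all of H directly
```
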
gpsubwa [loglevel] [format] [reduction_method] groupname [subsuffix]
gpsubwa attempts to compute a subgroup word-acceptor for the subgroup H of the automatic group G (whose automatic structure must previously have been computed), where the rewriting system for G is defined in groupname, and the generators for H are defined in the file groupname.subsuffix, where subsuffix defaults to sub. (See Substructure files for information on the format of the latter file.) The computed automaton is output to the file groupname.subsuffix.wa.
gpsubwa requires the word-acceptor and general multiplier for an automatic structure of G, which it inputs from the files groupname.wa and groupname.gm. These need to be computed first by running automata or its constituents. It also requires word-reduction automata for G and for cosets of H in G, as described above. These are input, respectively, from groupname.diff2 and groupname.cosname.kbprog by default, but these defaults may be changed by using the appropriate options (see below). Here cosname is derived from subsuffix by substituting the string cos for sub as described in Chapter "Subgroup and Coset Files".
automata and autcos will normally attempt to create the subgroup word-acceptor as the last thing they do. However, this stage is skipped if you forgot to first find the automatic structure for the group itself, and you may have used an option to prevent this acceptor from being computed, so it is sometimes useful to be able to do this separately. It is also sometimes possible to create a subgroup word-acceptor even when MAF could not find an automatic coset structure, and in these circumstances too you will need to run gpsubwa separately.
gptcenum [loglevel] [format] [-table] [-wa] [-rs] [-pres] [-work_space n] [-max_cosets n] [-strategy str] [-compression_mode n] [-max_hole_percentage n] [-allow_holes n] [-as_is | -cr_mode n] [-ep_mode n] [-queued_definitions n] groupname [-cos [subsuffix]]
gptcenum attempts to enumerate the cosets of either the trivial subgroup, if -cos is not specified, or the subgroup specified in groupname.subsuffix, in groupname. groupname must be the rewriting system for a group otherwise gptcenum will produce meaningless results. If the computation is successful the index of the subgroup is reported.
If the -table option is specified the standardised coset table is output. This automaton is identical to the automaton computed by gpcosets. Unlike automata, which always outputs the automata it computes, gptcenum allows for all the results of its computations to be discarded, except for the index of the subgroup, (or the size of the group if you are not using a coset system). If the -wa option is specified gptcenum computes the word-acceptor (or coset word-acceptor if this is a coset system).
When the -cos option is specified, then if the -rs option is specified gptcenum uses the Reidemeister-Schreier algorithm to compute a presentation of the subgroup. This will usually be highly redundant, and will typically have many more generators than a presentation computed from an automatic structure, (even one using Schreier generators). You can use simplify to improve the presentation. The presentation will be output, to groupname.subsuffix.rws, as a new input file, unless you use the -pres option, in which case it is output as GAP source code to groupname.subsuffix.pres, in the same way as subgroup presentations generated by autcos with -p.
Both the -wa and -rs options require a coset table, so you are recommended to use the -table option in both these cases. The coset table probably provides the fastest means of word reduction available in MAF, because the reduction is always done in a single step, so there is a minimal amount of rewriting and no back-tracking at all.
-work_space n sets the maximum size of the coset table in bytes. For example -work_space 1G will allow a coset table to use up to 1 gigabyte of memory. The amount of memory used for other purposes by the program is typically rather small, so usually you can specify a value close to the amount of RAM available, or even a little more than this (on the author's computer gptcenum performs quite well even when the coset table is allowed to be 50% larger than the amount of RAM actually present). If the value you specify is too large the program will be slowed down by excessive page faulting. Instead of using -work_space n you can specify -max_cosets n, which sets the maximum number of rows in the table. In that case the amount of memory needed will be this number plus 1, multiplied by the number of monoid generators, multiplied by the amount of memory needed to store an integer. This is 4 bytes, whether you are using the 32-bit or the 64-bit version of MAF, unless you use a 64-bit version of MAF with support for automata with more than 2^31 states enabled, in which case it is 8 bytes. (A computer needs at least 50GB of RAM to be able to build the smallest possible coset table of this type.)
If you do not specify either of these options gptcenum will grow the coset table dynamically. This is likely to cause page-faulting sooner than if you had specified a maximum coset table size, because the operating system will be forced to reallocate huge blocks of memory. You should note that easy enumerations will complete more quickly if the maximum coset table size is set to a small value: even though it has no effect on the amount of work done by the program, setting the table to a large size will make it run markedly more slowly, because the CPU cache is used less effectively.
So if you wanted to enumerate a large number of similar input files it would make sense to set a low limit on the table size in the hope that most enumerations would succeed with this size. Then you could try the ones that failed with a larger table size. Overall this might well be quicker than doing all the enumerations using the larger table size for all of them.
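The shape of the computation gptcenum performs can be shown in miniature. The toy below is an HLT-style Todd-Coxeter enumeration (far simpler than gptcenum's implementation): it assumes, for brevity, that every generator is an involution, so the table needs one column per generator and every edge is symmetric. Coincidences are handled with a union-find structure.

```python
# Toy HLT-style Todd-Coxeter enumeration (illustration only).
# Assumes all generators are involutions (true for the example below).
GENS = "ab"
RELATORS = ["aa", "bb", "ababab"]        # S3 = <a,b | a^2, b^2, (ab)^3>

def todd_coxeter(gens, relators):
    table = [{}]            # partial coset table: table[c].get(g) -> coset
    parent = [0]            # union-find recording coincidences

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    def install(c, g, d):
        # record the symmetric edge c --g-- d, processing any coincidences
        queue = [(c, g, d)]
        while queue:
            c, g, d = queue.pop()
            for x, y in ((c, d), (d, c)):
                x, y = find(x), find(y)
                t = table[x].get(g)
                if t is None:
                    table[x][g] = y
                elif find(t) != y:
                    lo, hi = sorted((find(t), y))
                    parent[hi] = lo          # merge coset hi into lo
                    queue.extend((lo, h, find(e))
                                 for h, e in table[hi].items())

    def scan(c, r):
        # trace relator r from coset c, defining new cosets as needed,
        # then force the trace to close back at c
        cur = find(c)
        for g in r[:-1]:
            cur = find(cur)
            nxt = table[cur].get(g)
            if nxt is None:
                nxt = len(parent)            # define a brand new coset
                parent.append(nxt)
                table.append({})
                install(cur, g, nxt)
            cur = find(nxt)
        install(find(cur), r[-1], find(c))

    i = 0
    while i < len(parent):
        if find(i) == i:
            for r in relators:
                if find(i) != i:
                    break                    # coset i died during scanning
                scan(i, r)
        i += 1
    return [c for c in range(len(parent)) if find(c) == c]

# enumerating cosets of the trivial subgroup gives the group order
assert len(todd_coxeter(GENS, RELATORS)) == 6    # |S3| = 6
```
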
-strategy string sets the strategy. This is a complex and powerful option. It can contain the name of a predefined strategy: sims:1, sims:2, ... sims:10, fel:0, fel:1, easy, hlt, hard, long, lucky, pure_c, def, or default (the last two are not the same). Alternatively you can specify a custom strategy with up to 8 phases. The various sims:n strategies exactly match the ten coset enumeration strategies implemented by Charles Sims's TEN_CE coset enumerator in Chapter 5 of [Sims94] (which is not quite what the corresponding strategies in ACE do; also, ACE only supports the odd-numbered Sims strategies). Most of the others, apart from "long", "lucky" and "default", try to match the corresponding strategy in ACE. However, strategies that use short-gap filling will not behave quite like their ACE counterparts, because MAF does this in a slightly different way from ACE. In fact no attempt has been made to duplicate ACE's behaviour exactly: the complexity of the ACE source code would make this very difficult to achieve.
The "default" strategy is used if no strategy option is specified. This does an HLT+look-ahead enumeration until the table overflows, and then alternates three phases in different styles, to mix "HLT" style and "Felsch" style processing. The -strategy def option does something similar, but the mix is slightly different and is intended to match the corresponding ACE strategy. In fact neither of these strategies works as well as it might, because the initial HLT+look-ahead phase will often leave the coset table so full that the other phases do not have much opportunity to do anything before the enumeration fails.
As its name may suggest, the "long" strategy is suited to input files where some of the relators, or some of the subgroup generators, are long words. This strategy is mostly Felsch style, but each time a row is filled one relator may be applied, HLT style, to some position (anywhere in the table), with each relator having a roughly equal chance of being selected. MAF attempts to apply the relator in the "best" position, meaning a position requiring the smallest possible number of new definitions. In fact, the code that does this is intentionally not quite correct, and sometimes no relator will be applied, because the gaps will already have closed where MAF chooses to apply the relator. Experiments indicated that this strategy behaved better like this than when MAF went to the trouble of noticing that this had happened and then picking a different relator and position to apply. This strategy is different from anything ACE can do. The author's intention was to try to match what a human might do when performing an enumeration by hand. It is not always the optimal strategy, but it has outstandingly good behaviour on many input files. For example it can handle the degen4b presentation within a few seconds (though not degen4c). The author found several input files where this strategy succeeded when all the other predefined strategies failed. The "lucky" strategy is similar, but there is a small amount of pseudo-random variation.
For custom strategies each phase's descriptor starts with /. Unlike all other command line options in MAF, the flags that set the strategy are mostly case insensitive. The strategy for a phase is set as follows:
* marks the phase which is returned to when all phases have been executed. (Usually this would probably be the second phase).
H denotes an HLT type phase, M denotes a CHLT phase (the "mendelsohn:1" option in ACE terminology), G,N partial CHLT phases (only useful for normal closure coset systems). G causes the cyclic conjugates of the relators to be applied, N causes the cyclic conjugates of the subgroup generators to be applied. (When gptcenum processes a normal closure coset system the subgroup generators are effectively extra relators).
If none of H,M,G,N are specified the phase is "Felsch" type.
2 or 3 select alternate orderings of the conjugated relators for CHLT type phases.
C enables the deduction stack for HLT/CHLT type phases (like Sims:3/Sims:4 or Sims:7/Sims:8).
L enables "look-ahead" for HLT type phases.
P causes an HLT type row to be filled before relators are applied (rather than after).
S enables short gap filling, Q enables queued short gap filling.
U limits short gap filling using the fill factor (on its own it implies the S option as well).
R limits short gap filling by requiring the scan to pass through a "low" row (one that is already filled), and also implies the S option.
Valid combinations of the various short gap options are /S, /Q, /R, /QR, /U, /QU, /UR, /QUR. The S option turns off R,U and Q. The Q option turns off R and U.
X enables "long" gap filling, and completes at most one scan per iteration, in a most nearly complete position; relators are considered in round-robin fashion.
A selects ACE definition order (back filled).
B selects MAF definition order (balanced filling). This means that when MAF is either applying a relator to a coset HLT style, or filling a short gap, the next coset to be defined will be attached to the one nearest the beginning of the table. That means that when a relator is applied HLT style cosets will alternately be defined from the front and the back of the relator.
T selects "traditional" definition order (forward filled).
The definition order options have a usually minor impact on both the number of cosets defined in the course of an enumeration, and the maximum number of live cosets. For a particular input file any one of them may give the best results.
V means that some aspects of the phase are randomly changed. The options that can change are the definition order, the style of short gap filling, and the order of CHLT style processing, if this is selected.
W means that the phase uses a randomly chosen style.
: introduces the "duration" part of the phase. This consists of a number, followed by a D (duration measured by number of cosets defined), or R (duration measured by number of rows processed). If the number is -1 then the phase lasts for ever, or, if look-ahead is enabled and there are more phases, until a look-ahead is performed.
One can also combine these options with predefined strategies: for example, -strategy sims:1/a would modify the sims:1 strategy to behave as it would in ACE. (By default the various "sims" strategies exactly match Sims' book.). The a, t, and b strategy descriptor options are particularly useful in this case. For instance the "long" strategy is equivalent to -strategy /bxq. Experiments indicate that -strategy /axq or -strategy /txq often work a little better, and this is what you get if you use -strategy long/a or -strategy long/t. If you do this you need to be aware that some predefined strategies have more than one phase. For example, to modify the fel:1 strategy to use MAF definition order you would need to specify -strategy fel:1//b because otherwise you would only be setting the definition order on a phase which applies the relators to the identity coset. However, these modifiers work well on the first 8 Sims strategies (which use traditional definition order by default), on "easy" and "hlt" (which use ACE definition order), and "long" and "lucky" (which use MAF definition order). The definition order makes no difference unless either relators are applied to cosets or short gaps are being filled.
In theory "long" and "lucky" might not be correct coset enumeration strategies, because it is possible that the enumerator would spend all its time filling short gaps, and so never fill rows from the beginning of the table. In practice this never seems to happen. The MAF enumerator is different from ACE in this respect. ACE is capable of going wrong with this kind of strategy when there are relators of length 3 (for example with the Fibonacci groups). MAF will not go wrong in that case, because it completely excludes relators of length 3 from gap filling. However, if you want to play safe you can add the 'R' descriptor as a modifier (for example, -strategy long/r). It would also be possible to add the 'U' modifier, but this tends not to work well with strategies that use "long" gap filling unless you set a high fill factor. However, enumerations that would have succeeded without either of these restrictions on gap filling will quite often fail with them if the input file is difficult.
Another example: -strategy /mcpb:1r/bsx. All cyclic conjugates of the relators are applied to a pre-filled row 1 using balanced scans with a deduction stack. Then an unlimited Felsch phase is run which does unlimited immediate short-gap filling, and applies one relator per filled row in round-robin fashion.
-fill_factor n sets the fill-factor used by U limited short gap filling. The higher the value set, the more gaps will be filled. The value 0 is handled as in ACE, and is probably the best value to use, unless you are using a strategy that does "long" gap filling, where it may well be necessary to set a very high value.
-queued_definitions n sets the size of the queue for Q type short gap filling. If this is set to 0 then the strategies that use queued short gap filling do immediate gap filling instead. If it is set to -1 then all short gap filling is disabled completely.
-cr_mode n or -as_is specify how the input file is pre-processed. Specify -cr_mode 0 or -as_is to use the presentation as is. Specify -cr_mode 1 to use cyclically and freely reduced relators, where the conjugate picked is the first in the reduction ordering. Specify -cr_mode 2 to similarly reduce relators, but this time picking the conjugate which gives the best equation when right-balanced. You might expect that to be the same as -cr_mode 1 but usually it is not, unless all the relators have odd length.
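The cyclic and free reduction that -cr_mode 1 is described as performing can be sketched as follows. This is illustrative Python, not MAF's implementation; the generator names, inversion table and ordering are assumptions, and since all cyclic conjugates of a cyclically reduced word have the same length, the shortlex comparison degenerates here to a lexicographic one.

```python
def free_reduce(word, inv):
    # Cancel adjacent inverse pairs, scanning left to right.
    out = []
    for g in word:
        if out and out[-1] == inv[g]:
            out.pop()
        else:
            out.append(g)
    return out

def cyclic_reduce(word, inv):
    # Freely reduce, then strip cancelling first/last letters.
    word = free_reduce(word, inv)
    while len(word) >= 2 and word[0] == inv[word[-1]]:
        word = word[1:-1]
    return word

def least_conjugate(word, inv, rank):
    # Pick the cyclic conjugate that comes first in the reduction
    # ordering, as -cr_mode 1 is described to do.
    w = cyclic_reduce(word, inv)
    rotations = [w[i:] + w[:i] for i in range(len(w))] or [w]
    return min(rotations, key=lambda r: [rank[g] for g in r])

inv = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
rank = {g: i for i, g in enumerate('aAbB')}
# b*a*B*A*a*A cyclically reduces to b*a*B*A, whose least conjugate
# under this ordering is a*B*A*b.
```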
-ep_mode n, where 0 ≤ n ≤ 5, selects further processing of the relators as follows:
Experiments indicate that while one might expect -cr_mode 2 -ep_mode 1 to have the best behaviour, this is by no means always the case. Quite often, if you do this, the maximum number of live cosets is as small as possible (which is good, because it means a difficult enumeration has more chance to complete), but the total number of cosets ever defined is more than it would be with some other equivalent presentation, which means that the enumeration takes longer.
In future more options of this type might be added. For example it is plausible that picking the set of conjugates which gave the smallest number of cosets after applying relators to the cosets corresponding to each generator as well as coset 1 would be a better choice than just applying them to coset 1.
-compression_mode n where 0 ≤ n ≤ 3 determines what happens to the "deduction stack" if the table is compressed mid-enumeration.
In any case the enumerator tries to avoid invoking compression when the deduction stack is not empty, and may discard the deduction stack if it believes it would be faster to scan the entire table with an HLT "look-ahead".
-max_hole_percentage n : selects maximum percentage of holes. If this is set to 100, compression will be disabled completely. MAF sets this to 30%, whereas with ACE most strategies set it to 10%. MAF computes this as a percentage of the maximum table size, not the current size. The idea is to compress the table as seldom as possible, because compressing the table is a waste of time (except that it may marginally improve the effectiveness of CPU RAM caching).
-allow_holes n sets a number of holes which will be tolerated if the enumeration is about to fail due to lack of space, or the table is going to be resized. Setting this to a high value will prevent fruitless "look-aheads" when HLT+Lookahead mode is used. The default value is 10000. If you are going to have a very large coset table size, then it might be best to set this to a higher value.
-build_standardised causes the table to be built in BFS order, so that it is "standardised". This means that as rows are filled some rows may be interchanged with rows defined earlier. If one consults Sims, one sees that doing this is often quite beneficial, but that when it is not beneficial it can have an extremely bad effect. For some input files using this option will increase the amount of time and memory needed for the enumeration to complete exponentially. The author does not recommend the use of this option.
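What "standardised" means can be sketched by renumbering a completed table into BFS order. This toy illustration works on a finished table; gptcenum is described as maintaining the property incrementally, which is quite different.

```python
from collections import deque

def standardise(table):
    """Renumber the rows of a complete coset table into breadth-first
    order from coset 1, scanning each row's columns left to right.
    `table` maps coset -> list of image cosets, one per column."""
    order = {1: 1}
    queue = deque([1])
    while queue:
        c = queue.popleft()
        for image in table[c]:
            if image not in order:
                order[image] = len(order) + 1
                queue.append(image)
    return {order[c]: [order[i] for i in row] for c, row in table.items()}

# A complete 3-coset, 2-column table whose rows are not in BFS order;
# standardising interchanges cosets 2 and 3.
table = {1: [3, 1], 3: [2, 3], 2: [1, 2]}
```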
The word-ordering method will affect some coset enumeration strategies, because the order in which relators are applied to cosets will be affected by the ordering of the relators in the reduction order. If a coset table is created, gptcenum labels it correctly for the chosen word-ordering. You should note that this is a very simple computation when the word-ordering is "shortlex", but considerably more complex for orderings such as recursive. In future gptcenum will probably have an option to produce the confluent rewriting system that can also be deduced from the structure of the completed coset table.
The author implemented gptcenum to see if he could gain any insight from coset enumeration that would help him to improve his Knuth-Bendix implementation, and to see to what extent, if any, the oft repeated claim that coset enumeration is better than Knuth-Bendix is really justified. As yet he has not come to any firm conclusions, nor applied any lessons learned. The author is satisfied that for the great majority of input files, automata -nowd will perform acceptably when compared with gptcenum, only a little more slowly on average, and that for difficult presentations it is usually superior to gptcenum. However, for easy presentations, for which HLT style enumeration succeeds quickly, gptcenum with that strategy will probably run at least 50% faster than automata. For some presentations gptcenum requires only about 30% of the time that automata requires. It is also interesting to compare memory usage. Both programs construct rather similar data structures. In both there are rows of transitions which have as many columns as there are monoid generators, and these take up the same amount of memory in each program. But automata stores 32 bytes of information for every node in its table (or 40 bytes in the 64-bit version), which means that for the typical 2-generator group (where the table has either 3 or 4 columns) automata requires significantly more memory. Also automata keeps this amount of information not only at L0 nodes, which correspond to rows in a coset table, but also at L1 and some L2 nodes, which do not appear in a coset table at all. On the other hand automata's index automaton rarely needs to create L0 nodes for every coset. If a wreath product word-ordering is being used the total number of nodes may be minute compared with the group's order. Also, for difficult presentations, automata will typically collapse far sooner than gptcenum.
For example, for the 6561#8 presentation Derek Holt's "Handbook of computational group theory" remarks that more than 30,000,000 cosets are needed to perform a Felsch style enumeration and 100,000,000 for an HLT style enumeration. The author is not able to reproduce these results. The best predefined strategy for the presentation he uses for this group seems to be "hard", which completes with under 21 million cosets. The "long" strategy needs around 45 million, fel:1 around 50 million, and none of the other strategies tried worked with 60 million cosets. The fastest enumeration required 66 seconds. Yet automata proves this group has order 6561 using only a maximum of around 70,000 L0 nodes, on the same PC in around 30 seconds.
gpvital [loglevel] [-axioms] [-raw | -kb] [-diff2c] groupname [-cos [subsuffix]]
gpvital calculates a new presentation for a group whose automatic structure has already been computed. The presentation is usually based on the correct primary word-difference machine (the .diff1c automaton), and is in some sense a canonical presentation for the particular generating set and word-ordering method used in the rewriting system. In principle there is an axiom which proves the necessity of each transition in this word-difference machine. However, MAF will usually simplify the new presentation somewhat by eliminating axioms that are obvious consequences of earlier ones. This simplification can be prevented by use of the -raw option, or you can ask MAF to make a more determined effort to eliminate unnecessary axioms by using the -kb option.
The -axioms option will cause the original axiom set to be included first. This option may be of interest when you have finally succeeded in finding the automatic structure for a highly pathological group presentation: any extra axioms in the new presentation will normally be equations that are very hard to prove using the original axiom set, and the automatic structure will usually be much easier to compute using the new presentation.
gpsubwa [loglevel] [format] [reduction_method] groupname [subsuffix]
gpsubwa attempts to compute a subgroup word-acceptor for the subgroup H of the automatic group G (whose automatic structure must previously have been computed), where the rewriting system for G is defined in groupname, and the generators for H are defined in the file groupname.subsuffix, where subsuffix defaults to sub. (See Substructure files for information on the format of the latter file.) The computed automaton is output to the file groupname.subsuffix.wa.
gpsubwa requires the word-acceptor and general multiplier for an automatic structure of G, which it inputs from the files groupname.wa and groupname.gm. These need to be computed first by running automata or its constituents. It also requires word-reduction automata for G and for cosets of H in G, as described above. These are input, respectively, from groupname.diff2 and groupname.cosname.kbprog by default, but these defaults may be changed by using the appropriate options (see below). Here cosname is derived from subsuffix by substituting the string cos for sub as described in Chapter "Subgroup and Coset Files".
automata and autcos will normally attempt to create the subgroup word-acceptor as the last thing they do. However, this stage is skipped if the automatic structure for the group itself has not yet been found, and you may also have used an option to prevent this acceptor from being computed, so it is sometimes useful to be able to do this separately. It is also sometimes possible to create a subgroup word-acceptor even when MAF could not find an automatic coset structure, and in these circumstances also you will need to run gpsubwa separately.
gpwa [loglevel] [format] [-cos] [dm_selector] [options] groupname [subsuffix]
gpwa computes the word-acceptor for a difference machine. There is an option to calculate an "outer geodesic" acceptor. This is an acceptor which rejects all the words which are definitely not geodesic, and accepts all the others. So its language definitely includes all the geodesic words, but probably includes some words that are not geodesic. If a shortlex automatic structure exists then this acceptor always exists, whereas the true geodesic acceptor may not. So it could be useful if you are interested in enumerating geodesic words.
dm_selector can be one of: -diff2c, -diff1c, -diff2, -pdiff2, -pdiff1. Several of these word-difference machines are only available if the automatic structure has been constructed already; it is sometimes interesting to compare how quickly the word-acceptor can be built from each of the word-difference machines.
You can use this program to build a provisional word-acceptor for a group or coset system for which automata was unsuccessful.
The following special options are accepted:
If the -outer_geodesic option is specified gpwa constructs an FSA which accepts all words that cannot be reduced to a shorter word by the word difference machine. This option only makes sense if the word-ordering for the rewriting system is geodesic. Note that the FSA built in this case will usually also accept some words that can in fact be reduced to a shorter word.
If the -max_length n option is specified, where n is a positive integer, then when the FSA is being constructed, new states are only created if the defining word has length at most n, so that the output FSA will only reject words which can be reduced with a rule with a LHS word of length n or below.
If the -max_states n option is specified, then prior to minimisation the FSA constructed is allowed to have at most n states. This has a similar effect to -max_length k for some k, except that usually some rules with an LHS of length k+1 will also reject words.
Both the -max_length n and the -max_states n options are intended to allow an FSA to be built when the build process would otherwise fail due to time or space constraints.
gpxlatwa [loglevel] groupname [subsuffix]
gpxlatwa computes a unique word-acceptor for a group using a new set of generators, which are specified as subgroup generators in groupname.subsuffix (or groupname.sub if subsuffix is not specified). The subgroup must have index 1, and the substructure file must specify names and inverses for the subgroup generators. The word-acceptor groupname.wa must previously have been computed, as must the MIDFA coset general multiplier groupname.cossuffix.migm. The new word-acceptor is output to the file groupname.subsuffix.xwa. gpxlatwa works by first computing words in the new generators for each of the old generators. Then it computes the FSA which accepts a word if and only if it was formed by translating a word accepted by the original acceptor, using these words to map from the original generators to the new generators. Then it modifies the FSA so that each accepted word is replaced by its free reduction.
For example, suppose the original set of generators is [a,A,b,B,c,C] and that [c,a*a*b*A*A*B] is one of the axioms, so that the group is also generated by [a,A,b,B]. Furthermore, suppose that c*b is an accepted word in the original generators. The direct translation into the new generators would be a*a*b*A*A*B*b, therefore the new acceptor will accept the word a*a*b*A*A for the corresponding group element.
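The translate-then-freely-reduce step can be reproduced in a few lines, using the example above. This is illustrative Python with the inversion table and translation map written out by hand; it models only the word-level computation, not the FSA construction gpxlatwa actually performs.

```python
def free_reduce(word, inv):
    # Cancel adjacent inverse pairs, scanning left to right.
    out = []
    for g in word:
        if out and out[-1] == inv[g]:
            out.pop()
        else:
            out.append(g)
    return out

inv = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
# c = a*a*b*A*A*B, so C is the inverse word b*a*a*B*A*A.
translation = {'a': 'a', 'A': 'A', 'b': 'b', 'B': 'B',
               'c': 'aabAAB', 'C': 'baaBAA'}

def xlat(word):
    # Map each old generator to its word in the new generators,
    # then freely reduce the result.
    image = [h for g in word for h in translation[g]]
    return ''.join(free_reduce(image, inv))

# The accepted word c*b translates to a*a*b*A*A*B*b, which freely
# reduces to a*a*b*A*A.
```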
isconjugate [loglevel] [reduction_method] groupname word1 [word2]
isconjugate can be used to attempt to determine either whether two words are conjugates (if two words are specified on the command line), or the set of words that are conjugate to the given word (if only one word is specified). It requires that automata or a similar program has computed an automaton that allows word-reduction to take place.
When two words are specified isconjugate will attempt to find the conjugacy class of each word, and stop if it finds that these have a common element, at which point it can compute an element u such that word1^u = u^-1*word1*u = word2. The return value is 0 if a conjugating element is found, or 2 if the group is finite and there is no such element, or the group is infinite but either word1 or word2 has only finitely many conjugates and the two words are not conjugate to each other. In the case where both conjugacy classes are infinite and different isconjugate will run until it runs out of memory and crashes or it tries to create a word that is longer than the maximum word length allowed. isconjugate is very simple-minded, and there might be much better ways to discover whether two elements could possibly be conjugates: for example by checking whether the elements have different orders using gporder. isconjugate will allow you to process coset systems using the usual -cos subsuffix syntax, though the author is not sure why you would ever want to do this. In this case the two words will be considered to be conjugate if there is some word u such that u^-1*word1*u and word2 are in the same coset.
If only one word is specified isconjugate will only stop if the word has finitely many conjugates, at which point it will output a list containing all of them.
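The closure search this amounts to can be sketched on a small finite group. Here permutations of S3 stand in for reduced words (an illustrative model: isconjugate works with words and a reduction automaton, not permutations), and the class of a word is closed under conjugation by the generators.

```python
def compose(p, q):
    # Permutations as tuples mapping i -> p[i]; apply q first, then p.
    return tuple(p[i] for i in q)

def inverse(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def conjugacy_class(x, gens):
    # Close {x} under conjugation by each generator, collecting the
    # "normal forms" seen, until no new element appears.
    seen, frontier = {x}, [x]
    while frontier:
        y = frontier.pop()
        for g in gens:
            z = compose(compose(inverse(g), y), g)   # g^-1 * y * g
            if z not in seen:
                seen.add(z)
                frontier.append(z)
    return seen

gens = [(1, 0, 2), (0, 2, 1)]   # two transpositions generate S3
# The class of a transposition in S3 has three elements; the class
# of a 3-cycle has two.
```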
isnormal [loglevel] groupname [subsuffix]
Assuming automata or a similar program has computed an automaton that allows coset word-reduction to take place, isnormal will tell you whether the specified subgroup is normal. If it is not, then it will list a subgroup generator and a group generator that shows this.
isnormal will not work against a normal closure coset system.
makecosfile [-sg] [-ni] groupname [subsuffix]
makecosfile can be used to generate a coset system from an input file and a substructure file. Usually this is done automatically when you use automata or one of the KBMAG compatibility programs to process a coset system. However, you can use makecosfile to generate the coset system without processing it as an input file. You can do that if you want to edit the coset system manually, or create it in a non-standard way. In that case you will need to invoke other utilities with the name of the coset system itself, rather than use the -cos option, otherwise the coset system would be regenerated.
makecosfile takes its input from the input file groupname, which should contain a declaration of a record defining a rewriting system, in the format described in Input files, and from a substructure file groupname.subsuffix, which should contain a list of subgroup generators and (optionally) names for them, in one of the three formats described in Substructure Files. The input file should use one of the following orderings: "shortlex"; "wreathprod"; "rt_wreathprod"; "rt_recursive". makecosfile generates the coset system corresponding to the two input files. Output is to the file groupname.cossuffix, and is a coset system, the contents of which will be described in the next paragraph. How cossuffix is determined is explained in Coset system filenames.
If the option -sg is not specified, then the generated coset system will either be a simple coset system, or a normal closure coset system. This will be the case even if the subgroup generators are named in the substructure file.
If the option -sg (for "subgroup generators") is specified, then the record defined in the substructure file must contain a subGeneratorNames field. The coset system will then be generated as a coset system with named subgroup generators.
By default, if the substructure file does not contain a subGeneratorInverseNames field, then inverses of the H-generators will be appended as further H-generators. (The inverse symbol for H-generator x is named x^-1.) If the -ni option is specified, however, then these inverse generators are not introduced. (This option is provided because KBMAG has it: it is not clear to the author of MAF why this would ever be useful, because it is much more difficult for MAF to analyse such a coset system since it is no longer possible to use balancing to move H-generators on the LHS of equations to the RHS.)
This program can be used to reduce words to their normal form in the ordering specified in file rwsname. It is assumed that automata has previously run, and that it has output at least one FSA which makes at least provisional word reduction possible. If not, or if you specify that reduce should use an automaton that is not available, then reduce will exit with an error message.
reduce reduces words using one of the automata produced by automata. The reductions will always be correct in the sense that the output word will represent the same element as the input word. If automata has completed successfully, then at least one of the first five automata in the list of values that can be specified for reduction_method will exist, and reduce will use this automaton to perform the word reduction. It can therefore be used to solve the word problem in the monoid. If automata produced only provisional output, then there will usually be some pairs of words which are really equal as elements, but which reduce to distinct words, and so this program cannot be used to solve the word problem.
If the -steps option is specified MAF will apply one reduction at a time to the word and output each word.
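The stepwise behaviour can be mimicked with a naive rewriting loop. The rule set here is made up for illustration, and MAF's real reduction uses an automaton rather than string searching.

```python
def one_step(word, rules):
    # Apply the leftmost possible rule application, or return None
    # if the word is already irreducible.
    for i in range(len(word)):
        for lhs, rhs in rules.items():
            if word.startswith(lhs, i):
                return word[:i] + rhs + word[i + len(lhs):]
    return None

def reduce_steps(word, rules):
    # Record every intermediate word, as reduce -steps displays them.
    steps = [word]
    while True:
        nxt = one_step(steps[-1], rules)
        if nxt is None:
            return steps
        steps.append(nxt)

# With the (hypothetical) rules a^3 = 1 and b^2 = 1,
# reduce_steps("aabbab", {"aaa": "", "bb": ""}) passes through
# "aaab" before reaching the normal form "b".
```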
The KBMAG option -mrl maxreducelen is accepted, but ignored. Throughout MAF, words are limited to a length of MAX_WORD symbols, which currently equals 65533 symbols.
output_file may only be specified if the -read filename option has been used. If the -i option is used then the program will display a prompt and allow words to be input interactively. Words must be terminated with a ',', or with a ';' when you want to quit the program. On most operating systems it will be necessary to press Enter after the ',' or ';' character.
If the -read option is specified, then the input file should be a GAP list using the following syntax:
words := [ word1, word2, ... wordn ];
Each word should follow the same rules for input as are used in MAF's usual input files.
simplify [loglevel] [-strategy n] [-abelian] [-keep_generators] [-no_simplify] [-kb] [-keep_going] [-long_length n] [-max_elimination n] [-max_seen n] [-raw] input_file [-overwrite | -o | output_file]
simplify is a Tietze transformation program. It is not yet capable of performing all types of Tietze transformation - it can remove redundant generators, but it cannot introduce a new generator. It can reduce the length of a presentation as much as possible, and it can also use Knuth-Bendix methods to try to remove some redundant relators. simplify can be used to simplify the ".rws" files generated by gpsubpres or automata, or indeed any rewriting system in KBMAG/MAF format. simplify will happily increase the size of the presentation if it can eliminate generators by doing so, except that it will usually stop if the total length of the axioms has doubled (so running it more than once may eliminate more generators at the cost of increasing the total length of the relators further).
-strategy n, where n is a number from 0 to 15, influences the order in which generators and axioms are eliminated. 0, 1, or 3 is usually best; 4-7 are often poor. Even numbered strategies prefer to eliminate later generators first, odd numbered strategies try to select a generator to eliminate based on the estimated increase in size of the presentation.
-keep_generators tells simplify not to eliminate generators. -max_elimination n limits elimination of generators to relators of length n or below. -max_seen n limits elimination of generators to relators containing at most n distinct generators but with no limit on the length of the relator.
-keep_going tells simplify to eliminate as many generators as possible, even if the size of the presentation increases a lot.
-no_simplify tells simplify not to try to eliminate axioms, except when eliminating an unnecessary generator.
-abelian tells simplify to compute the presentation of the abelian quotient of the presentation. It does this without using linear algebra. This is slow (so that programs like GAP may well be able to do much better), but it does mean that abelianisation can be computed for presentations with very large generating sets for which linear algebra techniques might not be practical.
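The starting point of any abelianisation is the relation matrix of exponent sums, which can be sketched as follows. This is illustrative only: MAF is described as avoiding linear algebra, and extracting the invariant factors from this matrix (e.g. via Smith normal form) is a further step not shown here. The uppercase-means-inverse convention is assumed.

```python
def exponent_vector(relator, gens):
    """Abelianise a relator: the exponent sum of each generator,
    counting an uppercase letter as the inverse of its lowercase
    generator.  The rows for all relators form the relation matrix
    of the abelian quotient."""
    counts = {g: 0 for g in gens}
    for x in relator:
        if x in counts:
            counts[x] += 1
        else:
            counts[x.lower()] -= 1
    return [counts[g] for g in gens]

# Relators a^2, b^4, [a,b] on generators a, b give the relation
# matrix of Z_2 x Z_4; the commutator abelianises to the zero row.
rows = [exponent_vector(r, "ab") for r in ("aa", "bbbb", "abAB")]
```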
-best_equation_relators causes simplify to keep the cyclic conjugate that gives the best equation when right balanced. The default is to keep the least conjugate in the word-ordering. The default usually works better and is faster.
-raw speeds simplify up by not filtering the axioms through a new MAF rewriting system at the end.
-kb tells MAF to perform Knuth-Bendix expansion to eliminate more axioms. In this case -long_length n limits this option to axioms of length < n. This should only be attempted when the number of generators is less than about 50, as otherwise it is likely to be prohibitively slow.
If no output filename is specified simplify creates a new input file called input_file.simplify, but -overwrite causes simplify to overwrite the original input file instead.
It is often worth running simplify more than once. Especially when the relators are long it sometimes happens that when simplify creates the new input file for the group MAF is able to reduce the length of some of the relators further. This happens because MAF usually conjugates all the relators as it is doing so. It might then happen that this new short relator can be used to simplify some of the relators that have already been added. It also sometimes happens that changing the order of the generators can lead to further simplification of the presentation.
It is often the case that after simplification there are some short relators that are redundant. However, eliminating these is usually not a good idea as without them MAF may find the input file harder to process. In particular coset enumeration is liable to be much more difficult, and any subgroup presentations computed from the new input file may be much more difficult to handle.