The CADE ATP System Competition

Design and Organization


This document contains information about the competition divisions, infrastructure, system evaluation, system entry, system requirements, system delivery, system execution, and system checks.

The rules, specifications, and deadlines given here are absolute. Only the competition panel has the right to make exceptions. It is assumed that all entrants have read the web pages related to the competition, and have complied with the competition rules. Non-compliance with the rules could lead to disqualification. A "catch-all" rule is used to deal with any unforeseen circumstances: No cheating is allowed. The panel is allowed to disqualify entrants due to unfairness, and to adjust the competition rules in case of misuse.

Disclaimer

Every effort has been made to organize the competition in a fair and constructive manner. No responsibility is taken if, for one reason or another, your system does not win.


Changes

The design and procedures of this CASC evolved from those of previous CASCs. Important changes for this CASC are:


Divisions

CASC is run in divisions according to problem and system characteristics. There are competition divisions in which systems are explicitly ranked, and a demonstration division in which systems demonstrate their abilities without being formally ranked. Some divisions are further divided into problem categories, which make it possible to analyse, at a more fine-grained level, which systems work well for what types of problems. The problem categories have no effect on the competition rankings, which are made at only the division level.

Competition Divisions

The competition divisions are open to ATP systems that meet the required system properties. Each competition division uses problems that have certain logical, language, and syntactic characteristics, so that the ATP systems that compete in the division are, in principle, able to attempt all the problems in the division. In the following, "really non-propositional" means having an infinite Herbrand universe, and "effectively propositional" means syntactically non-propositional with a finite Herbrand universe. The problems section explains what problems are eligible for use in each division and category. The system evaluation section explains how the systems are ranked in each division.

Demonstration Division

ATP systems that cannot run in the competition divisions for any reason (e.g., the system requires special hardware, or the entrant is an organizer) can be entered into the demonstration division. Demonstration division systems can run on the competition computers, or the computers can be supplied by the entrant. Computers supplied by the entrant may be brought to CASC, or may be accessed via the internet. The entry specifies which competition divisions' problems are to be used. The demonstration division results are presented along with the competition divisions' results, but might not be comparable with those results. The systems are not ranked and no prizes are awarded.


Infrastructure

Computers

The computers are Dell PowerEdge blade computers. In the non-batch divisions systems may use only one core, and will be limited to a fraction of the memory (multiple jobs will be run on each node). In the batch divisions each system will be allocated one node, and may use all the cores and memory.

Problems

Problem Selection
Problems for CASC are taken from the TPTP Problem Library. Additionally, problems for the MZR@Turing division will be taken from the MPTP2078 problem set, and problems for the SMO category of the LTB division will be taken from a problem set developed for CASC-J6. The TPTP version used for CASC is released after the competition has started, so that the new problems have not been seen by the entrants. Access to and use of the non-TPTP problem sets is controlled to ensure that the systems comply with the CASC tuning restrictions.

The problems have to meet certain criteria to be eligible for selection:

The problems used are randomly selected from the eligible problems at the start of the competition, based on a seed supplied by the competition panel.

Number of Problems
The minimal numbers of problems that must be used in each division and category, to ensure sufficient confidence in the competition results, are determined from the numbers of eligible problems in each division and category (the competition organizers have to ensure that there are sufficient computers available to run the ATP systems on this minimal number of problems). The minimal numbers of problems are used in determining the time limits imposed on each solution attempt.

A lower bound on the total number of problems to be used is determined from the number of computers available, the time allocated to the competition, the number of ATP systems to be run on the competition computers over all the divisions, and the time limit per problem, according to the following relationship:

                   NumberOfComputers * TimeAllocated
NumberOfProblems = ---------------------------------
                     NumberOfATPSystems * TimeLimit
It is a lower bound on the total number of problems because it assumes that every system uses all of the time limit for each problem. Since some solution attempts succeed before the time limit is reached, more problems can be used.
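
For illustration, with purely hypothetical numbers: 25 computers, 10 hours (36000s) of allocated time, 30 ATP systems, and a 300s time limit give 25 * 36000 / (30 * 300) = 100 problems.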

The numbers of problems used in each division and problem category are (roughly) proportional to the numbers of eligible problems, after taking into account the limitation on very similar problems. The numbers of problems used in each division and category are determined according to the judgement of the competition organizers. For the MZR@Turing division, which has features that encourage machine learning from proofs found, at least 300 problems will be used.

Problem Preparation
The problems are in TPTP format, with include directives (included files are found relative to the TPTP environment variable). The problems in each non-batch division, and each LTB batch, are given in increasing order of TPTP difficulty rating. This is aesthetic in the non-batch divisions, but practically important in the batches where it is possible to learn from proofs found earlier in the batch.

In order to ensure that no system receives an advantage or disadvantage due to the specific presentation of the problems in the TPTP, the problems are preprocessed to:

In order to prevent systems from recognizing problems from their file names, symbolic links are made to the selected problems, using names of the form CCCNNN.p for the symbolic links. CCC is the division or problem category name, and NNN runs from 001 to the number of problems in the division or category. The problems are specified to the ATP systems using the symbolic link names.
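
As an illustration, the renaming might be done along the following lines (a minimal sketch; the directory layout is an assumption, and the problem names are merely examples):

    import os

    # Hypothetical selection of TPTP problems for the FOF division.
    selected = ["SEU140+2.p", "SYN075+1.p"]
    for i, name in enumerate(selected, start=1):
        # e.g., FOF001.p -> /home/tptp/TPTP/Problems/SEU/SEU140+2.p
        target = os.path.join("/home/tptp/TPTP/Problems", name[:3], name)
        os.symlink(target, "FOF%03d.p" % i)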

In the SMO problem category of the LTB division, the conjecture role is replaced by the question role, to indicate that answers are desired.

In the demonstration division the same problems are used as for the competition divisions, with the same preprocessing applied. However, the original file names can be retained for systems running on computers provided by the entrant.

Batch Specification Files
The problems for each batch division and category are listed in a batch specification file, containing one or more batch specifications. Each batch specification consists of:

BatchSampleMZRMZR, BatchSampleLTBSMO, and BatchSampleLTBMZR are examples.

Resource Limits

Non-Batch divisions
CPU and wall clock time limits are imposed. The minimal CPU time limit per problem is 240s. The maximal CPU time limit per problem is determined using the relationship used for determining the number of problems, with the minimal number of problems as the NumberOfProblems. The CPU time limit is chosen as a reasonable value within the range allowed, and is announced at the competition. The wall clock time limit is imposed in addition to the CPU time limit, to limit very high memory usage that causes swapping. The wall clock time limit per problem is double the CPU time limit. An additional memory limit of 6GB will be imposed. The time limits are imposed individually on each solution attempt.
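
For illustration, limits of this kind can be imposed on a solution attempt roughly as follows (a minimal Python sketch, not the organizers' actual harness; the prover path and limit values are hypothetical):

    import resource, signal, subprocess

    CPU_LIMIT = 300            # hypothetical CPU time limit (seconds)
    WC_LIMIT  = 2 * CPU_LIMIT  # wall clock limit is double the CPU limit
    MEM_LIMIT = 6 * 2**30      # 6GB memory limit

    def set_limits():
        # Exceeding the soft CPU limit delivers SIGXCPU to the child process.
        resource.setrlimit(resource.RLIMIT_CPU, (CPU_LIMIT, CPU_LIMIT + 10))
        resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT, MEM_LIMIT))

    proc = subprocess.Popen(["/home/casc/bin/prover", "FOF001.p"],
                            preexec_fn=set_limits)
    try:
        proc.wait(timeout=WC_LIMIT)
    except subprocess.TimeoutExpired:
        proc.send_signal(signal.SIGALRM)  # impose the wall clock limit
        proc.wait()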

In the demonstration division, each entrant can choose to use either a CPU or a wall clock time limit, whose value is the CPU time limit of the competition divisions.

MZR@Turing division
For each batch there is an overall wall clock time limit, which is available as a command line parameter. The overall limit is at least 30s multiplied by the number of problems in the division. There are no CPU time limits.

LTB division
For each batch there is a wall clock time limit per problem, which is provided in the configuration section at the start of each batch. The minimal wall clock time limit per problem is 30s. For each problem category there is an overall wall clock time limit, which is available as a command line parameter. The overall limit is the sum over the batches of the batch's per-problem limit multiplied by the number of problems in the batch. Time spent before starting the first problem of a batch (e.g., preloading and analysing the batch axioms), and times spent between ending a problem and starting the next (e.g., learning from a proof just found), are not part of the times taken on the individual problems, but are part of the overall time taken. There are no CPU time limits.
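
For illustration, with purely hypothetical numbers: a category consisting of one batch of 50 problems with a 30s per-problem limit and one batch of 100 problems with a 60s per-problem limit has an overall wall clock time limit of 50 * 30 + 100 * 60 = 7500s.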


System Evaluation

For each ATP system, for each problem, four items of data are recorded: whether or not the problem was solved, the CPU time taken (not in the MZR@Turing division), the wall clock time taken (not in the MZR@Turing division), and whether or not a solution (proof or model) was output. In the LTB division, the wall clock time is measured from when the system reports starting on a problem to when it reports ending on the problem - the time spent before starting the first problem, and times spent between ending a problem and starting the next, are not part of the time taken on problems.

The systems are ranked in the competition divisions based on this performance data. The THF, TFA, EPR, and LTB divisions have an assurance ranking class, ranked according to the number of problems solved, but not necessarily accompanied by a proof or model (thus giving only an assurance of the existence of a proof/model). The CASC@Turing, FOF, and FNT divisions have a proof/model ranking class, ranked according to the number of problems solved with an acceptable proof/model output. Ties are broken according to the average time over problems solved (CPU time for the non-batch divisions, wall clock time for the batch divisions). In the competition divisions, class winners are announced and prizes are awarded.
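
For illustration, the ranking within a class amounts to a sort of the following kind (a sketch over hypothetical data):

    # Each tuple is (system, problems solved, average time over problems solved).
    results = [("SystemA", 95, 12.3), ("SystemB", 95, 10.1), ("SystemC", 90, 8.0)]
    # Rank by problems solved (descending); break ties by average time (ascending).
    ranking = sorted(results, key=lambda r: (-r[1], r[2]))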

The competition panel decides whether or not the systems' proofs and models are acceptable for the proof/model ranking classes. The criteria include:

In the assurance ranking classes the ATP systems are not required to output solutions (proofs or models). However, systems that do output solutions are highlighted in the presentation of results.

In addition to the ranking criteria, other measures are made and presented in the results:

At some time after the competition, all high ranking systems in the competition divisions are tested over the entire TPTP. This provides a final check for soundness (see the section on system properties regarding soundness checking before the competition). If a system is found to be unsound during or after the competition, but before the competition report is published, and it cannot be shown that the unsoundness did not manifest itself in the competition, then the system is retrospectively disqualified. At some time after the competition, the proofs and models from the winners of the proof/model ranking classes are checked by the panel. If any of the proofs or models are unacceptable, i.e., they are significantly worse than the samples provided, then that system is retrospectively disqualified. All disqualifications are explained in the competition report.


System Entry

To be entered into CASC, systems must be registered using the CASC system registration form. No registrations are accepted after the registration deadline. For each system entered, an entrant has to be nominated to handle all issues (including execution difficulties) arising before and during the competition. The nominated entrant must formally register for CASC. It is not necessary for entrants to physically attend the competition.

Systems can be entered at only the division level, and can be entered into more than one division (a system that is not entered into a competition division is assumed to perform worse than the entered systems, for that type of problem - wimping out is not an option). Entering many similar versions of the same system is deprecated, and entrants may be required to limit the number of system versions that they enter. Systems that rely essentially on running other ATP systems without adding value are deprecated; the competition panel may disallow or move such systems to the demonstration division. The division winners of the previous CASC are automatically entered into their divisions, to provide benchmarks against which progress can be judged.

System Description

A system description has to be provided for each ATP system entered, using this HTML schema. The schema has the following sections:

The system description has to be emailed to the competition organizers by the system description deadline. The system descriptions, along with information regarding the competition design and procedures, form the proceedings for the competition.

Sample Solutions

For systems in the proof/model classes, representative sample solutions must be emailed to the competition organizers by the sample solutions deadline. Use of the TPTP format for proofs and finite interpretations is encouraged. The competition panel decides whether or not proofs and models are acceptable for the proof/model ranking classes.

Proof samples for the FOF proof class must include a proof for SEU140+2. Model samples for the FNT model class must include models for NLP042+1 and SWV017+1. The sample solutions must illustrate the use of all inference rules. An explanation must be provided for any non-obvious features.

For systems competing for the ISA problem category prize in the LTB division, representative sample proofs or lists of axioms must be emailed to the competition organizers by the sample solutions deadline. Use of the SZS standards is required. Samples must include a proof or list for SEU140+2. For systems competing for the SMO problem category prize in the LTB division, representative sample answers must be emailed to the competition organizers by the sample solutions deadline. Samples must include an answer for CSR082+1.


System Requirements

System Properties

Entrants must ensure that their systems execute in a competition-like environment, and have the following properties. Entrants are advised to check these properties, and the listed system checks, well in advance of the system delivery deadline. This gives the competition organizers time to help resolve any difficulties encountered. Entrants do not have access to the competition computers.

Soundness and Completeness

  1. Systems must be sound. At some time before the competition all the systems in the competition divisions are tested for soundness. Non-theorems are submitted to the systems in the FOF@Turing, MZR@Turing, THF, TFA, FOF, EPR, and LTB divisions, and theorems are submitted to the systems in the FNT@Turing, FNT, and EPR divisions. Finding a proof of a non-theorem or a disproof of a theorem indicates unsoundness. If a system fails the soundness testing it must be repaired by the unsoundness repair deadline or be withdrawn. The soundness testing eliminates the possibility of a system simply delaying for some amount of time and then claiming to have found a solution. For systems running on computers supplied by the entrant in the demonstration division, the entrant must perform the soundness testing and report the results to the competition organizers.
  2. Systems do not have to be complete in any sense, including calculus, search control, implementation, or resource requirements.
  3. All techniques used must be general purpose, and expected to extend usefully to new unseen problems. The precomputation and storage of information about individual TPTP problems or their solutions is not allowed. Strategies and strategy selection based on individual TPTP problems or their solutions are not allowed. If machine learning procedures are used, the learning must ensure sufficient generalization, so that there is no specialization to individual problems or their solutions.
  4. The system's performance must be reproducible by running the system again.
Execution
  1. Systems must run on a single locally provided standard UNIX computer (the competition computers). ATP systems that cannot run on the competition computers can be entered into the demonstration division.
  2. Systems must be executable by a single command line, using an absolute path name for the executable, which might not be in the current directory. In the non-batch divisions the command line arguments are the absolute path name of a symbolic link as the problem file name, the individual problem time limit (if required by the entrant), and entrant specified system switches. In the batch divisions the command line arguments are the absolute path name of the batch specification file, the overall category time limit (if required by the entrant), and entrant specified system switches. No shell features, such as input or output redirection, may be used in the command line. No assumptions may be made about the format of file names. An example invocation is shown after this list.
  3. Systems must be fully automatic, i.e., all command line switches have to be the same for all problems in each division.
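
For example, a non-batch invocation might look like the following (the system name, installation path, and switches are hypothetical):

    /home/casc/Systems/MyProver/bin/myprover --auto --cpu-limit 300 /home/casc/Problems/FOF001.p
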
Output
  1. In the non-batch divisions all solution output must be to stdout. In the batch divisions all solution output must be to the named output file for each problem.
  2. In the LTB division the systems must print SZS notification lines to stdout when starting and ending work on a problem (including any cleanup work, such as deleting temporary files). For example
    % SZS status Started for /home/graph/tptp/TPTP/Problems/CSR/CSR075+2.p
      ... (system churns away, result and solution output to file)
    % SZS status Theorem for /home/graph/tptp/TPTP/Problems/CSR/CSR075+2.p
    % SZS status Ended for /home/graph/tptp/TPTP/Problems/CSR/CSR075+2.p
  3. For each problem, the systems must output a distinguished string indicating what solution has been found or that no conclusion has been reached. The distinguished strings for the problem status should use the SZS ontology and standards. For example
    % SZS status Theorem for SYN075+1
    or
    % SZS status GaveUp for SYN075+1
    Regardless of whether the SZS status values are used, the distinguished strings must be different for each of the possible outcomes. The first distinguished string output is accepted as the system's result.

    In batch divisions this line must use the SZS standards, including the problem file name, and must be output to both stdout and the solution file. In the LTB division this line must be output as the last thing before the ending notification line.

  4. When outputting proofs/models, the start and end of the proof/model must be delimited by distinguished strings. The distinguished strings should use the SZS ontology and standards. For example
    % SZS output start CNFRefutation for SYN075+1
      ...
    % SZS output end CNFRefutation for SYN075+1
    Regardless of whether the SZS output forms are used, the distinguished strings must be different for each of the solution forms. The string specifying the problem status must be output before the start of a proof/model. Use of the TPTP format for proofs and finite interpretations is encouraged.
  5. When outputting a list of axioms sufficient for a proof in the ISA problem category of the LTB division, the start and end of the list must be delimited by distinguished strings. The distinguished strings should use the SZS ontology and standards. For example
    % SZS output start ListOfFOF for SYN075+1
      ...
    % SZS output end ListOfFOF for SYN075+1
  6. When outputting answers in the SMO problem category of the LTB division, the answers must be output using the Tuple or Instantiated answer form of the proposed TPTP standard for answer reporting.
Resource Usage
  1. The systems that run on the competition computers must be interruptible by a SIGXCPU signal, so that the CPU time limit can be imposed, and interruptible by a SIGALRM signal, so that the wall clock time limit can be imposed (a sketch of compliant signal handling is given after this list). For systems that create multiple processes, the signal is sent first to the process at the top of the hierarchy, then one second later to all processes in the hierarchy. The default action on receiving these signals is to exit (thus complying with the time limit, as required), but systems may catch the signals and exit of their own accord. If a system runs past a time limit this is noticed in the timing data, and the system is considered to have not solved that problem.
  2. If an ATP system terminates of its own accord, it may not leave any temporary or intermediate output files. If an ATP system is terminated by a SIGXCPU or SIGALRM, it may not leave any temporary or intermediate output files anywhere other than in /tmp. Multiple copies of the ATP systems must be executable concurrently, in the same (NFS cross mounted) directory. It is therefore necessary that temporary files have unique names.
  3. For practical reasons excessive output from an ATP system is not allowed. A limit, dependent on the disk space available, is imposed on the amount of output that can be produced. The limit is at least 10MB per system.
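
For illustration, a system might comply with the signal and temporary file requirements along the following lines (a minimal Python sketch; the status value and file naming are illustrative):

    import os, signal, sys, tempfile

    # A unique temporary file name, so that concurrent copies of the
    # system running in the same directory do not collide (cf. point 2).
    fd, tmp_name = tempfile.mkstemp(suffix=".tmp")

    def bail_out(signum, frame):
        # Clean up and exit of our own accord on SIGXCPU (CPU time limit)
        # or SIGALRM (wall clock time limit).
        os.close(fd)
        os.remove(tmp_name)
        print("% SZS status Timeout")
        sys.exit(1)

    signal.signal(signal.SIGXCPU, bail_out)
    signal.signal(signal.SIGALRM, bail_out)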

System Delivery

For systems running on the competition computers, entrants must email an installation package to the competition organizers by the system delivery deadline. The installation package must be a .tgz file containing the system source code, any other files required for installation, and a ReadMe file. The ReadMe file must contain:

The installation procedure may require changing path variables, invoking make or something similar, etc., but nothing unreasonably complicated. All system binaries must be created in the installation process; they cannot be delivered as part of the installation package. If the ATP system requires any special software, libraries, etc., that are not part of a standard installation, the competition organizers must be told in the system registration. The system is installed onto the competition computers by the competition organizers, following the instructions in the ReadMe file. Installation failures before the system delivery deadline are passed back to the entrant (i.e., delivery of the installation package before the system delivery deadline provides an opportunity to fix things if the installation fails!). After the system delivery deadline no further changes or late systems are accepted. If you are in doubt about your installation package or procedure, please email the competition organizers.

For systems running on entrant supplied computers in the demonstration division, entrants must deliver a source code package to the competition organizers by the start of the competition. The source code package must be a .tgz file containing the system source code.

After the competition all competition division systems' source code is made publicly available on the CASC web site. In the demonstration division, the entrant specifies whether or not the source code is placed on the CASC web site. An open source license is encouraged.

System Execution

Execution of the ATP systems on the competition computers is controlled by a Perl script, provided by the competition organizers. The jobs are queued onto the computers so that each computer is running one job at a time. In the non-LTB divisions, all attempts at the Nth problems in all the divisions and categories are started before any attempts at the (N+1)th problems. In the LTB division all attempts in each category in the division are started before any attempts at the next category.

During the competition a Perl script parses the systems' outputs. If any of an ATP system's distinguished strings are found then the time used to that point is noted. A system has solved a problem iff it outputs its termination string within the time limit, and a system has produced a proof/model iff it outputs its end-of-proof/model string within the time limit. The result and timing data are used to generate an HTML file, and a web browser is used to display the results.
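
For illustration, extracting a system's result from its output amounts to something like the following (a sketch; the competition's actual script is written in Perl, and the details here are assumptions):

    import re

    # Match SZS status lines such as "% SZS status Theorem for SYN075+1".
    STATUS = re.compile(r"^%\s*SZS\s+status\s+(\S+)\s+for\s+(\S+)", re.MULTILINE)

    def first_result(output):
        # The first distinguished string output is taken as the system's result.
        match = STATUS.search(output)
        return match.group(1) if match else None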

The execution of the demonstration division systems is supervised by their entrants.

System Checks