The CADE ATP System Competition

Design and Organization


This document contains information about the:

The CASC rules, specifications, and deadlines are absolute. Only the competition panel has the right to make exceptions. It is assumed that all entrants have read the web pages related to the competition, and have complied with the competition rules. Non-compliance with the rules can lead to disqualification. A "catch-all" rule is used to deal with any unforeseen circumstances: No cheating is allowed. The panel is allowed to disqualify entrants due to unfairness, and to adjust the competition rules in case of misuse.

Disclaimer

Every effort has been made to organize the competition in a fair and constructive manner. No responsibility is taken if, for one reason or another, your system does not win.


Changes

The design and procedures of this CASC evolved from those of previous CASCs. Important changes for this CASC are:


Divisions

CASC is divided into divisions according to problem and system characteristics. There are competition divisions in which systems are explicitly ranked, and a demonstration division in which systems demonstrate their abilities without being ranked. Some divisions are further divided into problem categories, which makes it possible to analyse, at a more fine-grained level, which systems work well for what types of problems. The problem categories have no effect on the competition rankings, which are made at only the division level.

Competition Divisions

The competition divisions are open to ATP systems that meet the required system properties. Each division uses problems that have certain logical, language, and syntactic characteristics, so that the ATP systems that compete in the division are, in principle, able to attempt all the problems in the division. The problems section explains what problems are eligible for use in each division and category. The system evaluation section explains how the systems are ranked in each division.

Demonstration Division

ATP systems that cannot run in the competition divisions for any reason (e.g., the system requires special hardware, or the entrant is an organizer) can be entered into the demonstration division. Demonstration division systems can run on the competition computers, or on computers supplied by the entrant. The entry specifies which competition divisions' problems are to be used. The demonstration division results are presented along with the competition divisions' results, but might not be comparable with those results. The systems are not ranked.


Infrastructure

Computers

The competition computers have:

One ATP system runs on one CPU at a time, with access to half (128GB) of the memory. Systems can use all the cores on the CPU (which is advantageous in the divisions where a wall clock time limit is used).

Problems

Problem Selection
Problems for the THF, TFA, FOF, FNT, UEQ, and EPR divisions are taken from the TPTP Problem Library. The TPTP version used for CASC is not released until after the competition has started, so that new problems have not been seen by the entrants. The problems have to meet certain criteria to be eligible for selection. The problems used are randomly selected from the eligible problems based on a seed supplied by the competition panel.
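
As an illustration of how such seeded selection behaves (this is not the organizer's actual tooling; the problem names, seed, and count below are all made up), the key property is that anyone with the seed and the list of eligible problems can reproduce the selection:

    # Illustrative sketch only: reproducible random selection of problems
    # from the eligible ones, given the panel's seed. All values are made up.
    import random

    eligible = ["SYN075+1.p", "SET014+4.p", "GRP001+6.p", "NUM304+1.p"]
    seed = 271828                # hypothetical seed supplied by the panel
    number_of_problems = 2       # from the relationship in "Number of Problems"

    rng = random.Random(seed)
    print(sorted(rng.sample(eligible, number_of_problems)))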

The problems for the LTB division are taken from various sources, with each problem category being based on one source. Entrants are expected, on their honour, not to use publicly available problem sets for tuning before the competition. The process for selecting problems depends on the problem source. The batch presentation allows the ATP systems to load and preprocess the common core set of axioms just once, and to share logical and control results between proof searches. The training problems and solutions facilitate and promote learning from previous proofs.

Number of Problems
In the TPTP-based divisions, the minimal numbers of problems that must be used in each division and category, to ensure sufficient confidence in the competition results, are determined from the numbers of eligible problems in each division and category (the competition organizer has to ensure that there are sufficient computers available to run the ATP systems on this minimal number of problems). The minimal numbers of problems are used in determining the time limit imposed on each solution attempt. The numbers of problems to be used in each division of the competition are determined from the number of computers available, the time allocated to the competition, the number of ATP systems to be run on the competition computers over the divisions, and the time limit imposed on each solution attempt, according to the following relationship:

                   NumberOfComputers * TimeAllocated
NumberOfProblems = ---------------------------------
                     NumberOfATPSystems * TimeLimit
This is a lower bound on the number of problems because the relationship assumes that every system uses all of the time limit for each problem. Since some solution attempts succeed before the time limit is reached, more problems can be used. The number of problems used in each division and problem category is (roughly) proportional to the numbers of eligible problems, after taking into account the limitation on very similar problems, determined according to the judgement of the competition organizer.
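
For concreteness, here is the relationship instantiated with purely hypothetical numbers:

    # Hypothetical numbers, only to illustrate the relationship.
    number_of_computers = 25
    time_allocated = 12 * 3600       # seconds allocated to the competition
    number_of_atp_systems = 30
    time_limit = 300                 # seconds per solution attempt

    number_of_problems = (number_of_computers * time_allocated) \
                         // (number_of_atp_systems * time_limit)
    print(number_of_problems)        # 120 here - a lower bound, as noted above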

In the LTB division the number of problems in each category is determined by the number of problems in the corresponding problem set. In CASC-27, the one problem category has 10000 problems.

Problem Preparation
The problems are given to the ATP systems in TPTP format, with include directives. In order to ensure that no system receives an advantage or disadvantage due to the specific presentation of the problems in the TPTP, the problems in the TPTP-based divisions are obfuscated by:

In the LTB division the formulae are not obfuscated, thus allowing the ATP systems to take advantage of natural structure that occurs in the problems.

In the TPTP-based divisions the problems are given to the ATP systems in increasing order of TPTP difficulty rating. In the LTB division the problems in each batch are given in the natural order of their creation for the problem sets.

Batch Specification Files
The problems for each problem category of the LTB division are listed in a batch specification file, containing global data lines and one or more batch specifications. The global data lines are:

Each batch specification consists of:

Additional Notes for CASC-27
An example is BatchSampleLTBHL4, which refers to the training data file TrainingData.HL4.tgz.
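
The authoritative format is defined by the specification above and the BatchSampleLTBHL4 example. The following is only a rough, from-memory sketch of the general shape of such a file; every field name and value in it should be treated as an assumption, to be checked against the sample:

    division.category LTB.HL4
    % SZS start BatchConfiguration
    execution.order ordered
    output.required Proof
    limit.time.problem.wc 60
    limit.time.overall.wc 6000
    % SZS end BatchConfiguration
    % SZS start BatchIncludes
    include('Axioms/HL4001+0.ax').
    % SZS end BatchIncludes
    % SZS start BatchProblems
    Problems/HL400001+1.p Solutions/HL400001+1
    Problems/HL400002+1.p Solutions/HL400002+1
    % SZS end BatchProblems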

Resource Limits

In the TPTP-based divisions, CPU and wall clock time limits are imposed for each problem. The minimal CPU time limit for each problem is 120s. The maximal CPU time limit for each problem is determined using the relationship used for determining the number of problems, with the minimal number of problems as the NumberOfProblems. The CPU time limit is chosen as a reasonable value within the range allowed, and is announced at the competition. In the FEW category of the FOF division there is no CPU time limit. The wall clock time limit is imposed in addition to the CPU time limit, to limit very high memory usage that causes swapping. The default wall clock time limit for each problem is double the CPU time limit. In the FEW category of the FOF division the wall clock time limit is the same as the CPU time limit of the FEQ category of the FOF division. An additional memory limit is imposed, depending on the computers' memory.
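
Rearranging the same relationship gives the maximal CPU time limit; the numbers below are again hypothetical:

    # Hypothetical numbers: the maximal CPU time limit is obtained by using
    # the minimal number of problems in the rearranged relationship.
    number_of_computers = 25
    time_allocated = 12 * 3600
    number_of_atp_systems = 30
    minimal_number_of_problems = 100

    maximal_cpu_limit = (number_of_computers * time_allocated) \
                        // (number_of_atp_systems * minimal_number_of_problems)
    print(maximal_cpu_limit)   # 360 here; the announced limit lies in 120..360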

In the LTB division, wall clock time limits are imposed. For each batch there might be a wall clock time limit for each problem, provided in the configuration section at the start of each batch. If there is a wall clock time limit for each problem, the minimal limit is 15s, and the maximal limit is 90s. For each batch there is an overall wall clock time limit, provided in the configuration section at the start of each batch. The overall limit is proportional to the number of problems in the batch, e.g. (but not necessarily), the batch's per-problem time limit multiplied by the number of problems in the batch. Time spent before starting the first problem of a batch (e.g., preloading and analysing the batch axioms), and time spent between the end of an attempt on one problem and the start of the next (e.g., learning from a proof just found), are not part of the times taken on the individual problems, but are part of the overall time taken. There are no CPU time limits.
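
To make the accounting concrete, here is a minimal sketch, with made-up limits and stubbed-out search and learning functions, of how an LTB system might track the two budgets:

    # Minimal sketch: per-problem work is charged against both budgets,
    # while preprocessing and between-problem learning are charged against
    # the overall budget only. All names and limits here are hypothetical.
    import time

    def preprocess_batch_axioms(): pass     # stub: load/analyse common axioms
    def attempt(problem, wc_limit): pass    # stub: one proof search
    def learn_from_proofs(): pass           # stub: between-problem learning

    batch_problems = ["HL400001+1.p", "HL400002+1.p"]    # hypothetical
    problem_wc_limit = 60                        # within the 15s..90s range
    overall_wc_limit = 60 * len(batch_problems)  # e.g., per-problem x count

    batch_start = time.time()
    preprocess_batch_axioms()                    # overall budget only
    for problem in batch_problems:
        remaining = overall_wc_limit - (time.time() - batch_start)
        if remaining <= 0:
            break
        attempt(problem, wc_limit=min(problem_wc_limit, remaining))
        learn_from_proofs()                      # overall budget only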


System Evaluation

For each ATP system, for each problem, four items of data are recorded: whether or not the problem was solved, the CPU time taken, the wall clock time taken, and whether or not a proof or model was output.

The systems are ranked in the competition divisions, from the performance data. The THF, TFA, FOF, FNT, UEQ, and LTB divisions are ranked according to the number of problems solved with an acceptable proof/model output. The EPR division is ranked according to the number of problems solved, but not necessarily accompanied by a proof or model (but systems that do output proofs/models are highlighted in the presentation of results). Ties are broken according to the average time taken over problems solved (CPU time or wall clock time, depending on the type of limit in the division). Trophies are awarded to the competition divisions' winners. An additional trophy will be awarded to the winner of the FEW category of the FOF division.
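
As an illustration of the ranking rule (this is not the organizer's actual scoring script, and the performance records below are invented):

    # Illustrative ranking: more problems solved (with acceptable output)
    # ranks higher; ties are broken by lower average time over problems
    # solved. The records below are made up.
    results = {
        "SystemA": [(12.3, True), (45.0, True), (None, False)],
        "SystemB": [(10.1, True), (50.9, True), (None, False)],
    }

    def rank_key(system):
        times = [t for t, solved in results[system] if solved]
        avg = sum(times) / len(times) if times else float("inf")
        return (-len(times), avg)

    for place, system in enumerate(sorted(results, key=rank_key), 1):
        print(place, system)    # SystemA first: same count, lower average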

The competition panel decides whether or not the systems' proofs and models are "acceptable". The criteria include:

In addition to the ranking criteria, other measures are made and presented in the results:

At some time after the competition, all high ranking systems in the competition divisions are tested over the entire TPTP. This provides a final check for soundness (see the section on system properties regarding soundness checking before the competition). If a system is found to be unsound during or after the competition, but before the competition report is published, and it cannot be shown that the unsoundness did not manifest itself in the competition, then the system is retrospectively disqualified. At some time after the competition, the proofs and models from the winners (of divisions ranked by the numbers of proofs/models output) are checked by the panel. If any of the proofs or models are unacceptable, i.e., they are significantly worse than the samples provided, then that system is retrospectively disqualified. All disqualifications are explained in the competition report.


System Entry

To be entered into CASC, systems must be registered using the CASC system registration form, by the registration deadline. For each system entered an entrant must be nominated to handle all issues (e.g., installation and execution difficulties) arising before and during the competition. The nominated entrant must formally register for CASC. It is not necessary for entrants to physically attend the competition.

Systems can be entered at only the division level, and can be entered into more than one division. A system that is not entered into a competition division is assumed to perform worse than the entered systems, for that type of problem - wimping out is not an option. Entering many similar versions of the same system is deprecated, and entrants may be required to limit the number of system versions that they enter. Systems that rely essentially on running other ATP systems without adding value are deprecated; the competition panel may disallow or move such systems to the demonstration division.

The division winners of the previous CASC are automatically entered into their demonstration divisions, to provide benchmarks against which progress can be judged. Prover9 2009-11A is automatically entered into the FOF division, to provide a fixed point against which progress can be judged.

System Descriptions

A system description must be provided for each ATP system entered, using this HTML schema. The schema has the following sections:

The system description must be emailed to the competition organizer by the system description deadline. The system descriptions form part of the competition proceedings.

Sample Solutions

For systems in the divisions that require proof/model output, representative sample solutions must be emailed to the competition organizer by the sample solutions deadline. Use of the TPTP format for proofs and finite interpretations is encouraged. The competition panel decides whether or not proofs and models are acceptable.

Proof/model samples are required as follows:

An explanation must be provided for any non-obvious features.


System Requirements

System Properties

Entrants must ensure that their systems execute in the competition environment, and have the following properties. Entrants are advised to finalize their installation packages and check these properties well in advance of the system delivery deadline. This gives the competition organizer time to help resolve any difficulties encountered.

Execution, Soundness, and Completeness

  1. Systems must be fully automatic, i.e., all command line switches have to be the same for all problems in each division.
  2. Systems' performance must be reproducible by running the system again.
  3. Systems must be sound. At some time before the competition all the systems in the competition divisions are tested for soundness. Non-theorems are submitted to the systems in the THF, TFA, FOF, EPR, UEQ, and LTB divisions, and theorems are submitted to the systems in the FNT and EPR divisions. Finding a proof of a non-theorem or a disproof of a theorem indicates unsoundness. If a system fails the soundness testing it must be repaired by the unsoundness repair deadline or be withdrawn. For systems running on computers supplied by the entrant in the demonstration division, the entrant must perform the soundness testing and report the results to the competition organizer.
  4. Systems do not have to be complete in any sense, including calculus, search control, implementation, or resource requirements.
  5. All techniques used must be general purpose, and expected to extend usefully to new unseen problems. The precomputation and storage of information about individual problems that might appear in the competition, or their solutions, is not allowed. (It's OK to store information about LTB training problems.) Strategies and strategy selection based on individual problems or their solutions are not allowed. If machine learning procedures are used to tune a system, the learning must ensure that sufficient generalization is obtained, so that there is no specialization to individual problems or their solutions. The system description must explain any such tuning or training that has been done. The competition panel may disqualify any system that is deemed to be problem specific rather than general purpose. If you are in doubt, contact the competition organizer.
Output
  1. In all divisions except LTB all solution output must be to stdout. In the LTB division all solution output must be to the named output file for each problem, in the directory specified as the second argument to the starexec_run script. If multiple attempts are made on a problem in an unordered batch, each successive output file must overwrite the previous one.
  2. In the LTB division the systems must print SZS notification lines to stdout when starting and ending work on a problem (including any cleanup work, such as deleting temporary files). For example
      % SZS status Started for CSR075+2.p
        ... (system churns away, progress output to file)
      % SZS status GaveUp for CSR075+2.p
      % SZS status Ended for CSR075+2.p 
    ... and later in another attempt on that problem ...
      % SZS status Started for CSR075+2.p
        ... (system churns away, progress, result, and solution appended to file)
      % SZS status Theorem for CSR075+2.p
      % SZS status Ended for CSR075+2.p 
  3. For each problem, the system must output a distinguished string indicating what solution has been found or that no conclusion has been reached. Systems must use the SZS ontology and standards for this. For example
    % SZS status Theorem for SYN075+1.p
    or
    % SZS status GaveUp for SYN075+1.p
    In the LTB division this line must be the last line output before the ending notification line. The line must also be output to the output file.
  4. When outputting proofs/models, the start and end of the proof/model must be delimited by distinguished strings. Systems must use the SZS ontology and standards for this. For example
    % SZS output start CNFRefutation for SYN075+1.p
      ...
    % SZS output end CNFRefutation for SYN075+1.p
    The string specifying the problem status must be output before the start of a proof/model. Use of the TPTP format for proofs and finite interpretations is encouraged. A sketch of a system following this output discipline is given after this list.
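
The following is a minimal sketch, for the LTB division, of the required output discipline; prove(), the output file naming, and the problem name are assumptions, not part of the rules:

    # Minimal sketch of the SZS notification discipline in the LTB division.
    # prove() is a stub, and the output file name is an assumption - the
    # actual name for each problem comes from the batch specification.
    import os, sys

    def prove(problem):
        return "Theorem"                       # stub proof search

    output_dir = sys.argv[2]                   # second starexec_run argument
    problem = "CSR075+2.p"                     # hypothetical problem name

    print(f"% SZS status Started for {problem}", flush=True)
    status = prove(problem)
    with open(os.path.join(output_dir, problem + ".out"), "w") as out:
        out.write(f"% SZS status {status} for {problem}\n")
    print(f"% SZS status {status} for {problem}", flush=True)
    print(f"% SZS status Ended for {problem}", flush=True)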
Resource Usage
  1. Systems that run on the competition computers must be interruptible by a SIGXCPU signal, so that CPU time limits can be imposed, and interruptible by a SIGALRM signal, so that wall clock time limits can be imposed (see the sketch after this list). For systems that create multiple processes, the signal is sent first to the process at the top of the hierarchy, then one second later to all processes in the hierarchy. The default action on receiving these signals is to exit (thus complying with the time limit, as required), but systems may catch the signals and exit of their own accord. If a system runs past a time limit this is noticed in the timing data, and the system is considered to have not solved that problem.
  2. If a system terminates of its own accord, it may not leave any temporary or intermediate output files. If a system is terminated by a SIGXCPU or SIGALRM, it may not leave any temporary or intermediate output files anywhere other than in /tmp.
  3. For practical reasons excessive output from an ATP system is not allowed. A limit, dependent on the disk space available, is imposed on the amount of output that can be produced.
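
Here is a minimal sketch of complying with the signal requirements; the cleanup details are assumptions:

    # Minimal sketch: exit cleanly on SIGXCPU (CPU limit) and SIGALRM (wall
    # clock limit). On a signal-forced exit, files may remain only in /tmp,
    # so the scratch file below is created there. Details are assumed.
    import signal, sys, tempfile

    scratch = tempfile.NamedTemporaryFile(dir="/tmp", delete=False)

    def on_limit(signum, frame):
        scratch.close()          # files under /tmp may be left behind;
        sys.exit(0)              # anything elsewhere must be removed first

    signal.signal(signal.SIGXCPU, on_limit)
    signal.signal(signal.SIGALRM, on_limit)

    # ... proof search would run here ...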

System Delivery

Entrants must email a StarExec installation package to the competition organizer by the system delivery deadline. The installation package must be a .tgz file containing only the components necessary for running the system (i.e., not including source code, etc.). The entrants must also email a .tgz file containing the source code and any files required for building the StarExec installation package to the competition organizer by the system delivery deadline.

For systems running on entrant supplied computers in the demonstration division, entrants must email a .tgz file containing the source code and any files required for building the executable system to the competition organizer by the system delivery deadline.

After the competition all competition division systems' source code is made publicly available on the CASC web site. In the demonstration division, the entrant specifies whether or not the source code is placed on the site. An open source license is encouraged.

System Execution

Execution of the ATP systems is controlled by StarExec. The jobs are queued onto the computers so that each CPU is running one job at a time. All attempts at the Nth problems in all the divisions and categories are started before any attempts at the (N+1)th problems.

A system has solved a problem iff it outputs its termination string within the time limit, and a system has produced a proof/model iff it outputs its end-of-proof/model string within the time limit. The result and timing data is used to generate an HTML file, and a web browser is used to display the results.

The execution of the demonstration division systems is supervised by their entrants.

System Checks