The CADE ATP System Competition

Design and Organization


This document contains information about the design and organization of the competition.

The rules, specifications, and deadlines given here are absolute. Only the competition panel has the right to make exceptions.

Disclaimer

Every effort has been made to organize the competition in a fair and constructive manner. No responsibility is taken if, for one reason or another, your system does not win.


Changes

The design and procedures of this CASC evolved from those of previous CASCs. Important changes for this CASC are:


Divisions

CASC is run in divisions according to problem and system characteristics. There are competition divisions in which systems are explicitly ranked, and a demonstration division in which systems demonstrate their abilities without being formally ranked. Some divisions are further divided into problem categories, which make it possible to analyze, at a more fine-grained level, which systems work well for which types of problems. The problem categories have no effect on the competition rankings, which are made only at the division level.

Competition Divisions

Each competition division uses problems that have certain logical, language, and syntactic characteristics, so that the ATP systems that compete in the division are, in principle, able to attempt all the problems in the division. The Problems section explains what problems are eligible for use in each division and category. The System Evaluation section explains how the systems are ranked in each division.

Demonstration Division

ATP systems that cannot run on the competition computers, or cannot be entered into the competition divisions for any other reason, can be entered into the demonstration division. Demonstration division systems can run on the competition computers, or the computers can be supplied by the entrant. Computers supplied by the entrant may be brought to CASC, or may be accessed via the internet.

The entry specifies which competition divisions' problems are to be used. The results are presented along with the competition divisions' results, but may not be comparable with those results.


Infrastructure

Computers

The competition computers are Dell computers, each having:

Problems

Problem Selection
The problems are from the TPTP Problem Library. The TPTP version used for the competition is not released until after the system installation deadline, so that new problems have not been seen by the entrants.

The problems have to meet certain criteria to be eligible for selection:

The problems used are randomly selected from the eligible problems at the start of the competition, based on a seed supplied by the competition panel.

Number of Problems
The minimal numbers of problems that have to be used in each division and category, to ensure sufficient confidence in the competition results, are determined from the numbers of eligible problems in each division and category. The competition organizers have to ensure that sufficient CPU time is available to run the ATP systems on these minimal numbers of problems. The minimal numbers of problems are used in determining the CPU time limit imposed on each solution attempt.

A lower bound on the total number of problems to be used is determined from the number of computers available, the time allocated to the competition, the number of ATP systems to be run on the competition computers over all the divisions, and the CPU time limit, according to the following relationship:

                   NumberOfComputers * TimeAllocated
NumberOfProblems = ---------------------------------
                   NumberOfATPSystems * CPUTimeLimit
It is a lower bound on the total number of problems because it assumes that every system uses all of the CPU time limit for each problem. Since some solution attempts succeed before the CPU time limit is reached, more problems can be used.
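
For illustration, with purely hypothetical values of 5 computers, 36000 seconds (10 hours) allocated, 15 ATP systems over all the divisions, and a 300 second CPU time limit, the relationship gives a lower bound of 40 problems:

                   5 * 36000
NumberOfProblems = --------- = 40
                   15 * 300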

The numbers of problems used in each division and problem category are (roughly) proportional to the numbers of eligible problems, after taking into account the limitation on very similar problems.

The numbers of problems used in each division and category are determined according to the judgement of the competition organizers.
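
As a sketch of how such a (roughly) proportional allocation might be computed, the following Python fragment distributes a total across categories. The category names and all counts are hypothetical, for illustration only, not actual competition figures.

    # Allocate a total number of problems across categories, (roughly)
    # proportionally to the numbers of eligible problems.
    # All names and figures are hypothetical.
    eligible = {"FOF": 900, "CNF": 1500, "SAT": 300}
    total_to_use = 120

    total_eligible = sum(eligible.values())
    used = {category: round(total_to_use * count / total_eligible)
            for category, count in eligible.items()}
    print(used)   # {'FOF': 40, 'CNF': 67, 'SAT': 13}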

Problem Preparation
In order to ensure that no system receives an advantage or disadvantage due to the specific presentation of the problems in the TPTP, the tptp2X utility (distributed with the TPTP) is used to:

Further, to prevent systems from recognizing problems by their file names, symbolic links to the selected problems are made, with names of the form CCCNNN-1.p, where NNN runs from 001 to the number of problems in the respective division or category. The problems are specified to the ATP systems using the symbolic link names.
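
A minimal sketch of this renaming step, assuming Python, a hypothetical list of selected problem files, and "FOF" standing in for the CCC division code:

    # Link each selected problem to a neutral name of the form
    # CCCNNN-1.p, so that file names do not identify the problems.
    # The paths and the "FOF" code are hypothetical.
    import os

    selected = ["Problems/SYN/SYN075+1.p", "Problems/MGT/MGT019+2.p"]
    for number, path in enumerate(selected, start=1):
        os.symlink(os.path.abspath(path), "FOF%03d-1.p" % number)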

In the demonstration division the same problems are used as for the competition divisions, with the same tptp2X transformations applied. However, the original file names can be retained.

Resource Limits

In the competition divisions, CPU and wall clock time limits are imposed on each solution attempt. A minimal CPU time limit of 240 seconds is used. The maximal CPU time limit is determined using the relationship used for determining the number of problems, with the minimal number of problems as the NumberOfProblems. The CPU time limit is chosen as a reasonable value within the range allowed, and is announced at the competition. The wall clock time limit is imposed in addition to the CPU time limit, to limit very high memory usage that causes swapping. The wall clock time limit is double the CPU time limit.
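
As an illustration of how such limits might be enforced on a single solution attempt, here is a minimal sketch in Python (rather than the harness actually used); the prover command and problem file name are hypothetical:

    # Impose a CPU time limit on the child process via the kernel, and
    # a wall clock limit of twice the CPU limit via a timeout.
    # The prover command and problem name are hypothetical.
    import resource, subprocess

    CPU_LIMIT = 240                # seconds (the minimal CPU limit)
    WC_LIMIT = 2 * CPU_LIMIT       # wall clock limit

    def limit_cpu():
        resource.setrlimit(resource.RLIMIT_CPU, (CPU_LIMIT, CPU_LIMIT))

    try:
        subprocess.run(["./prover", "FOF001-1.p"],
                       preexec_fn=limit_cpu, timeout=WC_LIMIT)
    except subprocess.TimeoutExpired:
        pass   # wall clock limit exceeded, e.g. due to swapping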

In the demonstration division, each entrant can choose to use either a CPU or a wall clock time limit, whose value is the CPU time limit of the competition divisions.


System Evaluation

All the divisions have an assurance ranking class, ranked according to the number of problems solved (a "yes" output, giving an assurance of the existence of a proof/model). The FOF, CNF, FNT, and SAT divisions additionally have a proof/model ranking class, ranked according to the number of problems solved with an acceptable proof/model output on stdout. Ties are broken according to the average CPU times over problems solved. All systems are automatically ranked in the assurance classes, and are ranked in the proof/model classes if they output acceptable proofs/models.

For each ATP system, for each problem, three items of data are recorded: whether or not a solution was found, the CPU time taken, and whether or not a solution (proof or model) was output on stdout. The systems are ranked from this performance data. Division and class winners are announced and prizes are awarded.
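
The ranking rule can be stated as a short sketch, assuming Python; the system names and performance data are hypothetical:

    # Rank systems by number of problems solved, breaking ties by the
    # average CPU time over the problems solved. Data is hypothetical.
    results = {                    # system -> CPU times of solved problems
        "ProverA": [12.0, 30.5, 7.2],
        "ProverB": [5.1, 9.9, 14.0],
        "ProverC": [200.1, 33.3],
    }

    def rank_key(system):
        times = results[system]
        return (-len(times), sum(times) / len(times))

    ranking = sorted(results, key=rank_key)
    print(ranking)   # ['ProverB', 'ProverA', 'ProverC']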

The competition panel decides whether or not the systems' proofs and models are acceptable. The criteria include:

In the assurance classes, and the EPR and UEQ divisions, the ATP systems are not required to output solutions (proofs or models). However, systems that do output solutions on stdout are highlighted in the presentation of results.

If a system is found to be unsound during or after the competition, but before the competition report is published, and it cannot be shown that the unsoundness did not manifest itself in the competition, then the system is retrospectively disqualified. At some time after the competition, all high ranking systems in the competition divisions are tested over the entire TPTP. This provides a final check for soundness (see the section on System Properties regarding soundness checking before the competition). At some time after the competition, the proofs from the winners of the FOF and CNF division proof classes, and the models from the winners of the FNT and SAT division model classes, are checked by the panel. If any of the proofs or models are unacceptable, i.e., they are significantly worse than the samples provided, then that system is retrospectively disqualified. All disqualifications are explained in the competition report.


System Entry

To be entered into CASC, systems have to be registered using the CASC system registration form. No registrations are accepted after the registration deadline. For each system entered, an entrant has to be nominated to handle all issues (including execution difficulties) arising before and during the competition. The nominated entrant must formally register for CASC. However, it is not necessary for entrants to physically attend the competition.

Systems can be entered at only the division level, and can be entered into more than one division (a system that is not entered into a competition division is assumed to perform worse than the entered systems, for that type of problem - wimping out is not an option). Entering many similar versions of the same system is deprecated, and entrants may be required to limit the number of system versions that they enter. The division winners from the previous CASC are automatically entered into their divisions, to provide benchmarks against which progress can be judged.

It is assumed that each entrant has read the WWW pages related to the competition, and has complied with the competition rules. Non-compliance with the rules could lead to disqualification. A "catch-all" rule is used to deal with any unforeseen circumstances: no cheating is allowed. The panel is allowed to disqualify entrants due to unfairness, and to adjust the competition rules in case of misuse.

System Description

A system description has to be provided for each ATP system entered, using this HTML schema. The system description must fit onto two pages, using a 12pt Times font. The schema has the following sections:

The system description has to be emailed to the competition organizers by the system description deadline. The system descriptions, along with information regarding the competition design and procedures, form the proceedings for the competition.

Sample Solutions

For systems in the proof and model classes, representative sample solutions must be emailed to the competition organizers before the sample solutions deadline. Proof samples for the FOF proof class must include a proof for SYN075+1. Proof samples for the CNF proof class must include a proof for SYN075-1. Model samples for the FNT model class must include models for MGT019+2 and SWV010+1. Model samples for the SAT model class must include a model for MGT031-1. The sample solutions must illustrate the use of all inference rules. A key must be provided if any non-obvious abbreviations for inference rules or other information are used.


System Requirements

System Properties

Systems are required to have the following properties:

Entrants must ensure that their systems execute in a competition-like environment, according to the system checks. Entrants are advised to perform these checks well in advance of the system installation deadline. This gives the competition organizers time to help resolve any difficulties encountered. Entrants will not have access to the competition computers.

System Delivery

For systems running on the competition computers, entrants must email an installation package to the competition organizers by the installation deadline. The installation package must be a .tar.gz file containing the system source code, any other files required for installation, and a ReadMe file. The ReadMe file must contain:

The installation procedure may require changing path variables, invoking make or something similar, etc., but nothing unreasonably complicated. All system binaries must be created in the installation process; they cannot be delivered as part of the installation package. The system is reinstalled onto the competition computers by the competition organizers, following the instructions in the ReadMe file. Installation failures before the installation deadline are passed back to the entrant (i.e., deliver your installation package before the installation deadline, so that if the installation fails you have a chance to fix it!). If you are in doubt about your installation package or procedure, please email the competition organizers.

For systems running on entrant supplied computers in the demonstration division, entrants must deliver a source code package to the competition organizers by the start of the competition. The source code package must be a .tar.gz file containing the system source code.

After the competition, all competition division systems' source code is made publicly available on the CASC WWW site. In the demonstration division, the entrant specifies whether or not the source code is placed on the CASC WWW site.

System Execution

Execution of the ATP systems on the competition computers is controlled by a Perl script, provided by the competition organizers. The jobs are queued onto the computers so that each computer is running one job at a time. All attempts at the Nth problems in all the divisions and categories are started before any attempts at the (N+1)th problems.
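
This ordering can be sketched as follows, in Python rather than the Perl script actually used; the division names, problem counts, and system names are hypothetical:

    # Build a job queue in which every attempt at the Nth problems
    # starts before any attempt at the (N+1)th problems.
    # All names and counts are hypothetical.
    problems = {"FOF": 3, "CNF": 3}              # problems per division
    systems = {"FOF": ["A", "B"], "CNF": ["C"]}  # systems per division

    queue = [(division, n, system)
             for n in range(1, max(problems.values()) + 1)
             for division, count in problems.items() if n <= count
             for system in systems[division]]
    # [('FOF', 1, 'A'), ('FOF', 1, 'B'), ('CNF', 1, 'C'),
    #  ('FOF', 2, 'A'), ...]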

During the competition a Perl script parses the systems' outputs. If any of an ATP system's distinguished strings are found, then the CPU time used to that point is noted. A system has solved a problem iff it outputs its termination string within the CPU time limit, and a system has produced a proof/model iff it outputs its end-of-proof/model string within the CPU time limit. The result and timing data are used to generate an HTML file, and a WWW browser is used to display the results.
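
A minimal sketch of this output check, assuming Python; the distinguished strings and output file name shown are hypothetical, since each system declares its own:

    # Scan a system's output for its declared distinguished strings.
    # The strings and the output file name are hypothetical.
    TERMINATION = "PROOF FOUND"
    END_OF_PROOF = "END OF PROOF"

    solved = proof_output = False
    with open("prover.out") as output:
        for line in output:
            if TERMINATION in line:
                solved = True
            if END_OF_PROOF in line:
                proof_output = True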

The execution of the demonstration division systems is supervised by their entrants.

System Checks