The CADE ATP System Competition

Design and Organization


This document describes the design and organization of the competition.


Design and Organization

The design and organization of CASC has evolved over the years to a sophisticated state. Important changes for CASC-J12 (for readers already familiar with the general design of CASC) are noted in the relevant sections below.

The CASC rules, specifications, and deadlines are absolute; only the panel has the right to make exceptions. It is assumed that all entrants have read the documentation related to the competition, and have complied with the competition rules. Non-compliance with the rules can lead to disqualification. A catch-all rule is used to deal with any unforeseen circumstances: no cheating is allowed. The panel is allowed to disqualify entrants due to unfairness, and to adjust the competition rules in case of misuse.

Disclaimer

Every effort has been made to organize the competition in a fair and constructive manner. No responsibility is taken if, for one reason or another, your system does not win.

A Tense Note

Attentive readers will notice changes between the present and future tenses in this document. Many parts of CASC are established and stable – they are described in the present tense (the rules are the rules). Aspects that are particular to this CASC are described in the future tense so that they make sense when reading this before the event.


Divisions

CASC is divided into divisions according to problem and system characteristics, in a coarse version of the TPTP problem library's Specialist Problem Classes (SPCs). Each division uses problems that have certain logical, language, and syntactic characteristics, so that the systems that compete in a division are, in principle, able to attempt all the problems in the division. Some divisions are further divided into problem categories that make it possible to analyze, at a more fine-grained level, which systems work well for what types of problems. The problems section explains what problems are eligible for use in each division and category.

Systems that cannot be entered into the competition divisions (e.g., the system requires special hardware, or the entrant is an organizer) can be entered into the demonstration division. The demonstration division uses the same problems as the competition divisions, and the entry specifies which competition divisions' problems are to be used.

The divisions of CASC-J12 are THF, TFA, TFN, FOF, FNT, UEQ, SLH, and ICU; some divisions are further divided into problem categories.


Infrastructure

Computers

The competition computers are StarExec compute nodes, each with two sockets (i.e., two CPUs) and 256 GiB of memory.

One ATP system runs on one CPU at a time: StarExec uses Linux's sched_setaffinity to restrict each system run to a single CPU, and setrlimit to limit memory use to 128 GiB. Systems can use all the cores on the CPU, which can be advantageous in divisions where a wall clock time limit is used. StarExec copies the systems and problems to the compute nodes before starting execution, so that there are no network delays. The StarExec computers used for CASC are the same as are publicly available to the TPTP community, which allows system developers to test and tune their systems in exactly the same environment as is used for the competition.
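As a rough illustration of these mechanisms, the following minimal C sketch pins a process to the cores of one socket and caps its memory (the core count and the core-to-socket mapping are assumptions; this is not StarExec's actual code):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        /* Pin this process to the cores of one socket; cores 0-15 are
           assumed to belong to socket 0. */
        cpu_set_t set;
        CPU_ZERO(&set);
        for (int core = 0; core < 16; core++)
            CPU_SET(core, &set);
        if (sched_setaffinity(0, sizeof set, &set) != 0)
            perror("sched_setaffinity");

        /* Cap the address space at 128 GiB, half the node's memory. */
        struct rlimit rl;
        rl.rlim_cur = rl.rlim_max = 128UL << 30;
        if (setrlimit(RLIMIT_AS, &rl) != 0)
            perror("setrlimit");

        /* ... exec the ATP system here ... */
        return 0;
    }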

Demonstration division systems can run on the competition computers, or the computers can be supplied by the entrant. The CASC-29 demonstration division systems all used the competition computers.

Problems for the TPTP-based Divisions

Problems for the THF, TFA, FOF, FNT, and UEQ divisions are taken from the TPTP Problem Library. The TPTP version used for CASC is released after the competition has started, so that new problems in the release have not been seen by the entrants. The problems have to meet certain criteria to be eligible for selection, and the problems used are randomly selected from the eligible problems based on a seed supplied by the competition panel. In order to ensure that no system receives an advantage or disadvantage due to the specific presentation of the problems in the TPTP, the problems in the TPTP-based divisions are obfuscated.

The numbers of problems used in each division and problem category are constrained by the numbers of eligible problems, the number of systems entered across the divisions, the number of CPUs available, the time limits (see Section 2.3), and the time available for running the competition live in one conference day, i.e., about 6 hours. The numbers of problems used are set within these constraints according to the judgement of the organizers.
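One plausible form of this relationship is the following (the formula and the numbers below are illustrative assumptions, not the official CASC calculation): the total work, NumberOfSystems × NumberOfProblems × TimeLimit, cannot exceed the capacity NumberOfCPUs × TimeAvailable, giving

    NumberOfProblems ≈ (NumberOfCPUs × TimeAvailable) / (NumberOfSystems × TimeLimit)

For example, with 100 CPUs, 6 hours (21600 s) of competition time, 30 entered systems, and a 120 s time limit, at most (100 × 21600) / (30 × 120) = 600 problems could be attempted by every system.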

The problems are given to the ATP systems in TPTP format, with include directives, in increasing order of TPTP difficulty rating.

Problems for the SLH Division

Problems for the SLH division are generated by Isabelle's Sledgehammer system. The problem set is the same as was used for CASC-29, for which sample problems were provided and 1000 problems were used. Appropriately difficult problems are chosen based on performance data similar to that in the TSTP. The problems are not modified by any preprocessing, thus allowing the ATP systems to take advantage of the natural structure that occurs in the problems.

The number of problems is based on the CPU time limit, using a calculation similar to that used for the TPTP-based divisions. The problems are given in a roughly estimated increasing order of difficulty.

Problems for the ICU Division

Each entrant will submit 5 to 10 FOF problems that have a conjecture. The problems must be provided in decreasing order of desired use in the division, i.e., probably from hardest to easiest for the other systems. The problems must all be different, as assessed by the competition organizers. At least five problems will be taken from each entrant's submission, in the order specified, and all systems will attempt all the selected problems. The problems will be given in the reverse of the submitted order, so that the "easier" problems are used before the "harder" ones. For example, if an entrant submits problems P1 to P5, ordered hardest to easiest, then P5 is attempted first and P1 last.

It is expected that each entrant will submit problems that are easy enough for that entrant's system, but difficult for the other entrants' systems, i.e., each entrant is saying to the others: "I Challenge yoU!".

Time Limits

In the THF, TFA, TFN, FOF, FNT, and UEQ divisions a time limit is imposed for each problem. The minimal time limit for each problem is 120s. The maximal time limit for each problem is determined using the relationship used for determining the number of problems, with the minimal number of problems as the NumberOfProblems. The time limit is chosen as a reasonable value within the range allowed according to the judgement of the organizers, and is announced at the competition. In CASC-29 a wall clock time limit was imposed for each problem, and no CPU time limits were imposed (so that it could be advantageous to use all the cores on the CPU).

In the SLH division a CPU time limit is imposed for each problem. The minimal time limit is 15s, and the maximal time limit per problem is 90s, which is the range of CPU time that can be usefully allocated for a proof attempt in the Sledgehammer context (according to Jasmin Blanchette, and he should know). The time limit is chosen as a reasonable value within the range allowed according to the judgement of the organizers, and is announced at the competition.

In the ICU division a wall clock time limit will be imposed for each problem, and no CPU time limits will be imposed (so that it can be advantageous to use all the cores on the CPU). A limit between 300 s and 600 s will be used, and will be announced at the competition.


System Entry, Delivery, and Execution

Systems can be entered at only the division level, and can be entered into more than one division. A system that is not entered into a division is assumed to perform worse than the entered systems, for that type of problem -- wimping out is not an option. Entering many similar versions of the same system is deprecated, and entrants might be required to limit the number of system versions that they enter. Systems that rely essentially on running other ATP systems without adding value are deprecated; such systems might be disallowed or moved to the demonstration division.

The ATP systems entered into CASC are delivered to the competition organizer as StarExec installation packages, which the organizer installs and tests on StarExec. Source code is delivered separately, under the trusting assumption that the installation package does correspond to the source code. After the competition all competition division systems' StarExec and source code packages are made available on the CASC web site. This allows anyone to use the systems on StarExec, and to examine the source code. An open source license is encouraged, to allow the systems to be freely used, modified, and shared. Many of the StarExec packages include statically linked binaries that provide further portability and longevity of the systems.

The ATP systems are required to be fully automatic. They are executed as black boxes, on one problem at a time. Any command line parameters have to be the same for all problems in each division. The ATP systems are required to be sound, and are tested for soundness by submitting non-theorems to the systems in the THF, TFA, FOF, UEQ, SLH, and ICU divisions, and theorems to the systems in the TFN division. Claiming to have found a proof of a non-theorem or a disproof of a theorem indicates unsoundness. If a system fails the soundness testing it must be repaired by the unsoundness repair deadline or be withdrawn.


System Evaluation

The ATP systems are ranked at the division level. For each ATP system, for each problem, four items of data are recorded: whether or not the problem was solved, the CPU and wall clock times taken (as measured by StarExec's runsolver utility, and prepended to each line of the system's stdout), and whether or not a solution (proof or model) was output.

The systems are ranked according to the number of problems solved with an acceptable solution output. Ties are broken according to the average time taken over problems solved. Trophies are awarded to the competition divisions' winners.
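The ranking rule can be sketched as a C comparator (the struct and field names are hypothetical, and the document does not specify which measured time is averaged for tie-breaking):

    #include <stdlib.h>

    /* Hypothetical per-system summary used for ranking. */
    typedef struct {
        const char *name;
        int solved;        /* problems solved with an acceptable solution */
        double avg_time;   /* average time over the problems solved */
    } SystemResult;

    /* More problems solved ranks first; ties are broken by lower
       average time over the problems solved. */
    static int compare_systems(const void *a, const void *b) {
        const SystemResult *x = a, *y = b;
        if (x->solved != y->solved)
            return y->solved - x->solved;
        if (x->avg_time < y->avg_time) return -1;
        if (x->avg_time > y->avg_time) return 1;
        return 0;
    }

    /* Usage: qsort(results, n, sizeof(SystemResult), compare_systems); */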

The competition panel decides whether or not the systems' solutions are "acceptable". The criteria include:

In addition to the ranking criteria, three other measures are presented in the results:

At some time after the competition all high ranking systems in the competition divisions are tested over the entire TPTP. This provides a final check for soundness (see the section on system properties regarding soundness checking before the competition). If a system is found to be unsound during or after the competition, but before the competition report is published, and it cannot be shown that the unsoundness did not manifest itself in the competition, then the system is retrospectively disqualified.

At some time after the competition the solutions from the winners are checked by the panel. If any of the solutions are unacceptable, i.e., they are sufficiently worse than the samples provided, then that system is retrospectively disqualified. All disqualifications are explained in the competition report.

The demonstration division results are presented along with the competition divisions' results, but might not be comparable with those results. The demonstration division is not ranked.


System Registration

To be entered into CASC, systems must be registered using the CASC system registration form by the registration deadline. For each system entered, an entrant must be nominated to handle all issues (e.g., installation and execution difficulties) arising before, during, and after the competition. The nominated entrant must formally register for CASC. It is not necessary for entrants to physically attend the competition.

The division winners of the previous CASC and the Prover9 1109a system are automatically entered into the demonstration division, to provide benchmarks against which progress can be judged.

System Description

A system description must be provided for each ATP system, using the HTML schema provided by the organizers. The schema has sections describing the system's architecture, strategies, implementation, and expected competition performance.

The system description must be emailed to the competition organizer by the system description deadline. The system descriptions form part of the competition proceedings.

Sample Solutions

For systems in the divisions that require solution output, representative sample solutions must be emailed to the competition organizer by the sample solutions deadline. Use of the TPTP format for proofs and finite interpretations is encouraged. The competition panel decides whether or not each system's solutions are acceptable.

Proof/model samples are required as follows:

An explanation must be provided for any non-obvious features.


System Requirements

System Properties

Entrants must ensure that their systems execute in the competition environment, and have the following properties. Entrants are advised to finalize their installation packages and check these properties well in advance of the system delivery deadline. This gives the competition organizer time to help resolve any difficulties encountered.

Execution, Soundness, and Completeness

  1. Systems must be fully automatic, i.e., all command line switches have to be the same for all problems in each division.
  2. Systems' performances must be reproducible by running the system again.
  3. Systems must be sound.
  4. Systems do not have to be complete in any sense, including calculus, search control, implementation, or resource requirements.
  5. All techniques used must be general purpose, and expected to extend usefully to new unseen problems. The precomputation and storage of information about individual problems that might appear in the competition, or their solutions, is not allowed. Strategies and strategy selection based on individual problems or their solutions are not allowed. If machine learning procedures are used to tune a system, the learning must ensure that sufficient generalization is obtained, so that there is no specialization to individual problems. The system description must explain any such tuning or training that has been done. The competition panel may disqualify any system that is deemed to be problem specific rather than general purpose. If you are in doubt, contact the competition organizer.
Output
  1. In all divisions the solution output must be to stdout.
  2. For each problem, the system must output a distinguished string indicating what solution has been found or that no conclusion has been reached. Systems must use the SZS ontology and standards for this. For example
    % SZS status Theorem for SYN075+1.p
    or
    % SZS status GaveUp for SYN075+1.p
  3. When outputting a solution, the start and end of the solution must be delimited by distinguished strings. Systems must use the SZS ontology and standards for this. For example
    % SZS output start CNFRefutation for SYN075+1.p
      ...
    % SZS output end CNFRefutation for SYN075+1.p
    The string specifying the problem status must be output before the start of a solution. Use of the TPTP format for proofs and finite interpretations is encouraged.
  4. Solutions may not have irrelevant output (e.g., from other threads running in parallel) interleaved in the solution. A minimal sketch of conforming output follows this list.
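The following minimal C sketch (with a hypothetical problem name and a placeholder proof step, not any particular prover's code) produces output satisfying the requirements above:

    #include <stdio.h>

    int main(void) {
        const char *problem = "SYN075+1.p";  /* hypothetical problem name */

        /* The status line must precede the solution output. */
        printf("%% SZS status Theorem for %s\n", problem);

        /* The solution must be delimited by the SZS start/end strings,
           with no irrelevant output interleaved. */
        printf("%% SZS output start CNFRefutation for %s\n", problem);
        printf("cnf(1, plain, $false, ...).\n");  /* placeholder proof step */
        printf("%% SZS output end CNFRefutation for %s\n", problem);
        return 0;
    }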
Resource Usage
  1. Systems that run on the competition computers must be interruptible by a SIGXCPU signal so that CPU time limits can be imposed, and interruptible by a SIGALRM signal so that wall clock time limits can be imposed. For systems that create multiple processes the signal is sent first to the process at the top of the hierarchy, then one second later to all processes in the hierarchy. The default action on receiving these signals is to exit (thus complying with the time limit, as required), but systems may catch the signals and exit of their own accord; a sketch of such a handler follows this list. If a system runs past a time limit this is noticed in the timing data, and the system is considered to have not solved the problem.
  2. If a system terminates of its own accord it may not leave any temporary or intermediate output files. If a system is terminated by a SIGXCPU or SIGALRM it may not leave any temporary or intermediate output files anywhere other than in /tmp.
  3. For practical reasons excessive output from an ATP system is not allowed. A limit, dependent on the disk space available, is imposed on the amount of output that can be produced.
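A minimal C sketch of a compliant signal handler (the status line is illustrative; a real system would report its actual state):

    #include <signal.h>
    #include <unistd.h>

    /* Called on SIGXCPU (CPU time limit) or SIGALRM (wall clock limit).
       write() and _exit() are async-signal-safe. */
    static void on_limit(int sig) {
        static const char msg[] = "% SZS status Timeout\n";
        (void) sig;
        (void) write(STDOUT_FILENO, msg, sizeof msg - 1);
        _exit(1);
    }

    int main(void) {
        signal(SIGXCPU, on_limit);  /* sent when the CPU time limit is hit */
        signal(SIGALRM, on_limit);  /* sent when the wall clock limit is hit */
        /* ... proof search would run here ... */
        return 0;
    }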

System Delivery

Entrants must email a StarExec installation package to the competition organizer by the system delivery deadline. The installation package must be a .tgz file containing only the components necessary for running the system (i.e., not including source code, etc.). Different starexec_run_ scripts may be provided for different divisions, for only the following differences:
  • CPU vs. wall clock time limits;
  • Theorem proving vs. model finding;
  • TPTP vs. other problems.
For example, different starexec_run_ scripts may be provided for the THF and SLH divisions (TPTP vs. other problems), and for the FOF and FNT divisions (theorem proving vs. model finding). The competition organizer can provide a tool to detect the SPC of a problem, to enable different parameter settings for such similar divisions. Different starexec_run_ scripts may not be provided for the THF, TFA, FOF, and UEQ divisions (all use wall clock time limits, and all are theorem proving).

Entrants must also email a .tgz file containing the source code and any files required for building the StarExec installation package to the competition organizer by the system delivery deadline.

For systems running on entrant supplied computers in the demonstration division, entrants must email a .tgz file containing the source code and any files required for building the executable system to the competition organizer by the system delivery deadline.

After the competition all competition division systems' source code is made publicly available in Zenodo. In the demonstration division the entrant specifies whether or not the source code is placed on the site. An open source license is encouraged.

Entrants are encouraged to make a public release of their systems ASAP after the competition, so that users can enjoy the latest capabilities.

System Execution

Execution of the ATP systems is controlled by StarExec. The jobs are queued onto the computers so that each CPU is running one job at a time. All attempts at the Nth problems in all the divisions and categories are started before any attempts at the (N+1)th problems.

A system has solved a problem iff it outputs its termination string within the time limit, and a system has produced a solution iff it outputs its end-of-solution string within the time limit. The result and timing data are used to generate an HTML file, and a web browser is used to display the results.
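These two checks can be sketched in C as a scan of a system's log (the log file name is an assumption, and only two SZS success statuses are matched, for brevity):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *log = fopen("system_output.txt", "r");  /* assumed log name */
        if (log == NULL) return 1;

        char line[4096];
        int solved = 0, solution_output = 0;
        while (fgets(line, sizeof line, log) != NULL) {
            /* runsolver prepends timing data to each line, so the SZS
               strings are matched anywhere in the line. */
            if (strstr(line, "% SZS status Theorem") != NULL ||
                strstr(line, "% SZS status Unsatisfiable") != NULL)
                solved = 1;
            if (strstr(line, "% SZS output end") != NULL)
                solution_output = 1;
        }
        fclose(log);

        printf("solved=%d solution_output=%d\n", solved, solution_output);
        return 0;
    }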

The execution of demonstration division systems is supervised by their entrants.

System Checks