The CADE ATP System Competition
Design and Organization
This document describes the design and organization of the competition.
The CASC rules, specifications, and deadlines are absolute.
Only the competition panel has the right to make exceptions.
It is assumed that all entrants have read the web pages related to the competition, and have
complied with the competition rules.
Non-compliance with the rules can lead to disqualification.
A "catch-all" rule is used to deal with any unforeseen circumstances:
No cheating is allowed.
The panel is allowed to disqualify entrants due to unfairness, and
to adjust the competition rules in case of misuse.
Disclaimer
Every effort has been made to organize the competition in a fair and constructive manner.
No responsibility is taken if your system does not win.
A Tense Note
Attentive readers will notice changes between the present and future tenses in this document.
Many parts of CASC are established and stable - they are described in the present tense (the
rules are the rules).
Aspects that are particular to this CASC are described in the future tense so that they make
sense when reading this before the event.
Changes
The design and procedures of this CASC evolved from those of previous CASCs.
Important changes for this CASC are:
- Proofs must be output in TPTP format.
Their syntax and structure will be checked using TPTP4X and GDV.
- The EPR division has returned from hiatus.
Divisions
CASC is divided into divisions according to problem and system characteristics.
There are competition divisions in which systems are explicitly ranked, and a demonstration
division in which systems demonstrate their abilities without being ranked.
Some divisions are further divided into problem categories, which makes it possible to analyse,
at a more fine-grained level, which systems work well for what types of problems.
The competition rankings are at only the division level.
Competition Divisions
The competition divisions are open to ATP systems that meet the required
system properties.
Each division uses problems that have certain logical, language, and syntactic characteristics,
so that the ATP systems that compete in the division are, in principle, able to attempt all the
problems in the division.
- The THF division:
Typed (monomorphic) Higher-order Form theorems (axioms with a provable conjecture).
The THF division has two problem categories:
- The TNE category: THF with No Equality
- The TEQ category: THF with EQuality
- The TFA division:
Typed (monomorphic) First-order theorems (axioms with a provable conjecture), with Arithmetic.
The TFA division has two problem categories:
- The TFI category: TFA with only Integer arithmetic
- The TFE category: TFA with only rEal arithmetic
- The TFN division:
Typed (monomorphic) First-order Non-theorems
(axioms with a countersatisfiable conjecture, and satisfiable axiom sets),
without arithmetic
- The FOF division:
First-Order Form theorems (axioms with a provable conjecture).
The FOF division has two problem categories:
- The FNE category: FOF with No Equality
- The FEQ category: FOF with EQuality
- The EPR division:
Effectively PRopositional clause normal form theorems and non-theorems (clause sets).
Effectively propositional means that the problems are syntactically non-propositional
but are known to be reducible to propositional problems, e.g., CNF problems that have no
functions with arity greater than zero.
The EPR division has two problem categories:
- The EPU category: Effectively Propositional Unsatisfiable clause sets
- The EPS category: Effectively Propositional Satisfiable clause sets
- The UEQ division:
Unit EQuality clause normal form unsatisfiable clause sets.
- The SLH division:
Typed (monomorphic) higher-order theorems without arithmetic (axioms with a provable
conjecture), generated by Isabelle's Sledgehammer system.
- The ICU division:
First-order theorems (axioms with a provable conjecture) provided by the entrants, by which
they say to the other entrants: "I Challenge yoU".
The problems section explains what problems are eligible for use in each
division and category.
The system evaluation section explains how the systems are ranked in
each division.
Demonstration Division
ATP systems that do not run in the competition divisions for any reason
(e.g., the system requires special hardware, or the entrant is an organizer)
can be entered into the demonstration division.
Demonstration division systems can run on the competition computers, or on
computers supplied by the entrant.
The entry specifies which competition divisions' problems are to be used.
The demonstration division results are presented along with the competition
divisions' results, but might not be comparable with those results.
The systems are not ranked.
Infrastructure
Computers
The competition computers have:
- An octa-core Intel(R) Xeon(R) E5-2620 v4 @ 2.10GHz, without hyperthreading.
- 128GB memory
- The CentOS Linux release 7.4.1708 (Core) operating system,
Linux kernel 3.10.0-693.el7.x86_64.
One ATP system runs on one CPU at a time.
Systems can use all the cores on the CPU, which can be advantageous in divisions where a wall clock
time limit is used.
Problems for the TPTP-based Divisions
Problems for the THF, TFA, TFN, FOF, EPR, and UEQ divisions are taken from the
TPTP Problem Library.
The TPTP version used for CASC is released only after the competition has started, so that new
problems in the release have not been seen by the entrants.
The problems have to meet certain criteria to be eligible for selection.
- The TPTP tags as biased those problems that are designed specifically to be suited or
ill-suited to some ATP system, calculus, or control strategy.
Biased problems are excluded from the competition.
- The problems must be syntactically non-propositional.
- The TPTP uses system performance data in the Thousands of Solutions from Theorem Provers
(TSTP) solution library to compute problem difficulty ratings in the range 0.00 (easy) to
1.00 (unsolved).
Difficult problems with a rating in the range 0.21 to 0.99 are eligible.
Problems of lesser and greater ratings might also be eligible in some divisions if there are
not enough problems with ratings in that range.
Systems can be submitted before the competition so that their performance data is used in
computing the problem ratings.
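The rating-based eligibility test above amounts to a simple range filter. A minimal sketch, in which the problem names and ratings are made up for illustration (they are not real TPTP data):

```python
# Hypothetical problems with made-up difficulty ratings (0.00 easy,
# 1.00 unsolved) - NOT real TPTP data.
problems = {
    "SYN075+1": 0.00,   # too easy: below 0.21
    "SEU140+2": 0.56,   # eligible
    "NUM925+4": 1.00,   # unsolved: above 0.99
    "GRP001+6": 0.33,   # eligible
}

def eligible(rating, lo=0.21, hi=0.99):
    """Eligible iff the TPTP difficulty rating lies in [lo, hi]."""
    return lo <= rating <= hi

selected = sorted(name for name, rating in problems.items() if eligible(rating))
print(selected)   # ['GRP001+6', 'SEU140+2']
```

The lo and hi defaults encode the 0.21 to 0.99 range; per the rule above, a division may widen them when too few problems fall in that range.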
In order to ensure that no system receives an advantage or disadvantage due to the specific
presentation of the problems in the TPTP, the problems in the TPTP-based divisions are obfuscated
by:
- stripping out all comment lines, including the problem header
- randomly reordering the formulae/clauses (include directives are left before
formulae, type declarations and definitions are left before the symbols' uses)
- randomly swapping the arguments of associative connectives, and randomly reversing
implications
- randomly reversing equalities
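The reordering and swapping steps above can be sketched as a toy transformation over formulae represented as nested tuples; this is only an illustration, not the organizers' actual tool, and the connective set is an assumption:

```python
import random

# Toy formula representation: atoms are strings, compound formulae are
# tuples such as ("&", lhs, rhs) or ("=", lhs, rhs).
ASSOCIATIVE = {"&", "|"}            # assumed associative connectives

def obfuscate_formula(formula, rng):
    """Randomly swap the arguments of associative connectives and randomly
    reverse equalities, recursing into subformulae."""
    if not isinstance(formula, tuple):
        return formula              # atom: unchanged
    op, *args = formula
    args = [obfuscate_formula(a, rng) for a in args]
    if (op in ASSOCIATIVE or op == "=") and rng.random() < 0.5:
        args.reverse()
    return (op, *args)

def obfuscate_problem(formulae, rng):
    """Obfuscate each formula, then randomly reorder the whole list."""
    formulae = [obfuscate_formula(f, rng) for f in formulae]
    rng.shuffle(formulae)
    return formulae
```

Comment stripping, keeping include directives and type declarations before their uses, and implication reversal (to TPTP's <= connective) are not modelled in this sketch.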
The numbers of problems used in each division and problem category are constrained by the
numbers of eligible problems, the number of systems entered across the divisions, the number of
CPUs available, the time limits, and the time available for running the
competition live in one conference day, i.e., in about 6 hours.
The numbers of problems used are set within these constraints, according to the judgement of the
organizers.
The problems used are randomly selected from the eligible problems based on a seed supplied by
the competition panel:
- The selection is constrained so that no problem category contains an excessive number of
very similar problems, according to the "very similar problems" (VSP) lists distributed with
the TPTP:
For each problem category in each division, if the category is going to use N
problems and there are L VSP lists that have an intersection of at least
N/(L + 1) with the eligible problems for the category, then at most
N/(L + 1) problems are taken from each of those VSP lists.
- In order to combat excessive tuning towards problems that are already in the preceding TPTP
version, the selection is biased to select problems that are new in the TPTP version used,
until 50% of the problems in each problem category have been selected or there are no more
new problems to select, after which random selection from old and new problems continues.
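The VSP cap can be illustrated with made-up numbers; note that the rule's L is read here as the number of lists whose overlap with the eligible problems meets the threshold, with the threshold approximated using the total number of VSP lists:

```python
# Made-up example of the VSP cap: 100 eligible problems, two VSP lists.
eligible = {f"p{i}" for i in range(1, 101)}
vsp_lists = [
    {f"p{i}" for i in range(1, 61)},     # 60 very similar problems
    {f"p{i}" for i in range(61, 91)},    # 30 very similar problems
]
N = 90                                   # problems the category will use

# A list qualifies if its intersection with the eligible problems is at
# least N/(L + 1).
threshold = N / (len(vsp_lists) + 1)     # 30.0
L = sum(1 for v in vsp_lists if len(v & eligible) >= threshold)
cap = N / (L + 1)
print(L, cap)   # 2 30.0 - at most 30 problems from each qualifying list
```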
The problems are given to the ATP systems in TPTP format, with include directives, in
increasing order of TPTP difficulty rating.
Problems for the SLH Division
Problems for the SLH division are generated by Isabelle's Sledgehammer system.
The problem set is the same as that used for CASC-29 and CASC-J12.
Appropriately difficult problems are chosen based on performance data similar to that in the TSTP.
The problems are not modified by any preprocessing, thus allowing the ATP systems to take
advantage of natural structure that occurs in the problems.
The number of problems is based on the CPU time limit, using a calculation similar to that used
for the TPTP-based divisions.
The problems are given in a roughly estimated increasing order of difficulty.
Problems for the ICU Division
Each system (or group of related systems) will submit 10 to 20 FOF theorems, i.e., problems with
axioms and a provable conjecture.
The problems must be provided in decreasing order of desired use in the division, i.e., probably
from hardest to easiest for other systems.
The problems must all be different, as assessed by the competition organizer and the panel.
At least 10 problems will be taken from each submission, in the order specified, and all systems
will attempt all the selected problems.
The problems will be given in the reverse of that order, i.e., in increasing order of desired
use, so that the "easier" problems are used before the "harder" ones.
It is expected that each system's problems will be easy enough for that system, but difficult for
the other systems, i.e., each system is saying to the others: "I Challenge yoU!".
Time Limits
In the THF, TFA, TFN, FOF, EPR, and UEQ divisions a wall clock time limit is imposed for each
problem, and no CPU time limits are imposed (so that it can be advantageous to use all the cores
on the CPU).
The minimal time limit is 120s.
The maximal time limit is determined using the relationship used for determining the number of
problems, with the minimal number of problems as the NumberOfProblems.
The time limit is chosen as a reasonable value within the range allowed according to the
judgement of the organizers, and is announced at the competition.
In the SLH division a CPU time limit is imposed for each problem.
The minimal time limit is 15s, and the maximal time limit is 90s, which is the range of CPU time
that can be usefully allocated for a proof attempt in the Sledgehammer context (according to Jasmin
Blanchette, and he should know).
The time limit is chosen as a reasonable value within the range allowed according to the
judgement of the organizers, and is announced at the competition.
In the ICU division a wall clock time limit is imposed for each problem, and no CPU time
limits are imposed (so that it can be advantageous to use all the cores on the CPU).
The minimal time limit is 300s, and the maximal time limit is 600s.
The time limit is chosen as a reasonable value within the range allowed according to the
judgement of the organizers, and is announced at the competition.
System Evaluation
For each ATP system, for each problem, four items of data are recorded:
whether or not the problem was solved,
the CPU time taken,
the wall clock time taken,
and whether or not a solution (proof or model) was output.
The systems are ranked in the competition divisions according to the number of problems solved
with an acceptable solution output, except in the EPR division, which is ranked according to the
number of problems solved but not necessarily accompanied by a solution (systems that do output
solutions are highlighted in the presentation of results).
Ties are broken according to the average time taken over problems solved.
Trophies are awarded to the competition divisions' winners.
In the demonstration division the systems are not ranked, and no trophies are awarded.
The competition panel decides whether or not the systems' solutions are "acceptable".
The criteria include:
- Proofs must be in TPTP format.
This will be checked using TPTP4X.
- Proofs must be structurally correct:
- Proofs must have formulae from the problem as leaves, and end at the conjecture (for
axiomatic proofs) or $false formula (for proofs by contradiction, e.g., CNF
refutations).
- For solutions that use translations from one form to another, e.g., translation of FOF
problems to CNF, the translations must be adequately documented.
- Proofs must show only relevant inference steps.
Proof structure will be checked using GDV.
- Inference steps must be reasonably fine-grained, except in the SLH division where just a
single inference step from the axioms used (no unused axioms) to the conjecture is acceptable.
- An unsatisfiable set of ground instances of clauses is acceptable for establishing the
unsatisfiability of a set of clauses.
- Models must be complete, documenting the domain, function maps, and predicate maps.
The domain, function maps, and predicate maps may be specified by explicit ground lists (of
mappings), or by any clear, terminating algorithm.
In addition to the ranking criteria, three other measures are presented in the results:
- The state-of-the-art contribution (SotAC) quantifies the unique abilities of each
system (excluding the previous year's winners that are earlier versions of competing systems).
For each problem solved by a system, its SotAC for the problem is the fraction of systems
that do not solve the problem, and a system's overall SotAC is the average over the problems
it solves but that are not solved by all the systems.
- The core usage measures the extent to which the systems take advantage of multiple
cores.
It is the average of the ratios of CPU time used to wall clock time used, over the problems
solved.
- The efficiency measure balances the number of problems solved with the time taken.
It is the average solution rate over the problems solved (the solution rate for one problem
is the reciprocal of the time taken to solve it), multiplied by the fraction of problems
solved.
Efficiency is computed for both CPU time and wall clock time, to measure how efficiently the
systems use one core and multiple cores respectively.
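The three measures can be computed together; a sketch on made-up data (the systems, problems, and times below are illustrative, not real competition results):

```python
# results[system][problem] = (cpu_seconds, wall_seconds) for solved problems.
results = {
    "SysA": {"p1": (2.0, 2.0), "p2": (40.0, 10.0), "p3": (8.0, 4.0)},
    "SysB": {"p1": (5.0, 5.0), "p3": (6.0, 6.0)},
    "SysC": {"p1": (1.0, 1.0)},
}
ATTEMPTED = 4                      # problems attempted by every system

def sotac(system):
    """Average, over problems solved but not solved by all systems, of the
    fraction of systems that do not solve the problem."""
    n = len(results)
    scores = [(n - sum(p in results[s] for s in results)) / n
              for p in results[system]
              if any(p not in results[s] for s in results)]
    return sum(scores) / len(scores) if scores else 0.0

def core_usage(system):
    """Average ratio of CPU time to wall clock time over solved problems."""
    times = results[system].values()
    return sum(cpu / wall for cpu, wall in times) / len(times)

def efficiency(system, which):
    """Average solution rate over solved problems, multiplied by the
    fraction of problems solved; which selects 0 = CPU, 1 = wall clock."""
    ts = [t[which] for t in results[system].values()]
    return (sum(1.0 / t for t in ts) / len(ts)) * (len(ts) / ATTEMPTED)
```

For example, SysA's SotAC is 0.5: p1 is solved by all three systems and so is excluded, p2 scores 2/3, and p3 scores 1/3.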
At some time after the competition all high ranking systems in the competition divisions are
tested over the entire TPTP.
This provides a final check for soundness (see the section on
system properties regarding soundness checking before the competition).
If a system is found to be unsound during or after the competition, but before the competition
report is published, and it cannot be shown that the unsoundness did not manifest itself in the
competition, then the system is retrospectively disqualified.
At some time after the competition, the solutions from the winners
are checked by the panel.
If any of the solutions are unacceptable, i.e., they are sufficiently worse than the samples
provided, then that system is retrospectively disqualified.
All disqualifications are explained in the competition report.
System Entry
To be entered into CASC, systems must be registered using the CASC system registration form
by the registration deadline.
For each system entered an entrant must be nominated to handle all issues
(e.g., installation and execution difficulties) arising before, during, and
after the competition.
The nominated entrant must
formally register for CASC.
It is not necessary for entrants to physically attend the competition.
Systems can be entered at only the division level, and can be entered
into more than one division.
All systems that are entered into a division are assumed to perform better
than all systems not entered, for that type of problem - wimping out is not
an option.
Entering many similar versions of the same system is deprecated, and entrants
may be required to limit the number of system versions that they enter.
Systems that rely essentially on running other ATP systems without adding
value are deprecated; the competition panel may disallow or move such
systems to the demonstration division.
The division winners of the previous CASC are automatically entered into
the corresponding demonstration divisions, to provide benchmarks against which progress can be
judged.
Prover9 1109a is automatically entered into the FOF demonstration division, to provide a
fixed point against which progress can be judged.
System Description
A system description must be provided for each ATP system, using this
HTML schema.
The schema has the following sections:
- Architecture.
This section introduces the ATP system, and describes the calculus and
inference rules used.
- Strategies.
This section describes the search strategies used, why they are effective,
and how they are selected for given problems.
Any strategy tuning that is based on specific problems' characteristics
must be clearly described (and justified in light of the
tuning restrictions).
- Implementation.
This section describes the implementation of the ATP system, including
the programming language used, important internal data structures, and
any special code libraries used.
The availability of the system is also given here.
- Expected competition performance.
This section makes some predictions about the performance of the ATP
system for each of the divisions and categories in which it is competing.
- References.
The system description must be emailed to the competition organizer by
the system description deadline.
The system descriptions form part of the competition proceedings.
Sample Solutions
For systems in the divisions that require solution output, representative sample solutions must be
emailed to the competition organizer by the sample solutions deadline.
Use of the TPTP format for proofs is required, and use of the new TPTP format for
interpretations is encouraged.
The competition panel decides whether or not each system's solutions are
acceptable.
Proof/model samples are required as follows:
An explanation must be provided for any non-obvious features.
System Requirements
System Properties
Entrants must ensure that their systems execute in the competition environment, and have the
following properties.
Entrants are advised to finalize their installation packages and check these properties
well in advance of the system delivery deadline.
This gives the competition organizer time to help resolve any difficulties encountered.
Execution, Soundness, and Completeness
- Systems must be fully automatic, i.e., all command line switches have to be the same for all
problems in each division.
- Systems' performances must be reproducible by running the system again.
- Systems must be sound.
At some time before the competition all the systems in the competition divisions are tested
for soundness.
Non-theorems are submitted to the systems in the THF, TFA, FOF, EPR, and UEQ
divisions, and theorems are submitted to the systems in the TFN and EPR
divisions.
Finding a proof of a non-theorem or a disproof of a theorem indicates unsoundness.
If a system fails the soundness testing it must be repaired by the
unsoundness repair deadline or be withdrawn.
- Systems do not have to be complete in any sense, including calculus, search control,
implementation, or resource requirements.
- All techniques used must be general purpose, and expected to extend usefully to new unseen
problems.
The precomputation and storage of information about individual problems that might appear in
the competition, or their solutions, is not allowed.
Strategies and strategy selection based on individual problems or their solutions are not
allowed.
If machine learning procedures are used to tune a system, the learning must ensure that
sufficient generalization is obtained so that there is no specialization to individual
problems.
The system description must explain any such tuning or training that has been done.
The competition panel may disqualify any system that is deemed to be problem specific rather
than general purpose.
If you are in doubt, contact the competition organizer.
Output
- All output must be to stdout.
- For each problem, the system must output a distinguished string
indicating what solution has been found or that no conclusion has been reached.
Systems must use the SZS ontology and
standards for this.
For example
% SZS status Theorem for SYN075+1.p
or
% SZS status GaveUp for SYN075+1.p
- When outputting a solution, the start and end of the solution must be delimited by
distinguished strings.
Systems must use the SZS ontology and
standards for this.
For example
% SZS output start CNFRefutation for SYN075+1.p
...
% SZS output end CNFRefutation for SYN075+1.p
The string specifying the problem status must be output before the start of a solution.
Use of the TPTP format for proofs is required, and use of the new TPTP format for
interpretations is encouraged.
- Solutions may not have irrelevant output (e.g., from other threads running in parallel)
interleaved in the solution.
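A harness can recover the status and the delimited solution mechanically. A sketch (the captured output text is illustrative):

```python
import re

# Illustrative system output: a status line followed by a delimited solution.
output = """\
% SZS status Theorem for SYN075+1.p
% SZS output start CNFRefutation for SYN075+1.p
cnf(1, axiom, p(a)).
cnf(2, plain, $false).
% SZS output end CNFRefutation for SYN075+1.p
"""

# The status string precedes the solution, as the rules require.
status = re.search(r"^% SZS status (\S+)", output, re.M).group(1)
solution = re.search(
    r"^% SZS output start \S+ for \S+$\n(.*?)^% SZS output end",
    output, re.M | re.S).group(1)
print(status)   # Theorem
```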
Resource Usage
- Systems that run on the competition computers must be interruptible by a SIGXCPU
signal so that CPU time limits can be imposed, and interruptible by a SIGALRM signal
so that wall clock time limits can be imposed.
For systems that create multiple processes the signal is sent first to the process at the top
of the process hierarchy, then one second later to all processes (even if they have
disconnected from the process hierarchy).
The default action on receiving these signals is to exit (thus complying with the time limit,
as required), but systems may catch the signals and exit of their own accord.
If a system runs past a time limit this is noticed in the timing data, and the system is
considered to have not solved the problem.
- If a system terminates of its own accord it may not leave any temporary or intermediate
output files.
If a system is terminated by a SIGXCPU or SIGALRM it may not leave any
temporary or intermediate output files anywhere other than in /tmp.
- For practical reasons excessive output from an ATP system is not allowed.
A limit, dependent on the disk space available, is imposed on the amount of output that can
be produced.
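Cooperative handling of the two signals might look like this on a POSIX system; the problem name in the status line is a placeholder, and this is a sketch rather than the actual StarExec arrangement:

```python
import signal
import sys

def on_limit(signum, frame):
    # The rules allow a system to catch SIGXCPU/SIGALRM and exit of its
    # own accord; emit a final SZS status first.
    print("% SZS status Timeout for SYN075+1.p")   # placeholder problem name
    sys.exit(1)

signal.signal(signal.SIGXCPU, on_limit)   # CPU time limit exceeded
signal.signal(signal.SIGALRM, on_limit)   # wall clock time limit exceeded
```

A system that installs no handlers simply exits on these signals, which also complies with the time limits.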
System Delivery
Entrants must email a StarExec installation package to the competition organizer by the
system delivery deadline.
The installation package must be a .tgz file containing only the components necessary
for running the system (i.e., not including source code, etc.).
The entrants must also email a .tgz file containing the source code and any files required
for building the StarExec installation package to the competition organizer by the
system delivery deadline.
For systems running on entrant supplied computers in the demonstration division, entrants must
email a .tgz file containing the source code and any files required for building the
executable system to the competition organizer by the
system delivery deadline.
After the competition all competition division systems' source code is made publicly available on
the CASC web site.
In the demonstration division the entrant specifies whether or not the source code is placed on
the site.
An open source license is encouraged.
Entrants are encouraged to make a public release of their systems ASAP after the competition, so
that users can enjoy the latest capabilities.
System Execution
Execution of the ATP systems is controlled by StarExec.
The jobs are queued onto the computers so that each CPU is running one job at a time.
All attempts at the Nth problems in all the divisions and categories are started before any
attempts at the (N+1)th problems.
A system has solved a problem iff it outputs its termination string within the time limit, and a
system has produced a solution iff it outputs its end-of-solution string within the time limit.
The result and timing data is used to generate an HTML file, and a web browser is used to display
the results.
The execution of demonstration division systems is supervised by their entrants.
System Checks
- Check: You can log in to StarExec Miami.
If not, apply for an account in the TPTP community.
- Check: You can access the TPTP space. If not, email the competition organizer.
- Check: You can create and upload a
StarExec installation package.
The competition organizer has exemplar StarExec installation packages that you can use as
a starting point - email the competition organizer to get one that is appropriate for your
ATP system.
- Check: You can create a job and run it, and your ATP system gets the correct result.
Use the SZS post processor.
- Check: Your ATP system can solve a problem that has include directives.
Because of the way StarExec runs jobs, your ATP system must implement the TPTP requirement
that "Include files with relative path names are expected to be found either under the
directory of the current file, or if not found there then under the directory specified in
the TPTP environment variable."
- Check: You can email your StarExec installation package to the competition organizer for
testing.
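The quoted include-resolution requirement might be implemented as follows; resolve_include is a hypothetical helper, not part of StarExec or the TPTP tools:

```python
import os

def resolve_include(include_path, current_file, env=os.environ):
    """Resolve a TPTP include: relative paths are looked up under the
    including file's directory first, then under the TPTP environment
    variable."""
    if os.path.isabs(include_path):
        return include_path
    local = os.path.join(os.path.dirname(current_file), include_path)
    if os.path.exists(local):
        return local
    tptp_dir = env.get("TPTP")
    if tptp_dir:
        candidate = os.path.join(tptp_dir, include_path)
        if os.path.exists(candidate):
            return candidate
    raise FileNotFoundError(include_path)
```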