The CADE ATP System Competition
Design and Organization
This document contains information about the design and organization of the competition.
The CASC rules, specifications, and deadlines are absolute.
Only the competition panel has the right to make exceptions.
It is assumed that all entrants have read the web pages related
to the competition, and have complied with the competition rules.
Non-compliance with the rules can lead to disqualification.
A "catch-all" rule is used to deal with any unforeseen circumstances:
No cheating is allowed.
The panel is allowed to disqualify entrants
due to unfairness, and to adjust the competition rules in case of misuse.
Disclaimer
Every effort has been made to organize the competition in a fair and
constructive manner.
No responsibility is taken if, for one reason or another, your system
does not win.
Changes
The design and procedures of this CASC evolved from those of
previous CASCs.
Important changes for this CASC are:
- All divisions use a wall clock time limit, to promote use of
all the cores on the CPU.
- The EPR division has gone on hiatus.
- A new variant of the SotAC measure has been adopted, to allow
comparison of SotAC values between CASC editions.
Divisions
CASC is divided into divisions according to problem and system characteristics.
There are competition divisions in which systems are explicitly
ranked, and a demonstration division in which systems demonstrate
their abilities without being ranked.
Some divisions are further divided into problem categories, which
makes it possible to analyse, at a more fine-grained level, which systems
work well for what types of problems.
The problem categories have no effect on the competition rankings, which
are made at only the division level.
Competition Divisions
The competition divisions are open to ATP systems that meet the required
system properties.
Each division uses problems that have certain logical, language, and syntactic
characteristics, so that the ATP systems that compete in the division are, in
principle, able to attempt all the problems in the division.
- The THF division:
Typed Higher-order Form theorems (axioms with a provable conjecture).
The THF division has two problem categories:
- The TNE category: THF with No Equality
- The TEQ category: THF with EQuality
- The TFA division:
Typed First-order with Arithmetic theorems (axioms with a provable
conjecture).
The TFA division has two problem categories:
- The TFI category: TFA with only Integer arithmetic
- The TFE category: TFA with only rEal arithmetic
- The FOF division:
First-Order Form theorems (axioms with a provable conjecture).
The FOF division has two problem categories:
- The FNE category: FOF with No Equality
- The FEQ category: FOF with EQuality
- The FNT division:
First-order form Non-Theorems (axioms with a countersatisfiable
conjecture, and satisfiable axiom sets).
The FNT division has two problem categories:
- The FNN category: FNT with No equality
- The FNQ category: FNT with eQuality
- The UEQ division:
Unit EQuality clause normal form theorems
(unsatisfiable clause sets).
- The LTB division:
Theorems (axioms with a provable conjecture) from Large Theories,
presented in Batches.
A large theory has many functions and predicates, and many axioms of
which typically only a few are required for the proof of a theorem.
The problems in a batch are given to an ATP system all at once, and
typically have a common core set of axioms.
The batch presentation allows the ATP systems to load and preprocess
the common core set of axioms just once, and to share logical and
control results between proof searches.
Each problem category might be accompanied by a set of training problems
and their solutions, taken from the same source as the competition
problems.
The training data can be used for ATP system tuning during (typically
at the start of) the competition.
In CASC-J10 the LTB division has one problem category:
- The HL4 category:
Problems exported from
HOL4
in "chainy" mode.
This category is accompanied by training data.
Eight versions of each problem are provided -
two FOF versions,
two TF0 versions,
one TF1 version,
two TH0 versions,
and
one TH1 version.
Systems can attempt as many of the versions as they want, in any order
including in parallel, and a solution to any version counts as a solution
to the problem.
Some of the HL4 problems have fewer than
100 axioms, and some have over 100000 axioms.
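Since a solution to any version counts, one simple tactic is to attempt
several versions of a problem concurrently and keep the first solution
found. The following minimal Python sketch illustrates this; the
"prove" command, the file naming, and the thread-based parallelism are
illustrative assumptions, not requirements:

# Sketch only: a real system would also cancel the remaining attempts
# and respect the batch time limits.
import concurrent.futures, subprocess

VERSIONS = ["+4", "+5", "_4", "_5", "_7", "^4", "^5", "^7"]

def attempt(stem, version):
    # e.g., stem "HL400001" gives "HL400001+4.p", "HL400001_7.p", ...
    result = subprocess.run(["prove", stem + version + ".p"],
                            capture_output=True, text=True)
    return version, "% SZS status Theorem" in result.stdout, result.stdout

def solve_any_version(stem):
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        attempts = [pool.submit(attempt, stem, v) for v in VERSIONS]
        for done in concurrent.futures.as_completed(attempts):
            version, solved, output = done.result()
            if solved:
                return version, output  # any version counts as a solution
    return None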
The problems section explains what problems are
eligible for use in each division and category.
The system evaluation section explains how the
systems are ranked in each division.
Demonstration Division
ATP systems that cannot run in the competition divisions for any reason
(e.g., the system requires special hardware, or the entrant is an organizer)
can be entered into the demonstration division.
Demonstration division systems can run on the competition computers, or on
computers supplied by the entrant.
The entry specifies which competition divisions' problems are to be used.
The demonstration division results are presented along with the competition
divisions' results, but might not be comparable with those results.
The systems are not ranked.
Infrastructure
Computers
The competition computers have:
- Two octa-core Intel(R) Xeon(R) E5-2667, 3.20GHz CPUs
- 256GB memory
- The CentOS Linux release 7.4.1708 operating system,
kernel 3.10.0-957.12.2.el7.x86_64.
One ATP system runs on one CPU at a time, with access to half (128GB)
of the memory.
Systems can use all the cores on the CPU (which is advantageous in the
divisions where a wall clock time limit is used).
Problems
Problem Selection
Problems for the THF, TFA, FOF, FNT, and UEQ divisions are taken from
the TPTP Problem Library.
The TPTP version used for CASC is not released until after the competition
has started, so that new problems have not been seen by the entrants.
The problems have to meet certain criteria to be eligible for selection.
The problems used are randomly selected from the eligible problems based on
a seed supplied by the competition panel.
- The TPTP tags problems that are designed specifically to be suited or
ill-suited to some ATP system, calculus, or control strategy as
biased, and they are excluded from the competition.
- The problems are syntactically non-propositional.
- The TPTP uses system performance data in the Thousands of Solutions
from Theorem Provers (TSTP) solution library to compute problem
difficulty ratings in the range 0.00 (easy) to 1.00 (unsolved).
Difficult problems with a rating in the range 0.21 to 0.99 are eligible.
Problems of lesser and greater ratings might also be eligible
in some divisions if there are not enough problems with ratings in
that range.
Systems can be submitted before the competition so that their
performance data is used for computing the problem ratings.
- The selection is constrained so that no division or category contains
an excessive number of very similar problems.
- The selection is biased to select problems that are new in the TPTP
version used, until 50% of the problems in each problem category have
been selected, after which random selection (from old and new problems)
continues.
The actual percentage of new problems used depends on how many new
problems are eligible and the limitation on very similar problems.
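As an illustration of this bias, the following Python sketch selects n
problems for a category from the eligible new and old problems, using
the panel-supplied seed (the real procedure also enforces the limitation
on very similar problems):

import random

def select_problems(new, old, n, seed):
    rng = random.Random(seed)
    new = list(new)
    rng.shuffle(new)
    # Bias phase: draw new problems until they make up 50% of the target.
    selected = new[:n // 2]
    # Random phase: continue at random from the remaining old and new problems.
    pool = new[n // 2:] + list(old)
    rng.shuffle(pool)
    selected.extend(pool[:n - len(selected)])
    return selected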
The problems for the LTB division are taken from various sources, with
each problem category being based on one source.
Entrants are trusted not to use publicly available problem
sets for tuning before the competition.
The process for selecting problems depends on the problem source.
Number of Problems
In the TPTP-based divisions, the minimal numbers of problems that must be
used in each division and category, to ensure sufficient confidence in the
competition results, are determined from the numbers of eligible problems
in each division and category (the competition organizer has to ensure
that there are sufficient computers available to run the ATP systems on
this minimal number of problems).
The minimal numbers of problems are used in determining the
time limit imposed on each solution attempt.
The numbers of problems to be used in each division of the competition
are determined from the number of computers available, the time allocated
to the competition, the number of ATP systems to be run on the competition
computers over the divisions, and the time limit imposed on each solution
attempt, according to the following relationship:
NumberOfProblems = (NumberOfComputers * TimeAllocated) / (NumberOfATPSystems * TimeLimit)
It is a lower bound on the number of problems because it assumes that
every system uses all of the time limit for each problem.
Since some solution attempts succeed before the time limit is reached, more
problems can be used.
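For concreteness, here is a small Python sketch of this relationship
with purely illustrative numbers (the real values depend on the
competition setup):

# Illustrative values only.
number_of_computers   = 24
time_allocated        = 6 * 3600   # seconds allocated to the competition
number_of_atp_systems = 30
time_limit            = 120        # wall clock seconds per attempt

# Lower bound: assumes every system uses the full time limit on every problem.
number_of_problems = (number_of_computers * time_allocated) \
                     // (number_of_atp_systems * time_limit)
print(number_of_problems)  # 144 with these values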
The number of problems used in each division and problem category is (roughly)
proportional to the numbers of eligible problems, after taking into account
the limitation on very similar problems, determined according to the judgement
of the competition organizer.
In the LTB division the number of problems in each category is
determined by the number of problems in the corresponding problem source.
In CASC-J10 the HL4 problem category has 10000 problems (with eight versions
of each problem).
Problem Preparation
The problems are given to the ATP systems in TPTP format, with
include directives.
In order to ensure that no system receives an advantage or disadvantage
due to the specific presentation of the problems in the TPTP, the problems
in the TPTP-based divisions are obfuscated by:
- stripping out all comment lines, including the problem header
- randomly reordering the formulae/clauses
(include directives are left before formulae,
type declarations and definitions are kept before the symbols' uses)
- randomly swapping the arguments of associative connectives, and
randomly reversing implications
- randomly reversing equalities
In the LTB division the formulae are not obfuscated, thus
allowing the ATP systems to take advantage of natural structure that occurs
in the problems.
In the TPTP-based divisions the problems are given to the ATP systems
in increasing order of TPTP difficulty rating.
In the LTB division the problems in each batch are given in their natural order
in the problem source.
Batch Specification Files
The problems for each problem category of the LTB division are listed in a
batch specification file, containing global data lines and one or
more batch specifications.
The global data lines are:
- A problem category line of the form
division.category LTB.category_mnemonic
- The name of a .tgz file (relative to the directory holding the
batch specification file) that contains training data in the form
of problems in TPTP format and one or more solutions to each problem in
TSTP format, in a line of the form
division.category.training_data tgz_file_name
The .tgz file expands in place to three directories:
Axioms, Problems, and Solutions.
Axioms contains all the axiom files that are used in the
training and competition problems.
Problems contains the training problems.
Solutions contains a subdirectory for each of the
Problems, containing TPTP format solutions to the problem.
The language of a solution might not be the same as the language of the
problem, e.g., a proof of a THF problem might be written in FOF, or the
proof of a TFF problem might be written in THF.
Each batch specification consists of:
- A header line % SZS start BatchConfiguration
- A specification of whether or not the problems in the batch must be
attempted in order, in a line of the form
execution.order ordered/unordered
If the batch is ordered the ATP systems may not start any attempt on
a problem, including reading the problem file, before ending the attempt
on the preceding problem.
For CASC-J10 it is
execution.order unordered
- A specification of what output is required from the ATP systems
for each problem, in a line of the form
output.required space_separated_list
where the available list values are the SZS values
Assurance, Proof, Model, and Answer.
For CASC-J10 it is
output.required Proof
- The wall clock time limit for each problem, in a line of the form
limit.time.problem.wc limit_in_seconds
A value of zero indicates no per-problem limit.
For CASC-J10 it is
limit.time.problem.wc 0
- The overall wall clock time limit for the batch, in a line of the form
limit.time.overall.wc limit_in_seconds
- A terminator line % SZS end BatchConfiguration
- A header line % SZS start BatchIncludes
- include directives that are used in every problem.
All the problems in the batch have these include directives, and
can also have other include directives that are not listed here.
For CASC-J10, see the additional notes below.
- A terminator line % SZS end BatchIncludes
- A header line % SZS start BatchProblems
- Pairs of problem file names (relative to the directory holding the batch
specification file), and output file names where the output for the
problem must be written.
The output files must be written in the directory specified
as the second argument to the starexec_run script (the first
argument is the name of the batch specification file).
For CASC-J10, see the additional notes below.
- A terminator line % SZS end BatchProblems
Additional Notes for CASC-J10
- In the BatchProblems section, the multiple versions of each
problem are specified using UNIX * globbing, e.g.,
HL400001*.p.
The versions of each problem have extensions as follows:
the first FOF version uses +4,
the second FOF version uses +5,
the first TF0 version uses _4,
the second TF0 version uses _5,
the only TF1 version uses _7,
the first TH0 version uses ^4,
the second TH0 version uses ^5,
and
the only TH1 version uses ^7.
- Proof output must identify which version of the problem was solved -
see the section on output notification lines.
The proof may not have irrelevant output (e.g., from other threads
running attempts on other versions of the problem) interleaved in it.
- In the BatchIncludes section (not in problem files), multiple
versions of included axiom files may be specified using UNIX *
globbing, e.g., include('Axioms/HL4002*.ax') could refer to all
of
HL4002+4.ax,
HL4002_4.ax,
HL4002^4.ax,
HL4002+5.ax,
HL4002_5.ax,
HL4002^5.ax,
HL4002_7.ax,
HL4002^7.ax.
For a given problem, systems should use only the axiom files whose version
matches that of the problem file (there may be none),
e.g., if the problem version is +4 then use only the axiom
files with the version +4.
Using any other versions could lead to weird results.
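A Python sketch of this matching rule, using the file layout from the
examples above:

import glob, re

def matching_axiom_files(problem_file, axioms_dir="Axioms"):
    # Extract the version marker before ".p", e.g. "+4", "_7", or "^5".
    version = re.search(r"([+_^]\d)\.p$", problem_file).group(1)
    # Use only axiom files of the same version (there may be none).
    return glob.glob(axioms_dir + "/*" + version + ".ax")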
Have a look at these sample LTB problems.
An example batch specification file is
BatchSampleLTBHL4,
which refers to the training data file
TrainingData.HL4.tgz.
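For orientation, here is a minimal sketch of a batch specification file
for the HL4 category, assembled from the forms described above (the
overall time limit and the problem and output file names are
illustrative; the linked BatchSampleLTBHL4 is authoritative):

division.category LTB.HL4
division.category.training_data TrainingData.HL4.tgz
% SZS start BatchConfiguration
execution.order unordered
output.required Proof
limit.time.problem.wc 0
limit.time.overall.wc 86400
% SZS end BatchConfiguration
% SZS start BatchIncludes
include('Axioms/HL4002*.ax')
% SZS end BatchIncludes
% SZS start BatchProblems
Problems/HL400001*.p HL400001.out
Problems/HL400002*.p HL400002.out
% SZS end BatchProblems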
Resource Limits
In the TPTP-based divisions, a wall clock time limit is imposed
for each problem.
The minimal time limit for each problem is 120s.
The maximal time limit for each problem is determined using the
relationship used for determining the number of problems, with the minimal
number of problems as the NumberOfProblems.
The time limit is chosen as a reasonable value within the range allowed,
and is announced at the competition.
There are no CPU time limits (i.e., using all cores makes sense).
An additional memory limit is imposed, depending on the
computers' memory.
In the LTB division, wall clock time limits are imposed.
For each batch there might be a wall clock time limit for each problem,
provided in the configuration section at the start of each batch.
If there is a wall clock time limit for each problem, the minimal limit for
each problem is 15s, and the maximal limit for each problem is 90s.
For each batch there is an overall wall clock time limit, provided in the
configuration section at the start of each batch.
The overall limit is proportional to the number of problems in the batch,
e.g. (but not necessarily), the batch's per-problem time limit multiplied
by the number of problems in the batch.
Time spent before starting the first problem of a batch (e.g., preloading
and analysing the batch axioms), and time spent between the end of an
attempt on one problem and the start of the next (e.g., learning from a
proof just found), are not part of the times taken on the individual
problems, but are part of the overall time taken.
There are no CPU time limits.
System Evaluation
For each ATP system, for each problem, four items of data are recorded:
whether or not the problem was solved,
the CPU time taken,
the wall clock time taken,
and whether or not a proof or model was output.
The systems are ranked in the competition divisions according to the
number of problems solved with an acceptable proof/model output.
Ties are broken according to the average wall clock time taken over problems
solved.
Trophies are awarded to the competition divisions' winners.
The competition panel decides whether or not the systems' proofs and models are
"acceptable".
The criteria include:
- Derivations must be complete, starting at formulae from the problem,
and ending at the conjecture (for axiomatic proofs) or a false
formula (for proofs by contradiction, e.g., CNF refutations).
- For proofs that use translations from one form to another, e.g.,
translation of FOF problems to CNF, the translations must be adequately
documented.
- Derivations must show only relevant inference steps.
- Inference steps must document the parent formulae, the inference rule
used, and the inferred formula.
- Inference steps must be reasonably fine-grained.
- An unsatisfiable set of ground instances of clauses is acceptable for
establishing the unsatisfiability of a set of clauses.
- Models must be complete, documenting the domain, function maps,
and predicate maps.
The domain, function maps, and predicate maps may be specified by
explicit ground lists (of mappings), or by any clear, terminating
algorithm.
In addition to the ranking criteria, other measures are made and presented
in the results:
- The state-of-the-art contribution (SotAC) quantifies the unique
abilities of each system.
For each problem solved by a system, its SotAC for the problem is the
fraction of systems that did not solve the problem, and a system's
overall SotAC is the average over the problems it solves but which are
not solved by all the systems (see the sketch after this list).
- The core usage is the average of the ratios of CPU time to
wall clock time used, over the problems solved.
This measures the extent to which the systems take advantage of multiple
cores.
- The efficiency measure balances the number of problems solved
with the time taken.
It is the average of the inverses of the times taken for problems solved,
multiplied by the fraction of problems solved.
This can be interpreted intuitively as the average of the solution rates
for problems solved, multiplied by the fraction of problems solved.
Efficiency is computed for both CPU time and wall clock time, to measure
how efficiently the systems use one core and how efficiently systems
use multiple cores, respectively.
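As a concrete reading of these definitions, the following Python sketch
computes the three measures; it assumes (hypothetically) that solved_by
maps each system to the set of problems it solved, times maps each
system to a dictionary from solved problem to a (cpu_seconds,
wall_clock_seconds) pair, and every system solved at least one problem:

def sotac(system, solved_by):
    # Average, over problems the system solved that were not solved by
    # all systems, of the fraction of systems that did not solve them.
    n = len(solved_by)
    fractions = [(n - sum(1 for s in solved_by if p in solved_by[s])) / n
                 for p in solved_by[system]
                 if any(p not in solved_by[s] for s in solved_by)]
    return sum(fractions) / len(fractions) if fractions else 0.0

def core_usage(system, times):
    # Average ratio of CPU time to wall clock time over problems solved.
    ratios = [cpu / wc for cpu, wc in times[system].values()]
    return sum(ratios) / len(ratios)

def efficiency(system, times, n_problems, wall_clock=True):
    # Average of the inverse times for problems solved (the solution
    # rate), multiplied by the fraction of problems solved.
    rates = [1.0 / (wc if wall_clock else cpu)
             for cpu, wc in times[system].values()]
    return (sum(rates) / len(rates)) * (len(rates) / n_problems)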
At some time after the competition, all high ranking systems in the
competition divisions are tested over the entire TPTP.
This provides a final check for soundness (see the section on
system properties regarding soundness
checking before the competition).
If a system is found to be unsound during or after the competition, but
before the competition report is published, and it cannot be shown that the
unsoundness did not manifest itself in the competition, then the system
is retrospectively disqualified.
At some time after the competition, the proofs and models from the winners
(of divisions ranked by the numbers of proofs/models output) are checked
by the panel.
If any of the proofs or models are unacceptable, i.e., they are significantly
worse than the samples provided, then that system is retrospectively
disqualified.
All disqualifications are explained in the competition report.
System Entry
To be entered into CASC, systems must be registered using the
CASC system registration form,
by the registration deadline.
For each system entered an entrant must be nominated to handle all issues
(e.g., installation and execution difficulties) arising before and during
the competition.
The nominated entrant must
formally register for CASC.
It is not necessary for entrants to physically attend the competition.
Systems can be entered at only the division level, and can be entered
into more than one division.
A system that is not entered into a competition division is assumed to
perform worse than the entered systems, for that type of problem -
wimping out is not an option.
Entering many similar versions of the same system is deprecated, and entrants
may be required to limit the number of system versions that they enter.
Systems that rely essentially on running other ATP systems without adding
value are deprecated; the competition panel may disallow or move such
systems to the demonstration division.
The division winners of the previous CASC are automatically
entered into their demonstration divisions, to provide benchmarks
against which progress can be judged.
Prover9 1109a is automatically entered into the FOF division, to provide
a fixed point against which progress can be judged.
System Descriptions
A system description must be provided for each ATP system entered, using
this HTML schema.
The schema has the following sections:
- Architecture. This section introduces the ATP system, and describes
the calculus and inference rules used.
- Strategies. This section describes the search strategies used, why
they are effective, and how they are selected for given problems.
Any strategy tuning that is based on specific problems' characteristics
must be clearly described (and justified in light of the
tuning restrictions).
- Implementation. This section describes the implementation of the ATP
system, including the programming language used, important internal
data structures, and any special code libraries used.
The availability of the system is also given here.
- Expected competition performance. This section makes some
predictions about the performance of the ATP system in each of the
divisions and categories in which it is competing.
- References.
The system description must be emailed to the competition organizer by
the system description deadline.
The system descriptions form part of the competition proceedings.
Sample Solutions
For systems in the divisions that require proof/model output, representative
sample solutions must be emailed to the competition organizer by the
sample solutions deadline.
Use of the TPTP format for
proofs and
finite
interpretations is encouraged.
The competition panel decides whether or not proofs and models are
acceptable.
Proof/model samples are required as follows:
An explanation must be provided for any non-obvious features.
System Requirements
System Properties
Entrants must ensure that their systems execute in the competition environment,
and have the following properties.
Entrants are advised to finalize their installation packages and check these
properties
well in advance of the system delivery deadline.
This gives the competition organizer time to help resolve any difficulties
encountered.
Execution, Soundness, and Completeness
- Systems must be fully automatic, i.e., all command line switches have
to be the same for all problems in each division.
- Systems' performance must be reproducible by running the system again.
- Systems must be sound.
At some time before the competition all the systems in the competition
divisions are tested for soundness.
Non-theorems are submitted to the systems in the
THF, TFA, FOF, UEQ, and LTB divisions, and theorems are submitted
to the systems in the FNT division.
Finding a proof of a non-theorem or a disproof of a theorem indicates
unsoundness.
If a system fails the soundness testing it must be repaired by
the unsoundness repair deadline or be
withdrawn.
For systems running on computers supplied by the entrant in the
demonstration division, the entrant must perform the soundness testing
and report the results to the competition organizer.
- Systems do not have to be complete in any sense, including
calculus, search control, implementation, or resource requirements.
- All techniques used must be general purpose, and expected to extend
usefully to new unseen problems.
The precomputation and storage of information about individual problems
that might appear in the competition or their solutions is not allowed.
(It's OK to store information about LTB training problems.)
Strategies and strategy selection based on individual problems
or their solutions are not allowed.
If machine learning procedures are used to tune a system, the learning
must ensure that sufficient generalization is obtained so that there
is no specialization to individual problems or their solutions.
The system description must explain any such tuning or training that has
been done.
The competition panel may disqualify any system that is deemed to be
problem specific rather than general purpose.
If you are in doubt, contact the competition organizer.
Output
- In all divisions except LTB all solution output must be to stdout.
In the LTB division all solution output must be to the named output
file for each problem, in the directory specified as the second argument
to the starexec_run script.
If multiple attempts are made on a problem in an unordered batch, each
successive output file must overwrite the previous one.
- In the LTB division the systems must print SZS notification lines to
stdout when starting and ending work on a problem (including
any cleanup work, such as deleting temporary files).
For example
% SZS status Started for CSR075+2.p
... (system churns away, progress output to file)
% SZS status GaveUp for CSR075+2.p
% SZS status Ended for CSR075+2.p
... and later in another attempt on that problem ...
% SZS status Started for CSR075+2.p
... (system churns away, progress, result, and solution appended to file)
% SZS status Theorem for CSR075+2.p
% SZS status Ended for CSR075+2.p
- For each problem, the system must output a distinguished string
indicating what solution has been found or that no conclusion has been
reached.
Systems must use the
SZS ontology and standards for this.
For example
% SZS status Theorem for SYN075+1.p
or
% SZS status GaveUp for SYN075+1.p
In the LTB division this line must be the last line output before the
ending notification line.
The line must also be output to the output file.
- When outputting proofs/models, the start and end of the proof/model must
be delimited by distinguished strings.
Systems must use the
SZS ontology and standards for this.
For example
% SZS output start CNFRefutation for SYN075+1.p
...
% SZS output end CNFRefutation for SYN075+1.p
The string specifying the problem status must be output before the start
of a proof/model.
Use of the TPTP format for
proofs and
finite
interpretations is encouraged.
Resource Usage
- Systems that run on the competition computers must be
interruptible by a SIGXCPU signal, so that CPU time limits
can be imposed, and interruptible by a SIGALRM signal,
so that wall clock time limits can be imposed (see the sketch after
this list).
For systems that create multiple processes, the signal is sent first to
the process at the top of the hierarchy, then one second later to all
processes in the hierarchy.
The default action on receiving these signals is to exit (thus complying
with the time limit, as required), but systems may catch the signals
and exit of their own accord.
If a system runs past a time limit this is noticed in the timing
data, and the system is considered to have not solved that problem.
- If a system terminates of its own accord, it may not leave any
temporary or intermediate output files.
If a system is terminated by a SIGXCPU or SIGALRM,
it may not leave any temporary or intermediate output files anywhere other
than in /tmp.
- For practical reasons excessive output from an ATP system is not allowed.
A limit, dependent on the disk space available, is imposed on the amount
of output that can be produced.
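A minimal Python sketch of compliant signal handling (the status line
and cleanup are illustrative):

import signal, sys

def on_limit(signum, frame):
    # Report that no conclusion was reached, clean up any temporary
    # files outside /tmp, and exit before the follow-up signals arrive.
    print("% SZS status Timeout for SYN075+1.p")
    sys.exit(1)

# SIGXCPU imposes the CPU time limit, SIGALRM the wall clock time limit;
# the default action for both is to exit, so handlers are needed only
# for custom cleanup or final output.
signal.signal(signal.SIGXCPU, on_limit)
signal.signal(signal.SIGALRM, on_limit)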
System Delivery
Entrants must email a
StarExec installation package to the competition organizer by the
system delivery deadline.
The installation package must be a .tgz file containing
only the components necessary for running the system (i.e., not including
source code, etc.).
The entrants must also email a .tgz file containing the source
code and any files required for building the StarExec installation package
to the competition organizer by the system delivery
deadline.
For systems running on entrant supplied computers in the demonstration
division, entrants must email a .tgz file containing the source code
and any files required for building the executable system to the competition
organizer by the system delivery deadline.
After the competition all competition division systems' source code
is made publicly available on the CASC web site.
In the demonstration division, the entrant specifies whether or not
the source code is placed on the site.
An open source license is
encouraged.
System Execution
Execution of the ATP systems is controlled by StarExec.
The jobs are queued onto the computers so that each CPU is running
one job at a time.
All attempts at the Nth problems in all the divisions and categories
are started before any attempts at the (N+1)th problems.
A system has solved a problem iff it outputs its termination string within
the time limit, and a system has produced a proof/model iff it outputs
its end-of-proof/model string within the time limit.
The result and timing data is used to generate an HTML file, and a web
browser is used to display the results.
The execution of the demonstration division systems is supervised by
their entrants.
System Checks
- Check: You can
log in to StarExec. If not,
apply for an account in the TPTP community.
- Check: You can access the TPTP space. If not,
email the competition organizer.
- Check: You can create and upload a
StarExec installation package.
The competition organizer has exemplar StarExec installation packages
that you can use as a starting point - email the competition organizer
to get one that is appropriate for your ATP system.
- Check: You can create a job and run it, and your ATP system gets the
correct result.
Use the SZS post processor.
- Check: Your ATP system can solve a problem that has include
directives.
Because of the way StarExec runs jobs, your ATP system must implement
the TPTP requirement that "Include files with relative path names are
expected to be found either under the directory of the current file,
or if not found there then under the directory specified in the
TPTP environment variable."
A sketch of this lookup rule is given after this list.
- Check: You can email your StarExec installation package to the
competition organizer for testing.
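A Python sketch of the include lookup rule quoted in the checks above:

import os

def resolve_include(include_name, current_file):
    # Look under the directory of the current file first, and otherwise
    # under the directory named by the TPTP environment variable.
    local = os.path.join(os.path.dirname(current_file), include_name)
    if os.path.exists(local):
        return local
    return os.path.join(os.environ["TPTP"], include_name)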