The CADE ATP System Competition
Design and Organization
The CASC rules, specifications, and deadlines are absolute.
Only the competition panel has the right to make exceptions.
It is assumed that all entrants have read the web pages related
to the competition, and have complied with the competition rules.
Non-compliance with the rules can lead to disqualification.
A "catch-all" rule is used to deal with any unforeseen circumstances:
No cheating is allowed.
The panel is allowed to disqualify entrants 
due to unfairness, and to adjust the competition rules in case of misuse.
Disclaimer
Every effort has been made to organize the competition in a fair and
constructive manner.
No responsibility is taken if, for one reason or another, your system
does not win.
Changes
The design and procedures of this CASC evolved from those of
previous CASCs.
Important changes for this CASC are:
-  The Sledgehammer (SLH) division has been added for CASC-28.
 
-  The TFA division has gone on hiatus.
 
-  For each contradictory axiom set, only one proof that exploits the 
     contradiction will count towards the ranking in the LTB division (and 
     hopefully this year's axioms will be consistent anyway).
Divisions
CASC is divided into divisions according to problem and system characteristics.
There are competition divisions in which systems are explicitly
ranked, and a demonstration division in which systems demonstrate
their abilities without being ranked.
Some divisions are further divided into problem categories, which
makes it possible to analyse, at a more fine-grained level, which systems
work well for which types of problems.
The problem categories have no effect on the competition rankings, which
are made only at the division level.
Competition Divisions
The competition divisions are open to ATP systems that meet the required
system properties.
Each division uses problems that have certain logical, language, and syntactic 
characteristics, so that the ATP systems that compete in the division are, in 
principle, able to attempt all the problems in the division.
-  The THF division:
     Typed (monomorphic) Higher-order Form theorems (axioms with a provable 
     conjecture).
     The THF division has two problem categories:
     
     -  The TNE category: THF with No Equality
     
-  The TEQ category: THF with EQuality
     
 
 
-  The FOF division:
     First-Order Form theorems (axioms with a provable conjecture).
     The FOF division has two problem categories:
     
     -  The FNE category: FOF with No Equality
     
-  The FEQ category: FOF with EQuality
     
 
 
-  The FNT division:
     First-order form Non-Theorems (axioms with a countersatisfiable 
     conjecture, and satisfiable axiom sets).
     The FNT division has two problem categories:
     
     -  The FNN category: FNT with No equality
     
-  The FNQ category: FNT with eQuality
     
 
 
-  The UEQ division:
     Unit EQuality clause normal form theorems
     (unsatisfiable clause sets).
 
-  The SLH division:
     Typed (monomorphic) higher-order theorems without arithmetic (axioms with 
     a provable conjecture), generated by Isabelle's Sledgehammer system.
 
-  The LTB division:
     Theorems (axioms with a provable conjecture) from Large Theories, 
     presented in Batches.
     A large theory has many functions and predicates, and many axioms of 
     which typically only a few are required for the proof of a theorem. 
     The problems in a batch are given to an ATP system all at once, and 
     typically have a common core set of axioms. 
     The batch presentation allows the ATP systems to load and preprocess 
     the common core set of axioms just once, and to share logical and 
     control results between proof searches.
     Each problem category might be accompanied by a set of training problems 
     and their solutions, taken from the same source as the competition 
     problems.
     The training data can be used for ATP system tuning during (typically 
     at the start of) the competition.
     In CASC-28 the LTB division has one problem category, the JJT category.
     This category is accompanied by training data.
     Five versions of each problem are provided - 
     a FOF version,
     a TF0 version, 
     a TF1 version, 
     a TH0 version, 
     and 
     a TH1 version.
     Systems can attempt as many of the versions as they want, in any order,
     including in parallel, and a solution to any version counts as a solution
     to the problem.
     The problems have from 14 to 6613 axioms.
     There are no common include files.
 
The problems section explains what problems are
eligible for use in each division and category.
The system evaluation section explains how the
systems are ranked in each division.
Demonstration Division
ATP systems that cannot run in the competition divisions for any reason 
(e.g., the system requires special hardware, or the entrant is an organizer)
can be entered into the demonstration division.
Demonstration division systems can run on the competition computers, or on
computers supplied by the entrant.
The entry specifies which competition divisions' problems are to be used.
The demonstration division results are presented along with the competition 
divisions' results, but might not be comparable with those results.
The systems are not ranked.
Infrastructure
Computers
The competition computers have: 
-  Two octa-core Intel(R) Xeon(R) E5-2667, 3.20GHz CPUs
-  256GB memory
-  The CentOS Linux release 7.4.1708 (Core) operating system, 
     Linux kernel 3.10.0-693.el7.x86_64.
One ATP system runs on one CPU at a time, with access to half (128GB) of
the memory.
Systems can use all the cores on the CPU (which is advantageous in the
divisions where a wall clock time limit is used).
Problems
Problem Selection
Problems for the THF, FOF, FNT, and UEQ divisions are taken from 
the TPTP Problem Library.
The TPTP version used for CASC is released only after the competition
has started, so that new problems in the release have not been seen by the 
entrants.
The problems have to meet certain criteria to be eligible for selection.
The problems used are randomly selected from the eligible problems based on 
a seed supplied by the competition panel.
-  Problems that are designed specifically to be suited or ill-suited to
     some ATP system, calculus, or control strategy are tagged as biased in
     the TPTP, and are excluded from the competition.
-  The problems must be syntactically non-propositional.
-  The TPTP uses system performance data in the Thousands of Solutions 
     from Theorem Provers (TSTP) solution library to compute problem 
     difficulty ratings in the range 0.00 (easy) to 1.00 (unsolved).
     Difficult problems with a rating in the range 0.21 to 0.99 are eligible.
     Problems of lesser and greater ratings might also be eligible
     in some divisions if there are not enough problems with ratings in
     that range.
     Systems can be submitted before the competition so that their
     performance data is used in computing the problem ratings.
-  The selection is constrained so that no division or category contains
     an excessive number of very similar problems.
-  The selection is biased to select problems that are new in the TPTP 
     version used, until 50% of the problems in each problem category have
     been selected, after which random selection from old and new problems
     continues.
     The number of new problems used depends on how many new
     problems are eligible and the limitation on very similar problems.
Problems for the SLH division were generated by Isabelle's Sledgehammer system.
Appropriately difficult problems were chosen based on performance data similar 
to that in the TSTP.
Here are some sample problems that have been
extracted from the collection, for you to get a feel for the SLH division
problems.
Problems for the LTB division are taken from various sources, with
each problem category being based on one source.
Entrants are expected, on their honour, not to use publicly available problem
sets for tuning before the competition.
The process for selecting problems depends on the problem source.
Number of Problems
In the TPTP-based divisions, the minimal numbers of problems that must be 
used in each division and category, to ensure sufficient confidence in the 
competition results, are determined from the numbers of eligible problems 
in each division and category. The competition organizer has to ensure 
that there are sufficient computers available to run the ATP systems on 
this minimal number of problems.
The minimal numbers of problems are used in determining the
time limit imposed on solution attempts.
The minimal numbers of problems to be used in each division of the competition
are determined from the number of computers available, the time allocated
to the competition, the number of ATP systems to be run on the competition
computers over the divisions, and the time limit imposed on solution attempts,
according to the following relationship:
                   NumberOfComputers * TimeAllocated
NumberOfProblems = ---------------------------------
                     NumberOfATPSystems * TimeLimit
It is a lower bound on the number of problems because it assumes that
every system uses all of the time limit for each problem.
Since some solution attempts succeed before the time limit is reached, more 
problems can be used.
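For illustration, with hypothetical values (not this year's figures) of 25
computers, 6 hours (21600s) allocated, 30 ATP systems, and a 120s time limit,
the relationship gives 25 * 21600 / (30 * 120) = 150 problems as the lower bound.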
The number of problems used in each division and problem category is (roughly) 
proportional to the numbers of eligible problems, after taking into account 
the limitation on very similar problems, determined according to the judgement 
of the competition organizer.
In the LTB division the number of problems in each category is 
determined by the number of problems in the corresponding problem source.
In CASC-28 the JJT problem category has TBA problems (with five versions
of each problem).
Problem Preparation
The problems are given to the ATP systems in TPTP format, with 
include directives.
In order to ensure that no system receives an advantage or disadvantage 
due to the specific presentation of the problems in the TPTP, the problems 
in the TPTP-based divisions are obfuscated by the following steps 
(illustrated after the list):
-  stripping out all comment lines, including the problem header
-  randomly reordering the formulae/clauses 
     (include directives are left before formulae, 
     type declarations and definitions are kept before the symbols' uses)
-  randomly swapping the arguments of associative connectives, and
     randomly reversing implications
-  randomly reversing equalities 
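For illustration (using a made-up axiom, not one from a competition problem),
a formula such as
  fof(trans, axiom, ! [X,Y,Z] : ((p(X,Y) & p(Y,Z)) => p(X,Z))).
might, after obfuscation, have its conjuncts swapped and its implication reversed:
  fof(trans, axiom, ! [X,Y,Z] : (p(X,Z) <= (p(Y,Z) & p(X,Y)))).
The logical content is unchanged - only the presentation differs.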
In the SLH and LTB divisions the formulae are not obfuscated, thus allowing the 
ATP systems to take advantage of natural structure that occurs in the problems.
In the TPTP-based divisions the problems are given to the ATP systems 
in increasing order of TPTP difficulty rating.
In the SLH division the problems are given in a roughly estimated increasing 
order of difficulty.
In the LTB division the problems in each batch are given in their natural order
in the problem source.
Batch Specification Files
The problems for each problem category of the LTB division are listed in a
batch specification file, containing global data lines and one or 
more batch specifications.
The global data lines are:
-  A problem category line of the form
 division.category LTB.category_mnemonic
 For CASC-28 it is
 division.category LTB.JJT
 
-  The name of a .tgz file (relative to the directory holding the 
     batch specification file) that contains training data in the form
     of problems in TPTP format and one or more solutions to each problem in
     TSTP format, in a line of the form 
 division.category.training_data tgz_file_name
 For CASC-28 it is
 division.category.training_data TrainingData/TrainingData.JJT.tgz
 The .tgz file expands in place to three directories: 
     Axioms, Problems, and Solutions.
     Axioms contains all the axiom files that are used in the 
     training and competition problems.
     Problems contains the training problems.
     Solutions contains a subdirectory for each of the 
     Problems, containing TPTP format solutions to the problem.
     The language of a solution might not be the same as the language of the 
     problem, e.g., a proof of a THF problem might be written in FOF, or the 
     proof of a TFF problem might be written in THF.
Each batch specification consists of:
-  A header line % SZS start BatchConfiguration
-  A specification of whether or not the problems in the batch must be
     attempted in order, in a line of the form
 execution.order ordered/unordered
 If the batch is ordered the ATP systems may not start any attempt on 
     a problem, including reading the problem file, before ending the attempt 
     on the preceding problem.
     For CASC-28 it is
 execution.order unordered
 
-  A specification of what output is required from the ATP systems
     for each problem, in a line of the form
 output.required space_separated_list
 where the available list values are the SZS values
     Assurance, Proof, Model, and Answer.
     For CASC-28 it is
 output.required Proof
-  The wall clock time limit for each problem, in a line of the form
 limit.time.problem.wc limit_in_seconds
 A value of zero indicates no per-problem limit.
     For CASC-28 it is
 limit.time.problem.wc 0
-  The overall wall clock time limit for the batch, in a line of the form
 limit.time.overall.wc limit_in_seconds
-  A terminator line % SZS end BatchConfiguration
-  A header line % SZS start BatchIncludes
-  include directives that are used in every problem.
     All the problems in the batch have these include directives, and
     can also have other include directives that are not listed here.
     For CASC-28, see the additional notes below.
-  A terminator line % SZS end BatchIncludes
-  A header line % SZS start BatchProblems
-  Pairs of problem file names (relative to the directory holding the batch
     specification file), and output file names where the output for the 
     problem must be written.
     The output files must be written in the directory specified
     as the second argument to the starexec_run script (the first 
     argument is the name of the batch specification file).
     For CASC-28, see the additional notes below.
-  A terminator line % SZS end BatchProblems
Additional Notes for CASC-28
-  In the BatchProblems section, the multiple versions of each 
     problem are specified using UNIX * globbing, e.g., 
     JJT00001*.p.
     The versions of each problem have extensions as follows: 
     the FOF version uses +1,
     the TF0 version uses _1,
     the TF1 version uses _2,
     the TH0 version uses ^1,
     and
     the TH1 version uses ^2.
-  Proof output must identify which version of the problem was solved -
     see the section on output notification lines.
-  In the BatchIncludes section (not in problem files), multiple 
     versions of included axiom files may be specified using UNIX * 
     globbing, e.g., include('Axioms/JJT001*.ax') could refer to all 
     of 
     JJT001+1.ax,
     JJT001_1.ax,
     JJT001_2.ax,
     JJT001^1.ax,
     JJT001^2.ax.
     For a given problem, systems may use only the axiom files whose version
     matches that of the problem file (there might be none),
     e.g., if the problem version is +1 then use only the axiom
     files with the version +1.
     Using any other versions could lead to weird results.
     A small version-matching sketch is given after these notes.
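The version-matching sketch (a hypothetical illustration, not part of any
required interface) shows how a system might filter the axiom files:

  import glob, os, re

  # Keep only the axiom files whose version marker (+1, _1, _2, ^1, or ^2)
  # matches that of the given problem file, e.g., JJT00001^2.p -> *^2.ax files.
  def matching_axioms(problem_file, axioms_dir):
      match = re.search(r'([+_^]\d)\.p$', os.path.basename(problem_file))
      if match is None:
          return []
      version = match.group(1)
      return [ax for ax in glob.glob(os.path.join(axioms_dir, '*.ax'))
              if ax.endswith(version + '.ax')]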
Have a look at these sample LTB 
problems.
An example batch specification file is 
BatchSampleLTBJJT,
which refers to the training data file
TrainingData.JJT.tgz.
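To give a feel for the format, an invented fragment (the limits, includes, and
output file names here are made up, not taken from the actual CASC-28 file)
might look like:

  division.category LTB.JJT
  division.category.training_data TrainingData/TrainingData.JJT.tgz
  % SZS start BatchConfiguration
  execution.order unordered
  output.required Proof
  limit.time.problem.wc 0
  limit.time.overall.wc 36000
  % SZS end BatchConfiguration
  % SZS start BatchIncludes
  include('Axioms/JJT001*.ax').
  % SZS end BatchIncludes
  % SZS start BatchProblems
  Problems/JJT00001*.p JJT00001.out
  Problems/JJT00002*.p JJT00002.out
  % SZS end BatchProblems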
Resource Limits
In the TPTP-based divisions, a wall clock time limit is imposed
for each problem.
The minimal time limit for each problem is 120s.
The maximal time limit for each problem is determined using the 
relationship used for determining the number of problems, with the minimal 
number of problems as the NumberOfProblems.
The time limit is chosen as a reasonable value within the range allowed,
and is announced at the competition.
There are no CPU time limits (i.e., using all cores on the CPU makes sense).
In the SLH division, a CPU time limit is imposed for each problem.
The minimal time limit for each problem is 15s, and
the maximal time limit for each problem is 90s.
The time limit is chosen as a reasonable value within the range allowed, and 
is announced at the competition.
In the LTB division, wall clock time limits are imposed.
For each batch there might be a wall clock time limit for each problem, 
provided in the configuration section at the start of each batch.
If there is a wall clock time limit for each problem, the minimal limit for
each problem is 15s, and the maximal limit for each problem is 90s.
For each batch there is an overall wall clock time limit, provided in the 
configuration section at the start of each batch.
The overall limit is proportional to the number of problems in the batch,
e.g. (but not necessarily), the batch's per-problem time limit multiplied 
by the number of problems in the batch.
Time spent before starting the first problem of a batch (e.g., preloading
and analysing the batch axioms), and time spent between the 
end of an attempt on a problem and the start of the 
next (e.g., learning from a proof just found), are not part of the times 
taken on the individual problems, but are part of the overall time taken.
There are no CPU time limits.
System Evaluation
For each ATP system, for each problem, four items of data are recorded:
whether or not the problem was solved,
the CPU time taken,
the wall clock time taken,
and whether or not a solution (proof or model) was output.
The systems are ranked in the competition divisions according to the 
number of problems solved with an acceptable solution output.
Ties are broken according to the average time taken over problems solved.
Trophies are awarded to the competition divisions' winners.
The competition panel decides whether or not the systems' solutions are
"acceptable".
The criteria include:
-  Derivations must be complete, starting at formulae from the problem, 
     and ending at the conjecture (for axiomatic proofs) or a false
     formula (for proofs by contradiction, e.g., CNF refutations).
-  For solutions that use translations from one form to another, e.g.,
     translation of FOF problems to CNF, the translations must be adequately 
     documented.
-  Derivations must show only relevant inference steps.
-  Inference steps must document the parent formulae, the inference rule
     used, and the inferred formula.
-  Inference steps must be reasonably fine-grained, except in the SLH
     division where just a single inference step from the axioms to the
     conjecture is also an acceptable output.
-  An unsatisfiable set of ground instances of clauses is acceptable for
     establishing the unsatisfiability of a set of clauses.
-  Models must be complete, documenting the domain, function maps,
     and predicate maps.
     The domain, function maps, and predicate maps may be specified by
     explicit ground lists (of mappings), or by any clear, terminating
     algorithm.
In addition to the ranking criteria, three other measures are presented
in the results (a computational sketch follows the list):
-  The state-of-the-art contribution (SotAC) quantifies the unique
     abilities of each system (excluding the previous year's winners that are
     earlier versions of competing systems).
     For each problem solved by a system, its SotAC for the problem is the 
     fraction of systems that do not solve the problem, and a system's 
     overall SotAC is the average over the problems it solves but that are 
     not solved by all the systems.
-  The core usage measures the extent to which the systems take
     advantage of multiple cores.
     It is the average of the ratios of CPU time to wall clock time used, over
     the problems solved.
-  The efficiency measure balances the number of problems solved 
     with the time taken.
     It is the average solution rate over the problems solved
     (the solution rate for one problem is the reciprocal of the time taken to
     solve it),
     multiplied by the fraction of problems solved.
     Efficiency is computed for both CPU time and wall clock time, to measure
     how efficiently the systems use one core and multiple cores respectively.
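The computational sketch of the three measures above is given here for
illustration only, assuming per-system dictionaries mapping each solved problem
to its CPU and wall clock times, at least one problem solved per system, and
ignoring the exclusions noted above:

  # results: {system: {problem: (cpu_time, wc_time)}} over solved problems

  def sotac(results, system):
      # Fraction of systems not solving each problem, averaged over the
      # problems this system solves that are not solved by all systems.
      fractions = [sum(1 for s in results if problem not in results[s]) / len(results)
                   for problem in results[system]
                   if any(problem not in results[s] for s in results)]
      return sum(fractions) / len(fractions) if fractions else 0.0

  def core_usage(results, system):
      # Average ratio of CPU time to wall clock time over the problems solved.
      ratios = [cpu / wc for cpu, wc in results[system].values()]
      return sum(ratios) / len(ratios)

  def efficiency(results, system, total_problems, cpu=True):
      # Average solution rate over solved problems, times the fraction solved.
      times = [t[0] if cpu else t[1] for t in results[system].values()]
      rate = sum(1.0 / t for t in times) / len(times)
      return rate * (len(times) / total_problems)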
At some time after the competition all high-ranking systems in the
competition divisions are tested over the entire TPTP.
This provides a final check for soundness (see the section on
system properties regarding soundness
checking before the competition).
If a system is found to be unsound during or after the competition, but
before the competition report is published, and it cannot be shown that the
unsoundness did not manifest itself in the competition, then the system
is retrospectively disqualified.
At some time after the competition, the solutions from the winners
(of divisions ranked by the numbers of solutions output) are checked 
by the panel.
If any of the solutions are unacceptable, i.e., they are sufficiently worse 
than the samples provided, then that system is retrospectively disqualified.
All disqualifications are explained in the competition report.
System Entry
To be entered into CASC, systems must be registered using the
CASC system registration form
by the registration deadline.
For each system entered, an entrant must be nominated to handle all issues 
(e.g., installation and execution difficulties) arising before, during, and
after the competition.
The nominated entrant must
formally register for CASC.
It is not necessary for entrants to physically attend the competition.
Systems can be entered at only the division level, and can be entered
into more than one division.
A system that is not entered into a division is assumed to perform worse than 
the entered systems, for that type of problem - wimping out is not an option.
Entering many similar versions of the same system is deprecated, and entrants
may be required to limit the number of system versions that they enter.
Systems that rely essentially on running other ATP systems without adding
value are deprecated; the competition panel may disallow or move such
systems to the demonstration division.
The division winners of the previous CASC are automatically 
entered into their demonstration divisions, to provide benchmarks
against which progress can be judged.
Prover9 1109a is automatically entered into the FOF division, to provide
a fixed point against which progress can be judged.
System Description
A system description must be provided for each ATP system entered, using
this HTML schema.
The schema has the following sections:
-  Architecture. 
     This section introduces the ATP system, and describes the calculus and 
     inference rules used.
-  Strategies. 
     This section describes the search strategies used, why they are effective,
     and how they are selected for given problems.
     Any strategy tuning that is based on specific problems' characteristics
     must be clearly described (and justified in light of the
     tuning restrictions).
-  Implementation. 
     This section describes the implementation of the ATP system, including 
     the programming language used, important internal data structures, and 
     any special code libraries used.
     The availability of the system is also given here.
-  Expected competition performance. 
     This section makes some predictions about the performance of the ATP 
     system for each of the divisions and categories in which it is competing.
-  References.
The system description must be emailed to the competition organizer by
the  system description deadline.
The system descriptions form part of the competition proceedings.
Sample Solutions
For systems in the divisions that require solution output, representative 
sample solutions must be emailed to the competition organizer by the 
sample solutions deadline.
Use of the TPTP format for
proofs and
finite
interpretations is encouraged.
The competition panel decides whether or not solutions are 
acceptable.
Proof/model samples are required as follows:
An explanation must be provided for any non-obvious features.
System Requirements
System Properties
Entrants must ensure that their systems execute in the competition environment,
and have the following properties.
Entrants are advised to finalize their installation packages and check these
properties
well in advance of the system delivery deadline.
This gives the competition organizer time to help resolve any difficulties
encountered.
Execution, Soundness, and Completeness
-  Systems must be fully automatic, i.e., all command line switches have
     to be the same for all problems in each division.
-  Systems' performances must be reproducible by running the system again.
-  Systems must be sound.
     At some time before the competition all the systems in the competition
     divisions are tested for soundness.
     Non-theorems are submitted to the systems in the 
     THF, FOF, UEQ, SLH, and LTB divisions, and theorems are submitted 
     to the systems in the FNT division.
     Finding a proof of a non-theorem or a disproof of a theorem indicates
     unsoundness.
     If a system fails the soundness testing it must be repaired by
     the unsoundness repair deadline or be
     withdrawn.
-  Systems do not have to be complete in any sense, including
     calculus, search control, implementation, or resource requirements.
-  All techniques used must be general purpose, and expected to extend 
     usefully to new unseen problems.
     The precomputation and storage of information about individual problems 
     that might appear in the competition or their solutions is not allowed.
     (It's OK to store information about LTB training problems.) 
     Strategies and strategy selection based on individual problems 
     or their solutions are not allowed.
     If machine learning procedures are used to tune a system, the learning 
     must ensure that sufficient generalization is obtained so that there 
     is no specialization to individual problems or their solutions.
     The system description must explain any such tuning or training that has 
     been done.
     The competition panel may disqualify any system that is deemed to be 
     problem specific rather than general purpose.
     If you are in doubt, contact the competition organizer.
Output
-  In all divisions except LTB the solution output must be to stdout.
     In the LTB division the solution output must be to the named output
     file for each problem, in the directory specified as the second argument 
     to the starexec_run script.
     If multiple attempts are made on a problem in an unordered batch, each
     successive output file must overwrite the previous one.
-  In the LTB division the systems must print SZS notification lines to
     stdout when starting and ending work on a problem (including
     any cleanup work, such as deleting temporary files).
     For example
  
  % SZS status Started for CSR075+2.p
    ... (system churns away, progress output to file)
  % SZS status GaveUp for CSR075+2.p
  % SZS status Ended for CSR075+2.p
    ... and later in another attempt on that problem ...
  % SZS status Started for CSR075+2.p
    ... (system churns away, progress, result, and solution overwrite the file)
  % SZS status Theorem for CSR075+2.p
  % SZS status Ended for CSR075+2.p
-  For each problem, the system must output a distinguished string
     indicating what solution has been found or that no conclusion has been 
     reached.
     Systems must use the SZS ontology and standards for this.
     For example
% SZS status Theorem for SYN075+1.p
     or
% SZS status GaveUp for SYN075+1.p
     In the LTB division this line must be the last line output before the 
     ending notification line.
     The line must also be output to the output file.
-  When outputting a solution, the start and end of the solution must
     be delimited by distinguished strings.
     Systems must use the SZS ontology and standards for this.
     For example
% SZS output start CNFRefutation for SYN075+1.p
  ...
% SZS output end CNFRefutation for SYN075+1.p
     The string specifying the problem status must be output before the start
     of a solution.
     Use of the TPTP format for
     proofs and
     finite
     interpretations is encouraged.
-  Solutions may not have irrelevant output (e.g., from other threads
     running in parallel) interleaved in the solution.
     A minimal wrapper illustrating these output requirements is sketched
     after this list.
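The wrapper sketch for one LTB problem attempt is illustrative only, and
assumes a hypothetical prove() function that returns an SZS status and an
optional proof text (it is not any particular system's code):

  import os

  def attempt(problem_file, output_file, output_dir):
      name = os.path.basename(problem_file)
      print('% SZS status Started for ' + name, flush=True)
      status, proof = prove(problem_file)          # e.g., ('Theorem', '...')
      with open(os.path.join(output_dir, output_file), 'w') as out:
          out.write('% SZS status ' + status + ' for ' + name + '\n')
          if proof is not None:
              out.write('% SZS output start Proof for ' + name + '\n')
              out.write(proof + '\n')
              out.write('% SZS output end Proof for ' + name + '\n')
      # The status line is the last line output before the Ended notification.
      print('% SZS status ' + status + ' for ' + name, flush=True)
      print('% SZS status Ended for ' + name, flush=True)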
Resource Usage
-  Systems that run on the competition computers must be
     interruptible by a SIGXCPU signal so that CPU time limits can be 
     imposed, and interruptible by a SIGALRM signal so that wall clock 
     time limits can be imposed (a signal-handling sketch is given after
     this list).
     For systems that create multiple processes the signal is sent first to
     the process at the top of the hierarchy, then one second later to all
     processes in the hierarchy.
     The default action on receiving these signals is to exit (thus complying
     with the time limit, as required), but systems may catch the signals
     and exit of their own accord.
     If a system runs past a time limit this is noticed in the timing
     data, and the system is considered to have not solved the problem.
-  If a system terminates of its own accord it may not leave any
     temporary or intermediate output files.
     If a system is terminated by a SIGXCPU or SIGALRM
     it may not leave any temporary or intermediate output files anywhere other
     than in /tmp.
-  For practical reasons excessive output from an ATP system is not allowed.
     A limit, dependent on the disk space available, is imposed on the amount
     of output that can be produced.
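The signal-handling sketch referred to above shows how a Python-based system
might install handlers (a sketch only; a real system would also clean up
temporary files as required):

  import signal, sys

  def time_limit_handler(signum, frame):
      # Exit promptly so that the imposed time limits are respected.
      sys.exit(1)

  signal.signal(signal.SIGXCPU, time_limit_handler)   # CPU time limit
  signal.signal(signal.SIGALRM, time_limit_handler)   # wall clock time limit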
System Delivery
Entrants must email a
StarExec installation package to the competition organizer by the
system delivery deadline.
The installation package must be a .tgz file containing
only the components necessary for running the system (i.e., not including
source code, etc.).
The entrants must also email a .tgz file containing the source
code and any files required for building the StarExec installation package
to the competition organizer by the system delivery 
deadline.
For systems running on entrant supplied computers in the demonstration
division, entrants must email a .tgz file containing the source code
and any files required for building the executable system to the competition
organizer by the system delivery deadline.
After the competition all competition division systems' source code
is made publicly available on the CASC web site.
In the demonstration division the entrant specifies whether or not
the source code is placed on the site.
An open source license is
encouraged.
Entrants are encouraged to make a public release of their systems ASAP after 
the competition, so that users can enjoy the latest capabilities.
System Execution
Execution of the ATP systems is controlled by StarExec.
The jobs are queued onto the computers so that each CPU is running 
one job at a time.
All attempts at the Nth problems in all the divisions and categories 
are started before any attempts at the (N+1)th problems.
A system has solved a problem iff it outputs its termination string within
the time limit, and a system has produced a solution iff it outputs
its end-of-solution string within the time limit.
The result and timing data is used to generate an HTML file, and a web
browser is used to display the results.
The execution of the demonstration division systems is supervised by
their entrants.
System Checks
-  Check: You can log in to StarExec. If not, 
     apply for an account in the TPTP community.
 
-  Check: You can access the TPTP space. If not,
     email the competition organizer.
 
-  Check: You can create and upload a
     StarExec installation package.
     The competition organizer has exemplar StarExec installation packages
     that you can use as a starting point - email the competition organizer
     to get one that is appropriate for your ATP system.
 
-  Check: You can create a job and run it, and your ATP system gets the 
     correct result.
     Use the SZS post processor.
 
-  Check: Your ATP system can solve a problem that has include 
     directives.
     Because of the way StarExec runs jobs, your ATP system must implement
     the TPTP requirement that "Include files with relative path names are 
     expected to be found either under the directory of the current file, 
     or if not found there then under the directory specified in the 
     TPTP environment variable."
     (A small path-resolution sketch is given after these checks.)
 
-  Check: You can email your StarExec installation package to the 
     competition organizer for testing.
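The path-resolution sketch referred to above is hypothetical (not part of any
required interface), and shows one way a system might resolve a relative
include path:

  import os

  def resolve_include(include_name, current_file):
      # First look under the directory of the current file, then under the
      # directory named by the TPTP environment variable.
      local = os.path.join(os.path.dirname(current_file), include_name)
      if os.path.exists(local):
          return local
      return os.path.join(os.environ.get('TPTP', ''), include_name)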