The TPTP Problem Library
TPTP
v9.0.0
Geoff Sutcliffe
Department of Computer Science
University of Miami
geoff@cs.miami.edu

Abstract
This report provides a detailed description of the TPTP Problem Library for
automated theorem proving systems.
The library forms a common basis for development of and experimentation with
automated theorem provers.
This report provides:
 the motivations for building the library;
 a discussion of the inadequacies of previous problem collections, and
how these have been resolved in the TPTP;
 a description of the library structure, including overview information;
 descriptions of supplementary utility programs;
 guidelines for obtaining and using the library.
If you're new to all this and don't want to wade through the
details provided in this manual, you might want to start at the
TPTP and TSTP Quick Guide.
You could also work your way through the slides of the
TPTP World Tutorial.

Table of Contents
Introduction
The TPTP (Thousands of Problems for Theorem Provers) is a library of
problems, in classical logic with an interpreted equality symbol, for
Automated Theorem Proving (ATP) systems.
The TPTP contains problems in
monomorphic and polymorphic typed higher-order form (THF),
monomorphic and polymorphic typed first-order form (TFF, including the
extended first-order form (TXF)),
first-order form (FOF), and clause normal form (CNF).
Interpreted arithmetic types and symbols are supported in THF, TXF, and TFF.
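As an illustrative sketch of these forms (the formula names and predicates below are invented for this example), the same simple abstract problem can be written in FOF and in CNF:

```tptp
%--- FOF version: axioms and a conjecture, using quantified formulae.
fof(all_humans_mortal,axiom,
    ! [X] : ( human(X) => mortal(X) ) ).

fof(socrates_human,axiom,
    human(socrates) ).

fof(socrates_mortal,conjecture,
    mortal(socrates) ).

%--- CNF version: clauses, with the conjecture negated.
cnf(all_humans_mortal,axiom,
    ( ~ human(X) | mortal(X) ) ).

cnf(socrates_human,axiom,
    human(socrates) ).

cnf(prove_socrates_mortal,negated_conjecture,
    ~ mortal(socrates) ).
```

In CNF the variables are implicitly universally quantified, and the conjecture is negated so that a refutation establishes the theorem.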
The TPTP supplies the automated reasoning community with a comprehensive
library of the ATP test problems that are available today, in order to provide
an overview and a simple, unambiguous reference mechanism.
The principal motivation for the TPTP is to support the testing and evaluation
of ATP systems, to help ensure that performance results accurately reflect
the capabilities of the ATP system being considered.
A common library of problems is necessary for meaningful system
evaluations, meaningful system comparisons, repeatability of testing,
and the production of statistically significant results.
The TPTP problems are stored in a specifically designed, easy to understand
format.
Utilities for manipulating the problems, for converting the problems
to other known ATP formats, and for finding problems with certain
characteristics, are provided.
Since its first release in 1993, many researchers have used the TPTP as
an appropriate and convenient basis for ATP system evaluation.
This technical report explains the motivations and reasoning behind the
development of the TPTP, and thus implicitly explains the design decisions
made.
It also serves as a manual explaining the structure and use of the TPTP:
it provides a full description of the TPTP contents and organization, details
of the utility programs, and guidelines for obtaining and using the TPTP.
What's New in TPTP
v9.0.0
(since TPTP v8.0.0):
 This is the first release with non-classical logic problems.
There are 147 non-classical logic problems in 8 domains.
 1 new domain has been added:
MVA (MV-algebras)
 The separator for arithmetic problems is no longer '=', but rather the one
used for the host language, i.e., '_' for TFF and TXF problems, and '^' for
THF problems.
 The SZS ontology has a new value USM (Unsemantic), and has been extended to
include non-logical data.
 Many changes have been made to the SyntaxBNF.
Previous Problem Collections
A large number of interesting problems had accumulated over the years in the
ATP community.
Besides publishing particularly interesting individual problems, from early on
researchers collected problems in order to obtain a basis for experimentation.
Problems in First Order Form (FOF), published by mathematicians and logicians
prior to the mechanization of reasoning (e.g.,
[Chu56]) provided the first source for
ATP researchers.
The first major publication in this regard was
[MOW76], which provides
an explicit listing of clauses for 63 Clause Normal Form (CNF) problems, many
of which are still relevant today
(to our knowledge, the first circulation of problems for testing ATP
systems was due to Larry Wos in the late sixties).
In the same year
[WM76] documented 86 CNF problems which were commonly used for ATP testing.
However, the problem clauses are not supplied in
[WM76].
A second major thrust was provided by
[Pel86], which lists 75
problems in both FOF and CNF.
Other early papers are
[BL+86],
[Qua92],
[MW92], and
[McC93], to name a few.
The Journal of Automated Reasoning's Problem Corner also provided interesting
challenge problems.
Problems published in hardcopy form are, however, often not suitable for
testing ATP systems, because they have to be transcribed to electronic form.
This is a cumbersome, error-prone process, and is feasible for only small
numbers of small problems.
The sparseness of research into ATP systems for FOF problems meant that
no electronic collections of FOF test problems had previously been commonly
available.
A CNF problem collection in electronic form was made publicly available by
Argonne National Laboratories (in Otter format
[McC94])
in 1988
[ANL].
This collection was a major source of problems for ATP researchers.
Other electronic collections of CNF problems have been available, but not
announced officially (e.g., that distributed with the SPRFN ATP system
[SPRFN]).
Although some of these collections provided significant support to
researchers, and formed the early core of the TPTP library, none (with the
possible exception of the ANL collection) was specifically designed to serve
as a common basis for ATP research.
Rather, these collections typically were built in the course of research into
a particular ATP system.
As a result there are several factors that limited their usefulness as
a common basis for research.
In particular, previously existing problem collections:
 were often hard to discover and obtain.
System development and system evaluations typically relied on a
small set of test problems, depending on the collections of problems
available to the researcher.
 needed to be transformed to the syntax of the ATP system being considered.
The problem format used in a collection was often inappropriate for the
desired purpose, and a comparatively large effort was required just to
make the problems locally usable (which in practice often meant that such
collections of problems were simply ignored).
 were often limited in scope and size.
The problems used were often homogeneous, and thus could not be used
for broad testing of the capabilities of the ATP system under
consideration.
If there are too few problems, statistically significant testing is
not possible.
 were often outdated.
The problems did not sufficiently reflect the current state-of-the-art in
ATP research.
 were sometimes designed and tuned (regarding formula selection,
formula ordering, and formula structure) for a particular ATP system.
Using a collection designed and tuned for a particular ATP system
leads to biases in results.
 provided no indication of the significance or difficulty of the problems.
The significance and difficulty of a problem, with respect to the current
state-of-the-art in ATP systems, is hard for newcomers to the field to
assess.
Existing test problems are often not adequate anymore (e.g., Schubert's
Steamroller
[Sti86]), while others might be solvable only with
specialized techniques (e.g., LIM+
[Ble90]),
and therefore are much too hard to start with.
 were inconsistent in their presentation of equally named problems.
Copies and variants of the same "original" problem existed in different
collections.
This meant that unambiguous identification of problems, and therefore a
clear interpretation of performance figures for given problems, was
difficult.
 were usually undocumented.
It was hard to obtain information on problem semantics, the
original problem source, and the particular style of axiomatization.
This also contributed to the problem of unambiguous problem identification.
 were almost always unserviced.
Collections did not provide a mechanism for adding new problems or
correcting errors in existing problems, and could not be used to
electronically distribute new and corrected problems to the ATP community.
This in turn perpetuated the use of old and erroneous problems.
 provided no guidelines for their use.
Quite often, inadequate system evaluations were performed.
As a consequence, results that provided little indication
of the system properties were reported.
The problem of meaningfully interpreting results can be even worse than
indicated.
A few problems might be selected and hand-tuned (formulae arranged in a special
way, irrelevant formulae omitted, lemmas added in, etc.) specifically for the
ATP system being tested.
The presentation of a problem can significantly affect the nature of the
problem, and changing the formulae clearly makes a different problem
altogether.
Nevertheless the problem might be referenced under the same name as it was
presented elsewhere.
As a consequence the experimental results reveal little.
Some researchers avoid this ambiguity by listing the formulae explicitly,
but obviously this usually cannot be done for a large number of problems or
for large individual problems.
The only satisfactory solution to these issues is a common and stable library
of problems.
The TPTP is such a library.
What is Required?
The goal for building the TPTP has been to overcome previous drawbacks, and
to centralize the burden of problem collection and maintenance.
The TPTP tries to address all relevant issues.
In particular, the TPTP:
 is easy to discover and obtain.
Awareness of the TPTP is assured by extensive formal and informal
announcements.
The TPTP is available online, and is thus easily available to the
research community.
 is easy to use.
Problems are presented in a specifically designed, easy to understand
format. Automatic conversion to other known formats is also provided,
thus eliminating the necessity for any other transcription.
 spans a diversity of subject matters.
This reduces biases in the development and testing of ATP systems,
which arise from the use of a limited scope of problems.
It also provides an overview of the domains in which ATP systems are
used.
 is large enough for statistically significant testing.
In contrast to common practice, an ATP system should be evaluated over
a large number of problems, rather than a small set of judiciously
selected examples. The large size of the TPTP makes this possible.
 is comprehensive.
The TPTP contains most problems known to the community.
There is no longer a need to look elsewhere.
 is up-to-date.
As new problems appear in the literature and elsewhere,
they are added to the TPTP as soon as possible.
 is independent of any particular ATP system.
The problems are arranged so as to be modular and human-readable,
rather than arranged for a particular ATP system.
 contains problems varying in difficulty, with a difficulty rating for
each problem.
The difficulty of problems in the TPTP ranges from very simple problems
through to open problems.
This allows all interested researchers, from newcomers to experts, to
rely on the same problem library.
The difficulty ratings provided with each problem are important for
several reasons:
 They simplify problem selection according to the user's intention.
 They allow the quality of an ATP system to be judged.
 Over the years, changes in the problem ratings will provide an
indicator of the advancement of ATP.
 provides statistics for each problem and for the library as a whole.
This provides information about the syntactic nature of the problems.
 has an unambiguous naming scheme.
This provides unambiguous problem reference, and makes the comparison of
results meaningful.
 is well structured and documented.
This allows effective and efficient use of the library.
Useful background information, such as an overview of ATP application
domains, is provided.
 documents each problem.
This contributes to the unambiguous identification of each problem.
 provides a mechanism for adding new problems.
The TPTP contains standard axiomatizations that can be used in new
problems.
This simplifies the collection of new problems.
A template is provided for submission of new problems.
The TPTP is thus a channel for making new problems available to the
community, in a simple and effective way.
 provides a mechanism for correcting errors in existing problems.
All errors, noticed by the developers or reported by users,
are corrected.
Patched TPTP releases are made regularly.
 provides guidelines for its use in evaluating ATP systems.
A standard library of problems together with evaluation guidelines
makes reported results meaningful and reproducible by others.
This will in turn simplify and improve system comparisons, and
allow ATP researchers to accurately gauge their progress.
The development of the TPTP problem library is an ongoing project, with the
aim to provide all of the desired properties.
Current Limitations of the TPTP.
The TPTP library is currently limited to problems expressed in classical
logic.
Inside the TPTP
Scope.
The TPTP contains problems in classical and non-classical logic, with an
interpreted equality symbol.
Release v9.0.0
of the TPTP contains:
The TPTP documents
OverallSynopsis,
THFSynopsis,
TFFSynopsis,
FOFSynopsis,
and
CNFSynopsis
provide some overall statistics about the TPTP.
The problems in the TPTP are syntactically diverse, as is indicated by
the ranges of the values in the tables.
The problems in the TPTP are also semantically diverse, as is indicated
by the range of domains that are covered.
The problems are grouped into domains, covering topics in the fields of logic,
mathematics, computer science, engineering, social sciences, arts and
humanities, and others.
Sources
The problems have been collected from various sources.
Early sources were existing electronic problem collections and the ATP
literature.
Other sources include logic programming, mathematics, puzzles, and
correspondence with ATP researchers.
Many people and organizations have contributed towards the TPTP:
the foundations of the TPTP were laid with David Plaisted's SPRFN collection;
many problems were taken from Argonne National Laboratory's ATP problem library
(special thanks to Bill McCune here); Art Quaife provided several hundred
problems in set theory and algebra; the Journal of Automated Reasoning, CADE
Proceedings, and Association for Automated Reasoning Newsletters have provided
a wealth of material; many problems have been provided by ATP users and
system developers (see the Acknowledgements).
Releases
The TPTP is managed in the manner of a software product, in the sense that fixed releases are made.
Each release of the TPTP is identified by a release number, in the form
vVersion.Edition.PatchLevel (e.g., v9.0.0).
The Version number enumerates major new releases of the TPTP, in which
important new features have been added.
The Edition number is incremented each time new problems are added to the current version.
The Patch level is incremented each time errors, found in the current edition, are
corrected.
All non-trivial changes are recorded in a history file, as well as in the file
for the affected problem.
The Domain Structure
This section provides the structure according to which the problems are grouped into domains.
Some information about the domains is also given.
An attempt has been made to classify the totality of the TPTP problems in a systematic and natural
way.
The resulting domain scheme reflects the natural hierarchy of scientific domains, as presented
in standard subject classification literature.
The current classification is based mainly on the Dewey Decimal Classification (DDC)
[CB+89]
and the Mathematics Subject Classification (MSC)
[MSC92]
used for the Mathematical Reviews by the American Mathematical Society.
Seven fields are defined:
logic, mathematics, computer science, engineering, social sciences, arts and humanities, and other.
Each field contains further subdivisions, called domains.
Each domain is identified by a three-letter mnemonic.
These mnemonics are also part of the TPTP
problem naming scheme.
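As a sketch of how the mnemonics appear in problem names (the specific problem and version numbers here are purely illustrative), the form separator character distinguishes the problem form:

```tptp
GRP001-1.p    % CNF version 1 of abstract group theory problem GRP001 ('-')
GRP001+1.p    % FOF version ('+')
GRP001_1.p    % TFF version ('_')
GRP001^1.p    % THF version ('^')
```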
The TPTP domains constitute the basic units of the classification.
The full classification scheme is shown in the figure
The Domain Structure of the TPTP,
and the numbers of abstract problems, problems, and generic problems
in each domain are shown in the TPTP document
OverallSynopsis.
Logic
  Combinatory logic: COL
  Logic calculi: LCL
  Henkin models: HEN
Mathematics
  Set theory: SET, SEU, and SEV
  Graph theory: GRA
  Algebra
    Relation algebra: REL
    MV algebras: MVA
    Boolean algebra: BOO
    Robbins algebra: ROB
    Left distributive: LDA
    Lattices: LAT
    Quantales: QUA
    Kleene algebra: KLE
    Groups: GRP
    Rings: RNG
    Fields: FLD
    Homological algebra: HAL
    Real algebra: RAL
    General algebra: ALG
  Number theory: NUM and NUN
  Topology: TOP
  Analysis: ANA
  Geometry: GEO
  Category theory: CAT
Computer science
  Computing theory: COM
  Knowledge representation: KRS
  Natural language processing: NLP
  Planning: PLA
  Agents: AGT
  Commonsense reasoning: CSR
  Semantic web: SWB
  Interactive theorem proving: ITP
  Data structures: DAT
  Software creation: SWC
  Software verification: SWV, SWW, and SWX
Science and Engineering
  Biology: BIO
  Hardware creation: HWC
  Hardware verification: HWV
  Medicine: MED
  Processes: PRO
  Products: PRD
Social sciences
  Social choice theory: SCT
  Management: MGT
  Geography: GEG
Arts and Humanities
  Philosophy: PHI
Other
  Arithmetic: ARI
  Syntactic: SYN and SYO
  Puzzles: PUZ
  Miscellaneous: MSC
The Domain Structure of the TPTP.
A brief description of the domains, with a non-ATP reference for a
general introduction and a generic ATP reference, is given below.
For each domain, appropriate DDC and MSC numbers are also given if available:
 AGT Agents.
An agent is an autonomous software component of a computer program, typically designed to act
intelligently and communicate with other agents.
Indices: DDC ???; MSC 68T35.
References:
General
[RN95];
ATP .
 ALG Algebra.
Algebra is a branch of mathematics concerning the study of structure, relation, and quantity.
An algebra is a set with a system of operations defined on it.
Indices: DDC 512; MSC 06XX, 20XX.
References:
General
[Bou89,
BM65,
BB70];
ATP .
 ARI Arithmetic.
Arithmetic problems that do arithmetic computations, but do not reason about
arithmetic (see the NUM domain).
Indices: DDC 513; MSC 97FXX.
References: General
[Hig10];
ATP
[Sim86].
 ANA Analysis.
Analysis is a branch of mathematics concerned with functions and limits.
The main parts of analysis are differential calculus, integral calculus, and the theory of
functions.
Indices: DDC 515; MSC 26XX.
References: General
[Ros90];
ATP
[Ble90].
 BIO Biology.
Biology is a natural science concerned with the study of life and living organisms, including
their structure, function, growth, evolution, distribution, and taxonomy.
Indices: DDC 570; MSC 92XX.
References: General
[RU+13];
ATP
[CC+13,
CDI13].
 BOO Boolean Algebra.
A Boolean algebra is a set of elements with two binary operations that are idempotent,
commutative, and associative.
These operations are mutually distributive, there exist universal bounds 0,
1, and there is a unary operation of complementation.
Indices: DDC 511.324, 512.89; MSC 06EXX.
References:
General
[Whi61,
BM65,
BB70];
ATP .
 CAT Category Theory.
A category is a mathematical structure together with the morphisms that preserve this
structure.
Indices: DDC 512.55; MSC 18XX.
References:
General
[Mac71];
ATP
[MOW76].
 COL Combinatory Logic.
Combinatory logic is about applying one function to another.
It can be viewed as an alternative foundation of mathematics (or, due to its
Turing-completeness, as a programming language).
More formally, it is a system satisfying two combinators and satisfying reflexivity, symmetry,
transitivity, and two equality substitution axioms for the function that exists implicitly for
applying one combinator to another.
Indices: DDC 510.101; MSC 03B40.
References:
General
[CF58,
CHS72,
Bar81];
ATP
[WM88].
 COM Computing Theory.
Computing theory is a subfield of computer science dealing with theoretical issues such as
decidability (whether or not a given problem admits an algorithmic solution), completeness
(does an algorithm always find a solution if one exists?), correctness (are only solutions
produced?), and computational complexity (the resource requirements of algorithms).
Indices: DDC 004-006; MSC 68XX.
References:
General
[HU79];
ATP .
 CSR Commonsense Reasoning.
Commonsense reasoning is the branch of artificial intelligence concerned with replicating
human thinking.
There are several components to this problem, including:
developing adequately broad and deep commonsense knowledge bases;
developing reasoning methods that exhibit the features of human thinking, including
the ability to reason with knowledge that is true by default,
the ability to reason rapidly across a broad range of domains, and
the ability to tolerate uncertainty in your knowledge;
developing new kinds of cognitive architectures that support multiple reasoning methods and
representations.
Indices: DDC 121.3; MSC 68TXX.
References:
General
[Sha97];
ATP
[Lif95,
McC59,
SME04]
 DAT Data Structures.
A data structure is a particular way of storing and organizing data in a computer so that it
can be used efficiently.
Indices: DDC 005.73; MSC 68P05.
References: General
[AHU87];
ATP .
 FLD Field Theory.
A field is a ring (see below) in which the * operation is commutative, and for
which there is an identity element in G, and for which each non-identity
element of G has an inverse in G.
Indices: DDC 512.32; MSC 12XX.
References:
General
[Ada82];
ATP
[Dra93].
 GEG Geography.
Geography is the study of the Earth and its lands, features, inhabitants, and phenomena.
Indices: DDC 910; MSC 91C99.
References:
General
[NGS99];
ATP .
 GEO Geometry.
Geometry is a branch of mathematics that deals with the measurement, properties, and
relationships of points, lines, angles, surfaces, and solids.
Indices: DDC 516; MSC 51.
References:
General
[Tar51,
Tar59];
ATP
[Qua92].
 GRA Graph Theory.
A graph consists of a finite nonempty set of vertices together with a prescribed set of
edges, each edge connecting a pair of vertices.
Indices: DDC 510.09; MSC 05CXX, 68R10.
References:
General
[Har69,
BB70];
ATP .
 GRP Group Theory.
A group is a set G and a binary operation +: G x G -> G which is associative,
and for which there is an identity element in G, and for which each element
of G has an inverse in G.
Indices: DDC 512.2; MSC 20
References:
General
[Bou89,
BM65];
ATP
[MOW76].
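Such a group axiomatization can be written in TPTP FOF; the following sketch uses illustrative symbol names, not necessarily those used in the TPTP's GRP axiom files:

```tptp
%--- A group: an associative operation with a left identity and left inverses.
fof(associativity,axiom,
    ! [X,Y,Z] : product(product(X,Y),Z) = product(X,product(Y,Z)) ).

fof(left_identity,axiom,
    ! [X] : product(identity,X) = X ).

fof(left_inverse,axiom,
    ! [X] : product(inverse(X),X) = identity ).
```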
 HAL Homological Algebra.
Homological algebra is an abstract algebra concerned with results valid for many different
kinds of spaces.
Modules are the basic tools used in homological algebra.
Indices: DDC 512.55; MSC 18XX.
References:
General
[Wei94];
ATP .
 HEN Henkin Models.
Henkin models provide a generalized semantics for higher order logics.
This leads to a larger class of models and, as a consequence, fewer true sentences.
However, in contrast to standard semantics, complete and correct calculi can be found.
Indices: DDC 160; MSC 03CXX.
References:
General
[Hen50,
Leb83];
ATP .
 HWC Hardware Creation.
Computer hardware is created by interconnecting logic gates.
Hardware creation is used to form a circuit that will transform given input patterns to
required output patterns.
Indices: DDC 621.395; MSC 94CXX.
References:
General
[Hay93];
ATP
[WW83].
 HWV Hardware Verification.
Hardware verification is used to ensure that a previously designed circuit performs the
desired transformation of input patterns to required output patterns.
One approach is to check the performance of the circuit for every possible combination of
given inputs.
Other techniques are also used.
Indices: DDC 621.395; MSC 94CXX.
References:
General
[Hay93];
ATP
[Woj83].
 ITP Interactive Theorem Proving.
An interactive theorem prover is a software tool to assist with the development of formal
proofs by humanmachine collaboration.
This involves some sort of interactive proof editor, or other interface, with which a human
can guide the search for proofs, the details of which are stored in, and some steps provided
by, a computer.
Indices:
References:
General
[Wie06,
Geu09];
ATP
[Urb06,
BG+19].
 KLE Kleene Algebra.
A Kleene Algebra is a bounded distributive lattice with an involution satisfying De Morgan's
laws, and the inequality x∧−x ≤ y∨−y.
Alternatively, a Kleene Algebra is an algebraic structure that generalizes the operations
known from regular expressions.
Indices: DDC 512.74; MSC 06A99
References:
General
[Koz90,
BMS03];
ATP
[HS07].
 KRS Knowledge Representation.
Knowledge Representation is concerned with writing down descriptions that can be manipulated
by a machine.
Indices: DDC 006.3; MSC 68T30.
References:
General
[BL85,
CM87];
ATP .
 LAT Lattice Theory.
A lattice is a set of elements, with two binary operations which are idempotent, commutative,
and associative, and which satisfy the absorption law.
Indices: DDC 512.865; MSC 06BXX.
References:
General
[BM65];
ATP
[McC88].
 LCL Logic Calculi.
A logic calculus defines axioms and rules of inference that can be used to prove theorems.
Indices: DDC 511.3; MSC 03XX.
References:
General
[Luk63];
ATP
[MW92].
 LDA Left Distributive Algebra.
LD-algebras are related to large cardinals.
Under a very strong large cardinal assumption, the free monogenic LD-algebra
can be represented by an algebra of elementary embeddings.
Theorems about this algebra can be proven from a small number of properties, suggesting the
definition of an embedding algebra.
Indices: DDC 512; MSC 20N02, 03E55, 08B20
References:
General ;
ATP
[Jec93].
 LIN Linear Algebra.
Linear algebra is the branch of mathematics concerning vector spaces and linear mappings
between such spaces.
Indices: DDC 512.5; MSC 15Axx.
References:
General
[LH+01];
ATP .
 MVA MV Algebras.
An MV algebra is an algebraic structure with a binary "plus" operation, a unary "negate"
operation, and the constant 0, satisfying certain axioms.
Indices: DDC 512; MSC 03G20
References:
General
[Mun11,
GT75];
ATP .
 MED Medicine.
The science of diagnosing and treating illness, disease, and injury.
Indices: DDC 610; MSC .
References:
General
[LH+01];
ATP
[HLB05].
 MGT Management.
Management is the study of systems, and their use and production of resources.
Indices: DDC 302-303; MSC 90XX.
References:
General ;
ATP
[PM94,
PB+94].
 MSC Miscellaneous.
A collection of problems which do not fit well into any
other domain.
 NLP Natural Language Processing.
Natural language processing considers the automated generation and comprehension of languages
used by humans.
Indices: DDC 006.3; MSC 68T50
References:
General
[Obe89i,
GWS86];
ATP
[Bos00].
 NUM Number Theory.
Number theory is the study of integers and their properties.
Indices: DDC 512.7; MSC 11YXX.
References:
General
[HW92];
ATP
[Qua92].
 PHI Philosophy.
Philosophy is the study of general and fundamental problems, such as those connected with
reality, existence, knowledge, values, reason, mind, and language.
Indices: DDC 100; MSC 
References:
General
[TE99];
ATP .
 PLA Planning.
Planning is the process of determining the sequence of actions to be performed by an agent,
to reach a specified desired state from a specified initial state.
Indices: DDC 006.3; MSC 68T99.
References:
General
[AK+91];
ATP
[Pla81,
Pla82].
 PRO Processes.
Process information includes production scheduling, process planning, workflow,
business process re-engineering, simulation, process realization, process
modeling, and project management.
Indices: DDC 670; MSC 93XX.
References:
General
[Sch99];
ATP
[Rei01,
BG05].
 PRD Products.
A product is anything that can be offered to a market that might satisfy a want or need.
Indices: DDC ???; MSC ??.
References:
General
[KA+06];
ATP .
 QUA Quantales.
Quantales are partially ordered algebraic structures that generalize locales (point free
topologies) as well as various lattices of multiplicative ideals from ring theory and
functional analysis.
Quantales are sometimes referred to as complete residuated semigroups.
Indices: DDC 512; MSC 06F07.
References:
General
[Mul01,
Con71];
ATP .
 PUZ Puzzles.
Puzzles are designed to test the ingenuity of humans.
Indices: DDC 510, 793.73; MSC .
References:
General
[Car86,
Smu78,
Smu85];
ATP .
 RAL Real Algebra.
Real algebra is the study of "real" objects such as real rings, real
places and real varieties.
Indices: DDC 512; MSC 13J30
References:
General
[Lam84];
ATP .
 REL Relation Algebra.
A relation algebra is a residuated Boolean algebra supporting an
involutory unary operation called converse.
The motivating example of a relation algebra is the algebra 2^X^2 of
all binary relations on a set X, with RoS interpreted as the usual
composition of binary relations.
Indices: DDC 512; MSC 03G15, 06D99
References:
General
[SS93,
Mad06];
ATP
[HS08].
 RNG Ring Theory.
A ring is a group (see above) in which the binary operation is
commutative, with an associative and distributive operation
*: G x G -> G for which there is an identity element in G.
Indices: DDC 200; MSC .
References:
General
[Bou89,
BB70];
ATP
[MOW76].
 ROB Robbins Algebra.
The Robbins Algebra domain revolves around the question "Is every
Robbins algebra Boolean?".
Most of the problems in this domain identify conditions that make a
nearBoolean algebra Boolean.
Indices: DDC 512; MSC 03G15.
References:
General
[HMT71];
ATP
[Win90].
 SCT Social Choice Theory.
Social choice theory is a theoretical framework for measuring individual
interests, values, or welfares as an aggregate towards collective
decisions.
Indices: DDC 330.1; MSC .
References:
General
[Arr63];
ATP
[Nip09].
 SET, SEU, and SEV Set Theory.
Classically, a set is a totality of certain definite, distinguishable
objects of our intuition or thought, called the elements of the
set. Due to paradoxes that arise from such a naive definition,
mathematicians now regard the notion of a set as an undefined,
primitive concept
[How72].
Indices: DDC 511.322, 512.817; MSC 03EXX, 04XX.
References:
General
[Neu25,
Qui69];
ATP
[Qua92].
 SWB Semantic Web.
The Semantic Web is a group of methods and technologies that allow machines
to understand the meaning, or "semantics", of information on the World
Wide Web.
Indices: DDC ??.?; MSC .
References:
General
[BHL01];
ATP
[HV06].
 SWC Software Creation.
Software creation is used to form a computer program that meets given
specifications.
Indices: DDC ???.?; MSC 68N30.
References:
General ;
ATP .
 SWV and SWW and SWX Software Verification.
Software verification formally establishes that a computer program
does the task it is designed for.
Indices: DDC 005.14; MSC 68Q60.
References:
General ;
ATP
[WO+92,
MOW76].
 SYN and SYO Syntactic.
Syntactic problems have no obvious semantic interpretation.
Indices: DDC ; MSC .
References:
General
[Chu56];
ATP
[Pel86].
 TOP Topology.
Topology is the study of properties of geometric configurations
(e.g., point sets) which are unaltered by elastic deformations
(homeomorphisms, i.e., functions that are 1-1 mappings between sets,
such that both the function and its inverse are continuous).
Indices: DDC 514; MSC 46AXX.
References:
General
[Kel55,
Mun75];
ATP
[WM89].
Problem Versions and Standard Axiomatizations
There are often many ways to formulate a problem for presentation
to an ATP system.
Thus, in the TPTP, there are often alternative presentations of a problem.
The alternative presentations are called versions
of the underlying abstract problem.
As the problem versions are the objects that ATP systems must deal
with, they are referred to simply as problems, and the
abstract problems are referred to explicitly as such.
Each problem is stored in a separate physical file.
In the TPTP the coarsest-grained differentiation between problem versions
is whether the problem is presented in THF (with the subdivisions TH0 for
monomorphic logic and TH1 for polymorphic logic), TFF (with the subdivisions
TF0 for monomorphic logic, TF1 for polymorphic logic, and the extended form
TXF, itself subdivided into TX0 for monomorphic logic and TX1 for
polymorphic logic), FOF, or CNF.
Within a given presentation, the primary reason for different versions of
an abstract problem is the use of different axiomatizations.
This issue is discussed below.
A secondary reason is different formulations of the problem to be solved,
e.g., different clausal forms of a FOF problem.
Different Axiomatizations
Commonly, many different axiomatizations of a theory exist.
By using different axiomatizations of a particular theory, different
versions of a problem can be formed.
In the TPTP an axiomatization is standard if it is a
complete axiomatization of
an established theory,
no lemmas have been added, and
it is not designed specifically to be suited or
ill-suited to any ATP system, calculus, or control strategy.
A problem version is standard if it uses a standard axiomatization.
(Note: A standard axiomatization might be redundant, because some axioms are
dependent on others.
In general, it is not known whether or not an axiomatization is minimal, and
thus the possibility of redundancy in standard axiomatizations must be
tolerated.)
In the TPTP, standard axiomatizations are kept in separate axiom files,
and are included in problems as appropriate.
Sets of specialization axioms might be used to extend or constrain an
axiomatization.
Specialization axioms are also kept in separate axiom files.
Within the ATP community, some problems have been created with
non-standard axiomatizations.
A non-standard axiomatization can be formed by modifying a standard
axiomatization:
the standard axiomatization can be reduced (i.e., axioms are removed) and
the result is an incomplete axiomatization, or it can be
augmented
(i.e., lemmas are added) and the result is a redundant axiomatization.
Incomplete and redundant axiomatizations are typically used to find a proof
of a conjecture (based on the axiomatization) using a particular ATP system.
An axiomatization can also be non-standard because it is biased,
i.e., designed specifically to be suited or ill-suited to some ATP system,
calculus, or control strategy.
A problem version is incomplete, redundant, or biased if its axiomatization is.
Finally, an axiomatization can be non-standard because it does not capture
any established theory, i.e., a standard axiomatization does not exist, but
the axioms are not biased.
A problem version with such an axiomatization is especial.
Typically, the axioms in an especial problem are specific to that problem,
and are not used in any other problem.
In any 'real' application of an ATP system, a standard or especial
problem version would typically be used, at least initially.
The use of standard axiomatizations is particularly desirable.
Equality Axiomatization
In the TPTP equality is represented in infix using = and !=
for equality and inequality.
An inequality has the same meaning as a negated equality.
If equality is present in a problem, axioms of equality are not provided; it is assumed that
equality reasoning is built-in to every ATP system.
If equality axioms are required by an ATP system they can be added using the
TPTP2X or TPTP4X utility.
If any axioms are added when the problems are submitted to an ATP system, then the addition must
be explicitly noted in reported results.
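If a system does need explicit equality axioms, the non-congruence core that such a utility would add looks like this in CNF form (the clause names here are illustrative; substitution/congruence axioms for each predicate and functor in the problem are also required):

```
cnf(reflexivity,axiom,
    X = X ).
cnf(symmetry,axiom,
    ( X != Y | Y = X ) ).
cnf(transitivity,axiom,
    ( X != Y | Y != Z | X = Z ) ).
```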
Problem Generators
Some abstract problems have a generic nature, and particular instances of the abstract problem are
formed according to some size parameter(s).
An example of a generic problem is the N-queens problem: place N queens on an N by N chess board
such that no queen attacks another.
The formulae for this problem can be generated automatically for any size
N >= 2.
Note that the problem status might depend on the problem size.
Up to TPTP v1.1.3, the TPTP contained problem files for particular sizes of generic problems.
This, however, was undesirable.
Firstly, only a finite number of different problem sizes could be included, and therefore a
desired size might have been missing.
Secondly, even a small number of different problem sizes for each generic problem could consume a
considerable amount of disk space.
To overcome these problems, the TPTP contains generator files.
Generator files are used to generate instances of generic problems, according to user supplied
size parameters.
The generators are used in conjunction with the tptp2X utility.
For convenience, the TPTP still contains, where they exist, interesting size instances of each
generic problem.
The TPTP sizes were chosen to be non-trivial.
The statistics in the TPTP documents take into account these instances of generic problems.
Problem and Axiomatization Naming
Providing unambiguous names for all problems is necessary in a problem library.
A naming scheme has been developed for the TPTP, to provide unique, stable names for abstract
problems, problems, axiomatizations, and generators.
Files are assigned names according to the schemes depicted in
Tables Problem Naming and
Axiom Naming.
The DDDNNN combination provides an unambiguous name for an abstract problem or
axiomatization.
The DDDNNNFV[.SSS] combination provides an unambiguous name for a problem or generator,
and the DDDNNNFE combination provides an unambiguous name for a set of axioms.
The complete file names are unique within the TPTP.
For example, the file GRP135-1.002.p contains the group theory problem number
135, version number 1, generated size 2.
Similarly, the file CAT001-0.ax contains the basic category theory axiomatization
number 1.
A regular expression for recognizing problem file names is
"[A-Z]{3}[0-9]{3}[-+^=_][1-9][0-9]*(\.[0-9]{3})*\.p".
A regular expression for recognizing axiom file names is
"[A-Z]{3}[0-9]{3}[-+^=_][1-9][0-9]*\.ax".
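As a quick sanity check, the two patterns can be exercised against file names from the examples in this section (a hypothetical sketch using Python's re module; it is not part of any TPTP utility):

```python
import re

# The problem and axiom file name patterns quoted above.
problem_re = re.compile(r"[A-Z]{3}[0-9]{3}[-+^=_][1-9][0-9]*(\.[0-9]{3})*\.p")
axiom_re = re.compile(r"[A-Z]{3}[0-9]{3}[-+^=_][1-9][0-9]*\.ax")

# GRP135-1.002.p: domain GRP, problem 135, CNF form (-), version 1, size 002.
print(bool(problem_re.fullmatch("GRP135-1.002.p")))  # True
# GEO004+1.ax: domain GEO, axiomatization 004, FOF form (+), specialization 1.
print(bool(axiom_re.fullmatch("GEO004+1.ax")))       # True
```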
DDD NNN F V .SSS .T

DDD - Domain name abbreviation.
NNN - Abstract problem number. It is unique within the domain.
F - Form. ^ for THF, _ for TFF without arithmetic, = for TFF with
arithmetic, + for FOF, and - for CNF.
V - Version number. It differentiates between different versions of the
abstract problem.
SSS - Size parameter(s). These occur only for generated problems, and give
the size parameter(s).
T - File name extension. p for problem files, g for generator files.

Problem file naming scheme.
The version numbers used for each abstract problem do not always start at 1, and are not always
successive.
This is because the same version number is assigned (wherever possible) to all problems that come
from the same source, within each domain.
DDD NNN F V .TT

DDD - Domain name abbreviation.
NNN - Axiomatization number. It is unique within the domain.
F - Form. ^ for THF, _ for TFF without arithmetic, = for TFF with
arithmetic, + for FOF, and - for CNF.
V - Specialization number. It identifies sets of axioms that are used to
specialize an axiomatization. Axiomatizations of basic theories are
allocated the number 0, and specialization axiom sets are numbered from 1
onwards.
TT - File name extension. The extension is always ax.

Axiom file naming scheme.
If a file is ever removed from or renamed in the TPTP, then its extension is changed to
.rm.
The file is not physically removed, and a comment is added to the file to explain what has
happened.
This mechanism maintains the unique identification of problems and axiomatizations.
Problem Presentation
The physical presentation of the TPTP problem library is such that ATP researchers can easily use
the problems.
The syntax of all files is that of Prolog (with some operators defined).
This conformance makes it trivial to manipulate the files using Prolog.
All information that is not for use by ATP systems is formatted as comments.
Comments extend from a % character to the end of the line, or may be block comments
within /* ... */ bracketing.
The TPTP file format, for problem files and axiom files, has three main sections.
The first section is a header section that contains information for the user.
This information is not for use by ATP systems.
The second section contains include directives for axiom files.
The last section contains the formulae that are specific to the problem or axiomatization.
TPTP generator files have three sections, different from the problem and axiom files.
The header section of generator files is similar to that of problem and axiom files, but with
placeholders for information that is filled in based on the size of problem and the formulae
generated.
Following that comes Prolog source code to generate the formulae, and finally data describing the
permissible sizes and the chosen TPTP size for the problem.
More details are provided here.
A full BNF specification of
the problem and axiom file formats is provided in the
Documents directory of the TPTP.
The TPTP2X and TPTP4X utilities can be used to convert
TPTP files to other known ATP system formats.
A description of the information contained in TPTP files is given below.
The problem files SYN000* are contrived to use most features of the TPTP language, and
are thus suitable for testing parsers, etc.
They are not suitable for evaluating ATP systems' reasoning capabilities.
This section contains information about the problem, for the user.
It is divided into four parts separated by blank lines.
The first part identifies and describes the problem.
The second part provides information about occurrences of the problem in the literature and
elsewhere.
The third part gives the problem's ATP status and a table of syntactic measurements made on the
problem.
The last part contains general information about the problem.
An example of a TPTP header, from the problem file GRP194+1.p, is shown in
Figure Example header.
%
% File : GRP194+1 : TPTP v8.0.0. Released v2.0.0.
% Domain : Group Theory (Semigroups)
% Problem : In semigroups, a surjective homomorphism maps the zero
% Version : [Gol93] axioms.
% English : If (F,*) and (H,+) are two semigroups, phi is a surjective
% homomorphism from F to H, and id is a left zero for F,
% then phi(id) is a left zero for H.
% Refs : [Gol93] Goller (1993), Anwendung des Theorembeweisers SETHEO a
% Source : [Gol93]
% Names :
% Status : Theorem
% Rating : 0.22 v7.5.0, 0.25 v7.4.0, 0.10 v7.3.0, 0.14 v7.2.0, 0.10 v7.1.0, 0.13 v7.0.0, 0.20 v6.4.0, 0.23 v6.3.0, 0.21 v6.2.0, 0.36 v6.1.0, 0.47 v6.0.0, 0.35 v5.5.0, 0.37 v5.4.0, 0.50 v5.3.0, 0.48 v5.2.0, 0.40 v5.1.0, 0.33 v4.1.0, 0.35 v4.0.1, 0.30 v4.0.0, 0.29 v3.7.0, 0.15 v3.5.0, 0.16 v3.4.0, 0.21 v3.3.0, 0.14 v3.2.0, 0.18 v3.1.0, 0.11 v2.7.0, 0.17 v2.6.0, 0.14 v2.5.0, 0.12 v2.4.0, 0.25 v2.3.0, 0.33 v2.2.1, 0.00 v2.1.0
% Syntax : Number of formulae : 8 ( 2 unt; 0 def)
% Number of atoms : 21 ( 4 equ)
% Maximal formula atoms : 4 ( 2 avg)
% Number of connectives : 13 ( 0 ~; 0 |; 6 &)
% ( 1 <=>; 6 =>; 0 <=; 0 <~>)
% Maximal formula depth : 8 ( 5 avg)
% Maximal term depth : 3 ( 1 avg)
% Number of predicates : 3 ( 2 usr; 0 prp; 2-2 aty)
% Number of functors : 5 ( 5 usr; 3 con; 0-3 aty)
% Number of variables : 15 ( 14 !; 1 ?)
% SPC : FOF_THM_RFO_SEQ
% Comments :
%
Example of a problem file header (GRP194+1.p).
The % File field.
This field contains three items of information.
The first item is the problem's name.
The current TPTP release number is given next, followed by the TPTP release
in which the problem was released or last bugfixed (i.e., the formulae were
changed).
The % Domain field.
The domain field identifies the domain, and
possibly a subdomain, from which the problem is drawn.
The domain corresponds to the first three letters of the problem name.
The % Problem field.
This field provides a one-line, high-level description of the abstract problem.
In axiom files, this field is called % Axioms, and provides a
one-line, high-level description of the axiomatization.
The % Version field.
This field gives information that differentiates this version of the problem
from other versions of the problem.
The first possible differentiation is the axiomatization that is used.
If a specific axiomatization is used, a citation is provided.
If the axiomatization is a pure equality axiomatization (uses only the
equal/2 predicate) then this is noted too.
The second possible differentiation is the
status of the axiomatization.
The possibilities are:
 empty. Indicates that the axiomatization is standard, i.e., it is
complete and has had no lemmas added (it might be redundant).
 Especial. Indicates that the axiomatization does not capture any
established theory.
 Biased. Indicates that the axiomatization is designed
specifically to be suited or illsuited to some ATP system, calculus,
or control strategy.
 Incomplete. Indicates that the axiomatization is incomplete.
 Reduced > Complete. Indicates that an existing standard
axiomatization has had axioms removed, and the result is complete.
The existing axiomatization is necessarily redundant.
 Reduced > Incomplete. Indicates that an existing standard
axiomatization has had axioms removed, and the result is non-standard
due to incompleteness.
 Augmented. Indicates that an existing standard axiomatization has
had lemmas added, and the result is non-standard due to redundancy.
 Reduced & Augmented > Complete. Indicates that an existing
standard axiomatization has had axioms removed and lemmas added, and
the result is complete.
 Reduced & Augmented > Incomplete. Indicates that an existing
standard axiomatization has had axioms removed and lemmas added, and
the result is non-standard due to incompleteness.
 Incomplete > Reduced > Incomplete. Indicates that an
existing incomplete axiomatization has had axioms removed, and the
result is non-standard due to incompleteness.
 Incomplete > Augmented > Complete. Indicates that
an existing incomplete axiomatization has had axioms added, and
the result is complete.
 Incomplete > Augmented > Incomplete.
Indicates that an existing
incomplete axiomatization has had lemmas added, and the result is
non-standard due to incompleteness.
 Incomplete > Reduced & Augmented > Complete.
Indicates that an
existing incomplete axiomatization has had axioms removed and lemmas
added, and the result is complete.
 Incomplete > Reduced & Augmented > Incomplete.
Indicates that an
existing incomplete axiomatization has had axioms removed and lemmas
added, and the result is non-standard due to incompleteness.
 Especial > Reduced > Especial.
Indicates that an especial axiomatization has had axioms removed,
and the result remains especial.
 Especial > Augmented > Especial.
Indicates that an especial axiomatization has had axioms added,
and the result remains especial.
The third possible differentiation between problem versions is in the
theorem formulation. Variations in the theorem formulation are noted in
a Theorem formulation entry, e.g.,
% Version : [McCharen, et al., 1976] (equality) axioms.
% Theorem formulation : Explicit formulation of the commutator.
The % English field.
This field provides a full description of the problem, if the one-line
description in the % Problem field is too terse.
The % Refs field.
This field provides a list of abbreviated references for material in which
the problem has been presented.
Other relevant references are also listed.
The reference keys identify BibTeX entries in the Bibliography.bib
file supplied with the TPTP.
The % Source field.
This field acknowledges the source of the problem, usually as a citation.
If the problem was sourced from an existing problem collection then the
collection name is given in [ ] brackets.
The problem collections used thus far are:
 ANL  the Argonne National Laboratory library
[ANL]
 OTTER  the collection distributed with the Otter ATP system
[McC98]
 Quaife  Art Quaife's collection of set theory (based) problems
[Qua92]
 ROO  the problems used to test the ROO ATP system
[LM92]
 SPRFN  the collection distributed with the SPRFN ATP system
[SPRFN]
 TPTP  the problem first ever appeared in the TPTP
 SETHEO  the collection used to test the SETHEO ATP system
[LS+92]
The % Names field.
Problems that have appeared in other problem collections or the literature,
often have names which are known in the ATP community.
This field lists such names for the problem, along with the source of the name.
If the source is an existing problem collection then the collection name
is cited, as in the % Source field.
If the source of a name is a paper then a citation is given.
If a problem is not given a name in a paper then "" is used as the known
name and a citation is given.
Problems that first appeared in the TPTP have source TPTP, and no other name.
In generator files all known names for instances of the abstract
problem are listed, with the instance size given before the source.
A reverse index, from known names to TPTP names, is distributed with the TPTP.
The % Status field.
This field gives the ATP status of the problem, according to the
SZS problem status ontology.
For THF, TFF, and FOF problems it is one of the following values.
For THF problems the default semantics is Henkin with extensionality and choice.
If the status is different for other semantics, this is noted on lines after
the % Status field.
 Theorem  Every model of the axioms (and other non-conjecture
formulae, e.g., hypotheses and lemmas), and there are some such models,
is a model of all the conjectures.
 ContradictoryAxioms  There are no models of the axioms (and
other non-conjecture formulae, e.g., hypotheses and lemmas).
 CounterSatisfiable  Some models of the axioms (and there are
some) are models of the negation of at least one of the conjectures.
 Unsatisfiable  There are no conjectures, and no interpretations
are models of the axioms.
 Satisfiable  There are no conjectures, and some interpretations
are models of the axioms.
 Unknown  The problem has never been solved by an ATP system.
 Open  The abstract problem has never been solved.
For CNF problems it is one of:
 Unsatisfiable  No interpretations are models of the clauses.
 Satisfiable  Some interpretations are models of the clauses.
 Unknown  The problem has never been solved by an ATP system.
 Open  The abstract problem has never been solved.
The % Rating field.
This field gives the difficulty of the problem, measured relative to
state-of-the-art ATP systems.
It is a comma separated list of ratings, each with a TPTP release number.
The rating is a real number in the range 0.0 to 1.0, where 0.0 means that all
state-of-the-art ATP systems can solve the problem (i.e., the problem is easy),
and 1.0 means no state-of-the-art ATP system can solve the problem (i.e., the
problem is hard).
The rating is followed by a TPTP release number, indicating in which release
the rating was assigned.
If no rating has been assigned, a ? is given.
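The rating list can be unpacked mechanically. Here is a hypothetical helper (not part of any TPTP tool) that turns the comma separated entries into (rating, release) pairs, with None for an unassigned ? rating:

```python
# Hypothetical parser for % Rating entries such as "0.22 v7.5.0, 0.25 v7.4.0".
def parse_ratings(rating_field):
    """Split a rating field into (rating, release) pairs; '?' becomes None."""
    pairs = []
    for entry in rating_field.split(","):
        rating, release = entry.split()
        pairs.append((None if rating == "?" else float(rating), release))
    return pairs

print(parse_ratings("0.22 v7.5.0, 0.25 v7.4.0"))
# [(0.22, 'v7.5.0'), (0.25, 'v7.4.0')]
```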
The % Syntax field.
This field lists various syntactic measures of the problem's formulae.
The measures for THF problems are:
 the numbers of formulae, unit formulae,
type formulae, definition formulae,
 the numbers of atoms, equality atoms,
connective atoms,
 the maximal and average number of atoms in a formula,
 the numbers of logical connectives, each logical connective,
 the maximal and average formula depth,
the number of nested formulae,
 the numbers of tuples, if-then-else formulae, and let-in formulae,
 the numbers of arithmetic formulae, terms, numbers, and variables,
 the numbers of types, user types,
 the numbers of type connectives, each type connective,
 the numbers of distinct (nonvariable) symbols, user symbols,
constant symbols,
and the arity range of the symbols,
 the numbers of uses of the Π, Σ, ε, ι, and typed
equality constants,
 the numbers of distinct variables,
lambda quantifications, universal quantifications, existential
quantifications, typed variables, defined variables,
Π, Σ, ε, and ι binders.
The measures for TFF problems are:
 the numbers of formulae, unit formulae,
type formulae, definition formulae,
 the numbers of atoms, equality atoms,
 the maximal and average number of atoms in a formula,
 the numbers of logical connectives, each logical connective,
 the maximal and average formula depth,
 the maximal and average term depth,
 the numbers of extended terms, nested formulae, boolean variables,
 the numbers of tuples, if-then-else formulae, and let-in formulae,
 the numbers of arithmetic formulae, terms, numbers, and variables,
 the numbers of types, user types,
 the numbers of type connectives, each type connective,
 the numbers of distinct predicates, user predicates, propositions,
and the arity range of the predicates,
 the numbers of distinct functors, user functors, constants,
and the arity range of the functors,
 the numbers of distinct variables,
universal quantifications, existential quantifications,
typed variables, defined variables,
Π and Σ binders.
The measures for FOF problems are:
 the numbers of formulae, unit formulae, definition formulae,
 the numbers of atoms, equality atoms,
 the maximal and average number of atoms in a formula,
 the numbers of logical connectives, each logical connective,
 the maximal and average formula depth,
 the maximal and average term depth,
 the numbers of distinct predicates, user predicates, propositions,
and the arity range of the predicates,
 the numbers of distinct functors, user functors, constants,
and the arity range of the functors,
 the numbers of distinct variables,
universal quantifications, existential quantifications.
The measures for CNF problems are:
 the numbers of clauses, unit clauses,
non-Horn clauses, range-restricted clauses,
 the numbers of literals, equality literals, negated literals,
 the maximal and average clause size,
 the maximal and average term depth,
 the numbers of distinct predicates, user predicates, propositions,
and the arity range of the predicates,
 the numbers of distinct functors, user functors, constants,
and the arity range of the functors,
 the numbers of distinct variables, singletons.
See the summary of this information over the
entire TPTP.
The % SPC field.
This field indicates to which Specialist Problem Class (SPC) the
problem belongs.
SPCs are sets of problems that have similar syntactic characteristics, so
that the problems are reasonably homogeneous with respect to ATP systems'
performance on the problems.
SPCs are used for calculating the difficulty rating
[SS01], and
for recommending systems to use on problems
[SS99].
The characteristics used to build the SPCs are:
 The form of the problem:
TH1,
TH0,
TX1,
TX0,
TF1,
TF0,
FOF,
CNF
 The known or expected logical (SZS) status:
THM (theorem),
CAX (contradictory axioms),
CSA (countersatisfiable),
UNS (unsatisfiable),
SAT (satisfiable),
UNK (unknown),
OPN (open)
 The effective order of the logic
(for FOF and CNF problems):
PRP (propositional),
EPR (effectively propositional),
RFO (really first-order),
 The amount of equality in the problem:
EQU (with equality),
SEQ (some, but not only, equality),
PEQ (pure equality),
UEQ (unit equality),
NUE (non-unit pure equality),
NEQ (no equality),
 The amount of arithmetic (for TFF problems):
ARI (with arithmetic),
NAR (no arithmetic),
 Horness (for CNF problems):
HRN (Horn),
NHN (non-Horn)
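To make the encoding concrete, here is a hypothetical helper (the function and field names are mine, not TPTP terminology) that splits a FOF or CNF SPC string such as FOF_THM_RFO_SEQ into the characteristics listed above:

```python
# Hypothetical decoder for SPC strings; the field names are illustrative.
SPC_FIELDS = ("form", "status", "order", "equality")

def parse_spc(spc):
    """Split an SPC such as 'FOF_THM_RFO_SEQ' into its characteristic codes.
    (CNF SPCs append a Horness code, which this simple sketch ignores.)"""
    return dict(zip(SPC_FIELDS, spc.split("_")))

print(parse_spc("FOF_THM_RFO_SEQ"))
# {'form': 'FOF', 'status': 'THM', 'order': 'RFO', 'equality': 'SEQ'}
```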
The % Comments field.
This field contains free format comments about the problem, e.g.,
the significance of the problem, or the reason for creating the problem.
If the problem was created using the TPTP2X utility
then the TPTP2X parameters are given.
The % Bugfixes field.
This field describes any bugfixes that have been done to the formulae of
the problem.
Each entry gives the release number in which the bugfix was done, and
attempts to pinpoint the bugfix accurately.
The include section contains include directives for TPTP axiom
files.
An example extracted from the problem file GEO146+1.p
is shown below.
%
%Include simple curve axioms
include('Axioms/GEO004+0.ax').
%Include axioms of betweenness for simple curves
include('Axioms/GEO004+1.ax').
%Include oriented curve axioms
include('Axioms/GEO004+2.ax').
%Include trajectory axioms
include('Axioms/GEO004+3.ax',[connect_defn,symmetry_of_at_the_same_time]).
%
Example of a problem file include section (adapted from GEO146+1.p).
Each of the include directives indicates that the formulae in the
named file should be included at that point.
Include files with relative path names are expected to be found either under
the directory of the current file, or if not found there then under the
directory specified in the $TPTP environment variable.
If the include directive has a []-ed list of formulae names
as a second argument, the named file must be filtered to retain only those
formulae (this filter also applies to formulae that have been recursively
included into the named file).
If any member of the list cannot be found, or there are multiple formulae
with a given name, that is an error condition.
If there is no second argument, or the second argument is all,
then all the formulae in the file are included.
In the example above only connect_defn and
symmetry_of_at_the_same_time are included from
Axioms/GEO004+3.ax.
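The lookup order described above can be sketched in a few lines (an illustrative helper, not part of TPTP2X or TPTP4X; the function name is mine):

```python
import os

def resolve_include(include_path, including_dir):
    """Resolve a TPTP include: look under the directory of the including
    file first, then under the $TPTP installation directory."""
    local = os.path.join(including_dir, include_path)
    if os.path.isfile(local):
        return local
    tptp_dir = os.environ.get("TPTP")
    if tptp_dir is not None:
        installed = os.path.join(tptp_dir, include_path)
        if os.path.isfile(installed):
            return installed
    raise FileNotFoundError(include_path)
```

For example, resolve_include('Axioms/GEO004+0.ax', 'Problems/GEO') would fall back to the copy under $TPTP if the file is not found next to the problem.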
Full versions of TPTP problems (without include directives) can be
created by using the TPTP2X and
TPTP4X utilities.
A side effect of separating out the axioms into axiom files is that the
formula order in the TPTP presentation of problems is typically different
from that of any original presentation.
This reordering is acceptable because the performance of a decent ATP
system should not be very dependent on a particular formula ordering.
TPTP problems and TSTP solutions are built from annotated formulae of the form
language(name,role,formula,source,[useful_info]).
The languages currently supported are
thf  formulae in typed higher-order form,
tff  formulae in typed first-order form (including extended form),
fof  formulae in first-order form,
and
cnf  formulae in clause normal form.
The name identifies the formula within the problem.
The role gives the user semantics of the formula, one of
axiom, hypothesis, definition, assumption,
lemma, theorem, corollary, conjecture,
negated_conjecture, plain, type, interpretation,
logic, and
unknown.
 The axiom-like formulae are those with the roles
axiom, hypothesis, definition, assumption,
lemma, theorem, and corollary.
They are accepted, without proof, as a basis for proving conjectures
in THF, TFF, and FOF problems.
In CNF problems the axiom-like formulae are accepted as part of the set whose
satisfiability has to be established.
There is no guarantee that the axiom-like formulae of a problem are consistent.
 hypotheses are assumed to be true for a particular problem.
 definitions are used to define symbols.
 assumptions must be discharged before a derivation is complete.
 lemmas and theorems have been proven from the other
axiom-like formulae, and are thus redundant wrt those axiom-like formulae.
theorem is used also as the role of proven conjectures, in output.
A problem containing a lemma or theorem that is not
redundant wrt the other axiom-like formulae is ill-formed.
theorems are more important than lemmas from the
user perspective.
 corollaries have been proven from the axioms and a
theorem, and are thus redundant wrt the other axiom-like and
theorem formulae.
A problem containing a corollary that is not redundant wrt the
other axiom-like formulae and theorem formulae is ill-formed.
 conjectures occur in only THF, TFF, and FOF problems, and are all to
be proven from the axiom-like formulae.
A problem is solved only when all conjectures are proven.
(TPTP problems never contain more than one conjecture,
and the creation of problems with more than one conjecture
is currently deprecated, because contemporary ATP systems commonly treat them
as a disjunction, and will prove only one of them.)
 negated_conjectures are formed from the negation of a conjecture,
typically in FOF to CNF conversion.
 plain formulae have no special user semantics, and are typically used in derivation
output.
 type formulae define the types of symbols globally.
 interpretation formulae are used for writing interpretations, typically models of
satisfiable sets of formulae, in particular the axioms and negated_conjecture
of a nontheorem.
 logic formulae are used for defining the logic in
non-classical logics.
 unknowns have an unknown role, and this is an error situation.
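As a small illustration of these roles (the formula names and symbols here are invented, not taken from any TPTP problem), an axiom and a conjecture in FOF might be written:

```
fof(left_identity,axiom,
    ! [X] : product(identity,X,X) ).

fof(an_identity_product,conjecture,
    ? [X] : product(X,identity,identity) ).
```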
The syntax for atoms is that of Prolog:
variables start with upper case letters,
atoms and terms are written in prefix notation,
uninterpreted predicates and functors either start with lower case
and contain alphanumerics and underscore, or are in 'single quotes'.
The language also supports interpreted predicates and functors.
These come in two varieties: defined predicates and functors, whose
interpretation is specified by the TPTP language, and system
predicates and functors, whose interpretation is ATP system specific.
Interpreted predicates and functors are syntactically distinct from
uninterpreted ones  they are = and !=, or start with
a $, a ", or a digit.
Non-variable symbols can be given a type globally, in the formula with
role type.
The defined types are
$o  the Boolean type,
$i  the type of individuals,
$real  the type of reals,
$rat  the type of rationals,
and
$int  the type of integers.
New types are introduced in formulae with the type role, based on
$tType  the type of all types.
Full details of the THF and TFF type systems are provided in
[SS+12,BP13,SB10,KSR16],
with the last providing an overview of the first three.
The defined predicates recognized so far are
 $true and $false, with the obvious interpretations.
In typed languages they are of type $o.
 = and != for equality and inequality.
In typed languages they are ad hoc polymorphic with both sides having
the same type.
 $distinct, whose arguments are hence known to be unequal to
each other (but not necessarily unequal to any other constants).
$distinct may be overloaded with different arities.
$distinct is used in only the typed languages, and is ad hoc
polymorphic with all arguments having the same type.
 The arithmetic predicates described below.
The arithmetic predicates are used in only the THF and TFF languages.
The defined functors recognized so far are
 "distinct object"s, written in double quotes.
All "distinct object"s are unequal to all "different
distinct object"s (but not necessarily unequal to any other
constants), e.g., "Apple" != "Microsoft".
One way of implementing this is to interpret "distinct object"
as the domain element in the double quotes.
In typed contexts "distinct object"s are of type $i
(and as a result, are unequal to all numbers because their interpretation
domains are disjoint in the TPTP type systems).
 Numbers (numeric constants).
These are used in the THF and TFF languages.
Numbers are interpreted as themselves (as domain elements).
All different numbers are unequal, e.g., 1 != 2
(but not necessarily unequal to any other constants).
All numbers are also unequal to terms of type $i (because
their interpretation domains are disjoint in the TPTP type systems).
Numbers are not allowed in FOF and CNF, but for many applications an
adequate alternative is to write "numbers" as distinct objects,
e.g., "27", so that different "numbers" are known to be
unequal to each other, but possibly equal to other terms of type
$i.
 The arithmetic functions described below.
The arithmetic functions are used in only the THF and TFF languages.
System predicates and functors are used for interpreted predicates and
functors that are available in particular ATP tools.
System predicates and functors start with $$.
The names are not controlled by the TPTP language, so they must be used
with caution.
The connectives used to build non-atomic formulae are written using intuitive
notations.
The universal quantifier is !, the existential quantifier is
?, and the lambda binder is ^.
Quantified formulae are written in the form
Quantifier [Variables] :
Formula.
In the THF, TFF, and FOF languages, every variable in a Formula
must be bound by a preceding quantification with adequate scope.
Typed Variables are given their type by a :type
suffix.
The binary connectives are
infix | for disjunction,
infix & for conjunction,
infix <=> for equivalence,
infix => for implication,
infix <= for reverse implication,
infix <~> for non-equivalence (XOR),
infix ~| for negated disjunction (NOR),
infix ~& for negated conjunction (NAND),
infix @ for application.
The only unary connective is prefix ~ for negation.
Negation has higher precedence than quantification, which in turn has
higher precedence than the binary connectives.
No precedence is specified between the binary connectives; brackets are
used to ensure the correct association.
The binary connectives are left associative.
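For example, because no precedence is specified between the binary connectives, a formula that mixes different binary connectives must be bracketed explicitly (the symbols p, q, and r here are illustrative, not from any TPTP problem):
```
fof(explicit_association,axiom,
    ( ( p & q ) => r ) ).
%----Not allowed: ( p & q => r ) - the association must be bracketed.
```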
The THF and TFF languages have conditional and let expressions.
 Conditional expressions have $ite as the functor.
The expressions are parametrically polymorphic, taking a boolean expression as
the first argument, then two expressions of any one type as the second and
third arguments, as the true and false return values respectively, i.e.,
the return type is the same as that of the second and third arguments.
Let expressions have $let as the functor.
 The expressions provide the types of defined symbols, definitions for the
symbols, and a formula/term in which the definitions are applied.
Each type declaration is the same as a type declaration in an annotated
formula with the type role, and multiple type declarations are given
in []ed tuples of declarations.
Each definition defines the expansion of one of the declared symbols, and
multiple definitions are given in []ed tuples of definitions.
If variables are used in the left-hand side of a definition, their values
are supplied in the defined symbol's use.
Such variables do not need to be declared (they are implicitly declared to
be of the type defined by the symbol declaration), but must be top-level
arguments of the defined symbol and be pairwise distinct.
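As an illustrative sketch of conditional and let expressions (the symbols max and d are invented for this example, not part of the TPTP):
```
tff(max_type,type, max: ( $int * $int ) > $int ).
tff(max_defn,axiom,
    ! [X: $int,Y: $int] :
      ( max(X,Y) = $ite($greatereq(X,Y),X,Y) ) ).
tff(let_example,axiom,
    $let( d: $int,
          d := $difference(3,5),
          $less(d,0) ) ).
```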
The useful_info field of an annotated formula is optional, and if
it is not used then the source field becomes optional.
The source field is used to record where the annotated formula came from,
and is most commonly a file record or an inference record.
A file record stores the name of the file from which the annotated
formula was read, and optionally the name of the annotated formula as it
occurs in that file - this might be different from the name of the annotated
formula itself, e.g., if an ATP system reads an annotated formula,
renames it, and then prints it out.
An inference record stores information about an inferred formula.
The useful_info field of an annotated formula is a list of arbitrary
useful information formatted as Prolog terms, as required for user
applications.
An example of a THF formula section, extracted from the problem file
LCL633^1.p, is shown below.
An example of a TFF formula section, extracted from the problem file
DAT013=1.p, is shown below that.
An example of a FOF formula section, extracted from the problem file
GRP194+1.p, is shown below that.
An example of a clause section, extracted from the problem file
GRP039-7.p, is shown below that.
%------------------------------------------------------------------------------
%Signature
thf(a,type,(
a: $tType )).
thf(p,type,(
p: ( a > $i > $o ) > $i > $o )).
thf(g,type,(
g: a > $i > $o )).
thf(e,type,(
e: ( a > $i > $o ) > a > $i > $o )).
thf(r,type,(
r: $i > $i > $o )).
%Axioms
thf(positiveness,axiom,(
! [X: a > $i > $o] :
( mvalid
@ ( mimpl @ ( mnot @ ( p @ X ) )
@ ( p
@ ^ [Z: a] :
( mnot @ ( X @ Z ) ) ) ) ) )).
thf(g,definition,
( g
= ( ^ [Z: a,W: $i] :
! [X: a > $i > $o] :
( mimpl @ ( p @ X ) @ ( X @ Z ) @ W ) ) )).
thf(e,definition,
( e
= ( ^ [X: a > $i > $o,Z: a,P: $i] :
! [Y: a > $i > $o] :
( mimpl @ ( Y @ Z )
@ ( mbox @ r
@ ^ [Q: $i] :
! [W: a] :
( mimpl @ ( X @ W ) @ ( Y @ W ) @ Q ) )
@ P ) ) )).
%Conjecture
thf(thm,conjecture,
( mvalid
@ ^ [W: $i] :
! [Z: a] :
( mimpl @ ( g @ Z ) @ ( e @ g @ Z ) @ W ) )).
%------------------------------------------------------------------------------
Example of a THF problem file formulae section (LCL633^1.p).
%------------------------------------------------------------------------------
tff(list_type,type,(
list: $tType )).
tff(nil_type,type,(
nil: list )).
tff(mycons_type,type,(
mycons: ( $int * list ) > list )).
tff(sorted_type,type,(
fib_sorted: list > $o )).
tff(empty_fib_sorted,axiom,(
fib_sorted(nil) )).
tff(single_is_fib_sorted,axiom,(
! [X: $int] : fib_sorted(mycons(X,nil)) )).
tff(double_is_fib_sorted_if_ordered,axiom,(
! [X: $int,Y: $int] :
( $less(X,Y)
=> fib_sorted(mycons(X,mycons(Y,nil))) ) )).
tff(recursive_fib_sort,axiom,(
! [X: $int,Y: $int,Z: $int,R: list] :
( ( $less(X,Y)
& $greatereq(Z,$sum(X,Y))
& fib_sorted(mycons(Y,mycons(Z,R))) )
=> fib_sorted(mycons(X,mycons(Y,mycons(Z,R)))) ) )).
tff(check_list,conjecture,(
fib_sorted(mycons(1,mycons(2,mycons(4,mycons(7,mycons(100,nil)))))) )).
%------------------------------------------------------------------------------
Example of a TFF problem file formulae section (DAT013=1.p).
%------------------------------------------------------------------------------
%Definition of a homomorphism
fof(homomorphism1,axiom,
( ! [X] :
( group_member(X,f)
=> group_member(phi(X),h) ) )).
fof(homomorphism2,axiom,
( ! [X,Y] :
( ( group_member(X,f)
& group_member(Y,f) )
=> multiply(h,phi(X),phi(Y)) = phi(multiply(f,X,Y)) ) )).
fof(surjective,axiom,
( ! [X] :
( group_member(X,h)
=> ? [Y] :
( group_member(Y,f)
& phi(Y) = X ) ) )).
%Definition of left zero
fof(left_zero,axiom,
( ! [G,X] :
( left_zero(G,X)
<=> ( group_member(X,G)
& ! [Y] :
( group_member(Y,G)
=> multiply(G,X,Y) = X ) ) ) )).
%The conjecture
fof(left_zero_for_f,hypothesis,
( left_zero(f,f_left_zero) )).
fof(prove_left_zero_h,conjecture,
( left_zero(h,phi(f_left_zero)) )).
%------------------------------------------------------------------------------
Example of a FOF problem file formulae section (GRP194+1.p).
%------------------------------------------------------------------------------
%Two redundant axioms
cnf(right_identity,axiom,
( multiply(X,identity) = X )).
cnf(right_inverse,axiom,
( multiply(X,inverse(X)) = identity )).
... some clauses omitted here for brevity
cnf(property_of_O2,axiom,
( subgroup_member(X)
| subgroup_member(Y)
| multiply(X,element_in_O2(X,Y)) = Y )).
%Denial of theorem
cnf(b_in_O2,negated_conjecture,
( subgroup_member(b) )).
cnf(b_times_a_inverse_is_c,negated_conjecture,
( multiply(b,inverse(a)) = c )).
cnf(a_times_c_is_d,negated_conjecture,
( multiply(a,c) = d )).
cnf(prove_d_in_O2,negated_conjecture,
( ~ subgroup_member(d) )).
%------------------------------------------------------------------------------
Example of a CNF problem file clause section (GRP039-7.p).
The Arithmetic System
Arithmetic must be done in the context of the THF or TFF typed logics, which
support the predefined atomic numeric types
$int, $rat, and $real.
Using THF or TFF enforces semantics that separate
$int from $rat from $real from $i from $o.
The numbers are unbounded, and the reals are of infinite precision (rather
than some specific implementation such as 32-bit 2's complement integers, or
IEEE floating point).
Systems that implement limited arithmetic must halt in an
SZS error state if they hit overflow.
The following interpreted predicates and interpreted functions are defined.
Each symbol is ad hoc polymorphic over the numeric types
(exception - $quotient is not defined for $int).
All arguments must have the same numeric type.
All the functions, except for $quotient of two $ints, and the coercion functions
$to_int and $to_rat, have the same range type as their arguments.
For example,
$sum can be used with the type signatures
($int * $int) > $int,
($rat * $rat) > $rat,
and
($real * $real) > $real.
The $quotient of two $ints is a $rat.
The coercion function $to_int always has an $int result,
and the coercion function $to_rat always has a $rat result.
All the predicates have a $o result.
For example,
$less can be used with the type signatures
($int * $int) > $o,
($rat * $rat) > $o,
and
($real * $real) > $o.
Symbol
 Operation
 Comments, examples 

$int
 The type of integers
 123, -123
<integer> ::= (<signed_integer> | <unsigned_integer>)
<signed_integer> ::= <sign><unsigned_integer>
<unsigned_integer> ::= <decimal>
<decimal> ::= (<zero_numeric> | <positive_decimal>)
<positive_decimal> ::= <non_zero_numeric><numeric>*
<sign> ::: [+-]
<zero_numeric> ::: [0]
<non_zero_numeric> ::: [1-9]
<numeric> ::: [0-9]

$rat
 The type of rationals
 123/456, -123/456, +123/456
<rational> ::= (<signed_rational> | <unsigned_rational>)
<signed_rational> ::= <sign><unsigned_rational>
<unsigned_rational> ::= <decimal><slash><positive_decimal>
<slash> ::: [/]

$real
 The type of reals
 123.456, -123.456,
123.456E789, 123.456e789,
-123.456E789,
123.456E-789, -123.456E-789
<real> ::= (<signed_real> | <unsigned_real>)
<signed_real> ::= <sign><unsigned_real>
<unsigned_real> ::= (<decimal_fraction> | <decimal_exponent>)
<decimal_exponent> ::= (<decimal> | <decimal_fraction>)<exponent><decimal>
<decimal_fraction> ::= <decimal><dot_decimal>
<dot_decimal> ::= <dot><numeric><numeric>*
<dot> ::: [.]
<exponent> ::: [Ee]

= (infix)
 Comparison of two numbers.
 The numbers must be the same atomic type (see the
type system).

$less/2
 Lessthan comparison of two numbers.
 $less, $lesseq, $greater,
and $greatereq are related by
! [X,Y] : ( $lesseq(X,Y) <=> ( $less(X,Y) | X = Y ) )
! [X,Y] : ( $greater(X,Y) <=> $less(Y,X) )
! [X,Y] : ( $greatereq(X,Y) <=> $lesseq(Y,X) )
i.e., only $less and equality need to be implemented
to get all four relational operators.

$lesseq/2
 Lessthanorequalto comparison of two numbers.

$greater/2
 Greaterthan comparison of two numbers.

$greatereq/2
 Greaterthanorequalto comparison of two numbers.

$uminus/1
 Unary minus of a number.
 $uminus, $sum, and $difference are
related by
! [X,Y] : $difference(X,Y) = $sum(X,$uminus(Y))
i.e., only $uminus and $sum need to
be implemented to get all three additive operators.

$sum/2
 Sum of two numbers.

$difference/2
 Difference between two numbers.

$product/2
 Product of two numbers.


$quotient/2
 Exact quotient of two numbers of the same type.
 For nonzero divisors, the result can be computed.
For zero divisors the result is not specified.
In practice, if an ATP system does not "know" that the divisor
is nonzero, it should simply not evaluate the $quotient.
Users should always guard their use of $quotient using
inequality, e.g.,
! [X: $real] : ( X != 0.0 => p($quotient(5.0,X)) )
For $rat or $real operands the result is the same type.
For $int operands the result is $rat; the result might be simplified.

$quotient_e/2, $quotient_t/2, $quotient_f/2
 Integral quotient of two numbers.
 The three variants use different rounding to an integral result:
 $quotient_e(N,D)  the Euclidean quotient, which
has a nonnegative remainder.
If D is positive then $quotient_e(N,D) is
the floor (in the type of N and D) of the
real division N/D, and if D is negative
then $quotient_e(N,D) is the ceiling of N/D.
 $quotient_t(N,D)  the truncation of the real
division N/D.
 $quotient_f(N,D)  the floor of the real
division N/D.
For zero divisors the result is not specified.
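For example, with $int arguments the three variants round differently (these values follow directly from the definitions above):
```
$quotient_e( 7,2) = 3    $quotient_e(-7,2) = -4    $quotient_e( 7,-2) = -3
$quotient_t( 7,2) = 3    $quotient_t(-7,2) = -3    $quotient_t( 7,-2) = -3
$quotient_f( 7,2) = 3    $quotient_f(-7,2) = -4    $quotient_f( 7,-2) = -4
```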

$remainder_e/2, $remainder_t/2, $remainder_f/2
 Remainder after integral division of two numbers.
 For τ ∈ {$int,$rat,
$real}, ρ ∈ {e,
t,f}, $quotient_ρ and
$remainder_ρ are related by
! [N:τ,D:τ] :
$sum($product($quotient_ρ(N,D),D),$remainder_ρ(N,D)) = N
For zero divisors the result is not specified.

$floor/1
 Floor of a number.
 The largest integral value (in the type of the argument) not greater than
the argument.

$ceiling/1
 Ceiling of a number.
 The smallest integral value (in the type of the argument) not less than
the argument.

$truncate/1
 Truncation of a number.
 The nearest integral value (in the type of the argument) with magnitude
not greater than the absolute value of the argument.

$round/1
 Rounding of a number.
 The nearest integral value (in the type of the argument) to the argument.
If the argument is halfway between two integral values, the nearest even integral
value to the argument.
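To illustrate the four functions on $real arguments (values follow from the definitions above; note the round-half-to-even rule for $round):
```
$floor(2.5)    = 2.0     $floor(-2.5)    = -3.0
$ceiling(2.5)  = 3.0     $ceiling(-2.5)  = -2.0
$truncate(2.5) = 2.0     $truncate(-2.5) = -2.0
$round(2.5)    = 2.0     $round(-2.5)    = -2.0     $round(3.5) = 4.0
```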

$is_int/1
 Test for coincidence with an $int value.


$is_rat/1
 Test for coincidence with a $rat value.


$to_int/1
 Coercion of a number to $int.
 The largest $int not greater than the argument.
If applied to an argument of type $int this is the
identity function.

$to_rat/1
 Coercion of a number to $rat.
 This function is not fully specified.
If applied to an $int the result is the argument over
1.
If applied to a $rat this is the identity function.
If applied to a $real that is (known to be) rational the
result is the $rat value.
For other reals the result is not specified.
In practice, if an ATP system does not "know" that the argument
is rational, it should simply not evaluate the $to_rat.
Users should always guard their use of $to_rat using
$is_rat, e.g.,
! [X: $real] : ( $is_rat(X) => p($to_rat(X)) )

$to_real/1
 Coercion of a number to $real.


The extent to which ATP systems are able to work with the arithmetic
predicates and functions can vary, from a simple ability to evaluate ground
terms, e.g., $sum(2,3) can be evaluated to 5, through
an ability to instantiate variables in equations involving such functions,
e.g., $product(2,$uminus(X)) = $uminus($sum(X,2))
can instantiate X to 2, to extensive algebraic manipulation
capability.
The syntax does not axiomatize arithmetic theory, but may be used to write
axioms of the theory.
The Non-classical Logics
The TPTP recognizes several normal modal logics, and one temporal logic.
Each logic has a defined name:
 Generic modal - $modal
 Alethic modal - $alethic_modal
 Deontic modal - $deontic_modal
 Epistemic modal - $epistemic_modal
 Doxastic modal - $doxastic_modal
 Temporal (instant-based) - $temporal_instant
Connectives
The non-classical connectives of NTF have the form {$name}.
For the logics recognized by the TPTP the connectives are:
 $modal - {$box}, {$dia}
 $alethic_modal - {$necessary}, {$possible}
 $deontic_modal - {$obligatory}, {$permissible},
{$forbidden}
 $epistemic_modal - {$knows}, {$canKnow}
 $doxastic_modal - {$believes}, {$canBelieve}
 $temporal_instant - {$future}, {$past}, {$henceforth},
{$hitherto}
(System names may also be used for user-defined connectives, e.g.,
{$$canadian_conditional}, thus allowing use of the TPTP syntax when experimenting with
logics that have not yet been formalized in the TPTP.)
A connective can be parameterized to reflect more complex non-classical connectives.
The form is
{$name(param_1,param_2,...,param_n)}.
If the connective is indexed the index is given as the first parameter prefixed with a #,
e.g., {$knows(#manuel)},
so that the connective is {$knows(#manuel)} (and not the connective {$knows}
applied to the index #manuel).
All other parameters are keyvalue assignments, e.g., to list the agents of common knowledge the
form might be {$common($agents:=[alice,bob,claire])}.
In NXF the non-classical connectives are applied in a mixed
higher-order-applied/first-order-functional style, with the connectives applied using @
to a ()ed list of arguments.
In NHF the non-classical connectives are applied using @ in the usual higher-order style with
curried function applications.
There are also short form unary connectives for unparameterised {$box} and
{$dia}, applied directly like negation: [.] and <.>, e.g.,
{$box} @ (p) can be
written [.] p.
Short forms and long forms can be used together, e.g., it's OK to use {$necessary} and
[.] in the same problem or formula.
Full specification of the connectives and their use in formulae is in the BNF
starting at <nxf_atom>
and
<thf_defined_atomic>.
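As an illustrative sketch (the predicate wise and the constant geoff are invented for this example, not from any TPTP problem), an NXF formula using the long and short forms of the box connective might be written:
```
tff(wise_long,axiom,
    {$box} @ ( wise(geoff) ) ).
tff(wise_short,axiom,
    [.] wise(geoff) ).
```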
Semantics Specification
An annotated formula with the role logic is used to specify the semantics of formulae.
The semantic specification typically comes first in a file.
A semantic specification consists of the defined name of the logic followed by
== and a list of property value assignments.
Each specification is the property name, followed by == and either a value or a tuple
of specification details.
If the first element of a list of details is a value, that is the default value for all cases
that are not specified in the rest of the list.
Each detail after the optional default value is the name of a relevant part of the vocabulary of
the problem, followed by == and a value for that named part.
The BNF grammar is here.
The grammar is not very restrictive on purpose, to enable working with other logics as well.
It is possible to create a lot of nonsense specifications, and to say the same thing in different
meaningful ways.
A tool to check the sanity of a specification is available.
A semantic specification changes the meaning of things such as the boolean type $o,
universal quantification with !, etc. - their existing meaning in classical logic should
not be confused with their meaning in the declared logic.
For plain $modal and all the *_modal logics the properties that may be
specified are $domains, $designation, $terms, and $modalities.
 $domains
 $constant - Constant domain semantics has the domains of all worlds fixed the same.
 $varying - Varying domain semantics has (potentially) different domains for
each world.
 $cumulative - Cumulative domain semantics is varying, and additionally the domain of each
world is a superset of the domains of the worlds from which it can be reached.
 $decreasing - Decreasing domain semantics is varying, and additionally the domain of each
world is a subset of the domains of the worlds from which it can be reached.
 $designation
 $rigid - Rigid designation independent of worlds.
 $flexible - Flexible designation dependent on worlds.
 $terms
 $local - Terms are interpreted in the local world.
 $global - Terms are interpreted globally in all the worlds.
 $modalities
 A known system name in the form $modal_system_Sys
Sys ∈ { K,KB,K4,K5,K45,KB5,D,DB,D4,D5,D45,M,B,S4,S5,S5U }
 A tuple of known axiom names in the form
[ $modal_axiom_Ax_{1},
$modal_axiom_Ax_{2}, ... ]
Ax_{i} ∈ { K,M,B,D,4,5,CD,BoxM,C4,C }
For $temporal_instant the properties are the $domains, $designation,
and $terms of the modal logic, $modalities with different possible values,
and another property $time.
 $modalities
 A tuple of known axiom names in the form
[ $modal_axiom_AxTm_{1},
$modal_axiom_AxTm_{2}, ... ]
Ax_{i} ∈ { K,M,B,D,4,5 },
Tm_{i} ∈ { +, - },
 $time
 A tuple of known time properties in the form
[ P_{1},P_{2}, ... ]
P_{i}
∈ { $reflexivity, $irreflexivity, $transitivity, $asymmetry, $anti_symmetry,
$linearity, $forward_linearity, $backward_linearity, $beginning, $end, $no_beginning,
$no_end, $density, $forward_discreteness, $backward_discreteness }
The formulae of a problem can be either local (true in the current world) or global (true in all
worlds).
By default, formulae with the roles hypothesis and conjecture are local, and
all others are global.
These defaults can be overridden by adding a subrole, e.g., axiom-$local,
conjecture-$global.
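Putting the pieces together, a modal problem might declare its semantics with a logic formula such as the following sketch (the name simple_s5 and the particular property values are illustrative):
```
tff(simple_s5,logic,
    $modal ==
      [ $domains == $constant,
        $designation == $rigid,
        $terms == $global,
        $modalities == $modal_system_S5 ] ).
```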
Physical Organization
The TPTP is physically organized into six subdirectories, as follows:
Utilities
The TPTP2X Utility
The TPTP2X utility is a multifunctional utility for reformatting,
transforming, and generating TPTP problem files. In particular, it
 Converts from the TPTP format to other ATP formats.
 Applies various transformations to TPTP problems.
 Controls the generation of TPTP problem files from TPTP generator files.
TPTP2X is written in Prolog, and should run on most Prolog systems.
Before using TPTP2X, it is necessary to install the code for the dialect of
Prolog that is to be used.
This and other installation issues are described next.
WARNING:
For historical reasons TPTP2X omits the quotes around some constants
and function symbols that should be 'Quoted', e.g., as found in
PUZ001+2.
I'm really sorry about this - it's an artifact of a very generic printing
framework that became overloaded when THF formulae came into the TPTP.
Changing it would be a HUGE task, and now there's
TPTP4X that is better in most respects.

Installation
The TPTP2X utility consists of the following files:
 tptp2X 
A shell script for running the tptp2X utility.
 tptp2X_install 
A csh script for installing the tptp2X utility.
 tptp2X.config 
The configuration file with site specific information.
 tptp2X.main 
The main source code file of the tptp2X utility.
 tptp2X.read 
Procedures for reading TPTP problem files.
 tptp2X.generate 
Procedures for using TPTP generator files.
 tptp2X.syntax 
Procedures for extracting syntactic measures from files.
 tptp2X.format 
Procedures used by format modules.
 transform.<TRAN> 
Procedures for doing <TRAN> transformations on TPTP problems.
 format.<ATP> 
Procedures for outputting formulae in <ATP> format.
TPTP2X is installed by running tptp2X_install, which will prompt
for required information.
To install TPTP2X by hand, the following must be attended to:
 Copy the *.uninstalled files to their basenames (without
.uninstalled).
 In the tptp2X script file:
 TPTPDirectory must be set to the absolute path name of the
TPTP directory.
 PrologInterpreter must be set to the absolute path name of
the Prolog interpreter.
 PrologArguments must be set to any command line arguments
for the Prolog interpreter.
 The Gawk variable must be set to the absolute path name of
gawk or awk.
 In the tptp2X.config file:
 The appropriate facts for the desired Prolog dialect must be
uncommented.
 The absolute path name of the TPTP directory must be set in the
tptp_directory/1 fact.
 In the tptp2X.main file:
 If your Prolog interpreter does not support compile/1 for
loading source code, the compile/1 directives must be
changed to something appropriate, e.g., [ ].
 The compile directives for unwanted output format modules
can be commented out.
Using TPTP2X
The most convenient way of using the TPTP2X utility is through the
tptp2X script.
The use of this script is:
tptp2X [-h] [-q<Level>] [-i] [-s<Size>] [-t<Transform>] [-f<Format>] [-d<Directory>] [-l<NamesFile>] <TPTPFiles>
The -h flag specifies that usage information should be output.
The -q<Level> flag specifies the level of quietness at which the
script should operate.
There are three quietness levels;
0 is verbose mode,
1 suppresses informational output from the Prolog interpreter, and
2 suppresses all informational output except lines showing what files
are produced.
The default quietness level is 1.
The -i flag specifies that the script should enter interactive
mode after processing all other command line parameters.
Interactive mode is described below.
The other command line parameter values are:
 -s<Size> : This specifies the required sizes when
generating problems. <Size> is either an integer,
a <Low>..<High> integer size range, or
a : separated list of <Sizes>.
 An integer directly specifies the required problem size.
 Each integer in a size range is used to generate a separate set
of formulae.
 A : separated string of <Sizes> is used for generators
that require multiple parameters, one <Size> for each size
parameter required. For example, -s3..5:2 means the three
sizes 3:2, 4:2, and 5:2.
-s<Size> is ignored for problem (.p) files.
 -t<Transform> : Specifies the transformation to be applied
to the problems.
 -f<Format> : Specifies the format in which the
output is to be written.
 -d<Directory> : Specifies the top level directory below
which the output files are to be placed.
If the <Directory> value is -, then all output files are
written to standard output, and all informational output is suppressed
(even quieter than -q2).
If the input file has a TPTP domain name as its first three characters,
the output files for that input file are placed in a subdirectory below
the <Directory>, named according to the domain.
Otherwise the output files are written in the <Directory>.
The default <Directory> is a subdirectory below the TPTP
Problems directory, named according to the <Format>.
 -l<NamesFile> : Specifies a file containing the names
of the input files which are to be processed, one per line.
This can be used instead of an explicit list of files, described next.
 <TPTPFiles> : Lists the input files which are to be processed.
The TPTP problem name can be given in which case TPTP2X will
find it in your TPTP installation.
Three letter domain names can be given, which processes all problems
in those domains.
If no files are specified, the entire TPTP is processed.
If the file name - is specified then input is taken from
standard input and all output is written to standard output (overriding
any <Directory> specification).
The default <TPTPFiles> is all TPTP problem files.
Hint: If your command shell complains about too many arguments as a
result of expanding the <TPTPFiles> argument to a too large number
of files, e.g., ~/TPTP/Problems/S*/*.p, place the
<TPTPFiles> argument in 'single quotes', and
TPTP2X will do the expansion internally.
The output files produced by TPTP2X are named according to the input file
name and the options used.
The TPTP problem name (the input file name without the .p or
.g) forms the basis of the output file names.
For files created from TPTP generators, the size parameters are appended to
the base name, separated from the base name by a ".".
Then, for each transformation applied, a
suffix is appended preceded by a +.
Finally an extension to indicate the
output format is appended to the file name.
To record how a TPTP2X output file has been formed, the TPTP2X parameters
are given in a % Comments field entry of the output file.
TPTP2X Transformations
The transformations are:
 stdfof,
to remove the connectives <=, <~>,
~|, and ~& from FOF problems, by
rewriting formulae to use the other connectives.
Suffix +stdfof.
 expand:conjectures,
to expand a single FOF problem with multiple conjectures into
multiple FOF problems with single conjectures.
Suffix +xc_<Index>_<conjecture_name>.
 clausify:<Algorithm>,
to transform FOF problems to CNF problems.
The <Algorithm> is one of:
Details of these algorithms can be found in
[SM96].
Note that these transformations do not simplify the resultant
clause set; see the next two transformations for this.
Suffix +cls_<Algorithm>.
 simplify,
to simplify a set of clauses.
Details of the simplifications performed can be found in
[SM96].
Suffix +simp.
 cnf:<Algorithm>,
to do clausify:<Algorithm> followed by
simplify.
Details of the performance of these transformations can be
found in [SM96].
Generally cnf:otter and cnf:bundy produce
clause sets with lower symbol counts than the other two.
Suffix +cnf_<Algorithm>.
 fofify:obvious,
to convert a set of first-order clauses to a set of first-order
formulae, converting negated_conjecture clauses into
a corresponding conjecture.
The transformation is very naive.
Suffix +fof_obvious.
 axiomate,
to remove all conjecture formulae.
Suffix +axed.
 tff2fof[:<Inequalities>[:<KnownTypes>]],
to convert TFF formulae to FOF.
The value of :<Inequalities> can be either with or
without, to indicate whether or not axioms that specify
that terms from different domains are unequal should be generated.
The default is with.
The value of :<KnownTypes> is a []ed list of domains
for which type guards are not needed.
The default is [$i,$int,$rat,$real], i.e., assuming that the
output will be given to a system that knows about terms and the numeric
types.
Suffix +tff2fof.
 propify,
to convert a set of first-order clauses to a set of propositional
clauses, preserving satisfiability.
The transformation is very naive.
Suffix +prop.
 lr,
to reverse the literal ordering in the clauses of CNF problems.
Suffix +lr.
 cr,
to reverse the clause ordering in CNF problems.
Suffix +cr.
 clr,
to do both clausal reversals.
Suffix +clr.
 fr,
to reverse the formula ordering in FOF problems.
Suffix +fr.
 random,
to randomly reorder the clauses and literals in CNF problems,
or to randomly reorder the formulae in FOF problems.
The rearrangement of formulae, clauses, and literals in a problem
facilitates testing the sensitivity of an ATP system to the order
of presentation.
Suffix +ran.
 er,
to reverse the arguments of unit equality clauses in CNF problems.
Suffix +er.
 ran_er,
to reverse the arguments of randomly selected unit equality
clauses in CNF problems.
Suffix +ran_er.
 add_equality[:<Add>],
to add missing equality axioms to problems that contain equality.
If the optional :<Add> flags are omitted, then all
equality axioms are added.
If the optional :<Add> flags are included, then
<Add> is a string that indicates which equality axioms to
add.
The characters that can be in the string are as for the
rm_equality transformation.
Suffix +add_<Add>.
 rm_equality[:<Remove>],
to remove equality axioms.
If the optional :<Remove> flags are omitted, then all
equality axioms are removed.
If the optional :<Remove> flags are included, then
<Remove> is a string that indicates which equality axioms to
remove.
The characters that can be in the string are:
 r, to remove reflexivity,
 s, to remove symmetry,
 t, to remove transitivity,
 f, to remove function substitution,
 p, to remove predicate substitution.
For example, -t rm_equality:stfp would remove symmetry,
transitivity, function substitution, and predicate substitution.
This transformation works only if the equality axiomatization is
complete (i.e., including the axioms of reflexivity, symmetry,
transitivity, function substitution for all functors, and predicate
substitution for all predicate symbols).
Suffix +rm_eq_<Remove>.
 set_equality[:<Set>],
to set the equality axioms in problems that contain equality.
If the optional :<Set> flags are omitted, then all
equality axioms are set.
If the optional :<Set> flags are included, then
<Set> is a string that indicates which equality axioms to
set.
The characters that can be in the string are as for the
rm_equality transformation.
Suffix +set_eq_<Set>.
 to_equality,
to convert CNF problems to a pure equality representation.
Every nonequality literal is converted to an equality literal
with the same sign.
The arguments of the new equality literal are the atom of the
nonequality literal and the constant true.
Suffix +2eq.
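For example (an illustrative clause, not from the TPTP), the to_equality transformation rewrites a clause as follows:
```
%----Before the transformation
cnf(example,axiom,
    ( p(X) | ~ q(a) ) ).
%----After the transformation (+2eq)
cnf(example,axiom,
    ( p(X) = true | q(a) != true ) ).
```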
 ifof:<Axioms> to convert intuitionistic propositional
formulae to FOF.
The <Axioms> specify which axiomatic basis to use.
The options are:
Suffix +ifof_<Axioms>.
 mfof:<Logic>:<Axioms> to convert modal
propositional formulae to FOF.
The <Logic> specifies which modal logic to use.
The options are:
 mm for multimodal logic
 k
 d
 m
 b
 s4
 s5
The <Axioms> specify which modal logic axioms to use.
The options are:
 benzmueller
 k
 d
 m
 b
 s4
 s5
Suffix +mfof_<Logic>_<Axioms>.
 magic,
to apply Mark Stickel's magic set transformation
[Sti94]
to CNF problems.
Suffix +magic.
 shorten,
to replace all the symbols in the problem by short, meaningless
symbols.
This is useful if you are only interested in the syntax of the
problem, and do not want to read through the long, meaningful
symbols that are often used in TPTP problems.
Note that equality atoms are not affected.
Suffix +shorten.
 none,
to do nothing (same as omitting the -t parameter, but
required for advanced use).
No suffix.
The default transformation is none.
TPTP2X Output Formats
The available output formats are:
 bliksem,
to convert CNF problems to the Bliksem format
[deN98].
Suffix .blk.
 dedam
to convert CNF unit equality problems to the Dedam format
[NRV97].
Suffix .dedam.
 dfg,
to convert CNF problems to the DFG format
[HKW96].
Suffix .dfg.
 dimacs,
to convert propositional CNF problems to the DIMACS format
[DIM].
Suffix .dimacs.
 eqp
to convert CNF unit equality problems to the EQP format
[McC00].
Suffix .eqp.
 geo,
to convert problems to the Geo format
[dNM06].
Suffix .geo.
 leancop,
to convert problems to the leanCoP format
[OB03].
Suffix .leancop.
 lf
to convert problems to the LF format
[Pfe89].
Suffix .lf.
 metitarski,
to convert problems to the MetiTarski format
[AP10].
Suffix .mski.
 otter:<SoS>:'<Otter options>',
to convert FOF and CNF problems to the Otter .in format
[McC94].
<SoS> specifies the SetofSupport to use. It can be one of:
 conjecture,
to use the formulae whose type is conjecture
 not_conjecture,
to use the formulae whose type is not conjecture
 hypothesis,
to use the formulae whose type is hypothesis or
conjecture
 axioms,
to use the formulae whose type is axiom
 positive,
to use the positive clauses
 negative,
to use the negative clauses
 unit,
to use the unit clauses
 all,
to use all formulae
 none,
to use no formulae (needed for Otter's auto mode)
'<Otter options>' is a quoted (to avoid
UNIX shell errors), comma separated list of Otter options, which
will be output before the formula lists.
See the Otter Reference Manual and Guide
[McC94]
for details of the available options.
For example,
-f otter:none:'set(auto),assign(max_seconds,5)'
would configure Otter to use its auto mode with a
time limit of 5 seconds.
As the auto mode is commonly used with Otter, the TPTP2X
script allows the abbreviation -f otter to specify
-f otter:none:'set(auto),set(tptp_eq),clear(print_given)'.
If no -t parameter is specified then -f otter also
sets -t equality:stfp.
Suffix .in.
 prover9:<Options>,
to convert problems to the Prover9 format
[McCURL].
The <Options> specifies any Prover9 options.
Suffix .in.
 smt2,
to convert TFF and FOF problems to the SMT 2.0 format
[BST10].
Suffix .smt2.
 tps,
to convert THF problems to the TPS format
[AB06].
Suffix .tps.
 tptp,
to convert FOF and CNF problems to the TPTP format
[SZS03].
Suffix .tptp.
 waldmeister,
to convert CNF unit equality problems to the Waldmeister format
[HBF96].
Suffix .pr.
The default format is tptp.
Examples
~/TPTP/TPTP2X> tptp2X -t fr,random -f prover9 PUZ001+1
This applies two separate transformations to the problem
PUZ001+1.
The transformations are formula order reversal and formula order randomizing.
The transformed problems are output in prover9 format, in a directory
prover9/PUZ below the current directory.
The file names are PUZ001+1+fr.in and PUZ001+1+ran.in.
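The file names in this example follow TPTP2X's compositional naming scheme: the problem name, then one suffix per applied transformation, then the output format's suffix. As a minimal illustrative sketch (not TPTP2X's actual code, which is in Prolog; the function name is invented):

```python
def output_file_name(problem, transform_suffixes, format_suffix):
    # Compose a TPTP2X-style output file name: the problem name,
    # followed by the suffix of each applied transformation,
    # followed by the output format's suffix.
    return problem + "".join(transform_suffixes) + format_suffix
```

For example, output_file_name("PUZ001+1", ["+fr"], ".in") yields PUZ001+1+fr.in, matching the file names above.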
~/TPTP/TPTP2X> tptp2X -q2 -s3..5 -f otter -d ~tptp/tmp SYN001-1.g
This generates three sizes of the generic problem SYN001-1.
The sizes are 3, 4, and 5.
The output files are formatted for Otter, to use its auto mode.
The files are placed in ~tptp/tmp/SYN, and are named
SYN001-1.003.in, SYN001-1.004.in, and
SYN001-1.005.in.
The quietness level is set to 2, thus suppressing all informational output
except the lines showing what files are produced.
Note that the file SYN001-1.g is automatically found in the generators
directory.
~/TPTP/TPTP2X> tptp2X -t magic+shorten -- < ~tptp/TPTP/Problems/GRP/GRP001-1.p
This uses the TPTP2X script as a filter, to apply the non-Horn magic set
transformation and then the symbol shortening transformation to
GRP001-1.p.
GRP001-1.p is fed in from standard input, and the output is written to
standard output (thus all informational output is suppressed).
The TPTP2X Interactive Mode
If the -i flag is set, the TPTP2X script enters interactive mode after
processing all other command line parameters. In interactive mode the user can
change the value of any of the command line parameters.
The user is prompted for the <TPTPFiles>, the <Size>,
the <Transform>, the <Format>, and the <Directory>.
In each prompt the current value is given.
The user can respond by specifying a new value or by entering <cr>
to accept the current value.
The prompt-respond loop continues for each parameter until the user
accepts the value for the parameter.
Running TPTP2X from within Prolog
The TPTP2X utility can also be run from within a Prolog interpreter.
The tptp2X.main file has to be loaded, and the entry point is then
tptp2X/5, in the form:
?- tptp2X(<TPTPFile>,<Size>,<Transform>,<Format>,<Directory>).
The parameters are similar to the TPTP2X script
command line parameters.
A summary, highlighting differences with "(!)", is given here.
 <TPTPFile> is the name of a single TPTP file.
If the file name is not absolute, then it is considered to be relative
to the directory specified in the tptp_directory/1 fact in the
tptp2X.config file (!).
If the file name is user (!), then input is taken from standard
input.
 <Size> is either
an integer,
a : separated string of <Size>s,
a <Low>..<High> integer size range,
or a Prolog list of <Size>s (!).
Each <Size> in a Prolog list of <Size>s is used to generate
separate sets of formulae.
 <Transform> is either
a single transformation specifier,
a + separated string of <Transform>s, or
a Prolog list (!) of <Transform>s.
 <Format> is an output format or a Prolog list (!) of output formats.
An output file is written for each output format specified.
For the otter format, the syntax is
otter:<SoS>:[<Otter options>] (!),
i.e., the Otter options form a Prolog list.
 <Directory> specifies the directory in which the output files
are to be placed. If the <Directory> is user (!) then
output is sent to standard output.
Writing your own Transformations and Output Formats
The transformations and output formatting are implemented in Prolog, in the
files transform.<TRAN> and format.<ATP>,
respectively.
It is simple to add new transformations and output formats to the TPTP2X
utility, by creating new transformation and format files.
New files should follow the structure of the existing files.
Typically, a new file can be created by modifying a copy of one of the
existing files.
The entry point in a transformation file is <Transform>/6.
The first three arguments are inputs: a list of the problem's formulae,
the variable dictionary (a bit complicated), and the transformation
specification.
The next three arguments are outputs: the transformed formulae, the
transformed variable dictionary (typically the same as the input
dictionary), and the transformation suffix.
As well as the <Transform>/6 entry point, a
<Transform>_file_information/2 fact must be provided.
The two arguments of the <Transform>_file_information/2 fact are
the atom transform and a description of the possible transformation
specifications (as used in the third argument of <Transform>/6).
See the existing transform.<TRAN> files for examples.
The entry point in a format file is <Format>/3.
The three arguments are inputs: the format specification, a list of the
problem's formulae, and the file header information.
(It is not necessary to output a file header here; this information is
available only for producing supplementary documentation.)
As well as the <Format>/3 entry point, a
<Format>_format_information/2 fact and a
<Format>_file_information/2 fact must be provided.
The two arguments of the <Format>_format_information/2 fact
are a character that can be used to start a comment in the output file and
the format extension, both as Prolog atoms.
The two arguments of the <Format>_file_information/2 fact are
the atom format and a description of the possible format
specifications.
See the existing format.<ATP> files for examples.
New transformation and format files must be compiled in with the other
TPTP2X files.
This is done by adding a compile query in the tptp2X.main file,
alongside the queries that compile in the existing files.
If you need help, please contact the TPTP developers.
Writing your own Generators
The TPTP generators are implemented in Prolog.
It is simple to write new generators.
New files should follow the structure of the existing files.
The entry point in a generator file is <Problem name>/3, where
<Problem name> is the file name without the .g suffix.
The first argument is an input: the size parameter for generation.
The second and third arguments are outputs: the formulae generated and
the % Status of the formulae.
The formulae must be a Prolog list of formulae in TPTP format.
A <Problem name>_file_information/3 fact must also be provided.
The three arguments of the fact are the atom generator, a description
of the possible size parameters, and the TPTP size for this problem (this is
hard to determine!).
See the existing generator files for examples.
If you need help, please contact the TPTP developers.
The TPTP4X Utility
The TPTP4X utility is a multifunctional utility for reformatting,
transforming, and generating TPTP problem files.
It is the successor to the TPTP2X utility, and offers some of the same
functionality, and some extra functionality.
TPTP4X is written in C, and is thus faster than TPTP2X.
The TPTP4X usage is: tptp4X <options> <files>.
The options are (defaults in ()s):
 -q<quietness> - control amount of output (1)
 -d<directory> - output directory (stdout)
 -f<format> - output format (tptp)
   tptp - long tptp format
   tptp:short - short tptp format
 -t<transform> - transform the formulae (none)
   add_equality - adds all axioms of equality
   add_equality:rstfp - adds selected axioms of equality
   fofify -V - make universally quantified fof
   fofify:! -V - universally quantify fof
   noint - rename integers to constants
   aritize - make function and predicate names unique by arity
   dequote - make function and predicate names unquoted
   numbernamesN - add N digit extension to formula names
   uniquenamesN - add N digit extension to duplicate formula names
   randomize - randomize formulae and their order
   negate_conjectures - negate all conjectures
 -u<user> - user type (human)
   human - pretty printed with indenting
   machine - one line per formula
 -r<delay> - delay between formula output (none)
   fixed - fixed delay of that many milliseconds
   min:max - random delay from min to max milliseconds
 -V - allow free variables (no)
 -N - allow duplicate formula names (no)
 -x - expand includes (no)
 -c - clean away non-logicals (no)
 -w - warnings (no)
 -z - SZS ontology status output (no)
 -v - print version number
 -h - print this help
The TPTP2T Script
The TPTP2T script selects lines from the
ProblemAndSolutionStatistics
file, by problem and solution characteristics.
The TPTP2T script is written in Perl.
Installation
Installation of the TPTP2T utility requires simple changes in the
tptp2T script.
These changes can be made by running tptp2T_install, which will prompt
for required information.
Using TPTP2T
The use of this script is:
tptp2T [-f FileName] [-q1 or -q2 or -q3] [-pp or -ps or -pps] {[-]Constraint {[and/or] [-]Constraint}}
 The optional -f <ProblemFile> flag specifies the name
of a file containing TPTP problem names.
tptp2T will select statistics file lines for only those problems
whose names appear in the <ProblemFile>.
The problem names (without the .p extension) must appear one
per line in the <ProblemFile>, and lines that start with
# are ignored.
 The optional -q flag sets quietness:
1=continuous update, 2=final count (default), 3=quiet.
 The optional -p flag indicates what output is required.
-pp prints problem lines,
-ps prints solution lines,
-pps prints both.
(Defaults: only problem constraints = -pp,
only solution constraints = -ps,
both types of constraints = -pps.)
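The <ProblemFile> format is simple: one problem name per line (without the .p extension), with # comment lines ignored. A hedged Python sketch of a reader (illustrative only; the real tptp2T is written in Perl, and this function name is invented):

```python
def read_problem_names(path):
    # Read a tptp2T-style <ProblemFile>: one problem name per line
    # (without the .p extension); lines starting with '#' and blank
    # lines are ignored.
    names = []
    with open(path) as problem_file:
        for line in problem_file:
            line = line.strip()
            if line and not line.startswith("#"):
                names.append(line)
    return names
```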
The Constraints specify required problem and solution
characteristics that must be satisfied for the statistics file
line(s) to be selected.
There are two sets of constraints, problem constraints and solution
constraints, which apply to problem lines and solution lines separately.
For a solution line to be printed, the problem line it accompanies must
pass all problem constraints, if any are provided.
Different sets of solution constraints can be applied to different systems
individually.
 An optional - (hyphen) negates the meaning of any constraint.
 or allows for logical or between constraints.
 and allows for logical and between constraints.
 A space between constraints is treated as an and.
 { } allow for grouping of terms.
 For constraints in which an upper and lower bound are required,
a - (hyphen) can be used to indicate don't care.
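The combination rules above (optional - negation, and/or connectives, with a bare space acting as an implicit and) can be sketched for the ungrouped case. This illustrative Python evaluator is an assumption about left-to-right semantics, not tptp2T's actual implementation, and it omits { } grouping:

```python
def eval_constraints(tokens, satisfied):
    # Evaluate a flat tptp2T-style constraint expression left to right.
    # tokens: constraint names, optionally prefixed with '-' to negate,
    #         with optional "and"/"or" connectives between them.
    # satisfied: dict mapping constraint name -> whether the statistics
    #            line satisfies that constraint.
    result = None
    connective = "and"
    for token in tokens:
        if token in ("and", "or"):
            connective = token
            continue
        value = satisfied[token.lstrip("-")]
        if token.startswith("-"):
            value = not value          # '-' negates the constraint
        if result is None:
            result = value
        elif connective == "and":
            result = result and value
        else:
            result = result or value
        connective = "and"             # a bare space means "and"
    return result
```

For example, ["-Equality", "Arithmetic"] selects lines without equality that use arithmetic.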
A problem Constraint is selected from:
 Domains ALG ANA ... TOP
Only the listed problem domains.
 Form <Form>
One of THF, TXF, TFF, FOF,
CNF, ANY. Default is ANY.
 Status SZSStatus
One of Theorem, CounterSatisfiable,
Unsatisfiable, Satisfiable,
Unknown, Open.
 Rating Min Max
In the range 0.00 (easiest) to 1.00 (hardest)
 Formulae Min Max
The number of formulae in the problem.
 UnitFormulae Min Max
The number of unit formulae
 TypeFormulae Min Max
The number of type formulae
 Atoms Min Max
The number of atoms.
 EqualityAtoms Min Max
The number of equality atoms.
 Equality
Only problems that use equality.
 PureEquality
Only problems that use only equality.
 UnitEquality
Only problems with only unit equality clauses/formulae.
 FOOLs
Only problems with FOOL logic atoms/terms.
 Arithmetic
Only problems that use arithmetic.
 Types Min Max
The number of types used
 Symbols Min Max
The number of distinct symbols
 Predicates Min Max
The number of predicates
 MinimalPredicateArity Min Max
The minimal predicate arity
 MaximalPredicateArity Min Max
The maximal predicate arity in TFF/FOF/CNF problems.
 Propositional
Only propositional (only predicates of arity 0) problems.
 Functions Min Max
The number of functions
 MinimalFunctionArity Min Max
The minimal function arity
 MaximalFunctionArity Min Max
The maximal function arity
 Variables
Only problems with variables.
 PIVariables
Only problems with PI variables.
A solution <Constraint> is selected from:
 System Name[Version]
ATP system which found a solution (ANY for any system).
 Result SZSStatus (Any SZS value, e.g., THM)
Solutions with this result, or a result lower in the SZS ontology.
 ResultTime Min Max
Solutions found in this time range.
 Output SZSDataform (Any SZS value, e.g., Ref)
Solutions with this output, or an output lower in the SZS ontology.
 SolutionFormulae Min Max
The number of formulae in the solution.
 SolutionLeaves Min Max
The number of leaves in the solution (for DAGs).
 SolutionDepth Min Max
The depth of the solution (for DAGs, from an axiom to a root).
 SolutionAtoms Min Max
The number of atoms in the solution.
 Equality
Only solutions that use equality.
 Arithmetic
Only solutions that use arithmetic.
 Selectivity Min Max
The ratio of solution leaves to problem formulae.
 Girth Min Max
The ratio of solution leaves to solution depth.
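The DAG statistics above (SolutionLeaves, SolutionDepth, and the derived Selectivity and Girth ratios) can be sketched as follows. This is an illustrative computation over a hypothetical encoding of a proof DAG, not tptp2T's code:

```python
def solution_stats(premises, problem_formulae):
    # premises: dict mapping each formula in the solution DAG to the
    # list of formulae it was derived from ([] for leaves, i.e. the
    # problem formulae used).  problem_formulae: formula count of the
    # problem, as used for Selectivity.
    depth = {}

    def node_depth(node):
        # Longest path from a leaf up to this node.
        if node not in depth:
            parents = premises[node]
            depth[node] = 1 + max(map(node_depth, parents)) if parents else 0
        return depth[node]

    leaves = [n for n, ps in premises.items() if not ps]
    roots = [n for n in premises
             if not any(n in ps for ps in premises.values())]
    solution_depth = max(node_depth(r) for r in roots)
    selectivity = len(leaves) / problem_formulae   # leaves / problem formulae
    girth = len(leaves) / solution_depth           # leaves / solution depth
    return len(leaves), solution_depth, selectivity, girth
```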
Getting and Using the TPTP
Quick Installation Guide
This section explains how to obtain and install the TPTP, and how to
syntax-convert the TPTP problems.
Obtaining the TPTP
The distribution consists of two files:
 ReadMe-v9.0.0
contains an overview of the TPTP.
 TPTP-v9.0.0.tgz
(879.0 MByte, 9.8 GByte unpacked) contains
the library (including a copy of the
ReadMe-v9.0.0 file).
There might also be a file called
BuggedProblems-v9.0.0,
containing a list of files that have been found to contain errors in
the current release (bugs that have been discovered after the release has
been distributed).
These files are available online.
Installing the TPTP
prompt> tar xzf TPTP-v9.0.0.tgz
prompt> cd TPTP-v9.0.0
prompt> ./Scripts/tptp2T_install
... the script will then ask for required information
prompt> ./TPTP2X/tptp2X_install
... the script will then ask for required information
If you do not have a Prolog system installed, you need to get one first.
Both utilities can be installed in a default configuration by appending a
default flag to the install command.
Appropriate use of the TPTP allows developers and users to meaningfully
evaluate ATP systems.
To that end, the following guidelines for using the TPTP and presenting
results should be followed.
 The TPTP release number must be stated.
 Each problem must be referenced by its unambiguous syntactic name.
 The problem formulae should, as far as is possible, not be changed
in any way.
Any changes made (addition, removal, reordering, reformatting, etc.)
must be explicitly noted.
 Any information given to the ATP system, other than that in the
formulae, must be explicitly noted.
All system switches and settings must be recorded.
The header information in each problem may not be used by the ATP
system without explicit notice.
Abiding by these conditions will allow unambiguous identification of the
problem, the formulae used, and further input to the ATP system.
If you follow the rules, please make it clear in any presentation of your
results, by an explicit statement.
We propose that you state
"These results were obtained in compliance with the guidelines for
use of the TPTP".
By making this clear statement, ATP researchers are assured of your
awareness of our guidelines.
Conversely, it will become clear when the guidelines might have been ignored.
Please contact the TPTP developers if:
 You find any mistakes in the TPTP.
 You are able to provide further information for a TPTP problem.
 You want to contribute a problem to the TPTP.
Please use the problem template that comes with the
distribution.
Fill in header information as far as possible.
Any unambiguous representation will do for the formulae.
 You have any suggestions for improving the TPTP library.
Conclusion
Current Activities
The collection of more problems is continuing.
Everybody is invited to submit problems for inclusion.
Current proposals for extensions to the TPTP are online at
http://www.tptp.org.
Comments are welcome.
If you'd like to be kept informed of developments, join the
TPTP World Google Group.
Acknowledgments
We are indebted to the following people and organizations who have helped with the construction
of the TPTP.
For contributing problems:
Argonne National Laboratory,
AR Research Group at TU München,
Negin Arhami,
Alessandro Armando,
Javier Alvez,
Rob Arthan,
Jaroslav Barta,
Peter Baumgartner,
Bjoern Pelzer,
Dan Benanav,
Alexander Bentkamp,
Christoph Benzmüller,
Marc Bezem,
Jasmin Blanchette,
Woody Bledsoe,
Conrad Bock,
Harald Boley,
Johan Bos,
Marc Boule,
Chad Brown,
Cristóbal Camarero,
Marco Caminati,
David Cerna,
Vinay Chaudri,
Koen Claessen,
Simon Colton,
Simon Cruanes,
David Deharbe,
Gerard de Melo,
Louise Dennis,
Alexander Dvorsky,
Christian Fermüller,
JeanChristophe Filliâtre,
Bernd Fischer,
Paul Fodor,
Deepak Garg,
Thibault Gauthier,
Tim Geisler,
Keith Goolsbey,
Sylvia Grewe,
Johan Gustafsson,
Jay Halcomb,
John Harrison,
Lifeng He,
Stefan Hetzl,
Thomas Hillenbrand,
Tim Hinrichs,
Peter Hoefner,
Arjen Hommersom,
Jack Horner,
Ullrich Hustadt,
Mikoláš Janota,
Thomas Jech,
Milene Johns,
Cezary Kaliszyk,
Mark Kaminski,
Zurab Khasidashvili,
Daniel Kuehlwein,
Boris Konev,
Laura Kovacs,
Ivan Kossey,
Evgeny Kotelnikov,
Tobias Kuhn,
Lars Kulik,
Timm Lampert,
Rob Lewis,
Daniel Li,
Di Long Li,
Meng Luan,
Johan Martensson,
Takuya Matsuzaki,
Bill McCune,
Andreas Meier,
Jia Meng,
Esteban Morales,
Cláudia Nalon,
Julian Parsert,
Juan Antonio Navarro Perez,
Georg Neis,
Jens Otten,
Andrei Paskevich,
Dominique Pastre,
Adam Pease,
David Plaisted,
Larry Paulson,
Art Quaife,
Pedro Quaresma,
Florian Rabe,
Silvio Ranise,
Thomas Raths,
Pace ReaganSmith,
Martin Riener,
Philipp Rümmer,
Johannes Schoisswohl,
Alberto Segre,
Renate Schmidt,
Michael Schneider,
Johannes Schoisswohl,
Stefan Schulz,
Martina Seidl,
Ted Sider,
John Slaney,
Nick Smallbone,
David Stanovsky,
Graham Steel,
Alexander Steen,
Mark Stickel,
Nik Sultana,
Hans Svensson,
Tanel Tammet,
Alwen Tiu,
Steven Trac,
Josef Urban,
Bob Veroff,
Petar Vukmirovic,
Uwe Waldmann,
Mark Wallace,
Peter Watson,
Christoph Weidenbach,
Christoph Wernhard,
Michael Wessel,
Patrick Wischnewski,
Thomas Wies,
Bruno Woltzenlogel Paleo,
Aparna Yerikalapudi,
Jian Zhang.
For helping with problems and/or pointing out errors:
Geoff Alexander,
Johan Belinfante,
Alexander Bentkamp,
Ahmed Bhayat,
Nikolaj Bjorner,
Woody Bledsoe,
Maria Poala Bonacina,
Heng Chu,
Koen Claessen,
Ingo Dahn,
Morgan Deters,
Damien Doligez,
Marisol Fernandez,
Alexander Fuchs,
Matthias Fuchs,
Tim Geisler,
John Harrison,
John Hester,
Thomas Jech,
Konstantin Korovin,
Ivan Kossey,
Harald Ganzinger,
Mark Lemay,
Reinhold Letz,
Ann Lilliestrom,
Thomas Ludwig,
Klaus Mayr,
Bill McCune,
Monty Newborn,
Xumin Nie,
Andrei Paskevich,
Dominique Pastre,
Jeff Pelletier,
Petr Pudlak,
Art Quaife,
Dimitris Raptis,
Michael Raskin,
Thomas Raths,
Piotr Rudnicki,
Masahiro Sakai,
Michael Schneider,
Len Schubert,
Stephan Schulz,
Alberto Segre,
Stuart Shapiro,
Guy Shefner,
Andrés SicardRamírez,
Nick Siegel,
John Slaney,
Nick Smallbone,
Alex Steen,
Mark Stickel,
Christoph Sticksel,
Martin Suda,
Nik Sultana,
Bob Veroff,
Russell Wallace,
TC Wang,
Christoph Weidenbach,
Hantao Zhang.
For support regarding the utilities:
Jesse Alma,
Stefan Berghofer,
Marc Bezem,
Lucas Dixon,
Bernd Fischer,
Alexander Fuchs,
Kal Hodgson,
Jose Morales,
Max Moser,
Gerd Neugebauer,
Vladimir Pavlov,
Grzegorz Prusak,
Petr Pudlak,
Alex Roederer,
Paul Tarau,
Frank Theiss,
Abdul Sattar,
Renate Schmidt,
Andrés SicardRamírez,
Jing Tang,
Andrei Tchaltsev,
Christoph Weidenbach,
Makarius Wenzel,
Jürgen Zimmer.
For general advice and comments:
Serge Autexier,
Jasmin Blanchette,
Reiner Hähnle,
Joe Hurd,
Reinhold Letz,
Selene Makarios,
Diego Páez,
Andrei Paskevich,
Jeff Pelletier,
Florian Rabe,
Stephan Schulz,
Mark Stickel,
Allen van Gelder,
Uwe Waldmann,
Russell Wallace,
Christoph Walther.