Call for Participation: the 2020 SAT Competition
The 2020 SAT Competition is a competitive event for solvers of the Boolean Satisfiability (SAT) problem. It is organized as a satellite event to the 23rd International Conference on Theory and Applications of Satisfiability Testing and stands in the tradition of the yearly SAT Competitions, Races, and Challenges. The deadline for submitting benchmarks and solvers is Tuesday, March 31, 2020 (23:59 GMT -12).
The area of SAT solving has seen tremendous progress over the last years. For example, many problems in hardware and software verification that seemed completely out of reach a decade ago can now be handled routinely. Besides new algorithms and better heuristics, refined implementation techniques have turned out to be vital for this success.
To sustain this driving force in improving SAT solvers, we want to motivate developers to present their work to a broader audience and to compare it with that of others.
The 2020 SAT Competition will consist of the following tracks:
- Main Track (including a Glucose-Hack subtrack)
- Parallel Track
- Cloud Track (new)
- Incremental Track
- Planning Track (new)
- No-Limits Track
New this year
- Cloud track: Amazon Web Services is enabling and sponsoring a “cloud track” for the SAT Competition in 2020. The solvers in this track will run on 100 16-core computers, each with 64 GB of RAM, that can communicate using standard protocols such as ssh and MPI (see the sketch after this list), with a timeout of 1000 seconds (wall-clock time) per benchmark problem. Amazon Web Services will provide $1000 in AWS credits for building and evaluating solvers, which should be more than enough for development and testing purposes. The benchmark suite will consist of hard instances, both old (instances that went unsolved in prior competitions) and new ones, including formulas provided by AWS and participating teams. See the details online.
- Planning track: Apart from evaluating solvers on a broad range of benchmarks, the SAT Competition will for the first time feature a track dedicated to a single application of SAT solvers. The application chosen for 2020 is Automated Planning. This track includes 100 satisfiable and 100 unsatisfiable formulas of varying size and difficulty. All of the formulas encode either classical or HTN (Hierarchical Task Network) planning problem instances using several of the available encoding schemes.
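To give a rough idea of the communication model available in the cloud track, here is a minimal sketch, assuming the mpi4py Python bindings, of how solver processes spread over the cluster could exchange learned clauses. The clause representation and the all-to-all sharing strategy are illustrative assumptions, not competition requirements.

    from mpi4py import MPI   # illustrative: solvers may use any MPI binding or plain ssh

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()    # one rank per solver process across the cluster nodes

    def share_clauses(local_clauses):
        """Exchange learned clauses among all solver processes.

        local_clauses -- list of clauses (tuples of DIMACS-style literals)
                         learned locally since the last exchange.
        Returns the union of the clauses gathered from every rank.
        """
        gathered = comm.allgather(local_clauses)            # all-to-all exchange
        return [clause for clauses in gathered for clause in clauses]

    # Example: each rank contributes one unit clause identifying itself,
    # and afterwards every rank knows all of them (run under mpiexec).
    shared = share_clauses([(rank + 1,)])
    if rank == 0:
        print("clauses known to rank 0:", shared)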
Changes incorporated from recent competitions
- BYOB - Bring Your Own Benchmarks: each Main Track participant (team) is required to submit 20 new benchmark instances (not seen in previous competitions). At least 10 of those benchmarks should be “interesting”: neither too easy (i.e., not solved by MiniSat within one minute on modern hardware; a rough sketch of such a check appears after this list) nor too hard (i.e., solvable by the participants' own solver within one hour on a computer similar to the nodes of the StarExec cluster).
- The solvers will be ranked using the PAR-2 scheme: the score of a solver is the sum of its runtimes on solved instances plus 2 times the timeout for each unsolved instance; the lowest score wins.
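For illustration, the following sketch computes a PAR-2 score from a list of per-instance results. The function name, the result format (runtime in seconds, or None for an unsolved instance), and the 5000-second limit used in the example are assumptions made for the sketch, not part of the official evaluation tooling.

    def par2_score(results, timeout):
        """Compute a PAR-2 score.

        results -- iterable of runtimes in seconds, with None marking an
                   instance the solver did not solve within the timeout.
        timeout -- per-instance time limit in seconds (set by the organizers).
        """
        score = 0.0
        for runtime in results:
            if runtime is None:          # unsolved: penalized with 2 * timeout
                score += 2 * timeout
            else:                        # solved: the actual runtime counts
                score += runtime
        return score

    # Example: two solved instances and one timeout with a 5000 s limit
    # gives 120.5 + 3000.0 + 2 * 5000.0 = 13120.5; the lowest score wins.
    print(par2_score([120.5, 3000.0, None], timeout=5000.0))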
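Returning to the BYOB requirement above, the sketch below checks the “not too easy” criterion by running MiniSat on a candidate CNF file with a one-minute limit. The minisat binary name, its command line, and the helper function are assumptions and may need adapting to the local installation.

    import subprocess

    def too_easy_for_minisat(cnf_path, limit_s=60):
        """Return True if MiniSat finishes on the instance within limit_s seconds.

        Assumes a 'minisat' binary on the PATH that takes a DIMACS CNF file
        as its first argument; adjust the command for your setup.
        """
        try:
            subprocess.run(["minisat", cnf_path],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL,
                           timeout=limit_s)
        except subprocess.TimeoutExpired:
            return False   # MiniSat did not finish: candidate is not "too easy"
        return True        # finished within the limit: likely too easy

    # Example (hypothetical file name):
    # print(too_easy_for_minisat("new_benchmark.cnf"))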