Call for Participation: the 2021 SAT Competition
The 2021 SAT Competition is a competitive event for solvers of the Boolean Satisfiability (SAT) problem. It is organized as a satellite event to the 24th International Conference on Theory and Applications of Satisfiability Testing and stands in the tradition of the yearly SAT Competitions, Races, and Challenges. The deadline for submitting benchmarks and solvers is Tuesday, April 27, 2021 (23:59 GMT -12).
The area of SAT solving has seen tremendous progress in recent years. Many problems in applications such as hardware and software verification that seemed completely out of reach a decade ago can now be handled routinely. Besides new algorithms and better heuristics, refined implementation techniques have turned out to be vital for this success.
To keep up this driving force in improving SAT solvers, we want to motivate developers to present their work to a broader audience and to compare it with the work of others.
The 2021 SAT Competition will consist of the following tracks:
- Main Track (including a new CaDiCaL-Hack subtrack)
- Parallel Track
- Cloud Track
- Incremental Track
- Crypto Track (new)
- No-Limits Track
New this year:

- Crypto Track

Apart from evaluating solvers on a broad range of benchmarks, the SAT Competition now features a track dedicated to a single application of SAT solvers. The application chosen for 2021 is cryptography. This track includes 100 satisfiable and 100 unsatisfiable formulas of varying size and difficulty.

- CaDiCaL-Hack subtrack

Following the success of the MiniSat and Glucose hack tracks in past competitions, the 2021 SAT Competition bases its hack subtrack on the recently released CaDiCaL version 1.4. A Main Track submission qualifies for the CaDiCaL-Hack subtrack if the diff between the original and the modified sources of CaDiCaL 1.4 is less than 1000 non-space characters. A rough way to self-check this limit is sketched below.

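The following is a minimal sketch of such a size check, assuming a local diff tool and placeholder directory names for the pristine and patched CaDiCaL 1.4 sources; the organizers' exact counting rules may differ in detail, so treat the result only as a ballpark figure.

    #!/usr/bin/env python3
    """Rough self-check of the CaDiCaL-Hack size limit (sketch, not official)."""
    import subprocess

    # Placeholder paths: adjust to where the pristine and patched sources live.
    ORIGINAL = "cadical-1.4.0-original/src"
    MODIFIED = "cadical-1.4.0-patched/src"

    # 'diff -r -u' exits non-zero when the trees differ, which is expected here.
    result = subprocess.run(["diff", "-r", "-u", ORIGINAL, MODIFIED],
                            capture_output=True, text=True)

    # Count every character of the diff output that is not whitespace.
    non_space = sum(1 for ch in result.stdout if not ch.isspace())

    print(f"non-space characters in diff: {non_space}")
    print("within the 1000-character limit" if non_space < 1000
          else "over the 1000-character limit")
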
Changes incorporated from recent competitions:

- BYOB - Bring Your Own Benchmarks

Each Main Track participant (team) is required to submit 20 new benchmark instances that have not appeared in previous competitions. At least 10 of those benchmarks should be “interesting”: not too easy (i.e., not solved by MiniSat within one minute on modern hardware) and not too hard (i.e., solvable by the participant’s own solver within one hour on a computer similar to the nodes of the StarExec cluster). A quick way to check the “not too easy” criterion is sketched below.

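The following is a minimal sketch of that check, assuming a minisat binary on the PATH and a placeholder CNF file name; the reference hardware used by the organizers may of course differ from your own.

    #!/usr/bin/env python3
    """Quick "not too easy" check for a candidate BYOB benchmark (sketch)."""
    import subprocess
    import sys

    cnf = sys.argv[1] if len(sys.argv) > 1 else "candidate.cnf"  # placeholder path

    try:
        # MiniSat conventionally exits with 10 for SAT and 20 for UNSAT.
        proc = subprocess.run(["minisat", cnf], capture_output=True, timeout=60)
        verdict = {10: "SATISFIABLE", 20: "UNSATISFIABLE"}.get(proc.returncode, "UNKNOWN")
        print(f"MiniSat finished within 60 s ({verdict}): likely too easy.")
    except subprocess.TimeoutExpired:
        print("MiniSat did not finish within 60 s: passes the 'not too easy' bar.")
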
- The solvers will be ranked using the PAR-2 scheme

The score of a solver is defined as the sum of its runtimes on solved instances plus 2 * timeout for each unsolved instance; the lowest score wins. A minimal sketch of this computation follows below.

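The sketch below illustrates PAR-2 scoring with an assumed 5000-second time limit and made-up runtimes; the actual per-track time limits are set by the competition rules.

    #!/usr/bin/env python3
    """PAR-2 scoring sketch: solved instances count their runtime,
    unsolved instances count twice the time limit."""

    TIME_LIMIT = 5000  # seconds; illustrative value, not an official figure

    def par2_score(results, time_limit=TIME_LIMIT):
        """results maps benchmark name -> runtime in seconds, or None if unsolved."""
        return sum(t if t is not None else 2 * time_limit for t in results.values())

    # Made-up data: two solved instances and one timeout.
    results = {"bench-01.cnf": 312.4, "bench-02.cnf": 4871.0, "bench-03.cnf": None}
    print(f"PAR-2 score: {par2_score(results):.1f}")  # 312.4 + 4871.0 + 2*5000
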
- Cloud Track

Amazon Web Services is enabling and sponsoring a “cloud track” for the 2021 SAT Competition. The solvers in this track will run on 100 16-core computers, each with 64 GB of RAM, that can communicate using standard protocols (e.g., ssh, MPI); the timeout is 1000 seconds (wall-clock time) per benchmark problem. Amazon Web Services will provide $1000 in AWS credits for building and evaluating solvers, which should be more than enough for development and testing purposes. The benchmark suite will consist of hard instances, both old (instances left unsolved in prior competitions) and new ones, including formulas provided by AWS and participating teams. A toy sketch of the node-parallel layout is given below. For further details, see https://satcompetition.github.io/2021/track_cloud.html

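The following is only a toy sketch of how a portfolio could be laid out across nodes with MPI (here via the mpi4py bindings): every rank attempts the same formula with a different configuration and one rank collects the answers. The try_solve stub, the seed-based diversification, and the benchmark name are assumptions for illustration, not the competition infrastructure or any particular solver's API.

    #!/usr/bin/env python3
    """Toy MPI portfolio sketch: every rank attempts the same CNF with a
    different configuration and rank 0 collects whatever answers arrive."""
    import time
    from mpi4py import MPI

    WALL_CLOCK_LIMIT = 1000  # seconds of wall-clock time per benchmark

    def try_solve(cnf_path, seed, budget):
        """Hypothetical stand-in for launching a real solver with a given seed.
        The stub ignores the budget; a real solver would honor it."""
        time.sleep(0.2)  # pretend to search
        return "UNSAT" if seed % 4 == 0 else None  # None = no answer in time

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    start = time.time()
    answer = try_solve("benchmark.cnf", seed=rank, budget=WALL_CLOCK_LIMIT)
    elapsed = time.time() - start

    # Rank 0 gathers every node's verdict and reports the first definite one.
    answers = comm.gather((rank, answer, elapsed), root=0)
    if rank == 0:
        definite = [a for a in answers if a[1] is not None]
        print(definite[0] if definite else "no rank answered within the budget")

Such a script would be launched with something like mpiexec -n <nodes>; competitive cloud-track solvers typically also exchange learned clauses during search rather than only gathering final answers.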