Dear Akihisa,
for SRS-Standard (certified), for several benchmarks,
for example ICFP_2010/4991.xml, these results are displayed:
- matchbox: "YES (without \checkmark)"
https://www.starexec.org/starexec/services/jobs/pairs/567222906/stdout/1?li…
- ttt2: "post-processor error" (post-processor timeout after 15 min)
https://www.starexec.org/starexec/services/jobs/pairs/567222903/log
Are these logically the same thing?
(Both did produce a certificate?)
Ah - no, for matchbox the certifier did not time out,
but failed with "certification-err Stack space overflow: current size 8388608..."
Locally, I am not getting this error (certification succeeds
in 50 seconds for matchbox's certificate and in 55 seconds for ttt2's).
I am using ceta-postproc (contains CeTA-2.42) compiled with ghc-9.0.2
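(Side note: 8388608 bytes is 8 MiB, the old default stack limit of the
GHC runtime. Assuming the binary on StarExec was built with -rtsopts,
the limit could be raised at invocation time, along the lines of

  ceta-postproc +RTS -K512m -RTS ...

where -K sets the maximum stack size; the 512m value and the rest of
the command line are guesses on my part.)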
- Johannes.
Dear all,
Some jobs of the first run have been started. Please monitor them via:
https://termcomp.github.io/Y2022/
(Not all jobs could be started, because of the well-known permission problems.)
Good luck!
Akihisa
Dear all,
FLoC will give us five medals. We distribute them as follows:
Gold, Silver, and Bronze medals go to the top three teams according to a
competition-wide ranking. (*)
Two special medals go to the best two teams in advancing the state of
the art. (**)
The unit "team" is chosen to ensure medals go to different places. When
the registration is closed, I'll suggest a definition of teams. Please
raise comments on it if necessary. It will ultimately be decided by SC
minus conflict of interest.
The detail of the ranking is as follows:
Recall that for each benchmark, each claim (Termination, Nontermination,
Upper-bound, Lower-bound, and their CERTIFIED versions) yields a score in
the range [0,1].
The virtual best solver (VBS) records the best (consistent) score for
each claim among results collected since at least 2018.
(*) Teams are ranked by the Euclidean norm of their normalized score
vectors. Each component of a team's vector is the score of the team in a
category, divided by the score of the VBS in that category.
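In symbols, writing s(t,c) for the score of team t in category c, team
t is ranked by

  sqrt( sum over categories c of ( s(t,c) / s(VBS,c) )^2 ).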
(**) If a team gets a higher score for a benchmark and a claim than the
previous year's VBS, then the team gets the difference as a special
score, and teams are ranked by the sums of these scores. In short, if you
claim YES/NO while no tool in the past has claimed so, then you get a
special score of 1.
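For concreteness, here is a small sketch of both rankings in Haskell
(all names are mine; the actual scoring scripts may be organized quite
differently, and I assume VBS scores are nonzero wherever we divide):

  import Data.List (sortBy)
  import Data.Ord (Down (..), comparing)

  -- (*) Componentwise, a team's category score divided by the VBS
  -- score in that category; teams are ranked by the Euclidean norm.
  normalizedNorm :: [Double] -> [Double] -> Double
  normalizedNorm vbs team =
    sqrt (sum [(s / v) ^ (2 :: Int) | (s, v) <- zip team vbs])

  -- (**) Per benchmark and claim: a score exceeding last year's VBS
  -- contributes the difference; teams are ranked by the sum.
  specialScore :: [Double] -> [Double] -> Double
  specialScore vbsLastYear team =
    sum [max 0 (s - v) | (s, v) <- zip team vbsLastYear]

  -- Rank named teams by a scoring function, best first, e.g.
  -- rankBy (normalizedNorm vbs) teams.
  rankBy :: (a -> Double) -> [(String, a)] -> [(String, Double)]
  rankBy f teams =
    sortBy (comparing (Down . snd)) [(name, f x) | (name, x) <- teams]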
Best,
Akihisa
Dear all,
In order to make the TermCOMP 2022 'demo' category of termination
of CTRSs more attractive, we submitted a new collection of oriented CTRSs
from COPS.
We did not provide any suggestion for the name of the collection.
Thus, for TermCOMP 2022 I see that it has been named Gutierrez_22
(Raúl submitted the collection of examples).
Perhaps COPS_oriented22 would be more appropriate, as it acknowledges the
real origin of the examples. It is not a big problem, but I think it would
be better to update the name of the collection.
Sorry for not proposing the name of the collection in time.
Best regards,
Salvador.
*Termination and Complexity Competition 2022*
http://www.termination-portal.org/wiki/Termination_Competition_2022
*Call for Participation*
Since the beginning of the millennium, many research groups have developed
tools for fully automated termination and complexity analysis.
After a tool demonstration at the 2003 Termination Workshop in Valencia,
the community decided to start an annual termination competition to spur
the development of tools and termination techniques.
The termination and complexity competition focuses on automated termination
and complexity analysis for all kinds of programming paradigms, including
categories for term rewriting, imperative programming, logic programming,
and functional programming. In all categories, we also welcome the
participation of tools providing certifiable proofs. The goal of the
termination and complexity competition is to demonstrate the power of the
leading tools in each of these areas.
The competition will be affiliated with IJCAR 2022 (
https://easychair.org/smart-program/FLoC2022/IJCAR-index.html) and it will
be part of the Olympic Games at the Federated Logic Conference (FLoC 2022) (
https://www.floc2022.org/floc-olympic-games). It will be run on the
StarExec platform (http://www.starexec.org/). The final run and a
presentation of the final results will be live at FLoC.
We strongly encourage all developers of termination and complexity analysis
tools to participate in the competition. We also welcome the submission of
termination and complexity problems, especially problems that come from
applications.
A category is only run in the competition if there are at least 2
participants and at least 40 examples for this category in the underlying
termination problem database. If there is no category that is suitable
for your tool, you can contact the organizers, since other categories can
be considered as well if enough participants are guaranteed.
For further information, we refer to the website of the termination and
complexity competition:
http://www.termination-portal.org/wiki/Termination_Competition_2022
*Important dates*
- Tool and Problem Submission: July 24, 2022
- First Run: July 25, 2022
- Bug/Conflict Report Deadline: July 30, 2022
- Bugfix Deadline: August 4, 2022
- Final Run: August 5, 2022
- Award Ceremony at FLoC: August 9, 2022
- Presentations at WST: August 12, 2022
Dear all,
the GitHub repository is ready for termCOMP 2022 registration:
https://github.com/TermCOMP/starexec-master
I hope the instructions there are clear enough. Otherwise, please post
questions on this list.
You can monitor the status of the registration and the runs at
http://termcomp.github.io/Y2022/
Best,
Akihisa