Dear all,
For uncertified categories that use the new ARI format, you can start
testing with the following post-processor:
plain.ari.24.1 (784)
I tested it successfully on 1(!) example, so it might still have some
issues.
Best
Florian
Dear all,
Here is the information that I have about the current StarExec
configuration:
root@starexec2:~# uname -r
5.15.0-112-generic
root@starexec2:~# dpkg -l libc6-dev:amd64 python3
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name            Version          Architecture Description
+++-===============-================-============-==========================================================
ii  libc6-dev:amd64 2.35-0ubuntu3.8  amd64        GNU C Library: Development Libraries and Header Files
ii  python3         3.10.6-1~22.04   amd64        interactive high-level object-oriented language (default python3 version)
Note that the information here
https://www.starexec.org/starexec/public/machine-specs.txt
is outdated!
Best
Florian
Dear all,
At the last WST, there were some discussions about the input format for
TRSs. My takeaway was that not everyone is happy with the current XML
format. So we (the TermComp SC) recently started to discuss whether the
format should be changed. As far as I can see, there are three obvious
options:
(a) keep the current format
Pro: no work
Contra: not human readable
(b) go back to the old format [1] (here's [2] an example)
Pro:
- human readable
- the same format is used for other categories (e.g., Complexity of
ITSs and Termination of Integer TRSs)
- some tools can still parse it
Contra: There were some issues with the semantics, in particular if
there were several VAR sections. However, from my understanding, these
issues can be resolved by forbidding more than one VAR section.
(c) switch to the new S-expression based format that has been proposed
for the confluence competition [3]
Pro: human readable(?)
Contra: no tool can parse it yet
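To make the comparison concrete, here is the same toy TRS (addition on
natural numbers) in both human-readable formats. This is my own
reconstruction from [1] and [3], so the details (especially of the ARI
syntax) may differ from the actual specifications. The old format
declares variables explicitly:

(VAR x y)
(RULES
  plus(x,0) -> x
  plus(x,s(y)) -> s(plus(x,y))
)

In the proposed S-expression (ARI) format, function symbols are declared
with their arities, and every undeclared identifier is a variable:

(format TRS)
(fun 0 0)
(fun s 1)
(fun plus 2)
(rule (plus x 0) x)
(rule (plus x (s y)) (s (plus x y)))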
So if you compete in, or are interested in, the TRS categories at
TermComp, please let us know what you think. I'm also particularly
interested in
the following questions:
- If you participate, can your tool still parse the old format?
- Do you recall more details about the reasons for the switch from the
old format to the current one?
A related (but mostly independent) question is whether we should
separate the properties to be analyzed (full vs innermost rewriting,
termination vs complexity etc.) from the actual TRS. That could avoid a
lot of redundancy in the TPDB. If you have an opinion on that, please
share it.
Thanks,
Florian
[1] https://www.lri.fr/~marche/tpdb/format.html
[2] https://aprove.informatik.rwth-aachen.de/interface/v-AProVE2023/trs_wst
[3] http://cl-informatik.uibk.ac.at/iwc/2023/proceedings.pdf, page 32
Dear all,
Since, in contrast to other categories, there is no converter from the
new ARI to the old XTC format for higher-order TRSs, we decided to use
the XTC format instead of the ARI format for the higher-order category
one last time. In this way, tools that participated in the past can
enter the competition with little effort, and tool authors have plenty
of time to implement a parser for the new ARI format before termCOMP '25.
If you intended to join the higher-order category this year, and your
tool already supports the new ARI format, then please get in touch, so
that we can discuss whether we should have an additional higher-order
category with the new ARI format.
Best
Florian
Dear all,
While the registration deadline for termCOMP is approaching, StarExec
is still unavailable, unfortunately. The reason is an update that is
taking longer than expected due to technical problems.
Hence, no testing is possible at the moment, either for the
participants or for me as the organizer.
Hopefully, StarExec will be back online soon. In any case, these
technical issues should not prevent anyone from participating, of
course. So if necessary, we will extend the registration deadline
further. Let's reassess the situation when StarExec is again available.
Best
Florian
*Termination and Complexity Competition 2024*
http://www.termination-portal.org/wiki/Termination_Competition_2024
*Call for Participation*
Since the beginning of the millennium, many research groups have
developed tools for fully automated termination and complexity analysis.
After a tool demonstration at the 2003 Termination Workshop in Valencia,
the community decided to start an annual termination competition to spur
the development of tools and termination techniques.
The termination and complexity competition focuses on automated
termination and complexity analysis for all kinds of programming
paradigms, including categories for term rewriting, imperative
programming, logic programming, and functional programming. In all
categories, we also welcome the participation of tools providing
certifiable proofs. The goal of the termination and complexity
competition is to demonstrate the power of the leading tools in each of
these areas.
The competition will be affiliated with the International Joint
Conference on Automated Reasoning (IJCAR 2024,
https://merz.gitlabpages.inria.fr/2024-ijcar/). It will be run on the
StarExec platform (https://www.starexec.org/). The final run and a
presentation of the final results will be live at IJCAR.
We strongly encourage all developers of termination and complexity
analysis tools to participate in the competition. We also welcome the
submission of termination and complexity problems, especially problems
that come from applications.
A category is only run in the competition if there are at least 2
participants and at least 40 examples for this category in the
underlying termination problem database. If no existing category is
convenient for your tool, you can contact the organizers: other
categories can be considered as well if enough participants are guaranteed.
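For concreteness, the entry criterion above amounts to a simple check
(a sketch of my own, not part of any official competition tooling; the
function name is made up for illustration):

```python
def category_runs(participants: int, examples: int) -> bool:
    """A category is run only if it has at least 2 registered
    participants and at least 40 examples in the termination
    problem database (TPDB)."""
    return participants >= 2 and examples >= 40

# A category with 3 participants but only 25 benchmarks is skipped:
print(category_runs(3, 25))   # prints: False
print(category_runs(2, 40))   # prints: True
```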
For further information, we refer to the website of the termination and
complexity competition:
https://termination-portal.org/wiki/Termination_Competition_2024
_Important dates_
As StarExec is currently unavailable due to an update, the deadline has
been extended to give all participants the opportunity to properly test
their tools before submission.
* Tool and Benchmark Submission (*extended*): June 19, 2024
* First Run: June 24, 2024
* Bugfix Deadline: June 30, 2024
* Final Run: July 3/4, 2024
* Presentation of the results at IJCAR: July 5, 2024