Dear all,
I created the Jobs for the final run. So from now on, I'll only accept
updates for very good reasons. I'll start the first jobs as soon as we
have a screen to display the results at IJCAR.
You can check the current state of the competition here:
https://termcomp.verify.rwth-aachen.de/Y2024/
There, you can also see the configurations that are registered for your
tool. Please make sure that they are correct.
This year, we'll have certificates for all winners, and I'll hand them
out during the termCOMP session (Friday, 17:30). So if you attend IJCAR
and participate in one of the competition categories, make sure you're
there! If you don't attend IJCAR, please ask someone else to (maybe)
pick up your certificate for you. If you don't expect to win, note that
there will also be some special prizes, so please (ask someone to)
attend the session anyway.
Best
Florian
Dear all,
the testing for the competition categories is over. Some tools still
have problems regarding their performance (rejects in certified
categories or severe regressions compared to last year), but from a
technical point of view, all tools (*) are working.
If your tool has rejects or significant regressions, then you've been
notified. I'll continue to accept updates, but I won't run any more
tests. So if you update your tool, please make sure that the updated
version works on StarExec.
Moreover, the links to the proofs in the demo categories are working again.
Best
Florian
(*) AutoNon still has a few issues, but that's a 'historical' tool that
was just registered for reference. I'll take care of updating it.
Dear all,
the registration for termCOMP is over. Some teams worked really hard to
fix issues caused by the StarExec update, resulting in some last-minute
registrations. Thanks for that!
If you missed the deadline, but intended to participate, please get in
touch.
Best
Florian
Dear all,
I was just informed that the benchmarking framework that we used in the
past (BenchExec) is no longer available on StarExec. Hence, we'll have
to work with "runsolver" instead. If I remember correctly, we used
runsolver at some point, but then switched to BenchExec due to issues
with the measurement of runtimes. If you remember any details, or if
your tool was affected by these problems, please let me know.
Best
Florian
Dear all,
can I express in CPF (for CeTA 2)
that I want to apply "flat context closure" to a subset of the rules only?
Specifically, I want this for rules with an empty lhs or rhs,
as it seems to increase the power of matrix interpretations
(as shown by AProVE in TC 23).
I am looking at
http://cl2-informatik.uibk.ac.at/rewriting/mercurial.cgi/IsaFoR/file/825022…
but the example seems to close all rules.
I guess I can indeed context-close all rules uniformly,
and use the same interpretation to remove the same rules.
But perhaps I'd want other methods that refer to the original rules.
Well, it's a bit late anyway.
- J.
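[For illustration: the selective closure asked about above can be sketched for string rewriting, where a flat context is just a single prepended letter. This is a hypothetical helper for intuition only, not CPF or CeTA code; rule and alphabet representations are assumptions.]

```python
# Illustrative sketch (NOT CPF/CeTA code): flat context closure for string
# rewriting, applied only to rules with an empty lhs or rhs. A string is a
# tuple of letters; closing a rule under the flat contexts a.[] prepends
# each letter a, so afterwards no rule has an empty side, which is what
# helps matrix interpretations.

def flat_context_closure(rules, alphabet):
    closed = []
    for lhs, rhs in rules:
        if lhs and rhs:
            # ordinary rule: keep unchanged (the "subset only" part)
            closed.append((lhs, rhs))
        else:
            # empty lhs or rhs: one closed copy per letter of the alphabet
            for a in alphabet:
                closed.append(((a,) + lhs, (a,) + rhs))
    return closed

# Example: the rule with empty lhs, () -> (a, b), becomes
# a -> a a b and b -> b a b; the ordinary rule b b -> b is kept.
rules = [((), ("a", "b")), (("b", "b"), ("b",))]
print(flat_context_closure(rules, ["a", "b"]))
```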
Dear all,
just to clarify, we expect the tools to output *CPF2 proofs* in the
certified categories (just like last year). If your tool already
supports the new CPF3 format and you planned to make use of that at
termCOMP, please let me know.
Best
Florian