Dear all,
At the last WST, there were some discussions about the input format for
TRSs. My takeaway was that not everyone is happy with the current XML
format. So we (the TermComp SC) recently started to discuss whether the
format should be changed. As far as I can see, there are three obvious
options:
(a) keep the current format
Pro: no work
Contra: not human readable
(b) go back to the old format [1] (an example is available at [2])
Pro:
- human readable
- the same format is used for other categories (e.g., Complexity of
ITSs and Termination of Integer TRSs)
- some tools can still parse it
Contra: There were some issues with the semantics, in particular if
there were several VAR sections. However, from my understanding, these
issues can be resolved by forbidding more than one VAR section.
(c) switch to the new S-expression based format that has been proposed
for the confluence competition [3]
Pro: human readable(?)
Contra: no tool can parse it yet
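For those who have not seen them, here is roughly how the same small TRS
would look in the two human-readable candidates. This is a sketch from
memory; [1] and [3] are the authoritative definitions, and details
(e.g., whitespace and arity declarations) may differ.

Old format [1], with an explicit VAR section:

    (VAR x y)
    (RULES
      plus(0, y) -> y
      plus(s(x), y) -> s(plus(x, y))
    )

S-expression format [3], where function symbols are declared with their
arities and undeclared symbols are variables:

    (format TRS)
    (fun 0 0)
    (fun s 1)
    (fun plus 2)
    (rule (plus 0 y) y)
    (rule (plus (s x) y) (s (plus x y)))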
So if you compete in, or are interested in, the TRS categories at TermComp,
please let us know what you think. I'm also particularly interested in
the following questions:
- If you participate, can your tool still parse the old format?
- Do you recall more details about the reasons for the switch from the
old format to the current one?
A related (but mostly independent) question is whether we should
separate the properties to be analyzed (full vs innermost rewriting,
termination vs complexity etc.) from the actual TRS. That could avoid a
lot of redundancy in the TPDB. If you have an opinion on that, please
share it.
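To illustrate the redundancy: in the old format [1], such properties are
annotated inside each benchmark file, e.g. an innermost termination
problem carries a line like (again a sketch; see [1] for the exact
syntax):

    (STRATEGY INNERMOST)

so the same TRS may appear in the TPDB once per property/strategy
combination, rather than once with the properties stored separately.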
Thanks,
Florian
[1] https://www.lri.fr/~marche/tpdb/format.html
[2] https://aprove.informatik.rwth-aachen.de/interface/v-AProVE2023/trs_wst
[3] http://cl-informatik.uibk.ac.at/iwc/2023/proceedings.pdf, page 32
Dear Termination-Portal Organization,
I hope this email finds you well. My name is Yurika Hirobe, and I am
currently a master's student at the University of Tokyo.
I am writing to you because I have been using your
Termination-Competition dataset, which is available at the following link:
[ https://github.com/TermCOMP/TPDB ], for my research.
However, I have run into a problem: I cannot find the expected answers
corresponding to the benchmarks included in the dataset.
From my understanding, I believe that there should be XML files that
provide comprehensive information, including the expected answers for each
benchmark. Unfortunately, I have encountered some difficulties in locating
such files within the dataset. It is crucial for my research to have access
to this information to ensure the accuracy and completeness of my analysis.
Hence, I kindly request your guidance in this matter. Could you please
direct me to the specific location or resources where I can find the
expected answers for the dataset? Your assistance in this regard would be
of immense help to my research project.
Thank you for your time and consideration. I look forward to hearing from
you.
Best regards,
Yurika Hirobe
Master's Student
University of Tokyo
yurikahirobe(a)gmail.com