Dear all,
FLoC will give us five medals. We distribute them as follows:
Gold, Silver, and Bronze medals go to the top three teams according to a competition-wide ranking. (*)
Two special medals go to the two teams that best advance the state of the art. (**)
The unit "team" is chosen to ensure medals go to different places. When the registration is closed, I'll suggest a definition of teams. Please raise comments on it if necessary. It will ultimately be decided by SC minus conflict of interest.
The details of the ranking are as follows:
Recall that for each benchmark, each claim (Termination, Nontermination, Upper-bound, Lower-bound, and their CERTIFIED versions) yields a score in the range [0,1].
The virtual best solver (VBS) records the best (consistent) score for each claim, collected at least since 2018.
(*) Teams are ranked by the Euclidean norm of their normalized score vectors. Each component of a team's vector is the team's score in a category divided by the VBS score in that category.
(**) If a team scores higher on a benchmark and claim than the previous year's VBS, the team receives the difference as a special score, and teams are ranked by the sum of these special scores. In short, if you claim YES/NO where no tool claimed so in the past, you get a special score of 1.
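For concreteness, below is a minimal PHP sketch of how the two rankings could be computed from per-category and per-claim totals. The array names ($scores, $vbs, $claims, $prevVbs) and the exact data layout are illustrative assumptions, not the competition's actual scripts.

<?php
// Minimal sketch of the two rankings. The array names and the data layout
// below are illustrative assumptions, not the competition's actual scripts.
//
// $scores[$team][$cat] : a team's total score in a category
// $vbs[$cat]           : the VBS total score in that category

// (*) Medal ranking: Euclidean norm of the normalized score vector.
function medalRanking(array $scores, array $vbs): array {
    $norms = [];
    foreach ($scores as $team => $cats) {
        $sq = 0.0;
        foreach ($cats as $cat => $score) {
            if (($vbs[$cat] ?? 0.0) > 0) {
                $sq += ($score / $vbs[$cat]) ** 2;  // normalize by the VBS, then square
            }
        }
        $norms[$team] = sqrt($sq);
    }
    arsort($norms);   // highest norm first: Gold, Silver, Bronze
    return $norms;
}

// (**) Special ranking: sum of improvements over the previous year's VBS.
// $claims[$team][$bench][$claim] : the team's score for that claim, in [0,1]
// $prevVbs[$bench][$claim]       : last year's VBS score for that claim
function specialRanking(array $claims, array $prevVbs): array {
    $special = [];
    foreach ($claims as $team => $benchs) {
        $sum = 0.0;
        foreach ($benchs as $bench => $perClaim) {
            foreach ($perClaim as $claim => $score) {
                $old = $prevVbs[$bench][$claim] ?? 0.0;
                if ($score > $old) {
                    $sum += $score - $old;  // a claim nobody made before contributes 1
                }
            }
        }
        $special[$team] = $sum;
    }
    arsort($special); // the top two teams get the special medals
    return $special;
}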
Best, Akihisa
Dear all,
as announced, the ranking is made among teams. Here is my proposal for the definition of teams:
$teams = [
  'AProVE'         => ['AProVE'],
  'iRankFinder'    => ['iRankFinder'],
  'LoAT'           => ['LoAT'],
  'Matchbox'       => ['matchbox'],
  'MU-TERM'        => ['MuTerm'],
  'MultumNonMulta' => ['MnM'],
  'NaTT'           => ['NaTT'],
  'NTI+cTI'        => ['NTI', 'NTI+cTI'],
  'SOL'            => ['SOL'],
  'Tyrolean Tools' => ['TTT2', 'TcT'],
  'Ultimate'       => ['Ultimate'],
  'Wanda'          => ['Wanda'],
];
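To illustrate how such a map would be used, here is a small sketch that folds per-tool scores into per-team scores. The input $toolScores and the choice of taking the best score among a team's tools are illustrative assumptions, not necessarily what the competition scripts do.

<?php
// Sketch: fold per-tool scores into per-team scores using the $teams map.
// $toolScores[$tool][$cat] is a hypothetical input (per-category totals of
// each registered tool); taking the max over a team's tools is one possible
// aggregation, not necessarily the one actually used.
function teamScores(array $teams, array $toolScores): array {
    $result = [];
    foreach ($teams as $team => $tools) {
        $result[$team] = [];
        foreach ($tools as $tool) {
            foreach ($toolScores[$tool] ?? [] as $cat => $score) {
                $prev = $result[$team][$cat] ?? 0.0;
                $result[$team][$cat] = max($prev, $score);
            }
        }
    }
    return $result;
}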
To preserve some excitement for the final run, I will not show the ranking in the first run.
Best, Akihisa
Hello,
I think it makes sense to have a separate team for LoAT, since it is developed independently of AProVE, and I also worked on it quite a lot when I was not affiliated with RWTH Aachen. True, it's currently hosted on the same GitHub account as AProVE, but that doesn't mean anything -- I could fork it and continue the development elsewhere anytime...
Best Florian
On 7/27/22 18:05, Johannes Waldmann wrote:
definition of teams ...
Loat is "Team Aprove"? It's right there in the name:
https://aprove-developers.github.io/LoAT/