With the Task 2 evaluation server, I have been trying to understand how the MCC score is computed, and in particular how the reported statistics relate to the MCC explanation on page 17 of the Technical Annex document. The server reports "false_free", "true_free", "false_occupied", and "true_occupied" totals along with the total MCC score. However, the explanation in the annex describes these quantities as "Correct collision", "Missed collision", "Correct free", and "False collision". For a while, I thought I understood the relationship to be "true_occupied"="Correct collision", "true_free"="Correct free", etc., but after a recent test (explained below), I am not sure anymore. Can anyone help to clarify this?
For the test I refer to above, I submitted an octomap where only ~50 voxels were labeled as occupied, and the rest were labeled as free. When I submitted this to the evaluation server, the resulting statistics were as follows:
========================================
Collision statistics:
false_occupied: 51136
false_free: 527
true_occupied: 70296
true_free: 8869
total tested points: 153542
Detection Score (MCC): 0.271
========================================
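For what it's worth, plugging these four totals into the standard MCC formula does reproduce the reported 0.271, so the score itself is consistent with the counts; the catch is that MCC is unchanged when both classes are swapped, so the score alone cannot tell us which mapping the server uses. A quick sketch (the mapping of server fields to TP/TN/FP/FN here is only my assumption):

```python
import math

# Totals reported by the evaluation server (copied from above).
counts = {
    "true_occupied": 70296,
    "true_free": 8869,
    "false_occupied": 51136,
    "false_free": 527,
}

def mcc(tp, tn, fp, fn):
    """Standard Matthews correlation coefficient."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Assumption: "occupied" is the positive class, i.e.
# true_occupied -> TP, true_free -> TN, false_occupied -> FP, false_free -> FN.
score = mcc(counts["true_occupied"], counts["true_free"],
            counts["false_occupied"], counts["false_free"])
print(round(score, 3))  # 0.271, matching the server's reported MCC

# MCC is symmetric under swapping both classes (TP<->TN, FP<->FN),
# so treating "free" as the positive class gives the same score.
swapped = mcc(counts["true_free"], counts["true_occupied"],
              counts["false_free"], counts["false_occupied"])
assert round(swapped, 3) == round(score, 3)
```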
Since so few voxels were occupied in the octomap I submitted, the "true_occupied" category cannot represent points where the evaluation should find an occupied voxel and does indeed find one. Similarly, the "false_occupied" category seems like it should represent points where an occupied voxel causes a collision when the point should in fact be free, but with only ~50 occupied voxels submitted, a count of 51136 does not fit that reading either. Any help in sorting out the mapping between these statistics and the explanation in the technical annex would be much appreciated. Thanks!