In preparation for the final evaluation, we would like to announce the changes we are going to make to the dataset / simulation environment.
Trial evaluation run:
You have the option of uploading your contestant VM containing your solution once, and we will perform a test evaluation on the current training datasets. We highly encourage you to make use of this so that you can fix, beforehand, any problems that would prevent a successful final evaluation. The final evaluation is definitive, and there will be no interaction with teams in case of problems with their VM. Bugs that appear during the final evaluation cannot be corrected!
Log-in information for the upload will be sent to you separately by email. Please upload only the contestant VM (the one with your solution) in a zip archive, and indicate in the filename that it is a test run. Uploaded test VMs will not automatically be used for the final evaluation. You may remove your source code from the VM, but make sure the launch files for each subtask work.
Evaluation hardware for Task 1 and Task 2:
We want you to get good scores! Therefore, in response to the performance issues and non-deterministic timings observed on Core2 machines, we will use a host computer with a Core i7 processor. This also aligns with the hardware choice recently made for stage 2. Given the reported issues, it appears that memory-bound algorithms in particular perform much worse on the virtualized Core2 processor, and it no longer makes sense to optimize for old hardware.
The evaluation machine has a Core i7 Q 820 CPU @ 1.73 GHz and 8 GB of RAM. The settings for the contestant VM remain unchanged.
Example timings with libelas for reference:
Native:
Processing: img/cones_left.pgm, img/cones_right.pgm
Descriptor 22.9 ms
Support Matches 75.3 ms
Delaunay Triangulation 6.4 ms
Disparity Planes 7.7 ms
Grid 5.8 ms
Matching 142.1 ms
L/R Consistency Check 21.2 ms
Remove Small Segments 40.7 ms
Gap Interpolation 8.9 ms
Adaptive Mean 71.1 ms
========================================
Total time 402.1 ms
In the VM:
Processing: img/cones_left.pgm, img/cones_right.pgm
Descriptor 48.9 ms
Support Matches 78.2 ms
Delaunay Triangulation 8.3 ms
Disparity Planes 10.7 ms
Grid 11.2 ms
Matching 184.2 ms
L/R Consistency Check 34.7 ms
Remove Small Segments 58.1 ms
Gap Interpolation 18.7 ms
Adaptive Mean 86.9 ms
========================================
Total time 539.9 ms
Changes to the scenario in the final evaluation:
Task 1: The datasets were recorded in the same environment, where we did another flight that roughly followed the path of the current training datasets.
Task 2: The dataset recording took place in the same room, but the obstacle setup and the flown path are different. The scene is the same for all three subtasks. Note that we may start the evaluation with T2.3, so re-using the map created in T2.1 will not be possible. The dataset for Task 2.3 does not contain moving objects.
Task 3 / 4 general:
- Model parameters, as announced in the technical annex, will change slightly in order to reflect some model uncertainty.
- Noise levels will stay the same. The initial seed for generating sensor noise will be chosen "randomly" once and will be the same for each team to ensure comparability.
Task 3.2: The wind magnitude remains, but the direction will change.
Task 3.3: The wind magnitude remains, but the direction and the duration will change.
Task 4.1: The waypoints will change, but their distances remain.
Task 4.2: The waypoints will change, but their distances remain. The areas where the sensors work will change.
Task 4.3: The waypoints will change, but their distances remain.
All the best, and we wish you much success with the final preparations!
The ETH Challenge hosts.