Image synchronization depending on RT factor

Postby jwienke » Fri 31. Oct 2014, 11:04

While testing the new v3 tasks with an RT factor of 0.2, we have noticed that the synchronization between the depth and RGB images of the Kinects is significantly worse at lower RT factors.

I have implemented a very simple ROS node to verify this observation: it simply writes out the timestamps received in the image headers of the two cameras attached to the robot. For a task with RT factor 0.2 the results look like this:
Code:
357228000000    tcp_rgb
357251000000    tcp_depth
357294000000    tcp_rgb
357317000000    tcp_depth
357360000000    tcp_rgb
357383000000    tcp_depth
357426000000    tcp_rgb
357449000000    tcp_depth
357492000000    tcp_rgb
357515000000    tcp_depth
357558000000    tcp_rgb
357581000000    tcp_depth
357624000000    tcp_rgb
357647000000    tcp_depth
357690000000    tcp_rgb
357713000000    tcp_depth
357756000000    tcp_rgb
357779000000    tcp_depth
357822000000    tcp_rgb
357845000000    tcp_depth
357888000000    tcp_rgb
...


And for other tasks with an RT factor of 0.6 on our computer we get:
Code:
46002000000     tcp_rgb
46004000000     tcp_depth
46068000000     tcp_rgb
46070000000     tcp_depth
46134000000     tcp_rgb
46136000000     tcp_depth
46200000000     tcp_rgb
46202000000     tcp_depth
46266000000     tcp_rgb
46268000000     tcp_depth
46332000000     tcp_rgb
46334000000     tcp_depth
46398000000     tcp_rgb
46400000000     tcp_depth
46464000000     tcp_rgb
46466000000     tcp_depth
46530000000     tcp_rgb
46532000000     tcp_depth
46596000000     tcp_rgb
46598000000     tcp_depth
46662000000     tcp_rgb
46664000000     tcp_depth
46728000000     tcp_rgb
46730000000     tcp_depth
46794000000     tcp_rgb
46796000000     tcp_depth
46860000000     tcp_rgb
46862000000     tcp_depth
...


All timestamps are in nanoseconds of simulation time.

That means there is a time delay of approximately 20-25 milliseconds between the RGB and depth images in the RT 0.2 tasks, while the delay is only about 2 milliseconds in the RT 0.6 tasks.

Is this behavior intended? It makes using synchronized images during motion quite hard. Moreover, since the simulator has much more time to compute images in the RT 0.2 case, it doesn't even make sense to me from a technical perspective.
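For reference, here is a minimal sketch (assuming the tab-separated log format shown above, as written to /tmp/time.log by the node below) that pairs each tcp_rgb line with the following tcp_depth line and prints the offset in milliseconds:
Code:
# Minimal sketch: compute the rgb -> depth timestamp offset for each
# frame pair from the tab-separated log written by the node below.
with open('/tmp/time.log') as log_file:
    last_rgb = None
    for line in log_file:
        stamp, name = line.split()
        if name == 'tcp_rgb':
            last_rgb = int(stamp)
        elif name == 'tcp_depth' and last_rgb is not None:
            # Offset in milliseconds of simulation time.
            print('{:.1f} ms'.format((int(stamp) - last_rgb) / 1e6))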

This is the program I used to generate the logs:
Code:
import threading

import rospy
from sensor_msgs.msg import Image

rospy.init_node('test', anonymous=True)

# Serializes writes from the two subscriber callbacks.
lock = threading.RLock()

with open('/tmp/time.log', 'w') as log_file:

    def debug_image(name, image):
        # Write the simulation timestamp (in nanoseconds) of each received image.
        with lock:
            log_file.write('{}\t{}\n'.format(image.header.stamp.to_nsec(), name))

    def tcp_depth_image(image):
        debug_image('tcp_depth', image)

    def tcp_rgb_image(image):
        debug_image('tcp_rgb', image)

    euroc_interface_node = '/euroc_interface_node/'
    rospy.Subscriber(euroc_interface_node + 'cameras/tcp_rgb_cam', Image, tcp_rgb_image)
    rospy.Subscriber(euroc_interface_node + 'cameras/tcp_depth_cam', Image, tcp_depth_image)

    rospy.spin()
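To reproduce, start this node while a task is running, move the robot, and inspect /tmp/time.log afterwards; each line contains the image header stamp in nanoseconds followed by the camera name.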
Member of the FlexIRoc team.

Re: Image synchronization depending on RT factor

Postby jwienke » Mon 3. Nov 2014, 17:14

Ping ;) It would be very nice to get an answer to this.
Member of the FlexIRoc team.

Re: Image synchronization depending on RT factor

Postby fschmidt » Tue 4. Nov 2014, 11:33

The cameras are not synchronized to each other at all. Concerning sensor timing, the only guarantees are exact simulation timestamps and a constant rate.

This is to be understood as part of the challenge: imagine a separate RGB camera (e.g. a high-quality RGB or gray-scale camera) and a separate depth camera (e.g. time-of-flight based) with no synchronization (via cable or software) between them.

We did not claim to simulate the kind of structured-light sensors (Xtion, Kinect) where depth and color information is derived from the same image.

Of course, in a good hardware setup one would like to assume that all sensors measure at the same point in time; this is not the case here. But at least you can assume very accurate timestamps, which is usually not given on real hardware.

If you want to merge color-sensor and depth-sensor information while moving, you will have to interpolate (data + frame) from one sensor to the measurement time of the other sensor.
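To illustrate the frame part of this, here is a minimal sketch, assuming poses of the moving sensor are available as (stamp in nsec, position, quaternion) tuples that bracket the target timestamp; the helper name and tuple layout are made up for illustration and are not part of the simulator API:
Code:
# Minimal sketch: interpolate a sensor pose to the measurement time of
# the other sensor. Poses are hypothetical (stamp_nsec, position,
# quaternion) tuples; only numpy and tf.transformations are assumed.
import numpy as np
from tf.transformations import quaternion_slerp

def interpolate_pose(pose_before, pose_after, stamp_nsec):
    t0, p0, q0 = pose_before
    t1, p1, q1 = pose_after
    alpha = float(stamp_nsec - t0) / float(t1 - t0)
    # Linear interpolation for the position, spherical interpolation
    # (slerp) for the orientation quaternion.
    position = (1.0 - alpha) * np.asarray(p0) + alpha * np.asarray(p1)
    orientation = quaternion_slerp(q0, q1, alpha)
    return position, orientation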

