3D Tracking

Participants: Daniel Neil, Shih-Chii Liu, Tobi Delbruck, Ryad Benjamin Benosman, Greg Cohen, Luis Camunas, Francisco Barranco

This project was divided into two main sections: the first focusing on the quadrocopters and the second on stereo vision with the DVS.

Quadrocopters

For the quadrocopters, the main focus of this workshop was to combine sensor channels to obtain accurate and timely estimates of the quadrocopter's flight. To this end, three DVS cameras were installed: one pointing directly upwards and the other two mounted orthogonally to it.

The data was transferred to the shuttle server and can be accessed at \\Neurorouter\scratch\tell2013_quad\Data. Three full flights were captured, with IMU data (x, y, z acceleration and x, y, z gyroscope rates), the three DVS cameras, and one conventional video camera (a mounted Nexus 7). This data can be analyzed offline.

Java code

The screenshot below was captured by loading the "quad2.dat" data, selecting the MultiDVS128 chip, and running the MultiGaussianTracker. Ensure the correct visualization mode is selected in jAER by pressing "3" until the view below appears. It shows two cameras, one facing forward and one facing sideways onto the object, with the events from each camera projected onto the wall. A rectangular prism is reconstructed at the intersection of these events.
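Conceptually, each camera's tracked cluster defines a ray through space, and the 3D position is where those rays (nearly) intersect. The sketch below is only an illustrative stand-in, not the jAER MultiGaussianTracker code itself: the ray origins and directions are assumed to come from the measured camera positions and the back-projected pixel coordinates (6 mm lens), and the position estimate is taken as the midpoint of the shortest segment between the two rays.

{{{
/**
 * Illustrative sketch (not the jAER filter code): estimate a 3D point as the
 * midpoint of the closest approach between two back-projected camera rays.
 * Ray origins p1, p2 are the camera positions; directions d1, d2 point from
 * each camera through its tracked cluster (both assumed given here).
 */
public class RayIntersection {

    /** Midpoint of the closest approach of rays p1 + s*d1 and p2 + t*d2. */
    public static double[] midpoint(double[] p1, double[] d1, double[] p2, double[] d2) {
        double[] w0 = sub(p1, p2);
        double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
        double d = dot(d1, w0), e = dot(d2, w0);
        double denom = a * c - b * b;        // approaches 0 when the rays are parallel
        double s = (b * e - c * d) / denom;  // parameter along ray 1
        double t = (a * e - b * d) / denom;  // parameter along ray 2
        double[] q1 = add(p1, scale(d1, s));
        double[] q2 = add(p2, scale(d2, t));
        return scale(add(q1, q2), 0.5);
    }

    static double dot(double[] u, double[] v)   { return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]; }
    static double[] sub(double[] u, double[] v) { return new double[]{u[0]-v[0], u[1]-v[1], u[2]-v[2]}; }
    static double[] add(double[] u, double[] v) { return new double[]{u[0]+v[0], u[1]+v[1], u[2]+v[2]}; }
    static double[] scale(double[] u, double k) { return new double[]{u[0]*k, u[1]*k, u[2]*k}; }
}
}}}

With more than two cameras, the same least-squares idea extends to intersecting all of the back-projected rays at once.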

Measurements

left------------up
                |
                |
                |
                |
                |
                |
           -----------
              front
  • up-to-left: 198 cm
  • left-vert-offset: 47 cm
  • up: 6 mm lens
  • left: 6 mm lens
  • front: 6 mm lens
  • up-to-front: 229 cm
  • front-vert-offset: 47 cm
  • up-vert-offset: 12 cm
  • quadrotor: 56 cm across, edge-to-edge
  • At 79 cm above the floor, the quadrotor spans 99 pixels
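As a rough worked example of how these measurements can be used (an assumption of simple pinhole scaling, not code from the workshop), the quadrotor's distance from the upward-facing camera can be estimated from its apparent size: the pixel extent falls off roughly as 1/distance, and the 99-pixel / 79-cm point above provides the calibration constant. The class and method names are hypothetical.

{{{
/**
 * Rough sketch, assuming simple pinhole scaling: the apparent size of the 56 cm
 * quadrotor frame shrinks as 1/distance, calibrated by the single measured point
 * (99 pixels at 79 cm).
 */
public class HeightFromSize {
    static final double CAL_DISTANCE_CM = 79.0; // measured calibration distance
    static final double CAL_PIXELS = 99.0;      // pixel extent at that distance

    /** Estimated camera-to-quadrotor distance in cm for a measured pixel extent. */
    public static double distanceCm(double measuredPixels) {
        return CAL_DISTANCE_CM * CAL_PIXELS / measuredPixels;
    }

    public static void main(String[] args) {
        // Example: a 50-pixel extent corresponds to roughly 156 cm from the camera.
        System.out.printf("distance ~ %.0f cm%n", distanceCm(50.0));
    }
}
}}}

The 6 mm lens focal length and the DVS pixel pitch could be used in place of the empirical calibration point, but the single measured point keeps the sketch self-contained.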

Stereo Vision

In this project, we implemented an offline algorithm to reconstruct 3D shapes from the output spikes of two silicon retinas. This involved the following steps:

- Calibration: first, we calibrated the retinas using a fixed 3D pattern of blinking LEDs with known coordinates, establishing the correspondence between each 3D point and its 2D projections in both retinas. From these correspondences we computed the fundamental matrix F, which relates corresponding points between the retinas, and the projection matrices P1 and P2, which map 3D points to 2D image points in each retina. The following video shows the 3D pattern of blinking LEDs in front of the two-retina setup during calibration.

- Stereo matching: since both retinas emit spikes in response to the stimulus, it is necessary to match pairs of events that correspond to the same point in space. For this, we implemented an algorithm that applies a set of restrictions to the candidate matches: spike time, epipolar geometry, spike polarity, and preferred spike orientation computed with Gabor filters. After this step, we obtain a list of matched event pairs.

- 3D reconstruction: using only the events that could be matched, we applied the projection matrices computed during calibration to obtain the corresponding 3D coordinates. By measuring the displacement between matched events, we also calculated the disparity map. The following videos show the disparity map and the 3D reconstruction of a pen moving in front of the retinas. A sketch of the matching and disparity steps is given after this list.
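The sketch below illustrates the matching constraints and the disparity-based depth idea under stated assumptions: the class names, time window, epipolar tolerance, and the rectified-camera depth formula are all hypothetical choices for this example, and the Gabor-filter orientation constraint used in the actual implementation is omitted for brevity.

{{{
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of event-based stereo matching, assuming a fundamental matrix F
 * from the LED calibration. Not the workshop code; thresholds and names are
 * illustrative, and the orientation (Gabor) constraint is left out.
 */
public class StereoEventMatcher {

    static class Event {
        long timestampUs; int x, y; boolean onPolarity;
        Event(long t, int x, int y, boolean on) { timestampUs = t; this.x = x; this.y = y; onPolarity = on; }
    }

    static final long   TIME_WINDOW_US  = 1000; // assumed spike-time coincidence window
    static final double EPIPOLAR_TOL_PX = 2.0;  // assumed distance-to-epipolar-line tolerance

    final double[][] F; // 3x3 fundamental matrix from calibration

    StereoEventMatcher(double[][] fundamental) { F = fundamental; }

    /** Distance (pixels) of right-retina event e2 from the epipolar line F * x1 of left event e1. */
    double epipolarDistance(Event e1, Event e2) {
        double a = F[0][0]*e1.x + F[0][1]*e1.y + F[0][2];
        double b = F[1][0]*e1.x + F[1][1]*e1.y + F[1][2];
        double c = F[2][0]*e1.x + F[2][1]*e1.y + F[2][2];
        return Math.abs(a*e2.x + b*e2.y + c) / Math.hypot(a, b);
    }

    /** Pairs each left event with the best right event satisfying time, polarity, and epipolar constraints. */
    List<Event[]> match(List<Event> left, List<Event> right) {
        List<Event[]> pairs = new ArrayList<>();
        for (Event e1 : left) {
            Event best = null;
            double bestDist = EPIPOLAR_TOL_PX;
            for (Event e2 : right) {
                if (Math.abs(e1.timestampUs - e2.timestampUs) > TIME_WINDOW_US) continue; // spike time
                if (e1.onPolarity != e2.onPolarity) continue;                             // spike polarity
                double d = epipolarDistance(e1, e2);                                      // epipolar geometry
                if (d < bestDist) { bestDist = d; best = e2; }
            }
            if (best != null) pairs.add(new Event[]{e1, best});
        }
        return pairs;
    }

    /** For an approximately parallel, rectified pair: depth is inversely proportional to disparity. */
    static double depthMeters(double focalPx, double baselineM, double disparityPx) {
        return focalPx * baselineM / disparityPx;
    }
}
}}}

In the actual pipeline, matched pairs are triangulated with the projection matrices P1 and P2 from calibration; the depthMeters() relation only holds under the rectified-geometry assumption stated above.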

Attachments