2010/DailyLogs/0712

Matthew Runchey talked about a message passing network to solve the camera pose problem given a time frame or event frame of DVS silicon retina output. The message passing network embeds the interrelationships between maps of intensity, temporal change (the only input), intensity spatial gradient, and camera motion (pan, tilt, translation). By letting this network of relations converge, all quantities can be inferred from only a single time-slice of histogrammed DVS output.
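
The core relation here is essentially the brightness-constancy constraint linking temporal change, spatial gradient, and image motion. Below is a minimal sketch of one such link: recovering pan/tilt image motion from a temporal-change map and gradient maps by least squares. The synthetic scene, variable names, and the restriction to this single link are my illustrative assumptions, not Runchey's actual network, which also infers intensity and its gradient from the event histogram alone.

```python
# Sketch of one link in a constraint network of this kind, assuming the
# standard brightness-constancy relation  E_t ~ -(I_x * u + I_y * w),
# where (u, w) is the global image motion induced by camera pan and tilt.
import numpy as np

# Synthesise a small scene and the maps the network relates.
ny, nx = 64, 64
yy, xx = np.meshgrid(np.linspace(0, 4 * np.pi, ny),
                     np.linspace(0, 4 * np.pi, nx), indexing="ij")
I = np.sin(xx) * np.cos(0.7 * yy) + 0.3 * np.sin(2.3 * xx + yy)   # intensity map
I_y, I_x = np.gradient(I)                                          # spatial gradients
u_true, w_true = 0.6, -0.25                                        # pan / tilt image motion
E_t = -(I_x * u_true + I_y * w_true)                               # temporal-change "input"

# Least-squares solve for (u, w) given E_t and the gradient maps.
A = np.stack([I_x.ravel(), I_y.ravel()], axis=1)
u_est, w_est = np.linalg.lstsq(A, -E_t.ravel(), rcond=None)[0]
print(f"pan  (u): true {u_true:+.3f}, estimated {u_est:+.3f}")
print(f"tilt (w): true {w_true:+.3f}, estimated {w_est:+.3f}")
```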


Dave Lester (Manchester) talked about the history of UK computing. The electrical engineers who made the cryptanalysis machines at Bletchley Park were Tom Kilburn and Tommy Flowers. The real results of this work came when they linked up with the mathematical logic of mathematicians like Alan Turing. To Dave, the interesting aspect was that in creating the new discipline of Computer Science, neither Turing nor Kilburn was as good at coding as their younger assistants. Tom Kilburn went on to build the first stored-program machine at Manchester in 1948. Turing joined the department later to try out his ideas.

The ARM chip is ubiquitous: some 20 billion have been licensed. They are in microwave ovens, phones, car brakes and fuel systems, etc.

The Steve Furber-led SpiNNaker project is building multicore ARM chips with embedded NoC (network-on-chip) communication of small packets of information (a few bytes) between processors and between chips. It was sold to funders as a way of simulating large networks of spiking neurons; each ARM core can handle up to several thousand simple neurons (e.g. Izhikevich) in real time, depending on connectivity.

Each SpiNNaker production chip will have 200M transistors, built in a 130 nm process, with 18 ARM968 cores. Each chip is connected by six links to its neighbors on the board. Each router has a table with 1k entries. Each entry is a ternary CAM (content-addressable memory) entry that says what to do when it hits. The CAM lets a packet be broadcast on up to all six outgoing links and, internally, to any or all of the cores on a chip.
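
A rough sketch of how such a ternary-CAM routing table behaves is below; the key/mask/outputs layout, the first-hit policy, and the miss handling are my assumptions for illustration, not the actual SpiNNaker router format.

```python
# Sketch of a ternary-CAM routing lookup: each entry holds (key, mask, outputs).
# A packet's routing key matches an entry when (packet_key & mask) == (key & mask),
# and the entry's output sets say which of the 6 inter-chip links and which
# on-chip cores get a copy.  Field widths and defaults are illustrative.
from dataclasses import dataclass

@dataclass
class Entry:
    key: int
    mask: int          # 1-bits are compared, 0-bits are "don't care"
    links: set[int]    # subset of the 6 outgoing links (0..5)
    cores: set[int]    # subset of the 18 on-chip cores (0..17)

def route(table: list[Entry], packet_key: int,
          default_link: int) -> tuple[set[int], set[int]]:
    """Return (links, cores) the packet is copied to; first hit wins here."""
    for e in table:
        if (packet_key & e.mask) == (e.key & e.mask):
            return e.links, e.cores
    # Assumed convention: on a miss the packet is forwarded straight on.
    return {default_link}, set()

table = [
    Entry(key=0x0000_1200, mask=0xFFFF_FF00, links={0, 3}, cores={1, 2, 5}),
    Entry(key=0x0000_2000, mask=0xFFFF_F000, links=set(range(6)), cores=set()),
]
print(route(table, 0x0000_12AB, default_link=0))   # hits first entry
print(route(table, 0x0000_2FFF, default_link=0))   # broadcast to all 6 links
print(route(table, 0x0000_9999, default_link=2))   # miss -> default route
```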

At Telluride there is a board with four SpiNNaker chips, each with 2 ARM cores. The projects are described at wiki:2010/rob10.

The chip has arrived, and the race is on to write support software for streaming input and output, plus major software refactoring for end usability by outsiders to the SpiNNaker project.


Srinivas Kota (Queensland University) talked about bees and their amazing cognitive capabilities for vision, navigation and communication. Can these tricks be applied as technological solutions to visual navigation by robots?

One thing that's different is that insects have ommatidia; bees have 5k in each eye, with an angular separation of 1-2 degrees, collecting panoramic visual information. Each ommatidium has 3 color receptors. Stereo vision is hard because the disparity is tiny, so motion is used extensively instead.

Motion parallax arises from both translation and rotation. The rotation is critical to measure so it can be removed, leaving the translational optic flow that carries information about the relative distances of the objects in the scene.
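
A toy sketch of why the rotation has to come out first (standard optic-flow model, my reconstruction rather than Srini's analysis): rotational flow is the same for every object, while translational flow falls off as 1/distance.

```python
# Rotational flow adds the same amount to every object regardless of distance;
# translational flow scales as speed / distance.  Subtracting the rotation
# leaves a signal whose inverse ranks the objects by relative distance.
import numpy as np

yaw_rate = 0.4                                  # rad/s of rotation (same for all)
speed = 1.2                                     # m/s sideways translation
true_dist = np.array([0.5, 1.0, 2.0, 4.0])      # metres

measured_flow = yaw_rate + speed / true_dist    # rad/s seen on the eye
translational = measured_flow - yaw_rate        # rotation removed
relative_dist = speed / translational           # exact here because speed is known;
                                                # with speed unknown, only ratios remain
print(np.round(relative_dist, 2))               # [0.5 1.  2.  4. ]
```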

By training bees to fly down a tunnel while foraging, Srini's lab observed over many years of experiments that the bees fly straight down the center of the tunnel. If one wall is moved against the direction of flight, the bee moves towards the opposite wall, and vice versa. Many experiments with these tunnels show that the bee can measure image velocity. Birds also show this kind of behavior.

How do bees control the speed of flight? If you move the pattern along the direction of motion, bees speed up. Bees seem to try to maintain a peak optical flow of about 300 deg/second. This works well because in cluttered environments it automatically slows flight down for safety.
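
A toy control-loop sketch of the two tunnel behaviors: centering by balancing the flow on the two eyes, and throttling speed to hold peak flow near the quoted 300 deg/s. The gains, tunnel geometry, and the simple flow = speed/distance model are my illustrative assumptions.

```python
# 1) centering: steer toward the side with weaker image motion,
# 2) speed control: adjust forward speed to hold peak flow near ~300 deg/s.
import math

W = 0.20                      # tunnel width, metres
TARGET = math.radians(300.0)  # desired peak image velocity, rad/s
k_lat, k_speed = 0.02, 0.05   # loop gains (made up)
dt = 0.01

y, v = 0.06, 1.0              # start off-centre (toward right wall) and too fast
for step in range(2000):
    flow_left = v / (W / 2 + y)                  # image velocity on each eye
    flow_right = v / (W / 2 - y)
    y += k_lat * (flow_left - flow_right) * dt                  # centering response
    v += k_speed * (TARGET - max(flow_left, flow_right)) * dt   # speed regulation
print(f"lateral offset {y*1000:.1f} mm, speed {v:.2f} m/s, "
      f"flow {math.degrees(v/(W/2+y)):.0f} deg/s")
```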

How are bees trained? By inverting a jar of saturated sugar solution over a saucer. Bees find it, recruit other bees, and soon many will be feeding. You then move the feeder slowly towards the lab, step by step. You put a weak sugar solution outside the lab; they taste the weak solution, and that way you can control the rate at which bees come in.

What about electrophysiology on bees? It turns out to be hard because bees are delicate, much harder than with flies.

How does landing work? As the bee lands, its velocity approaches zero, but if it flies too slowly it may stall. By training bees to land on a rotating spiral, landing responses can be studied. It appears that looming or time-to-contact may not be measured directly; instead the bees may be trying to keep the image flow constant during landing.

Speed vs. height is linear: high means fast, low means slow. This is observed experimentally and nicely solves the feedback control of landing. Boundary effects may trigger the leg extension, or maybe some binocular disparity does.
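
Holding the ventral image flow constant while descending is exactly what makes speed proportional to height, and it predicts the exponential height-vs-time profile noted below. A short sketch under the simple flow = speed/height model, with made-up parameters:

```python
# Constant-flow landing model: hold ventral image flow omega = v/h constant and
# descend along a fixed glide angle.  Then dh/dt = -omega*tan(alpha)*h, so the
# height decays exponentially towards touchdown.  Parameters are illustrative.
import math

omega = math.radians(300.0)   # held image flow, rad/s
alpha = math.radians(35.0)    # fixed descent angle
h, dt = 1.0, 0.001            # start 1 m up
t, samples = 0.0, []
while h > 0.01:
    v = omega * h                       # horizontal speed proportional to height
    h -= v * math.tan(alpha) * dt       # sink rate set by the fixed glide angle
    t += dt
    samples.append((t, h))

# Analytic prediction: h(t) = h0 * exp(-omega * tan(alpha) * t)
t_end, h_end = samples[-1]
print(f"simulated h({t_end:.2f}s) = {h_end:.3f} m, "
      f"predicted {math.exp(-omega*math.tan(alpha)*t_end):.3f} m")
```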

Bees seem to integrate sparse visual cues when estimating image flow, rather than simply averaging Reichardt detectors. CHECK

Prediction of exponential decay of height vs. time. CHECK They do have a compliant undercarriage. The angle is held constant between 30 and 40 degrees. Bees don't have halteres but probably do have vestibular information; the bees' legs could play that role. In hawk moths the antennae seem to play this role.

Body angle is visually driven, possibly to streamline, since drag goes up with the square of velocity.

Bees can also be used to study neuroeconomic tradeoffs and reward mechanisms. They very nicely optimize the tradeoff between travel distance (cost) and reward, and relate that to reward probability. CHECK

How about hovering? Hovering is a different behavior but seems to involve taking a snapshot and stabilizing it. Movement of a flower towards and away from a bee produces a corresponding response.

What about aggressive behavior? Bees that get a whiff of alarm pheromone go into an aggressive mode where they slam into the target (which target? CHECK).

How about long-distance navigation and visual odometry? Distance seems to be estimated largely by integration of optic flow. Using networks of tunnels that allow various polarization directions, and tunnels with different patterns on the walls (controlled by projectors, for example), many experiments can explore long-distance navigation behavior. But the basic question of how the odometry information is remembered is open. Is it a place code? Do bees have place cells? The information must be held over times of many minutes.
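
The "flow units" idea can be made concrete: the odometer is the accumulated image motion over the trip, which corresponds to ground distance only after scaling by the distance to the surroundings. A toy sketch with made-up numbers, including the classic confound that a narrow tunnel reads as a longer trip:

```python
# Visual odometry as integrated optic flow: accumulate image angular motion
# over the flight; it equals ground distance / wall distance.  Values illustrative.
import numpy as np

dt = 0.01
speed = np.full(1000, 0.5)            # m/s ground speed, 10 s of flight
wall_distance = 0.10                  # m, e.g. a narrow experimental tunnel

flow = speed / wall_distance          # rad/s image velocity on the eye
odometer = np.sum(flow) * dt          # accumulated "flow units" (radians)
print(f"odometer reading: {odometer:.0f} rad of image motion")
print(f"implied ground distance: {odometer * wall_distance:.1f} m "
      f"(true {np.sum(speed)*dt:.1f} m)")

# The confound: the same flight with the walls at 5 cm reads as a longer trip.
print(f"same flight, walls at 5 cm: {np.sum(speed/0.05)*dt:.0f} rad")
```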

von Frisch weighted bees with small weights to ? But this caused the bees to fly closer to the ground, and the result was confounded by this. CHECK

Jorg Conradt demonstrated the famous waggle dance, which codes the direction (relative to the sun) and the distance (by the duration of the waggle run) to food sources. Distance is the perimeter path distance in visual-odometry optic-flow units, not the vector length directly to the target. Direction is relative to the polarization pattern of the sky (which works even when it is cloudy). CHECK. Enthusiasm and danger can also be communicated: enthusiasm by the intervals between waggle-dance loops; danger by another bee head-butting the dancer. Large numbers of dancing bees are needed to initiate foraging.
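
A toy decoder of the two quantities the dance conveys, assuming the textbook mapping (dance angle relative to vertical on the comb equals bearing relative to the sun's azimuth, waggle duration grows with distance); the 1 s per km calibration below is an illustrative round number, not a measurement.

```python
# Decode a waggle run into a compass bearing and a distance estimate.
def decode_dance(waggle_angle_deg: float, waggle_duration_s: float,
                 sun_azimuth_deg: float,
                 seconds_per_km: float = 1.0) -> tuple[float, float]:
    """Return (compass bearing in degrees, distance in km) to the food source."""
    bearing = (sun_azimuth_deg + waggle_angle_deg) % 360.0   # angle from vertical -> from sun
    distance_km = waggle_duration_s / seconds_per_km         # duration -> flow-unit distance
    return bearing, distance_km

# 40 deg clockwise of vertical, 1.8 s waggle run, sun currently at azimuth 120 deg.
print(decode_dance(40.0, 1.8, sun_azimuth_deg=120.0))   # -> (160.0, 1.8)
```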

Do bees use landmarks? Yes, to some extent; if landmarks are removed after training, then visual odometry is substituted.