At the core of this system is a spike-based finite state automaton parser. It uses a lexicalized grammar: each time a word from a designated set is seen, the automaton makes a state transition, and in doing so parses the syntax of a sentence. Some transitions also update a semantic state. We have implemented the grammar and associated lexicon for the first Facebook task ("single supporting fact"), in which one is given a set of sentences about who is where and must then answer one or more queries about the people's locations. For example,

Mary moved to the bathroom.
John went to the hallway.
Where is Mary?

The answer should be "bathroom".
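The word-driven parsing just described can be sketched in ordinary Python. This is an illustrative model only, not the spiking implementation: the state names, word classes, and tiny lexicon below are hypothetical, chosen to cover just the example sentences above.

```python
# Hypothetical sketch of a word-driven finite state automaton parser for the
# "single supporting fact" task. State names, word classes, and the lexicon
# are illustrative stand-ins for the actual spiking grammar.

LEXICON = {
    "mary": "NAME", "john": "NAME",
    "moved": "VERB", "went": "VERB", "is": "VERB",
    "to": "PREP", "the": "DET",
    "bathroom": "PLACE", "hallway": "PLACE",
    "where": "QUERY",
}

# Syntactic transitions: (current state, word class) -> next state.
TRANSITIONS = {
    ("START", "NAME"): "SUBJ",
    ("START", "QUERY"): "Q",
    ("SUBJ", "VERB"): "VP",
    ("VP", "PREP"): "PP",
    ("PP", "DET"): "NP",
    ("NP", "PLACE"): "DONE",
    ("Q", "VERB"): "Q_VP",
    ("Q_VP", "NAME"): "Q_DONE",
}

def parse(sentence):
    """Step the automaton once per word, updating semantic slots on the way."""
    state, semantics = "START", {}
    for word in sentence.lower().rstrip(".?").split():
        word_class = LEXICON[word]
        state = TRANSITIONS[(state, word_class)]
        if word_class == "NAME":        # semantic state update on this transition
            semantics["person"] = word
        elif word_class == "PLACE":
            semantics["place"] = word
    if state == "DONE":                 # assertion: person is at place
        return ("assert", semantics["person"], semantics["place"])
    if state == "Q_DONE":               # query: where is person?
        return ("query", semantics["person"])
    raise ValueError("sentence not accepted by the grammar")
```

Running `parse("Mary moved to the bathroom.")` yields an assertion tuple for the memory module, while `parse("Where is Mary?")` yields a query tuple. Extending the grammar is a matter of adding lexicon entries and transitions.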

The memory component of the system needs to do more than just remember a set of places: it must bind a person to a location for each input assertion, then retrieve the relevant location when a query is run. (Sergio will explain how this is accomplished.)
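The binding behaviour the memory module needs can be summarized as follows. This is a minimal functional sketch, assuming that a later assertion about the same person overwrites the earlier one; a plain dictionary stands in for the actual spike-based binding mechanism, which is described elsewhere.

```python
# Hypothetical sketch of the memory module's contract: bind a person to a
# location on each assertion, retrieve the current binding on a query.
# A dict stands in for the real spike-based binding mechanism.

class LocationMemory:
    def __init__(self):
        self.bindings = {}

    def bind(self, person, place):
        # Latest assertion about a person wins.
        self.bindings[person] = place

    def retrieve(self, person):
        return self.bindings[person]

memory = LocationMemory()
memory.bind("mary", "bathroom")
memory.bind("john", "hallway")
answer = memory.retrieve("mary")   # the answer should be "bathroom"
```

The key point is that the store is keyed by person, not by place, so each query reduces to a single associative lookup.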

Around this, we have included a speech input mechanism (using pyaudio), through which 72 different sentences can be input; the answer is then produced via speech output. Note, though, that the grammar is modular and readily extensible: the parser can handle a much larger grammar and lexicon.

The image below shows, at the bottom, the words arriving from three sentences; in the middle, the syntactic and semantic states generated during parsing; and at the top, the outputs sent to the memory module.