2015/Results/nlp/TrueHappiness

We created a spiking neural network that predicts the sentiment (happiness) of words, using word representations learned from Wikipedia.

The network was designed in the following steps:

1: train word vectors on Wikipedia to learn relations between words

2: collect 4000 examples of how happy words are, rated on a 1-to-10 scale

3: train a neural network to learn the happiness of words

4: design a spiking network with integrate-and-fire neurons

5: constrain the spiking network so it can be implemented on TrueNorth (a sketch of these two constraints follows this list):

→ transform the inputs with exp(word vector)

→ discretize the weights
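
As a concrete illustration of step 5, here is a minimal sketch in Python/NumPy of the two constraints. The function names and the uniform quantization scheme are assumptions for illustration, not the actual TrueNorth toolchain:

    import numpy as np

    def constrain_inputs(word_vector):
        # exp() makes every input non-negative, so it can be
        # represented as a firing rate in the spiking network.
        return np.exp(word_vector)

    def discretize_weights(weights, n_levels=16):
        # Round continuous trained weights to a small set of integer
        # levels (an illustrative uniform quantizer; TrueNorth's
        # actual weight constraints are more specific).
        scale = np.abs(weights).max() / (n_levels // 2)
        return np.round(weights / scale).astype(int), scale

    # Example: quantize a trained 64x64 weight matrix (random stand-in).
    W = np.random.randn(64, 64) * 0.1
    W_int, scale = discretize_weights(W)   # integers in roughly [-8, 8]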


The quality of the resulting sentiment predictions depends on both the neural network structure and the quality of the input representations.

When an artificial NN is trained using word vectors learned from Twitter data, the network structure is:

(256 inputs) x (64 rectified linear) x (1 linear output)

and the resulting scores are

mean error: 0.471
Pearson's r: 0.800
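
For reference, here is a minimal sketch of this architecture and of how the two reported metrics could be computed. Keras is an assumption (the original training code is not shown here), and "mean error" is read as mean absolute error:

    import numpy as np
    from scipy.stats import pearsonr
    from keras.models import Sequential
    from keras.layers import Dense

    # (256 inputs) x (64 rectified linear) x (1 linear output)
    model = Sequential([
        Dense(64, activation='relu', input_shape=(256,)),
        Dense(1, activation='linear'),
    ])
    model.compile(optimizer='adam', loss='mse')
    # model.fit(X_train, y_train)  # X: word vectors, y: 1-10 ratings

    def score(model, X, y):
        pred = model.predict(X).ravel()
        mean_error = np.mean(np.abs(pred - y))  # assumed: mean absolute error
        r, _ = pearsonr(pred, y)
        return mean_error, r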

When a logarithmic transform is applied to the inputs, the scores drop to

mean error: 0.532
Pearson's r: 0.742

When the artificial NN is trained with our 64-dimensional Wikipedia vectors, the network structure is:

(64 inputs) x (64 rectified linear) x (1 linear output)

and the resulting scores are

mean error: 0.604
Pearson's r: 0.637

When the inputs are transformed with exp(), the scores improve to

mean error: 0.583
Pearson's r: 0.647

This final model is the one that we implemented in spikes.
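
A minimal sketch of how a trained rectified-linear layer can be run with non-leaky integrate-and-fire neurons (rate-based conversion). The simulation parameters here are assumptions; only the threshold of 4 comes from the results below:

    import numpy as np

    def if_layer(inputs, weights, threshold=4.0, T=1.0, dt=0.001):
        # Drive non-leaky integrate-and-fire neurons with a constant
        # current and return spike counts converted back to rates.
        current = inputs @ weights            # one constant current per neuron
        v = np.zeros(weights.shape[1])        # membrane voltages
        spikes = np.zeros(weights.shape[1])   # spike counts
        for _ in range(int(T / dt)):
            v += current * dt
            fired = v >= threshold
            spikes += fired
            v[fired] -= threshold             # reset by subtraction
        return spikes / T                     # counts over T seconds -> rates

With a threshold of 4, a neuron receiving a constant current c fires at roughly max(0, c) / 4 spikes per second, so the spiking layer approximates the trained ReLU layer up to a known scale factor.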


The spiking net (with a neuron threshold of 4) achieved the following correlations (Pearson's r) for different integration times (a sketch of this sweep appears after the listing):

 0.258 - 0.01 seconds of integration
 0.291 - 0.02 seconds of integration
 0.353 - 0.05 seconds of integration
 0.399 - 0.1 seconds of integration
 0.435 - 0.2 seconds of integration
 0.565 - 0.5 seconds of integration
 0.636 - 1 second of integration
 0.637 - original (non-spiking) network
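
The rows above can be produced with a sweep like the following sketch, where run_spiking_net is a hypothetical stand-in that integrates the spiking network's output for T seconds and decodes one happiness estimate per word:

    import numpy as np
    from scipy.stats import pearsonr

    def sweep_integration_times(run_spiking_net, X, y,
                                times=(0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0)):
        for T in times:
            preds = np.array([run_spiking_net(x, T=T) for x in X])
            r, _ = pearsonr(preds, y)
            print('%.3f - %s seconds of integration' % (r, T))

Longer windows give less noisy spike-count estimates of each neuron's rate, which is why the correlation climbs toward the original network's 0.637.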
