LSTM Implementation in Caffe

August 30, 2016

Note that the master branch of Caffe now supports LSTM (Jeff Donahue's implementation has been merged).
This repo is no longer maintained.

Speed comparison (Titan X, 3-layer LSTM with 2048 units)

Jeff's code is more modular, whereas this code is optimized for LSTM.
In particular, this code computes the gradient w.r.t. the recurrent weights with a single matrix computation.
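
Concretely: the gradient w.r.t. the recurrent weights is a sum over timesteps of small matrix products, gate_delta_t^T · h_{t-1}. Because the per-timestep activations are laid out contiguously over time, all T products can be collapsed into one large GEMM over the stacked matrices, which keeps the GPU far busier than T small GEMMs. Below is a minimal sketch of the equivalence in plain C++ (a naive GEMM stands in for the BLAS/cuBLAS call; all names and sizes are illustrative, not the repo's actual code):

```cpp
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <vector>

// Naive C += A^T * B for row-major A (k x m) and B (k x n); in a real
// implementation this would be a single cuBLAS/BLAS GEMM call.
static void gemm_tn_accum(int k, int m, int n,
                          const double* a, const double* b, double* c) {
  for (int i = 0; i < m; ++i)
    for (int j = 0; j < n; ++j) {
      double s = 0.0;
      for (int p = 0; p < k; ++p) s += a[p * m + i] * b[p * n + j];
      c[i * n + j] += s;
    }
}

int main() {
  const int T = 100, B = 20, D = 8, G = 4 * D;  // time, batch, hidden, gates
  std::vector<double> gate_delta(T * B * G);    // dL/d(gate pre-activations)
  std::vector<double> h_prev(T * B * D);        // h_{t-1} for every timestep
  for (double& v : gate_delta) v = rand() / double(RAND_MAX) - 0.5;
  for (double& v : h_prev)     v = rand() / double(RAND_MAX) - 0.5;

  // (1) Timestep by timestep: T small GEMMs, dW += gate_delta_t^T * h_{t-1}.
  std::vector<double> dW_loop(G * D, 0.0);
  for (int t = 0; t < T; ++t)
    gemm_tn_accum(B, G, D, &gate_delta[t * B * G], &h_prev[t * B * D],
                  dW_loop.data());

  // (2) One big GEMM: the contiguous per-timestep blocks already form
  // stacked (T*B) x G and (T*B) x D matrices, so no copying is needed.
  std::vector<double> dW_gemm(G * D, 0.0);
  gemm_tn_accum(T * B, G, D, gate_delta.data(), h_prev.data(),
                dW_gemm.data());

  double max_diff = 0.0;
  for (int i = 0; i < G * D; ++i)
    max_diff = std::fmax(max_diff, std::fabs(dW_loop[i] - dW_gemm[i]));
  printf("max |loop - single GEMM| = %.3g\n", max_diff);  // ~0 (fp error)
  return 0;
}
```

This is consistent with the backward-pass gap in the tables below.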

  • Batch size = 20, Length = 100

| Code        | Forward (ms) | Backward (ms) | Total (ms) |
|-------------|--------------|---------------|------------|
| This code   | 248          | 291           | 539        |
| Jeff's code | 264          | 462           | 726        |

  • Batch size = 4, Length = 100

| Code        | Forward (ms) | Backward (ms) | Total (ms) |
|-------------|--------------|---------------|------------|
| This code   | 131          | 118           | 249        |
| Jeff's code | 140          | 290           | 430        |

  • Batch size = 20, Length = 20

| Code        | Forward (ms) | Backward (ms) | Total (ms) |
|-------------|--------------|---------------|------------|
| This code   | 49           | 59            | 108        |
| Jeff's code | 52           | 92            | 144        |

  • Batch size = 4, Length = 20

| Code        | Forward (ms) | Backward (ms) | Total (ms) |
|-------------|--------------|---------------|------------|
| This code   | 29           | 26            | 55         |
| Jeff's code | 30           | 61            | 91         |
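
The timing harness is not shown here, but numbers of this kind can be collected with Caffe's C++ API roughly as follows (a sketch; the prototxt file name and iteration count are placeholders, and the original benchmark method is not specified):

```cpp
#include <cstdio>
#include <caffe/caffe.hpp>

int main() {
  caffe::Caffe::set_mode(caffe::Caffe::GPU);
  caffe::Caffe::SetDevice(0);

  // Placeholder: a net definition with a 3-layer, 2048-unit LSTM and the
  // desired batch size / sequence length.
  caffe::Net<float> net("lstm_3layer_2048.prototxt", caffe::TRAIN);

  net.Forward();   // warm-up so one-off allocations are not timed
  net.Backward();

  const int kIters = 50;
  caffe::Timer timer;  // uses CUDA events in GPU mode
  double forward_ms = 0.0, backward_ms = 0.0;
  for (int i = 0; i < kIters; ++i) {
    timer.Start();
    net.Forward();
    timer.Stop();
    forward_ms += timer.MilliSeconds();

    timer.Start();
    net.Backward();
    timer.Stop();
    backward_ms += timer.MilliSeconds();
  }
  printf("forward:  %.1f ms/iter\n", forward_ms / kIters);
  printf("backward: %.1f ms/iter\n", backward_ms / kIters);
  printf("total:    %.1f ms/iter\n", (forward_ms + backward_ms) / kIters);
  return 0;
}
```

Caffe's bundled `caffe time` tool performs essentially the same measurement and additionally reports per-layer timings.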

Example

Example code is provided in /examples/lstm_sequence/.
There, an LSTM network is trained to generate a predefined sequence without any input.
This experiment was introduced in the Clockwork RNN paper.
Four different LSTM networks and shell scripts (.sh) for training them are provided.
Each script generates a log file containing the predicted sequence and the true sequence.
You can use plot_result.m to visualize the result.
The results of the four LSTM networks are shown below (a sketch of evaluating a trained model follows the list):

  • 1-layer LSTM with 15 hidden units for the short sequence (figure)
  • 1-layer LSTM with 50 hidden units for the long sequence (figure)
  • 3-layer deep LSTM with 7 hidden units for the short sequence (figure)
  • 3-layer deep LSTM with 23 hidden units for the long sequence (figure)
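
Since the task has no external input, evaluating a trained model amounts to loading the weights, feeding an all-zero input, and reading off the sequence the network generates. A rough sketch with Caffe's C++ API (the file names and blob names are illustrative and depend on the actual prototxt used):

```cpp
#include <cstdio>
#include <caffe/caffe.hpp>

int main() {
  caffe::Caffe::set_mode(caffe::Caffe::CPU);  // CPU is enough for evaluation

  // Placeholder file names; use the net definition and weights produced by
  // one of the training scripts in /examples/lstm_sequence/.
  caffe::Net<float> net("lstm_deploy.prototxt", caffe::TEST);
  net.CopyTrainedLayersFrom("lstm_sequence.caffemodel");

  // Zero the input blob: the sequence must come entirely from the learned
  // recurrent dynamics. "data" is an illustrative blob name.
  boost::shared_ptr<caffe::Blob<float> > input = net.blob_by_name("data");
  float* in = input->mutable_cpu_data();
  for (int i = 0; i < input->count(); ++i) in[i] = 0.0f;

  net.Forward();

  // Dump the generated sequence, one value per timestep; "ip1" is an
  // illustrative name for the output (inner product) blob.
  boost::shared_ptr<caffe::Blob<float> > out = net.blob_by_name("ip1");
  const float* pred = out->cpu_data();
  for (int t = 0; t < out->count(); ++t)
    printf("%d\t%f\n", t, pred[t]);
  return 0;
}
```

The training scripts' log files contain the same information (predicted vs. true sequence), which is what plot_result.m visualizes.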