python - Understanding dimension of input to pre-defined LSTM
I am trying to design a model in TensorFlow that predicts the next word using an LSTM.
The TensorFlow RNN tutorial gives pseudocode for how to use an LSTM on the PTB dataset.
I have reached the step of generating the batches and labels.
import numpy as np

# cursor into raw_data; advanced on every call
data_index = 0

def generate_batches(raw_data, batch_size):
    global data_index
    data_len = len(raw_data)
    num_batches = data_len // batch_size
    # batch holds the current word ids, labels the next word id for each entry
    batch = np.ndarray(shape=(batch_size,), dtype=np.float)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.float)
    for i in xrange(batch_size):
        batch[i] = raw_data[i + data_index]
        labels[i, 0] = raw_data[i + data_index + 1]
    data_index = (data_index + 1) % len(raw_data)
    return batch, labels
This code gives batch and labels of size (batch_size x 1), i.e. one word id per entry.
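For example, assuming raw_data is a list of integer word ids and data_index starts at 0 (batch_size=8 is just an arbitrary value here):

batch, labels = generate_batches(raw_data, batch_size=8)
# batch.shape  -> (8,)    the current word ids
# labels.shape -> (8, 1)  the next word id for each entry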
These batches and labels could instead be made of size (batch_size x vocabulary_size) using tf.nn.embedding_lookup().
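For instance, something like the following is what I have in mind: looking the ids up in an embedding matrix gives one dense row per word (if the matrix were the identity, each row would be a one-hot vector of length vocabulary_size). embedding_size and the variable names are my own choices, not from the tutorial:

import tensorflow as tf

vocabulary_size = 10000
embedding_size = 128   # my own choice of embedding width

# one row per word in the vocabulary
embeddings = tf.Variable(
    tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))

# word_ids: the integer batch produced above, shape (batch_size,)
word_ids = tf.placeholder(tf.int32, shape=[None])

# result has shape (batch_size, embedding_size)
embedded_batch = tf.nn.embedding_lookup(embeddings, word_ids)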
So, my problem here is how to proceed next, using the function rnn_cell.BasicLSTMCell or a user-defined LSTM model.
What is the input dimension of the LSTM cell, and how is it used together with num_steps?
Which size of batch and labels is useful in this scenario?
The full example is the PTB model (ptb_word_lm.py) in the TensorFlow source code. There are recommended defaults (SmallConfig, MediumConfig, and LargeConfig) that you can use.
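Roughly, the PTB model feeds the network input of shape [batch_size, num_steps]: each row is a run of num_steps consecutive word ids, each id is embedded into a hidden_size vector, and the LSTM cell is unrolled num_steps times, seeing a [batch_size, hidden_size] slice per step. Here is a stripped-down sketch of that wiring, not the exact code from ptb_word_lm.py; the numbers are roughly the SmallConfig values, and depending on your TensorFlow version the cell class may live under tf.nn.rnn_cell or tf.contrib.rnn:

import tensorflow as tf

batch_size = 20    # SmallConfig-style values, adjust as needed
num_steps = 20     # how many time steps each sample is unrolled for
hidden_size = 200  # LSTM state size
vocab_size = 10000

# word ids for the current window; the targets are the same ids shifted by one
input_ids = tf.placeholder(tf.int32, [batch_size, num_steps])

embedding = tf.get_variable("embedding", [vocab_size, hidden_size])
# shape becomes [batch_size, num_steps, hidden_size]
inputs = tf.nn.embedding_lookup(embedding, input_ids)

cell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size)
state = cell.zero_state(batch_size, tf.float32)

outputs = []
with tf.variable_scope("RNN"):
    for t in range(num_steps):
        if t > 0:
            tf.get_variable_scope().reuse_variables()
        # input to the cell at each step: [batch_size, hidden_size]
        output, state = cell(inputs[:, t, :], state)
        outputs.append(output)

So the cell itself only ever sees a [batch_size, hidden_size] input per step; num_steps just controls how many of those steps are unrolled (and backpropagated through), and the labels are the input word ids shifted by one position.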