How do I implement masking in TensorFlow eager?

by DankMasterDan   Last Updated May 15, 2019 18:19 PM

I am training a stateful RNN on variable length sequences (optional: see my previous question for more details).

I padded the sequences to a fixed length with the value -1.

When batches are loaded, some samples are entirely -1 (e.g. for batches of shape [batch_size, ...], samples 1, 6, and 8 may consist entirely of -1's). I would like to:

  1. not include these samples in the loss calculation
  2. skip operations on them so as to speed up training.

Attempt 1:

I tried using tf.keras.layers.Masking as in:

input = tf.keras.layers.Masking(mask_value=-1)(input)

But this doesn't seem to do anything: the subsequent operations are still performed, and as far as I can tell the samples are still included in the loss. Why is this?
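For reference, here is the behavior I expect masking to produce, sketched in plain NumPy (the batch values and targets are made up for illustration; the mask semantics mirror what tf.keras.layers.Masking(mask_value=-1) is documented to do, i.e. a timestep is masked when all of its features equal the mask value):

```python
import numpy as np

# Toy batch: 3 samples, 4 timesteps, 1 feature, padded with -1.
# The second sample is entirely padding and should contribute nothing.
batch = np.array([
    [[0.5], [0.2], [-1.0], [-1.0]],    # partially padded
    [[-1.0], [-1.0], [-1.0], [-1.0]],  # fully padded sample
    [[0.1], [0.3], [0.4], [-1.0]],     # partially padded
])

# A timestep is masked out when ALL its features equal the pad value,
# matching the semantics of tf.keras.layers.Masking(mask_value=-1).
mask = ~np.all(batch == -1.0, axis=-1)  # shape (batch, timesteps)

# Per-timestep squared error against dummy all-zero targets.
targets = np.zeros_like(batch)
per_step_loss = np.squeeze((batch - targets) ** 2, axis=-1)

# Only unmasked timesteps contribute to the mean; the fully padded
# sample adds nothing to either the numerator or the denominator.
masked_loss = (per_step_loss * mask).sum() / mask.sum()
```

This is the loss behavior I want TensorFlow eager to reproduce automatically once the Masking layer is in place.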

NOTE: This question was previously asked on stackoverflow.com & datascience.stackexchange.com, but deleted (modified in latter case) due to lack of response. I think this will be a better home for it.


