The library can only compute gradients down to the embedding layer. If the model's input is word IDs passed through an Embedding layer, Integrated-Gradients raises an error:
```
ValueError                                Traceback (most recent call last)
<ipython-input-8-223e71b35bf0> in <module>()
----> 1 ig = integrated_gradients(model)
      2 exp = ig.explain([X_val[0][1], X_val[1][1], X_val[2][1], X_val[3][1]])

./IntegratedGradients/IntegratedGradients.py in __init__(self, model, outchannels, verbose)
     74         # Get tensor that calculates gradient
     75         if K.backend() == "tensorflow":
---> 76             gradients = self.model.optimizer.get_gradients(self.model.output[:, c], self.model.input)
     77         if K.backend() == "theano":
     78             gradients = self.model.optimizer.get_gradients(self.model.output[:, c].sum(), self.model.input)

~/.local/lib/python3.6/site-packages/keras/optimizers.py in get_gradients(self, loss, params)
     78         grads = K.gradients(loss, params)
     79         if None in grads:
---> 80             raise ValueError('An operation has `None` for gradient. '
     81                              'Please make sure that all of your ops have a '
     82                              'gradient defined (i.e. are differentiable). '

ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
```
However, I can get the gradient with respect to the embedding output. Given that, how do I determine the attribution for individual words, as in Section 6.3 (Question Classification) of the paper?
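For reference, the approach in the paper is to run Integrated Gradients on the embedding vectors (not the integer IDs, which have no gradient) and then sum the resulting attributions over the embedding dimension to get one score per word. The following is a minimal NumPy sketch of that idea, using a hypothetical linear scoring function `f` in place of a real network so the gradient is known analytically; all names here (`f`, `grad_f`, `vocab`, `w`) are illustrative assumptions, not part of the IntegratedGradients library:

```python
import numpy as np

# Hypothetical toy setup: vocabulary of 10 words, 3-dim embeddings.
rng = np.random.default_rng(0)
vocab = rng.normal(size=(10, 3))   # embedding table
w = rng.normal(size=3)             # weights of a toy linear "model"

def f(E):
    # Toy model score: sum of projected embedding rows.
    return float((E @ w).sum())

def grad_f(E):
    # Gradient of f w.r.t. each embedding row (constant because f is linear).
    return np.tile(w, (E.shape[0], 1))

def integrated_gradients(E, baseline=None, steps=50):
    # Riemann approximation of the IG path integral from baseline to E.
    if baseline is None:
        baseline = np.zeros_like(E)   # all-zero embedding as baseline
    diff = E - baseline
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.mean([grad_f(baseline + a * diff) for a in alphas], axis=0)
    return diff * grads               # shape: (seq_len, embed_dim)

word_ids = np.array([1, 4, 7, 2])
E = vocab[word_ids]                   # embedding lookup done outside IG
attr = integrated_gradients(E)
word_scores = attr.sum(axis=1)        # one attribution per word, as in Sec. 6.3

# Completeness axiom: attributions sum to f(E) - f(baseline).
assert np.isclose(word_scores.sum(), f(E) - f(np.zeros_like(E)))
```

In practice this means splitting the Keras model at the Embedding layer: build a sub-model whose input is the embedding tensor, run IG on that sub-model, and collapse the per-dimension attributions with a sum over the last axis.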