Hi, could you please advise where I'm going wrong? I don't have much experience and I'm trying to figure out how this works.
I tried to use a model built with your TFX_Pipeline_for_Bert_Preprocessing.ipynb, but when I try to serve it via TF Serving I receive `"error": "Could not parse example input, value: 'You are very good person'\n\t [[{{node ParseExample/ParseExampleV2}}]]"`.
Here is what I did:

1. Change the dataset version from "0.1.0" to "1.0.0", since only 1.0.0 is available.
2. Install dependencies and build the model locally.
3. Run TF Serving via Docker and load the already-built model.
4. Make the request: `curl -d '{"instances": ["You are very good person"]}' -X POST --output - http://localhost:8501/v1/models/my_model:predict`
5. Receive `{ "error": "Could not parse example input, value: 'You are very good person'\n\t [[{{node ParseExample/ParseExampleV2}}]]" }`
So I assume the model was exported with a serialized tf.Example tensor as its input. Also, at the end of your notebook there is a test that calls the model's "serving_default" signature, and there we also feed it a tensor. How can I pass raw text in the request to TF Serving? Should TF Serving convert the string to a tensor?
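To show what I mean: my guess is the model would need an extra serving signature that accepts raw strings and applies the Transform preprocessing inside the graph, instead of going through ParseExample. A minimal sketch of what I imagine, assuming the raw feature is a dense string feature named "text" and using the `transform_features_layer()` from tensorflow_transform (function and signature names below are my own, not from the notebook):

```python
import tensorflow as tf

def _get_serve_raw_text_fn(model, tf_transform_output):
    # Keep the Transform graph tracked by the model so it gets exported too.
    model.tft_layer = tf_transform_output.transform_features_layer()

    @tf.function(input_signature=[
        tf.TensorSpec(shape=[None], dtype=tf.string, name="text")])
    def serve_raw_text_fn(text):
        # Build the raw-feature dict directly instead of parsing tf.Examples,
        # then apply the Transform graph and the model.
        transformed_features = model.tft_layer({"text": text})
        return model(transformed_features)

    return serve_raw_text_fn

# At export time, something like:
# model.save(serving_model_dir, save_format="tf",
#            signatures={"serving_raw": _get_serve_raw_text_fn(
#                model, tf_transform_output)})
```

Would a signature along these lines be the right direction, or is there a built-in way to make TF Serving accept raw text?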
Could you please advise where I'm going wrong? I've spent more than a week trying to solve this.