What I’m trying to do:
On the server I am using LangChain and the OpenAI API. Since a full chat completion takes quite some time (>20 seconds), I have added a BaseCallbackHandler that receives fragments of the answer as the completion progresses.
Now I want to stream these fragments to the client so the end user can watch the answer being built up. This way the perceived response time is about 3 seconds instead of 20.
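For reference, the callback side looks roughly like this (a minimal sketch with my own hypothetical names; in the real code the class subclasses LangChain's BaseCallbackHandler and is passed to the chat model via its callbacks, which is omitted here so the buffering logic stands alone):

```python
# Sketch of the streaming callback (hypothetical names).
# In the real code this subclasses langchain's BaseCallbackHandler;
# on_llm_new_token is the hook LangChain invokes per generated token.
class StreamingFragmentHandler:
    def __init__(self):
        self.fragments = []  # tokens received so far

    def on_llm_new_token(self, token, **kwargs):
        # Called for every new token of the completion.
        self.fragments.append(token)

    def text_so_far(self):
        # The partial answer the client should see while streaming.
        return "".join(self.fragments)
```

The idea is that the handler accumulates tokens on the server while the completion is still running, and the client repeatedly fetches `text_so_far()`.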
What I’ve tried and what’s not working:
I have tried to implement this using call_async as described in:
https://anvil-labs.readthedocs.io/en/latest/guides/modules/non_blocking.html#
However, then I get the following error:
InvalidRequestError: [{}] is not valid under any of the given schemas - 'input'
at /home/anvil/.env/lib/python3.10/site-packages/openai/api_requestor.py:687
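For context, the pattern I am trying to achieve is roughly the following (all names are hypothetical, and the Anvil-specific pieces are replaced by plain functions so the sketch is self-contained; in the real app these would be @anvil.server.callable functions and the client would poll them from a Timer):

```python
import uuid

# Server-side store of in-progress completions (hypothetical names).
# stream_id -> list of fragments produced so far.
_streams = {}

def start_stream():
    # Called once per chat request before the completion starts.
    stream_id = str(uuid.uuid4())
    _streams[stream_id] = []
    return stream_id

def push_fragment(stream_id, token):
    # The callback handler would call this for every token it receives.
    _streams[stream_id].append(token)

def poll_stream(stream_id, already_seen):
    # The client polls with the number of fragments it has already
    # displayed and gets back only the new ones.
    return _streams[stream_id][already_seen:]
```

The open question is whether this polling pattern (or something else) is the intended way to do it in Anvil, and how to run the completion in the background without hitting the error above.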
What is the best way to get the data streamed from the server to the client? Is there any example available?
Thanks for your help.