Hello, I currently have a simple workflow with the OpenAI chat completion module, but it is very slow. Is there a way to stream the data while it's being completed, for a more user-friendly experience on the front end?
Of course, I'd be happy to help with that. Streaming the response as it is generated can indeed provide a more user-friendly experience. To achieve this, you can use HTTP requests to interact with the OpenAI API directly:
1. Set up an HTTP request to the OpenAI API.
2. Initiate the completion by sending a request from the front end.
3. Have your server call the OpenAI API to start the completion and begin streaming partial results.
4. Use server-sent events (SSE), WebSockets, or chunked HTTP responses to pass the partial results back to the front end.
5. Display the partial results to the user in real time.
6. Once the completion finishes, send the final result from the server to the front end.
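To make step 4 concrete: when you call the OpenAI chat completions endpoint with `"stream": true`, the API responds with server-sent events, where each `data:` line carries a JSON chunk containing a small `delta` of the text. Here is a minimal Python sketch (the raw chunks below are illustrative, not real API output) showing how those lines can be reassembled into the full reply:

```python
import json

def parse_sse_chunks(raw_stream: str) -> str:
    """Reassemble the text of a streamed chat completion from raw
    server-sent-event lines (the format returned when the request
    body sets "stream": true)."""
    parts = []
    for line in raw_stream.splitlines():
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        payload = line[len("data: "):]
        if payload == "[DONE]":  # sentinel marking the end of the stream
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)

# Example with two hypothetical chunks followed by the end sentinel:
raw = (
    'data: {"choices":[{"delta":{"content":"Hel"}}]}\n\n'
    'data: {"choices":[{"delta":{"content":"lo!"}}]}\n\n'
    'data: [DONE]\n\n'
)
print(parse_sse_chunks(raw))  # → Hello!
```

On the front end you would append each delta to the displayed message as it arrives, rather than waiting for the whole stream to finish.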
I am new to Make. Do I use the OpenAI module or the HTTP module? Could you give me some more guidance on it? I kind of don't know what I am doing.
Let me provide some guidance on how you can approach using the OpenAI API and streaming data in your workflow.
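If you go the HTTP-module route, the key detail is the request body you send to OpenAI's chat completions endpoint (`POST https://api.openai.com/v1/chat/completions`, with your API key in the `Authorization: Bearer` header). A sketch of building that body in Python follows; the model name and message are placeholders, and `"stream": true` is what asks the API to return partial tokens as they are generated:

```python
import json

# Illustrative request body for the chat completions endpoint.
# The model name and message content here are example values.
body = {
    "model": "gpt-3.5-turbo",
    "stream": True,  # ask the API to stream partial tokens back
    "messages": [
        {"role": "user", "content": "Hello"}
    ],
}
print(json.dumps(body, indent=2))
```

Note that a Make scenario runs server-side, so whether the streamed chunks actually reach your front end incrementally depends on how your own server relays them (e.g. via SSE or WebSockets), not on Make alone.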
Hello, @Martin_T! Welcome to the community!
@Pro_Tanvee, it’s wonderful to see your willingness to offer assistance. We truly appreciate that kind of support here in the Make Community. However, I kindly request that you limit your offers for paid services to the #solution-exchange category.
Our community is specifically designed to promote learning from the solutions shared within it. By keeping these conversations in the designated space, we ensure that they remain accessible to the entire community and that everyone can learn from them.
Thank you for your understanding, and we look forward to seeing more of you in the community!