(OpenAI Assistant) Result cut into 2 messages and Make gets only the 2nd message

I have a problem getting results from the OpenAI Assistants API. I suspect there is a bug on OpenAI's side, but that's a regular situation for them.

The point is that today the Assistant's answer is being cut into 2 messages (I checked it in the playground), but the “Result” in the integration shows only the second, smaller part, so it doesn't include the first part of the message.

How can this be fixed? We need the “Result” to contain both parts of the Assistant's answer.
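For reference, the split is visible at the API level too: the run creates two separate assistant message objects in the thread instead of one. Here is a minimal sketch to confirm that with the OpenAI Python SDK (the thread ID below is just a placeholder, and the API key is assumed to be set in the environment):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder thread ID for illustration only
thread_id = "thread_abc123"

# List every message in the thread, oldest first
messages = client.beta.threads.messages.list(thread_id, order="asc")

# A run whose reply got split shows up here as two assistant messages
for message in messages:
    first_block = message.content[0]
    preview = first_block.text.value[:60] if first_block.type == "text" else first_block.type
    print(message.role, message.run_id, preview)
```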


Welcome to the Make community!

Which is the first part and which is the second part? It is not clear in your screenshots since I can't read Russian.

It will help if you can draw circles around each part and label them 1 & 2.

If you need further assistance, please provide the following:

1. Screenshots of module fields and filters

Please share screenshots of the relevant module fields and filters in question. It would really help other community members to see what you're looking at.

You can upload images here using the Upload icon in the text editor.

2. Scenario screenshot and blueprint

Please export the scenario blueprint file to allow others to view the mappings and settings. At the bottom of the scenario editor, you can click on the three dots to find the Export Blueprint menu item.

(Note: Exporting your scenario will not include private information or keys to your connections)

Uploading it here will look like this:

blueprint.json (12.3 KB)

Following these steps will allow others to assist you here. Thanks!


I can give you an example with English text. One sec.

In the result there is only the cut-off part of the message; the beginning of the ChatGPT generation is missing.

This problem started only today, and it affects all bots connected to the OpenAI Assistants API. Yesterday everything was working correctly.

blueprint-3.json (95.4 KB)

I've made a screen recording with an explanation of the problem: https://youtu.be/4W30hm1f020

I’m not seeing this issue on my assistant’s thread.

What are the Instructions and model settings in the OpenAI platform?

Have you tried to delete the thread and create a fresh thread?


It's gpt-4-turbo and gpt-4-1106, and the issue occurs only when the assistant is writing a long message; with short messages it works correctly.

The thread is automatically deleted with every new message, so that's not the issue.

The main differences between my assistant and yours are that I'm using the gpt-4-turbo-preview model and that my assistant's Instructions are obviously different from yours.

This is unlikely a Make issue. You might want to report this to OpenAI support.


You have assistants that reply with long-form messages and they're working correctly?

The answer from OpenAI is:

Hi there,

Thank you for reaching out and providing detailed information about the issue you're experiencing with the ChatGPT API. It sounds like you're encountering a problem where the API response is truncated, showing only the latter part of the expected output in your integration, despite the full response being visible in the playground. This issue could be related to how the response is being handled or parsed in your integration. Here are a few steps you can take to troubleshoot and potentially resolve the issue:

1. Check Response Format: Ensure that your integration is correctly handling the JSON response from the API. The response might be split into multiple parts if it's a long message, and your integration needs to concatenate these parts correctly.

2. Review API Documentation: Double-check the OpenAI API documentation to ensure that your request is formatted correctly and that you're using the appropriate parameters for your use case.

3. Inspect Payload Size: If the messages are particularly long, you might be hitting payload size limits. Consider adjusting the max_tokens parameter to control the size of the generated text.

4. Error Handling: Make sure your code properly handles any errors returned by the API. An error in one part of the message generation process might result in incomplete output.

5. API Version: Verify that you're using the latest version of the API and the model (gpt-4 or gpt-4-turbo) that supports your requirements. Model updates or changes might affect how responses are generated or formatted.

If after trying these steps you're still experiencing issues, it would be helpful to have a specific example of the request you're making (without any sensitive information) and the response you're receiving. This information can provide more context and help in diagnosing the problem more accurately. We're here to help, so please let us know if the issue persists or if there's any more information you can share.

Best,
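Since the main suggestion here is to concatenate the parts, below is a minimal sketch of what that looks like outside of Make, using the OpenAI Python SDK. It assumes the thread and run IDs are already known (the IDs are placeholders): instead of reading only the latest message, list all assistant messages produced by the run and join their text parts.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder IDs for illustration only
thread_id = "thread_abc123"
run_id = "run_abc123"

# List every message in the thread, oldest first
messages = client.beta.threads.messages.list(thread_id, order="asc")

# Collect the text of every assistant message produced by this run,
# not just the latest one, and join the parts into a single answer
parts = []
for message in messages:
    if message.role == "assistant" and message.run_id == run_id:
        for block in message.content:
            if block.type == "text":
                parts.append(block.text.value)

full_answer = "\n".join(parts)
print(full_answer)
```

In the scenario itself, the equivalent would presumably be aggregating the text of every message returned for the run rather than mapping only a single message into the “Result”.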
