OpenAI “Generate a response” module returns structured JSON instead of plain text when used between Webhook → OpenAI → Webhook Response

:bullseye: What is your goal?

Build a simple Make.com scenario where a webhook receives text input, sends it to the OpenAI “Generate a response” module, and returns only the assistant’s plain text reply via a webhook response (to be used as a WhatsApp-style conversational interface for a prototype demo).

:thinking: What is the problem & what have you tried?

The scenario works end-to-end, but the OpenAI “Generate a response” module returns a structured JSON object instead of plain text. The actual assistant reply appears nested inside the result (e.g. content → output_text → text), and this field is not consistently visible or selectable in the mapping panel.

I tried mapping Result, Raw result, and other available fields into the Webhook Response body, but the webhook still returns either the full JSON or an empty body instead of just the assistant’s text. Expanding Result in the mapping panel sometimes shows nothing at all, so it’s unclear which field should be mapped to return only the plain-text reply.

I’m looking for the correct way to extract and return only the assistant’s text output from this module via a Webhook Response.
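
For reference, this is roughly the structure I believe the module is wrapping, sketched in Python outside of Make. The key names (output, content, output_text, text) are my assumption based on OpenAI’s Responses API rather than the module’s documented output, and the little extraction function is only there to illustrate which nested field I’m trying to return:

```python
# Rough shape of the structured result I see from the module
# (key names assumed from OpenAI's Responses API; the Make module
# may label them differently in the mapping panel).
result = {
    "output": [
        {
            "type": "message",
            "role": "assistant",
            "content": [
                {"type": "output_text", "text": "Hello! How can I help?"}
            ],
        }
    ]
}

# What I want the webhook to return: only the assistant's text.
def extract_text(result: dict) -> str:
    parts = []
    for item in result.get("output", []):
        for block in item.get("content", []):
            if block.get("type") == "output_text":
                parts.append(block.get("text", ""))
    return "\n".join(parts)

print(extract_text(result))  # -> "Hello! How can I help?"
```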

Hey there,

can you share a screenshot of the output of the OpenAI module and a screenshot of what you have mapped in the Webhook Response module?

Hi Stoyan,

Thanks for checking.

I’ve attached:

  1. A screenshot of the OpenAI “Generate a response” module output, showing that it returns a structured JSON object.

  2. A screenshot of the Webhook Response module mapping.

The issue was that I had mapped the full OpenAI response object into the Webhook Response, which caused the webhook to return JSON.

The fix was to map only the assistant’s text field (content → text) in the Webhook Response module and set the header to:

Content-Type: text/plain; charset=utf-8

After that, the webhook returns plain text as expected.
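
In case it helps anyone else, here’s a quick way to check the result from outside Make. This is just a sketch: the webhook URL is a placeholder for your own custom webhook address, and the "message" key is an assumption that depends on how your webhook input is set up:

```python
# Quick check that the webhook now returns plain text.
import requests

WEBHOOK_URL = "https://hook.eu1.make.com/your-webhook-id"  # placeholder

# "message" is a placeholder key; use whatever your webhook expects.
resp = requests.post(WEBHOOK_URL, json={"message": "Hello"})

print(resp.headers.get("Content-Type"))  # expect: text/plain; charset=utf-8
print(resp.text)                         # expect: only the assistant's reply
```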

Thanks!