Hi everyone
I’m working on an automation that generates blog content with OpenAI and publishes it to WordPress. It’s working great for the content and title… but I’m stuck when trying to generate a DALL·E image based on the article content.
My scenario (in order):
1. Google Sheets — watch new row
2. OpenAI — generate Content Part 1
3. OpenAI — generate Content Part 2
4. OpenAI — generate Content Part 3
5. OpenAI — generate SEO title
6. OpenAI — generate visual description (the prompt for DALL·E)
7. DALL·E — generate image
8. WordPress — upload image
9. WordPress — create post with image + title
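For reference, the data flow the scenario is trying to achieve can be sketched as plain request payloads (a minimal sketch; the field names follow the public OpenAI Chat Completions and Images APIs, but the `build_*` helper names and the model choices are my own illustration, not part of the scenario):

```python
def build_chat_request(article_text):
    """Request body for the GPT step that writes the DALL·E prompt.
    The model name is an assumption; any chat-capable model works."""
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "system",
             "content": "Write a short visual description for an illustration."},
            {"role": "user", "content": article_text},
        ],
    }

def build_image_request(visual_prompt):
    """Request body for the DALL·E step. The prompt must come from the
    PREVIOUS module's output, i.e. choices[0].message.content of the
    chat response — which is exactly the mapping that is failing."""
    return {
        "model": "dall-e-3",
        "prompt": visual_prompt,
        "size": "1024x1024",
    }
```

The point of the sketch is the ordering: the image request can only be built after the chat response exists, so the GPT module has to run before DALL·E in the scenario.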
The issue:
- My GPT module that generates the DALL·E prompt sits AFTER the DALL·E module in the scenario.
- As a result, I can't map its output (`choices[0].message.content`) into the Prompt field of the DALL·E module.
- If I leave GPT after DALL·E, its output doesn't exist yet when the image is generated, so I can't use it.
- If I move it before, Make still doesn't show it in the list of items to map.
What I’ve tried:
- Recreating the GPT module before DALL·E
- Deleting and reinserting both modules
- Making sure the GPT module runs before image generation
Even when I place it visually before DALL·E, Make still doesn't let me access its output for the image prompt.
I just want to:
- Generate a DALL·E prompt with GPT, based on my blog content
- Use that prompt for the image generation
- Upload the result as the featured image in WordPress
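On the WordPress side, those last steps map onto two REST calls (a minimal sketch assuming the standard WP REST API is enabled; the helper names are mine): upload the image binary to `/wp-json/wp/v2/media`, then create the post with the returned media id as `featured_media`.

```python
def build_media_upload_headers(filename="featured.png"):
    """Headers for POST /wp-json/wp/v2/media with the raw image bytes
    as the request body. WordPress reads the filename from
    Content-Disposition and returns a JSON object containing an "id"."""
    return {
        "Content-Disposition": f'attachment; filename="{filename}"',
        "Content-Type": "image/png",
    }

def build_post_body(title, content, media_id):
    """Body for POST /wp-json/wp/v2/posts. featured_media takes the
    media id returned by the upload call above, which is what makes
    the uploaded image the post's featured image."""
    return {
        "title": title,
        "content": content,
        "status": "publish",
        "featured_media": media_id,
    }
```

So the WordPress "upload image" module has to run before "create post", for the same reason the GPT module has to run before DALL·E: each step consumes the previous step's output.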
Any idea why Make won't let me map the GPT output, even when the module sits logically before DALL·E in the flow? Is there a better way to structure this?
Thanks
blueprint.json (261.2 KB)