This is my project for social media posts.
These are my modules:
Dropbox (Watch Files – for new uploaded images) → Dropbox (Get Files – to turn those images into files for GPT Vision) → GPT Vision (gpt-4-turbo-2024-04-09, prompted to analyse picture details) → ChatGPT (GPT-3.5 Turbo, prompted to gather info from the vision analysis) → Router (for further needs; I'd like to make the posts on other social media as well) → ChatGPT (GPT-4 Turbo, prompted to write a fine post) → Facebook Pages (uploading the post to Facebook).
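For readers following along outside Make, the flow above can be sketched roughly in code. This is only an illustration, not the actual scenario: every function name is a hypothetical stand-in for one of the Make modules, and the message shapes assume OpenAI's chat-completions format.

```python
import base64

# Hypothetical sketch of the scenario's data flow; the real version is
# built from Make modules, not Python functions.

def encode_image(image_bytes: bytes) -> str:
    """Dropbox 'Get Files' output, converted to a base64 data URL for GPT Vision."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:image/png;base64,{b64}"

def build_vision_messages(data_url: str) -> list:
    """Prompt GPT-4 Vision to analyse picture details."""
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Analyse the details of this picture."},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }]

def build_post_messages(vision_summary: str) -> list:
    """Feed the vision analysis into the next chat model to draft the post."""
    return [{
        "role": "user",
        "content": f"Write a social media post based on this image analysis:\n{vision_summary}",
    }]
```

The key point of the sketch is the hand-off: the second chat call takes the vision call's text output as its input, not the original file.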
The problem is that for some reason the post uploaded to Facebook looks like GPT Vision doesn't pull any info from the images. I get the post uploaded, but without any of the concrete details that should appear from the GPT Vision step… Here are my Vision module settings:
My GPT-3.5 analyst module:
I've been working on this by myself for a long time and just can't get the needed result. I tried other OCR options such as Google Vision and an HTTP module with api.ocr.space, and still no result…
Any ideas what I've done wrong?
Hi @Daniel_Phoenix and welcome to the Make Community!
I’m curious, what does the output bundle that precedes the call to OpenAI (or the input bundle for the call) say is sent to ChatGPT?
Where I think the problem could be:
- You are passing a URL and not the data
- The data is sent in text format instead of PNG format
- The file is not shared properly from Dropbox so ChatGPT doesn’t actually get any data, maybe just an empty file
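The first two bullet points above are easiest to see side by side. In OpenAI's chat-completions message format there are two ways to hand an image to GPT-4 Vision: a plain URL (which only works if OpenAI's servers can fetch it) or the file bytes embedded as a base64 data URL. A minimal sketch, assuming the standard `image_url` content part:

```python
import base64

# Two ways GPT-4 Vision can receive an image in the chat message format.
# If the Dropbox link is not publicly reachable, only the base64 route works.

def image_part_from_url(public_url: str) -> dict:
    # Passes only a URL; OpenAI must be able to fetch it anonymously.
    return {"type": "image_url", "image_url": {"url": public_url}}

def image_part_from_bytes(image_bytes: bytes, mime: str = "image/png") -> dict:
    # Embeds the actual file data, so Dropbox sharing settings don't matter.
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}}
```

If the first variant is used with a private Dropbox link, the model effectively receives nothing, which matches the symptom of posts with no image details.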
Without seeing the content of the input and output bundles (the numbers at the top right of each module), it’s hard to say what the problem is.
L
Hi, thanks a lot! Praying for the day I'll know enough to answer posts instead of posting newbie questions, haha.
This is the process:
Seems like the Download Files module is doing fine, judging by the bundle:
This is the Vision bundle (seems like it does the job well):
This is my info extractor, I would say (the GPT-3.5 Turbo module, which gets no info for some reason):
The other modules are working fine, so I guess there's no reason to upload them as well, right?
Thanks a lot for the help!
Kinda weird that the bundle says its role is assistant; I set it up as User…
The assistant provides the answer. The user provides the question.
It looks like the output from the vision call is not making it to the 3.5 call. Shouldn't 3.5 be taking its input from whatever information the vision call detected? If I look at the first screenshots you sent, it looks like the 3.5 call is getting data directly from Dropbox.
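In code terms, the second model call has to reference the first call's output, not the raw Dropbox file. A sketch of what the GPT-3.5 messages should contain (the `vision_result` variable is a hypothetical stand-in for the Vision module's output text in the scenario):

```python
def build_extractor_messages(vision_result: str) -> list:
    # The 'user' message carries the upstream vision analysis as input;
    # the model's own reply is what comes back tagged with role 'assistant'.
    return [
        {"role": "system", "content": "Extract the key facts from the image analysis."},
        {"role": "user", "content": vision_result},  # map the Vision output here
    ]
```

This also explains the role confusion above: you send `user` (and optionally `system`) messages, and `assistant` is the role of the model's response in the output bundle.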
But it might be easier/more helpful to provide the entire output bundle/blueprint here if you can. Just make sure to remove any confidential/personal information from them.
To get the output bundles:
Then you get a window with the content of the bundle. Select and copy the information, then save it to a text file that you upload here.
Or, paste the content between codeblocks by clicking this:

The information will look like this:
{
text: "and it will keep the formatting which will get lost if you just paste directly"
}
To share your blueprint so someone can test it, export the scenario blueprint by clicking on the three dots and upload the resulting JSON here.
L
Got it, here is the blueprint:
blueprint.json (97.7 KB)
I know the prompts are not that good; I'm still working on my prompting skills.
Please be gentle with me, haha.
Thanks a lot for the help!
How can I leave a cheer or a suggestion on your profile?
So GPT-3.5 should be taking its input from the results that GPT-4 Vision gave. I don't have an image of what you are getting as input from Dropbox, so I can't test it adequately on my end, but that's the big problem I see.
Since you are saying that vision is doing its work correctly, the fix is fairly simple.
Your GPT-3.5 module configuration should look like this:
Fix that, and you should have no problem.
L
P.S. Consider testing with gpt-4o-mini instead of gpt-3.5. It costs less and usually gives better results. Same for gpt-4o vs gpt-4-turbo.
Seems like it works now!
Thanks a lot, man, appreciated!