I’m reaching out to you today to see if anyone has found a solution to a problem I’m facing and haven’t been able to resolve…
I want to use ChatGPT to analyze the content of a file and extract specific pieces of information. This works very well when I manually upload the file via the app or web interface.
However, now I’m trying to do the same thing using Make.com, and I just can’t get the same result.
I’ve tried everything (at least I think I have), whether it’s uploading the file directly and asking ChatGPT to analyze the content or using an Assistant and a Vector Store.
Direct upload is very limited in file size, and we quickly hit token limits. Using the Vector Store creates a memory for ChatGPT that causes it to mix up the contents of previously analyzed files, even after deleting them from the Vector Store. I’ve tried being as specific as possible in my prompt to make it ignore other files, but I quickly end up with complete hallucinations.
Has anyone found a solution to get ChatGPT to analyze a specific file through the API or Make?
Hi! What kind of file is it exactly? Just text? Is it always one file at a time?
I can have a try if you give me an example file and tell me what you want to extract.
If it worked with the ChatGPT uploader, then it should also work with the API.
Good morning,
Thanks for your message. In the meantime I have been able to make it work. I had been confused because uploading a file manually in the chat interface lets ChatGPT parse the PDF directly and prompt on it.
When you do the same via Make or the API, ChatGPT is not able to parse or OCR the PDF; you have to use an additional service to OCR the file first.
This is really a shame; I would prefer not to use another service, particularly knowing that ChatGPT is able to handle that…
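To make the workaround above concrete: since the Chat Completions API only accepts text (or images), one common approach is to extract the PDF text yourself first and send that text in the prompt. This is a minimal sketch, assuming a Node environment; the function name is made up for illustration, and the commented extraction step assumes the third-party `pdf-parse` npm package.

```javascript
// Two-step flow: extract the PDF text first, then send plain text to the
// Chat Completions API. buildPdfAnalysisBody is a hypothetical helper that
// wraps already-extracted text into a request body.
function buildPdfAnalysisBody(pdfText, instructions) {
  return {
    model: 'gpt-4o',
    messages: [
      {
        role: 'user',
        // Put the extraction instructions first, then the document text.
        content: `${instructions}\n\n--- Document text ---\n${pdfText}`
      }
    ],
    response_format: { type: 'json_object' }
  };
}

// Extraction step (only if the "pdf-parse" package is installed):
// const fs = require('fs');
// const pdf = require('pdf-parse');
// const data = await pdf(fs.readFileSync('document.pdf'));
// const body = buildPdfAnalysisBody(data.text, 'Extract the invoice number and total as JSON.');
```

The same idea works from Make: run the file through an OCR/text-extraction service first, then map the resulting text into the prompt of the ChatGPT module or an HTTP request.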
I don’t know about the Make ChatGPT module, but it will work when you are using the API.
I wrote a Supabase function that sends an image to ChatGPT and has it generate alt descriptions.
Here is the code snippet (JavaScript):
const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${openaiApiKey}`
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: `Please analyze this image and provide a JSON response with two fields: ` +
              `'title' (a concise, descriptive title) and 'alt_text' (a brief, clear description ` +
              `in 10-15 words, focused on key visual elements for accessibility). ` +
              `Provide the response in ${languageName}. ` +
              `Format the response as valid JSON.`
          },
          {
            type: "image_url",
            image_url: {
              url: `data:image/jpeg;base64,${base64Image}`
            }
          }
        ]
      }
    ],
    response_format: { type: "json_object" }
  })
});
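One detail worth spelling out: with `response_format: { type: "json_object" }`, the model's answer comes back as a JSON *string* inside `choices[0].message.content`, so you still need to parse it. A small sketch, with a mocked response object standing in for the real API reply:

```javascript
// The model's JSON answer is a string in choices[0].message.content;
// parse it to get the actual { title, alt_text } object.
function parseAltTextReply(apiResponse) {
  const raw = apiResponse.choices[0].message.content;
  return JSON.parse(raw);
}

// Mocked API response for illustration (real calls: await response.json()):
const mock = {
  choices: [
    {
      message: {
        content: '{"title":"Red bicycle","alt_text":"A red bicycle leaning against a brick wall"}'
      }
    }
  ]
};
const result = parseAltTextReply(mock);
```

In the Supabase function above, the equivalent would be `parseAltTextReply(await response.json())`.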
For the headers, add:
Authorization: Bearer [your OpenAI key]
(Just paste your actual API key after "Bearer ".)
For the Body part, choose "Raw" and "JSON" format, then paste this:
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Please analyze this image and provide a JSON response with two fields: 'title' (a concise, descriptive title) and 'alt_text' (a brief, clear description in 10-15 words, focused on key visual elements for accessibility). Provide the response in {{1}}. Format the response as valid JSON."
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "data:image/jpeg;base64,{{2}}"
          }
        }
      ]
    }
  ],
  "response_format": { "type": "json_object" }
}
Where it says {{1}}, map that to whatever variable has your language name. And for {{2}}, map that to your base64 image variable.
That’s it! This will send your image to GPT-4o and ask it to analyze the image and give you back a nice JSON with a title and alt text.
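If you need to produce the base64 value for {{2}} yourself (for example when testing outside Make), Node's `Buffer` does the encoding. A minimal sketch; the helper name is made up for illustration:

```javascript
// Turn raw image bytes into the data URL the API expects.
// The MIME type defaults to JPEG; pass 'image/png' etc. for other formats.
function toImageDataUrl(buffer, mime = 'image/jpeg') {
  return `data:${mime};base64,${buffer.toString('base64')}`;
}

// Usage with a file on disk:
// const fs = require('fs');
// const url = toImageDataUrl(fs.readFileSync('photo.jpg'));
```

Inside Make itself you would instead map the file data through one of its built-in encoding functions rather than writing code.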