What is your goal?
My goal is to run one single LLM analysis (OpenAI) over multiple documents, where:
- files are collected from different Google Drive folders,
- files have different types (PDF, DOC/DOCX, Excel),
- files are discovered at the very beginning of the scenario, using multiple router branches (based on folder / type),
- after collecting them, I want to move into a master branch that:
  - uploads all files to OpenAI,
  - and then triggers exactly ONE analysis request using all uploaded files together.
High-level logic I’m aiming for:
Search files (multiple folders)
→ Router (PDF/DOC | Excel | others)
→ Collect files
→ Upload each file to OpenAI
→ Close iteration
→ Aggregate ALL uploaded file IDs
→ Run ONE LLM analysis over all files
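In plain code, the control flow I want looks roughly like this. `upload_file` and `run_analysis` are hypothetical stand-ins for the OpenAI "Upload a File" and "Generate a Response" modules, just to show where the iteration ends and the single call begins:

```python
# Sketch of the intended control flow: N uploads, exactly ONE analysis call.
# upload_file() and run_analysis() are illustrative stubs, not Make modules.

def upload_file(path: str) -> str:
    """Pretend upload; a real call would return an OpenAI id like 'file-abc123'."""
    return f"file-{abs(hash(path)) % 10**8:08d}"

def run_analysis(file_ids: list[str]) -> dict:
    """Pretend analysis; runs ONCE, over all uploaded files together."""
    return {"files_analyzed": len(file_ids), "file_ids": file_ids}

paths = ["risk_register.xlsx", "report_q1.pdf", "report_q2.docx"]

# Iterate: one upload per file (this part runs N times)...
file_ids = [upload_file(p) for p in paths]

# ...then a single analysis over the aggregated ids (this part runs ONCE).
result = run_analysis(file_ids)
print(result["files_analyzed"])  # 3
```

The key point is that the loop ends before the analysis step: the analysis function receives the whole aggregated list, not one id per iteration.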
The LLM should:
- read all documents together,
- identify the master risk register (Excel),
- analyze the PDF/DOC reports against it,
- and return a structured JSON result.
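For concreteness, the kind of structured result I have in mind would look something like this (field names and values are purely illustrative, not a schema OpenAI returns):

```json
{
  "master_risk_register": "risk_register.xlsx",
  "reports_analyzed": ["report_q1.pdf", "report_q2.docx"],
  "findings": [
    {
      "risk_id": "R-01",
      "report": "report_q1.pdf",
      "status": "mitigation behind schedule"
    }
  ]
}
```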
What is the problem?
I’m stuck on how to correctly merge multiple iterator branches into a single LLM call.
Key issues I’m facing:
1. Iteration vs. single LLM call
- Uploading files works only inside an iterator (one bundle per file).
- But placing the OpenAI “Generate a Response” / Assistant module there causes the LLM to run once per file, which I must avoid.
- I need the upload to iterate, but the analysis to run once, after all uploads are finished.
2. Multiple branches at the beginning
- Files are collected from different folders and branches first (PDF/DOC vs Excel).
- Only later do I want to enter a master branch where everything is merged.
- I’m unsure whether this branching strategy is correct or whether I should restructure the flow earlier.
What have you tried so far?
3. Aggregation problems
I tried:
- Array Aggregator (fed from OpenAI “Upload a File”),
- Set Variable with map(),
- Data Store (one record per uploaded file + Search Records).
In all cases:
- aggregation technically works (I see multiple bundles),
- but the resulting structure is nested too deeply,
- mapping it into OpenAI file_ids or Code Interpreter resources fails,
- and results are often null, empty, or only the first file is seen by the model.
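The shape problem can be reproduced outside Make: the aggregator emits one collection per bundle, while OpenAI wants a flat list of plain strings. A minimal sketch of the flattening I am trying to express (the bundle structure below is an approximation, not Make's exact output schema):

```python
# Approximation of what an Array Aggregator emits: one collection per bundle,
# with the file id nested inside (structure is illustrative only).
bundles = [
    {"id": "file-aaa111", "name": "risk_register.xlsx"},
    {"id": "file-bbb222", "name": "report_q1.pdf"},
    {"id": "file-ccc333", "name": "report_q2.docx"},
]

# What the API needs: a flat list of plain id strings -- the equivalent of
# Make's map(array; "id"), with no wrapping collections left over.
file_ids = [b["id"] for b in bundles]
print(file_ids)  # ['file-aaa111', 'file-bbb222', 'file-ccc333']
```

When the aggregation delivers anything other than this flat list of strings, the mapping into the OpenAI module breaks.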
4. OpenAI constraints
OpenAI expects:
- file IDs as clean strings (file-xxxxx),
- a maximum length of 64 characters,
- no extra characters.
Aggregated results from Make often contain:
- collections instead of strings,
- keys mixed with metadata,
- or arrays that Make refuses to flatten cleanly.
Final effect
Even when the aggregation reports IMTAGGLENGTH = 3:
- OpenAI sees only one file,
- or the call fails with “invalid file_id” / “expected string, got null”.
