What is your goal?
I retrieve documents from a folder and process them one by one with an Iterator. For each document I get its content, set variables (text and length), and then use a Router to branch on document length (to stay under OpenAI token limits). For now, I'm focusing on the short path, where the document is summarized using OpenAI.
At this point everything works correctly and I get one summary per document (per bundle).
The problem is at the end. I want to aggregate all OpenAI summaries into one combined output and then continue the flow with that single result.
However, the Text Aggregator runs once per bundle instead of combining everything, so I never get one final merged output.
I tried placing the Text Aggregator after OpenAI, changing its source module, removing the Router, and aggregating right after setting variables, but it still behaves as if it were inside the iteration.
What I’m trying to understand:
Where exactly should the Text Aggregator be placed in this kind of flow?
What should be set as the source module (Iterator or OpenAI)?
How can I make sure aggregation happens after all bundles are processed and not per bundle?
Does the Router break aggregation, and do I need to merge paths before aggregating?
The ideal flow I want is:
Get files → Iterator → Get content → Set variables → Router (short/long) → OpenAI (summarize) → aggregate all summaries → continue with one combined result.
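To make the intended semantics concrete, here is a minimal sketch of that flow in plain Python (not Make.com syntax). The `summarize` function and `MAX_LEN` threshold are hypothetical stand-ins for the OpenAI step and the Router condition; the point is only that aggregation happens once, after the loop, not per iteration:

```python
MAX_LEN = 3000  # assumed length threshold used by the Router (short vs. long)

def summarize(text: str) -> str:
    # Placeholder for the OpenAI summarization call on the short path.
    return text[:40]

def process(documents: list[str]) -> str:
    summaries = []
    for doc in documents:             # Iterator: one bundle per document
        text, length = doc, len(doc)  # Set variables
        if length <= MAX_LEN:         # Router: short path only, for now
            summaries.append(summarize(text))
        # (long path would be handled separately)
    # Aggregation runs ONCE, after all bundles are processed — the behavior
    # I want from the Text Aggregator when its source module is the Iterator.
    return "\n\n".join(summaries)
```

This is only a conceptual model of what the scenario should do, not how Make.com implements it internally.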
What is the problem & what have you tried?
As of now, after the AI step I implemented a Text Aggregator (source: Iterator), but it still runs for each individual bundle. As a result, it does not aggregate the results — the number of operations equals the number of bundles.
Second, even if this worked, I also want a third path that runs only once after all documents in the iteration are processed, not once per bundle.
(By the way, the OpenAI warning is just because I force-stopped the scenario.)

