Hi folks, I’m hoping to find some help with a two-scenario process.
These two scenarios are supposed to work together to do the following:
Scenario 1:
1. Gather up to 100 bundles of filtered & unread emails
2. Aggregate them into groups by message_id
3. Create a data store record for each message_id
4. Trigger the second scenario ONCE after all of the bundles have been processed (a rough sketch of the intended grouping is below)
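For clarity, here's roughly what I expect Scenario 1 to do, sketched in Python. The field names and the placeholder calls are just illustrative — they aren't Make's actual modules or bundle structure:

```python
from collections import defaultdict

# Illustrative email bundles as the trigger might hand them off;
# these field names are mine, not Make's actual bundle structure.
emails = [
    {"message_id": "abc123", "cl_title": "Listing A"},
    {"message_id": "abc123", "cl_title": "Listing A"},
    {"message_id": "def456", "cl_title": "Listing B"},
]

# Group the bundles by message_id...
groups = defaultdict(list)
for email in emails:
    groups[email["message_id"]].append(email)

# ...then create ONE data store record per message_id group.
for message_id, group in groups.items():
    record = {"message_id": message_id, "cl_title": group[0]["cl_title"]}
    # create_data_store_record(record)  # placeholder for the data store "add record" step

# Only after every group has been written should the second scenario fire, once.
# run_scenario_2()  # placeholder for the "Run a scenario" step
```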
Scenario 2:
1. Gather up ALL records in the data store with a single trigger
2. Aggregate the records into groups by cl_title
3. Create a group_count variable for each cl_title group (the number of records in that group)
4. Find and update each cl_title’s Airtable record ONCE with the aggregated group_count
5. Delete all of the data store records so the store is empty for the next run (a rough sketch of the intended counting is below)
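And here's the counting I expect Scenario 2 to do, again as an illustrative Python sketch rather than actual Make modules:

```python
from collections import Counter

# Illustrative data store rows, one per message_id from Scenario 1.
records = [
    {"message_id": "abc123", "cl_title": "Listing A"},
    {"message_id": "def456", "cl_title": "Listing B"},
    {"message_id": "ghi789", "cl_title": "Listing A"},
]

# One pass over ALL records: group_count = number of records per cl_title.
group_counts = Counter(r["cl_title"] for r in records)

# Update each cl_title's Airtable record exactly ONCE with its group_count.
for cl_title, group_count in group_counts.items():
    # placeholders for the Airtable "Search records" + "Update a record" modules
    print(cl_title, group_count)

# Finally, delete every data store record so the store is empty for the next run.
# delete_all_data_store_records()  # placeholder
```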
Here's how the pair of scenarios actually behaved, which was NOT what was intended:
1. The “Run a scenario” step in the first scenario triggered 57 times - once for every bundle - rather than once
2. The second scenario triggered 57 times - once for every data store row - rather than once, which created a cascade effect that resulted in wildly inaccurate Airtable updates
I already have a basic understanding of how and why this failed, but I'm having a heck of a time fixing it without just abandoning the Run a scenario step and running the second scenario on a schedule or on demand.
Here are my questions:
- Is there a way to “gate” the first scenario between the data store step and the Run a scenario step, so that the Run a scenario step waits until all of the bundles from the trigger step have been processed?
- Can the data store trigger step in the second scenario be configured to grab all of the records in the data store at one time as a collection of bundles?
Thanks in advance for your help. : )
Shawn


