How do you manage timeouts (40 min) on massive processing scenarios?

I’m curious to know how you handle scenario timeouts: management of the warning, automatic restart of the scenario, etc. :slightly_smiling_face:

Share your best practices.


It depends on whether it’s a one-off data cleanup scenario that runs for a day or two and then gets archived, or a scheduled scenario.

All examples below are about moving data between data storage systems and cleaning the data in the process.

For one-off scenarios

Example: a partner gives you a massive spreadsheet for user intake or a company merger, usually poorly formatted.

Let the scenario run until it errors out, while adding a “notes” cell with a status for each item in the initial sheet. (You often have to go back anyway to correct human-error entries and re-run, so keeping different statuses in a single column is useful for an audit.)

When the 40-minute limit is reached, restart the scenario with an “if column X is empty” filter and start from there.

This method adds a significant number of extra operations, so it should only be used when an audit is required after the automation completes, due to poorly or inconsistently formatted data.
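The “audit column” resume pattern above can be sketched in a few lines. This is a minimal illustration, not Make itself: the rows-as-dicts shape and the “notes” column name are assumptions standing in for the spreadsheet.

```python
# Sketch of the audit-column resume pattern: stamp a status on each row,
# and on restart skip ahead to the first row without one.
# The "notes" column name and the row format are hypothetical.

def next_unprocessed(rows, status_col="notes"):
    """Index of the first row whose status cell is still empty,
    i.e. where the previous run hit the 40-minute timeout."""
    for i, row in enumerate(rows):
        if not row.get(status_col):
            return i
    return None  # every row already has a status: nothing left to do

def process_batch(rows, status_col="notes"):
    """Process rows from the resume point, stamping each one."""
    start = next_unprocessed(rows, status_col)
    if start is None:
        return 0
    done = 0
    for row in rows[start:]:
        row[status_col] = "ok"  # real runs: "ok" / "fixed manually" / etc.
        done += 1
    return done

rows = [{"name": "a", "notes": "ok"}, {"name": "b", "notes": ""}, {"name": "c"}]
print(next_unprocessed(rows))  # 1: row "b" is the first without a status
print(process_batch(rows))     # 2: two rows stamped on this run
```

The mixed statuses in the single column are what make the later audit cheap: one filter on that column shows exactly which rows needed manual fixes.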

For one-off scenarios, part 2

You can reduce operation cost by skipping the audit column: instead, take the destination database size at the start and end of a run, subtract the two, and restart from that many rows down.
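The size-subtraction resume works out to a simple offset calculation. A sketch, with made-up row counts for illustration:

```python
# Cheaper resume without an audit column: count destination rows before
# and after a run, and skip that many source rows on the next run.

def resume_offset(dest_rows_before_first_run, dest_rows_now):
    """Rows already migrated = how far down the source to skip."""
    return dest_rows_now - dest_rows_before_first_run

source = list(range(100))   # 100 source rows to migrate
migrated_before = 0         # destination was empty at the start
migrated_now = 40           # a timed-out run got 40 rows across

offset = resume_offset(migrated_before, migrated_now)
remaining = source[offset:]        # next run starts 40 rows down
print(offset, len(remaining))      # 40 60
```

The trade-off versus the audit column: far fewer operations, but no per-row trail, so this only fits data clean enough that you won’t need to audit individual rows afterwards.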

For recurring scenarios

Run dummy data through the scenario to find the longest processing time possible, then add a buffer on top of that for unexpected issues.

Run the scenario in “units” of that many rows of data until no data is left. Schedule it to run every 45 minutes, and set a trigger to text you or message you on Discord when a cycle starts with no data, so you can turn off the scenario. Alternatively, use the Make API with the HTTP module to have the scenario shut itself down at that point.
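For the self-shutdown step, the HTTP module would hit the Make API’s scenario-stop endpoint. A hedged sketch follows; the zone (“eu1”), the `/stop` path, and the `Token` auth header format are assumptions based on Make’s v2 API conventions, so verify them against your own region and the current API docs before relying on this.

```python
# Sketch of the "self shutdown" call: a POST to the Make API that stops
# the running scenario. Zone, endpoint path, and auth header are
# ASSUMPTIONS -- check the Make API docs for your account/region.
import urllib.request

def build_stop_request(zone, scenario_id, api_token):
    """Build (but do not send) the POST request that stops a scenario."""
    url = f"https://{zone}.make.com/api/v2/scenarios/{scenario_id}/stop"
    return urllib.request.Request(
        url,
        method="POST",
        headers={"Authorization": f"Token {api_token}"},
    )

req = build_stop_request("eu1", 123456, "YOUR_API_TOKEN")
print(req.full_url)  # https://eu1.make.com/api/v2/scenarios/123456/stop
# urllib.request.urlopen(req)  # only send this from the scenario itself
```

Inside Make you would of course configure this in the HTTP module rather than running Python, but the URL shape and header are the parts you have to get right either way.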