I’m looking for tips on how to speed up scenario execution. This topic could be a repository for all kinds of strategies for reducing execution time.
Of particular interest to me at the moment is reducing the time it takes to search Google Sheets rows and to make HTTP requests to custom-webhook scenarios.
With regard to Google Sheets, I wonder how much a broad column range like A-Z impacts execution time. What about columns that contain data but aren't used by the scenario? Or sheets within the workbook that aren't being used? What about settings like Field Type, Value render option, and Maximum returned rows? Would migrating to a different spreadsheet or database speed things up?
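For intuition on why the range and render settings might matter: under the hood, a Sheets search ultimately maps onto a Google Sheets API read, where the A1 range and `valueRenderOption` are request parameters. A minimal sketch (the spreadsheet ID and ranges here are hypothetical, and how Make constructs its calls internally is an assumption):

```python
# Sketch: how a Sheets search's settings plausibly map onto the
# underlying Google Sheets API spreadsheets.values.get request.
# A narrower A1 range means less data read and returned per call.
from urllib.parse import quote, urlencode

def values_get_url(spreadsheet_id: str, a1_range: str,
                   render_option: str = "UNFORMATTED_VALUE") -> str:
    """Build a spreadsheets.values.get URL for the given range."""
    base = "https://sheets.googleapis.com/v4/spreadsheets"
    query = urlencode({"valueRenderOption": render_option})
    return f"{base}/{spreadsheet_id}/values/{quote(a1_range)}?{query}"

# A broad range pulls every column A-Z; a narrow one only what's used.
broad = values_get_url("SPREADSHEET_ID", "Sheet1!A:Z")
narrow = values_get_url("SPREADSHEET_ID", "Sheet1!A:C")
print(broad)
print(narrow)
```

If the payload size per call is what dominates, trimming the range to only the columns the scenario actually reads is the cheapest experiment to run first.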
With regard to HTTP requests, if data is only being passed via the query string, does the “Request compressed content” setting affect execution speed?
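One way to reason about this: “Request compressed content” typically corresponds to sending an `Accept-Encoding: gzip` header, which only influences how the *response* body comes back; data in the query string rides on the request URL either way. A small sketch (the webhook URL and field names are made up):

```python
# Sketch: the query string is part of the request URL and is
# byte-for-byte identical whether or not compression is requested.
# Accept-Encoding only lets the server gzip its *response* body.
from urllib.parse import urlencode

params = {"email": "user@example.com", "plan": "pro"}  # hypothetical fields
query = urlencode(params)
url = f"https://example.com/webhook?{query}"

headers_plain = {}
headers_gzip = {"Accept-Encoding": "gzip"}  # what the setting likely adds

print(url)  # unchanged by the compression setting
```

So, assuming that mapping is right, the setting could only speed things up when the response body is large, not when the payload travels in the query string.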
With regard to custom webhooks, would sequential processing be faster than parallel, assuming that two execution requests are unlikely to arrive within seconds of each other?
Lastly, how does the “Commit trigger last” scenario setting impact execution time?
These questions all depend mostly on the Google API's response times.
You’ll have to experiment; benchmarking hasn’t been done yet.
But the more important question is: why does it matter how long it takes? Do you have a time-sensitive automation? If so, no-code won’t be the most performant option; raw code will always be faster.
It matters because the webhook is serving as a form handler: after the user submits the form, they have to wait for the webhook to finish executing before it responds with a redirect to the appropriate webpage.
I don’t think Make is suited for near-real-time operations. If you insist on using a no-code solution, you could try to integrate Make with other tools. For example, the Glide platform lets you build apps based on sheets, and Make provides a connector (Glide Integration | Workflow Automation | Make). Please take this only as an example (I have no ties to Glide), but you could use Glide to quickly build a front end that saves data to a sheet in real time, and then use Make to do something in the background with the collected data.
You may wish to separate out the processing to another scenario via a webhook and then send back the response. It’s a sort of parallel processing; if you put the webhook response at the end of a longish scenario, it won’t be a pretty user experience.
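The pattern being described can be sketched outside Make too: the form handler acknowledges immediately (the redirect), and the slow work is handed to a second "scenario" via a fire-and-forget call. Here the second scenario is simulated with a background thread, and the URLs are hypothetical:

```python
# Sketch of the "respond first, process later" pattern:
# return the redirect immediately and let a second scenario
# (simulated here by a daemon thread) do the slow work.
import threading
import time

def slow_processing(payload: dict) -> None:
    """Stand-in for the long scenario (Sheets searches, etc.)."""
    time.sleep(2)  # simulated work

def handle_form_submission(payload: dict) -> dict:
    # Fire-and-forget: don't wait for the processing scenario.
    worker = threading.Thread(target=slow_processing,
                              args=(payload,), daemon=True)
    worker.start()
    # Respond right away so the user isn't stuck waiting.
    return {"status": 302, "location": "https://example.com/thanks"}

start = time.monotonic()
response = handle_form_submission({"email": "user@example.com"})
elapsed = time.monotonic() - start
print(response["status"], f"{elapsed:.3f}s")
```

The handler returns in milliseconds even though the processing takes seconds, which is exactly the user-experience win of splitting the scenarios.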
Another interesting approach, though more involved, TBH, that effectively parallelizes work between the two platforms.
Can you show us your current scenario with Google Sheets?
Excellent suggestion, thank you. I do that with a number of repetitive automations, and I believe the referring scenario doesn’t wait for a webhook response if I don’t specify one within the webhook.
Here is one of many I’ve created:
If those steps below (with the red question marks) are searches, you could do a single search that returns all the rows and create an array instead of running your second Sheets search, and then just use the map function to extract the data for each subscription.
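The idea above, sketched in plain code (the row data and subscription IDs are invented for illustration): pull every row once, then extract each subscription's data in memory instead of issuing a separate Sheets search per subscription.

```python
# Sketch of the "one search + map" idea: a single search returns
# all rows; per-subscription lookups then happen in memory with
# no further API calls.
rows = [  # stand-in for the single "return all rows" search result
    {"id": "sub-1", "email": "a@example.com", "plan": "pro"},
    {"id": "sub-2", "email": "b@example.com", "plan": "free"},
    {"id": "sub-3", "email": "c@example.com", "plan": "pro"},
]

wanted = ["sub-1", "sub-3"]  # subscriptions this run is handling

# Rough equivalent of Make's map() over the returned array.
by_id = {row["id"]: row for row in rows}
emails = [by_id[s]["email"] for s in wanted if s in by_id]
print(emails)  # → ['a@example.com', 'c@example.com']
```

One API round-trip replaces N of them, which is where a large speedup like the one mentioned below plausibly comes from.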
This is similar to a setup I built a few weeks ago; I cut run time by 25x.