High Volume Operations

Hi All,

I have several tasks that do some high-volume operations: when a scenario gets one item, there are multiple related items it needs to go fetch afterwards. The APIs provided by the vendor don't allow me to bulk-fetch things.

Does anyone else run into this: you love the Make interface for building an integration, but the operation count is prohibitive for loading/extracting data for your purposes?

Hi @Bryan_Arndorfer,

I 100% love the Make interface for building integrations, and I know these challenges well.
There are some ways to save operations and still achieve the same result.

a) Sometimes there are dedicated API endpoints to bulk insert/update (e.g. Google Sheets: batchUpdate) (see the first sketch below)
b) Sometimes it makes sense to create your own custom app and paginate through the results within a single operation → you need to know how to build custom apps, which takes time… (see the second sketch below)
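
To illustrate a): with Google Sheets, a single values:batchUpdate request writes a whole block of rows in one call (one operation via the "Make an API Call" module) instead of one "Update a Row" operation per row. Here is a minimal Apps Script sketch; the spreadsheet ID, sheet name, and values are placeholders, and it assumes the script's manifest includes the Sheets OAuth scope:

```javascript
// Sketch: one values:batchUpdate call writes many rows at once,
// instead of one "Update a Row" operation per row.
// In Make itself you would send the same body via the Google Sheets
// "Make an API Call" module.
function bulkWriteRows() {
  const spreadsheetId = "YOUR_SPREADSHEET_ID"; // placeholder
  const body = {
    valueInputOption: "RAW",
    data: [
      {
        // One entry writes an entire block of cells in a single request.
        range: "Sheet1!A2:C4",
        values: [
          ["id-1", "widget", 19.99],
          ["id-2", "gadget", 4.50],
          ["id-3", "gizmo", 7.25],
        ],
      },
    ],
  };
  const url = "https://sheets.googleapis.com/v4/spreadsheets/" +
    spreadsheetId + "/values:batchUpdate";
  UrlFetchApp.fetch(url, {
    method: "post",
    contentType: "application/json",
    headers: { Authorization: "Bearer " + ScriptApp.getOAuthToken() },
    payload: JSON.stringify(body),
  });
}
```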
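
And for b): in a custom app, pagination is declared in the module's communication JSON, so Make walks through all the pages inside a single operation. A rough sketch from memory (double-check the directive names against the Make Apps documentation; the endpoint and field names are invented):

```json
{
  "url": "/items",
  "method": "GET",
  "response": {
    "iterate": "{{body.items}}",
    "output": {
      "id": "{{item.id}}",
      "name": "{{item.name}}"
    }
  },
  "pagination": {
    "qs": {
      "page": "{{pagination.page}}"
    },
    "condition": "{{length(body.items) > 0}}"
  }
}
```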

But sometimes, for integrations with a high operation count, it's just worth it to buy the operations :smiley:

Hope it helps you somehow,
Richard


Extending on what @richard_johannes said.

I have been using :make: Make in combination with Apify to scrape 30+ sites' worth of data (sometimes as often as every 5 minutes throughout the day).

This loads in about 30 million+ "rows of data" per month at a cost of about 50k-100k operations, most of which are spent on scheduled "did the data update?" checks rather than on moving the actual data.

I do this with a combination of:

  • Aggregating scraped rows into CSV.
  • Storing the CSVs in a $1-a-month Amazon S3 bucket (Google Drive would also work, I believe).
  • Array-editing IML functions (e.g. map(), merge(), add()) to transform a whole array in one
    module, rather than editing one line at a time.
  • =IMPORTDATA("pathtobakedcsvfile") in Google Sheets to pull a baked CSV into a sheet
    without spending operations.
  • Google Sheets as a middleman for calculated fields.
  • Only if needed, a small Google Apps Script to trigger "baking" of the IMPORTDATA()
    results into static values in the sheet (see the Apps Script sketch below this list).
  • Heavy use of filters to stop scenario execution if the predicted "change" to the data is less
    than "x" percent, and instead save the scraped data to batch together for a later update.
  • Batching updates with "Make an API Call" whenever possible.
  • Treating Make data stores as "NoSQL" databases rather than as relational databases
    (longer explanation below).
Longer explanation of this last item: if one record is, say, 10,000 rows of data, and the whole record is updated when needed, that costs 1 operation rather than 10,000 separate updates for a "statistic update that could happen hourly". It requires a different mindset when building a scenario.
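
To make that concrete, here is a hypothetical shape for such a record (the key and field names are invented for illustration); one "Update a record" call replaces the whole blob:

```javascript
// Hypothetical shape of one Make data store record used as a "NoSQL" blob.
// Replacing the whole record costs 1 operation, however many rows it carries.
const record = {
  key: "site-stats-hourly",            // invented record key
  value: {
    updatedAt: "2024-06-01T12:00:00Z", // placeholder timestamp
    rows: [
      { id: "sku-001", price: 19.99, stock: 12 },
      { id: "sku-002", price: 4.50, stock: 80 },
      // ...up to ~10,000 entries in practice
    ],
  },
};
```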
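
And on the "baking" bullet above, a minimal Apps Script sketch (the sheet name is a placeholder) that freezes the IMPORTDATA() output into static values so the sheet stops re-fetching the CSV:

```javascript
// Sketch: overwrite the IMPORTDATA() formula and its spilled results
// with their current static values ("baking" them in place).
function bakeImportedCsv() {
  const sheet = SpreadsheetApp.getActiveSpreadsheet()
    .getSheetByName("Import"); // placeholder sheet name
  const range = sheet.getDataRange();
  range.setValues(range.getValues()); // values replace the formula
}
```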

Don't go overboard on saving operations, though. My use case makes it a necessity, but often the point of :make: Make is to save time so you can focus on the next thing, and saving 1,000 operations a month might not be worth it if it takes too much time or makes your scenario too unintuitive to edit.

I could pretty easily optimize mine even further, down to 15k-25k operations per month, but I decided it isn't worth the time cost. (If I start to run out of operations on my plan, I'll do exactly that to squeeze a little more out.)
