I recently published a Make template for competitor Instagram content research, and I’d love to share it here.
This template is built for marketers, content teams, and creators who want to monitor competitor Instagram accounts and automatically turn new Reels into structured research data.
It can:
track competitor Instagram accounts from Google Sheets
detect newly published Reels
save new Reel records into a Google Sheets library
extract Reel transcripts automatically
send transcripts to AI for analysis
generate structured insights for content research
update progress at Reel level, account level, and run level
send Telegram notifications when the run is completed
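To make the flow a bit more concrete, here is a rough Python sketch of what one run does. This is purely illustrative, not template code: the real workflow is built from Make modules (Google Sheets, Apify, an AI step, Telegram), and every function below is just a stand-in for one of those modules.

```python
# Illustrative sketch only; every function is a stand-in for a Make module.

def fetch_accounts() -> list[str]:
    return ["competitor_one", "competitor_two"]            # rows from the accounts sheet

def find_new_reels(account: str) -> list[dict]:
    return [{"account": account, "reel_id": "demo-123"}]   # Reels not yet in the library

def extract_transcript(reel: dict) -> str:
    return "placeholder transcript"                        # transcript-extraction step

def analyze(transcript: str) -> dict:
    return {"topic": "...", "hook": "...", "format": "..."}  # AI analysis step

def run_once() -> None:
    for account in fetch_accounts():                  # track accounts from Google Sheets
        for reel in find_new_reels(account):          # detect newly published Reels
            reel["transcript"] = extract_transcript(reel)
            reel["insights"] = analyze(reel["transcript"])
            print("new library row:", reel)           # save record + Reel-level progress
        print("account done:", account)               # account-level progress
    print("Telegram: run completed")                  # run-level notification

if __name__ == "__main__":
    run_once()
```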
The goal is to help turn competitor monitoring into a repeatable workflow instead of manually checking accounts, copying links, extracting transcripts, and summarizing everything by hand.
One thing I’m still not happy with is the scenario structure.
Because this workflow needs to handle many different outcomes, the scenario became very branch-heavy. I tried simplifying it with if-else and merge, but Make’s builder limitations made that much harder than I expected, so I ended up keeping many separate routes.
I’m now thinking about a few possible directions: keep it as one large scenario, split it into multiple scenarios, or move more logic into status fields / aggregators.
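To make the status-field idea concrete, here is a small Python sketch of the structure I have in mind (illustrative only; the statuses and handler names are made up): each record carries a status, and one generic step looks up what to do next, instead of a separate router branch for every outcome.

```python
# Illustrative sketch, not Make code: drive the flow from a status field
# instead of dedicated router branches. Statuses and handlers are made up.

def transcribe(reel: dict) -> dict:
    return {**reel, "status": "transcribed", "transcript": "..."}

def analyze(reel: dict) -> dict:
    return {**reel, "status": "analyzed", "insights": "..."}

def finalize(reel: dict) -> dict:
    return {**reel, "status": "done"}

HANDLERS = {
    "new": transcribe,
    "transcribed": analyze,
    "analyzed": finalize,
}

def advance(reel: dict) -> dict:
    handler = HANDLERS.get(reel["status"])
    return handler(reel) if handler else reel   # unknown status: leave the record alone

if __name__ == "__main__":
    reel = {"reel_id": "demo-123", "status": "new"}
    while reel["status"] != "done":
        reel = advance(reel)
    print(reel)
```

The appeal is that adding a new outcome means adding one more status and handler, not another branch in the builder.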
If you’ve built complex Make scenarios before, I’d really appreciate your advice. How would you simplify something like this?
Please comment with your suggestion, preferred structure, or the approach you’d choose. I’d love to learn from how others design branch-heavy scenarios in Make.
This is actually a solid use case. Turning competitor research into a system instead of a manual task is where a lot of teams fall behind.
What I like here is the end-to-end flow (tracking → extraction → analysis → storage), not just scraping content. That’s what makes it usable long-term.
The only thing I’d be curious about is how you’re handling transcript accuracy and rate limits on Instagram; that’s usually where setups like this get fragile.
Overall, though, this is the kind of workflow that compounds over time.
You’re absolutely right — transcript accuracy and rate limits are probably the two places where workflows like this either hold up or start falling apart.
I tested a few different Actors on Apify before landing on the current setup. So far, the two Actors I’m using in this workflow have been performing well on both fronts: transcript quality and overall stability.
Cost was also a big factor. The account-scanning Actor can generate transcripts too, but it was a lot more expensive for the way I wanted to run this workflow. So I ended up splitting the job: one Actor to scan accounts and find new Reels, and another one just for transcript extraction. That gave me a much better balance between cost and reliability.
Still early, but so far that setup feels a lot more practical.
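For anyone curious about the split, here's roughly what it looks like if you call the two Actors directly with the Apify Python client instead of through Make's Apify modules. The Actor IDs and input fields below are placeholders, not the ones I'm actually using, so treat this as a sketch of the pattern rather than a drop-in snippet.

```python
from apify_client import ApifyClient

client = ApifyClient("<APIFY_TOKEN>")

def scan_account(handle: str) -> list[dict]:
    """Cheaper Actor: list recent Reels for one account, no transcripts."""
    run = client.actor("someuser/instagram-reels-scanner").call(
        run_input={"username": handle, "resultsLimit": 20}    # placeholder input schema
    )
    return client.dataset(run["defaultDatasetId"]).list_items().items

def extract_transcript(reel_url: str) -> list[dict]:
    """Second Actor: transcript extraction only, run once per new Reel."""
    run = client.actor("someuser/reel-transcriber").call(
        run_input={"url": reel_url}                           # placeholder input schema
    )
    return client.dataset(run["defaultDatasetId"]).list_items().items
```

In the actual scenario both of these are just Apify modules, but the cost logic is the same: the scan step runs once per account, and the transcript step only runs for Reels that are actually new.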