Scraping multiple URLs using Apify (Web Crawler) in Make.com

I’ve been trying to scrape multiple URLs using Apify’s Web Crawler and process the data through Make.com to generate a Google Doc. However, instead of compiling all the scraped content into a single document, Make.com is creating a separate Google Doc for each URL.

I set up the automation to take the extracted data and pass it to Google Docs, but I can’t figure out how to merge all the content into one document instead of multiple. I assume I need some kind of aggregator or text combiner, but I’m not sure what the best approach is within Make.com.

Has anyone dealt with this before? How can I modify my setup so all the scraped content is stored in one Google Doc rather than multiple files? Any guidance would be appreciated!

Workflow

  • Google Sheets (Watch New Rows) → Triggers the workflow when a new keyword is added.
  • Apify (Run an Actor – Google SERP Scraper) → Runs the Google SERP Scraper on the keyword to extract search results.
  • Apify (Get Dataset Items – Google SERP Scraper) → Retrieves the scraped Google search results.
  • Iterator → Processes each search result individually, which I believe might be causing the issue of multiple Google Docs being created.
  • Apify (Run an Actor – Website Content Crawler) → Uses a Website Content Crawler to scrape full website content from the URLs obtained in the Google SERP Scraper.
  • Apify (Get Dataset Items – Website Content Crawler) → Retrieves the extracted website content.
  • Array Aggregator → This step is supposed to combine all the extracted website content before sending it to Google Docs, but I’m not sure if it’s configured correctly (the sketch after this list shows the single-document flow I’m aiming for).
  • Google Docs (Create a Document) → Generates a Google Doc, but instead of merging everything, it’s creating multiple separate documents.
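
For context, here’s roughly what this pipeline looks like as plain code: a minimal Python sketch using Apify’s `apify-client` library. The actor IDs (`apify/google-search-scraper`, `apify/website-content-crawler`) and the input/output field names (`queries`, `organicResults`, `startUrls`, `text`) are assumptions based on those actors’ docs, so check them against your own setup. The point it illustrates is the fix I’m after: collect every page’s text first, then create one document.

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

# 1. Run the Google SERP scraper for one keyword.
#    call() blocks until the run finishes, so the dataset is complete.
serp_run = client.actor("apify/google-search-scraper").call(
    run_input={"queries": "my keyword"},
)

# 2. Collect the result URLs from the SERP run's dataset.
urls = []
for item in client.dataset(serp_run["defaultDatasetId"]).iterate_items():
    for result in item.get("organicResults", []):
        urls.append(result["url"])

# 3. Crawl ALL of the URLs in one Website Content Crawler run,
#    rather than starting a separate run per URL.
crawl_run = client.actor("apify/website-content-crawler").call(
    run_input={"startUrls": [{"url": u} for u in urls]},
)

# 4. Aggregate every page's text into one string BEFORE creating any document.
pages = client.dataset(crawl_run["defaultDatasetId"]).iterate_items()
combined = "\n\n---\n\n".join(p.get("text", "") for p in pages)

# In Make.com, the Google Docs "Create a Document" module would receive
# `combined` once; here a local file stands in for the Google Doc.
with open("combined.txt", "w", encoding="utf-8") as f:
    f.write(combined)
```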

The issue I noticed is that the Apify (Run an Actor – Website Content Crawler) runs multiple times, creating separate files instead of gathering all the data into one. I need a way to ensure that Make.com waits for the Website Content Crawler to finish running before proceeding to the next step.

How can I configure Make.com to wait until all the website content is fully scraped before sending it to the Array Aggregator and Google Docs?
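
For what it’s worth, outside of Make.com this waiting behaviour is explicit in Apify’s Python client. A minimal sketch of the two blocking options, assuming the same Website Content Crawler actor (the URL is just a placeholder):

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")
crawler_input = {"startUrls": [{"url": "https://example.com"}]}

# Option 1: call() starts the actor and blocks until the run finishes,
# so the dataset is guaranteed to be complete when it returns.
run = client.actor("apify/website-content-crawler").call(run_input=crawler_input)

# Option 2: start() returns immediately; wait_for_finish() then polls
# until the run reaches a terminal state (or the timeout elapses).
started = client.actor("apify/website-content-crawler").start(run_input=crawler_input)
finished = client.run(started["id"]).wait_for_finish(wait_secs=300)
print(finished["status"])  # e.g. "SUCCEEDED"
```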

Hi. I think I’ve run into the same issue before. Hope this helps.

  1. Your Array Aggregator’s source module should be set to the Iterator. In my experience that makes the downstream modules run only once: the aggregator collects all of the Iterator’s output into a single bundle, which should fix the multiple-documents problem.

  2. If the modules after the Iterator fire before it has worked through all of its bundles, you can introduce an index filter.
    Click the connection between the modules and set up a condition with these variables from the Iterator: [Total number of bundles] Equal to [Bundle order position]. Only the last bundle passes the filter, so the route continues once everything is done (the sketch below walks through the same logic).
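
If it helps, here is the same last-bundle logic as a tiny Python sketch (the bundle contents are made up, standing in for the Iterator’s output):

```python
# Made-up bundles standing in for the Iterator's output in Make.com.
bundles = ["page 1 text", "page 2 text", "page 3 text"]
total = len(bundles)  # Iterator's "Total number of bundles"

collected = []
for position, bundle in enumerate(bundles, start=1):  # "Bundle order position"
    collected.append(bundle)
    # The filter condition: only the last bundle continues down the route,
    # so the document is created exactly once, after everything is collected.
    if position == total:
        print("\n\n".join(collected))
```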

Hope it works :crossed_fingers:
