I can't scrape all the pages on the site

:bullseye: What is your goal?

Retrieve data from all pages of the website using the HTTP module, then scrape the email addresses.

:thinking: What is the problem & what have you tried?

The problem is that the HTTP module only retrieves data from the first page.

:camera_with_flash: Screenshots (scenario flow, module settings, errors)

Hi Dominique,

The HTTP module is working as expected. It only fetches one URL per request, so it will always return just the first page unless you explicitly tell it to load the others.

To scrape all pages, you need to first collect the page URLs, then loop over them. A common approach is to fetch the first page, extract all internal links (or pagination links) using a text parser or regex, and then pass those URLs into an iterator. Each URL can then be sent through the HTTP module to retrieve and scrape its content.
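As a rough sketch of that flow (the URL pattern and link format here are hypothetical; in Make the fetch would be the HTTP module, the extraction a Text Parser, and the loop an Iterator):

```python
import re

# Hypothetical HTML of the first page (in Make, this would be the
# HTTP module's "Data" output for the first request)
first_page_html = (
    '<a href="/products?page=1">1</a>'
    '<a href="/products?page=2">2</a>'
    '<a href="/products?page=3">3</a>'
)

# Extract the pagination links with a regex, as a Text Parser module would
page_paths = re.findall(r'href="(/products\?page=\d+)"', first_page_html)

# Turn them into absolute URLs; each one would then be passed through
# an Iterator and fetched by the HTTP module one by one
page_urls = ["https://example.com" + path for path in page_paths]
```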

If the site uses pagination, you can also generate the page URLs yourself (for example by increasing a page= parameter) and iterate over those values.
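A minimal sketch of that approach, assuming a hypothetical `page=` parameter and a known page count:

```python
# Generate the page URLs directly by incrementing the page= parameter
# (the base URL and page count are assumptions for illustration)
base_url = "https://example.com/listing?page={}"
page_urls = [base_url.format(n) for n in range(1, 6)]

# In Make, the equivalent is a Repeater module whose counter `i` is
# mapped into the HTTP module's URL field
```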

In short, the missing piece isn’t the HTTP module itself, but the loop/iterator that tells Make to request every page one by one.

Hope this helps.

Regards,
Tony

Welcome to the Make community!

To do this, you can try using the DumplingAI “Crawl Website” module:

Crawls a website (1 Credit per page)

This returns an array of pages together with each page’s content!

e.g.:

For more information about the “Crawl Website” module and the DumplingAI app, see the official website and the Help Centre documentation.

Filter — Check if any page’s content contains an email

Extracting a single email address (from ALL scraped pages) directly into a field

Example Output

View & Install Example Scenario
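The filter and extraction steps above boil down to a single pattern match over the combined page contents. A rough sketch (the sample texts and the regex are illustrative, not actual DumplingAI output; the same regex could be applied in a Text Parser “Match pattern” module):

```python
import re

# Hypothetical page contents as returned by a crawl (one string per page)
pages = [
    "Home page with no contact details",
    "Contact us at hello@example.com for a quote",
]

# Filter: does any page contain an email? Then extract the first match.
match = re.search(r"[\w.+-]+@[\w-]+\.[\w.-]+", "\n".join(pages))
email = match.group(0) if match else None
```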

— @samliew

Thank you so much for the answer!