Status code 202 - fix?

My HTTP module grabs a page from a website but always ends up with status code 202 and no HTML, and I receive no error message.

Is there a fix to this?

Hi Jan, and welcome to the Community.

A 202 status (Accepted) means that the request was accepted by the website, but processing wasn’t completed, which would explain why you’re not seeing any HTML. This is behaviour of the website itself, not something that Make controls.
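
If you want to see exactly what the site is sending back before grabbing screenshots, a quick check outside Make is to request the page directly and log the status and body. A minimal Node/TypeScript sketch (the URL is just a placeholder, swap in your own):

```ts
// Quick local diagnostic (Node 18+ has a built-in fetch).
// Replace the placeholder URL with the page your HTTP module is requesting.
const url = "https://example.com/some-page";

async function inspect(): Promise<void> {
  const res = await fetch(url, {
    headers: { "User-Agent": "Mozilla/5.0" }, // some sites respond differently without a browser-like UA
  });
  const body = await res.text();

  console.log("Status:", res.status);       // 202 = "Accepted": request taken, processing not finished
  console.log("Body length:", body.length); // 0 or near 0 means the site returned no HTML at all
}

inspect().catch(console.error);
```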

Can you post screenshots of your HTTP module?


The website is https://www.redfin.com/CA/San-Francisco/727-33rd-Ave-94121/home/1959603A

random commerce website

Welcome to the Make community!

So you basically need to “visit” the site yourself to get the content. This is called Web Scraping.

Incomplete Scraping, No Errors

  1. Are you getting NO output from the HTTP “Make a request” module? This is because the website has employed anti-scraping measures, detected that the visit was not made by a human, and silently blocked the request by returning no content.

  2. Are you getting NO output from the Text Parser “Match pattern/elements” module? This is because there is NO text content in the HTML! The entire page content you are scraping lives inside script tags, which are dynamically generated and placed onto the page by JavaScript when it loads and runs in the user’s web browser on the client side.

    Make is a server-side runtime environment, so the HTTP modules give you just those script tags, and they are ignored by the Text Parser “HTML to Text” module because a script tag is NOT an HTML layout element. The HTTP “Make a request” module does NOT run any of that JavaScript, so there is no content on the page other than a default message telling you to enable JavaScript (you can verify this with the sketch just after this list).
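
Here is a rough Node/TypeScript sketch of that check: fetch the raw server-side HTML and compare how much of it is visible text versus data buried in script tags. It assumes the cheerio package is installed; the URL is the one from the original post, and the exact numbers will depend on how the site responds:

```ts
import * as cheerio from "cheerio";

// Fetch the raw HTML the way a server-side client sees it (no JavaScript runs),
// then compare visible body text against the content of <script> tags.
async function main(): Promise<void> {
  const res = await fetch(
    "https://www.redfin.com/CA/San-Francisco/727-33rd-Ave-94121/home/1959603A",
    { headers: { "User-Agent": "Mozilla/5.0" } }
  );
  const html = await res.text();
  const $ = cheerio.load(html);

  const visibleText = $("body").text().replace(/\s+/g, " ").trim();
  const scriptChars = $("script")
    .toArray()
    .reduce((sum, el) => sum + ($(el).html() ?? "").length, 0);

  console.log("Visible body text (chars):", visibleText.length); // typically tiny, e.g. an "enable JavaScript" notice
  console.log("Inside <script> tags (chars):", scriptChars);     // this is where the page data actually lives
}

main().catch(console.error);
```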

This is NOT a Make platform, app, Text Parser, or Regular Expression issue/bug.

You CANNOT use normal scraping integrations like ScrapingBee or the HTTP “Make a request” module to fetch pages from this website.

You will need to use ScrapeNinja’s “Scrape (Real browser)” module to emulate a real person visiting the site using a web browser, as client-side JavaScript needs to run to parse the JSON data in the script tags, and generate the page structure and content.
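
Inside Make you would simply add that ScrapeNinja module, but as an illustration of what “real browser” scraping does differently, here is a rough sketch of the kind of API call involved. The endpoint path, host, headers, and response field names are assumptions based on ScrapeNinja’s public RapidAPI interface, so treat this as a sketch and check the actual module/docs:

```ts
// Sketch only: endpoint, host and field names are assumptions -- verify against ScrapeNinja's docs.
async function scrapeWithRealBrowser(targetUrl: string, apiKey: string): Promise<string> {
  const res = await fetch("https://scrapeninja.p.rapidapi.com/scrape-js", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-RapidAPI-Key": apiKey, // your RapidAPI key for ScrapeNinja
      "X-RapidAPI-Host": "scrapeninja.p.rapidapi.com",
    },
    // The "real browser" endpoint renders the page in a headless browser,
    // so client-side JavaScript runs and the response contains the rendered HTML.
    body: JSON.stringify({ url: targetUrl }),
  });

  const data = await res.json();
  return data.body; // rendered HTML (field name assumed)
}
```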

For more information and a demo using ScrapeNinja, see Scraping Bee Integration Runtime Error 400

Web Scraping

For web scraping, one service you can use to get content from the page is ScrapeNinja.

ScrapeNinja allows you to use jQuery-like selectors to extract content from elements by using an extractor function. ScrapeNinja can also run the page in a real web browser, loading all the content and running the page-load scripts, so it closely simulates what you see in your own browser, as opposed to just the raw page HTML fetched by the HTTP module.
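
To give you a feel for what an extractor function looks like, here is a short sketch. The (input, cheerio) signature follows ScrapeNinja’s extractor convention as far as I know, but the selectors are invented placeholders; you have to inspect the actual rendered page to find the right ones:

```ts
// ScrapeNinja-style extractor: gets the rendered HTML plus a cheerio instance,
// and returns a plain object with the fields you want to map in later modules.
// The selectors below are PLACEHOLDERS for illustration -- inspect the real page.
function extract(input, cheerio) {
  const $ = cheerio.load(input);

  return {
    title: $("h1").first().text().trim(),
    price: $(".home-price").first().text().trim(),        // placeholder selector
    address: $(".street-address").first().text().trim(),  // placeholder selector
  };
}
```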

If you want an example, take a look at Grab data from page and url - #5 by samliew

AI-powered “easier” method

You can also use AI-powered web scraping tools like Dumpling AI.

This is probably the easiest and quickest way to set up, because all you need to do is describe the content that you want, instead of inspecting elements to create selectors or coming up with regular expression patterns.

The plus side is that such services combine BOTH fetching and extraction of the data in a single module (saving operations), doing away with the lengthy setup of the other methods.

More information, other methods

For more information on the different methods of web scraping, see Overview of Different Web Scraping Techniques in Make 🌐

Hope this helps! If you are still having trouble, please provide more details.

— @samliew
P.S.: investing some effort into the tutorials in the Make Academy will save you lots of time and frustration using Make!


As always, a super comprehensive answer from @samliew!

The Redfin property pages are all built dynamically - there’s no static HTML. If you right-click on a page and select “View page source” you’ll see that it’s all loaded through scripting.

So you need to use a scraper that uses a cloud browser that can mimic a real browser.

Aside from ScrapeNinja, you might also want to take a look at Airtop (an innovative LLM-powered scraper that lets you prompt in plain language for what you want to scrape) or Apify.

Apify even has a scraper built specifically for Redfin.
