Unable to parse a complicated HTML web page

Hello team,

I have a very small scenario and I'm struggling to make it work the way I want.
I want to check a web page on the ASUS web store, then use the contains() function to test whether the phrase "out of stock" is still present on the page. If the test is FALSE, I know the product is now available.
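In plain code, the check described above amounts to a case-insensitive substring test. Here is a minimal Python sketch; the `is_out_of_stock` helper and the sample HTML are illustrative, not part of the actual Make scenario:

```python
def is_out_of_stock(html: str, marker: str = "out of stock") -> bool:
    """Return True if the stock marker appears anywhere in the page HTML.

    The comparison is case-insensitive, since shops render the phrase
    as "Out of Stock", "OUT OF STOCK", etc.
    """
    return marker.lower() in html.lower()

# If the marker is absent, the product is presumably available again.
page = "<div class='status'>Out of Stock</div>"
print(is_out_of_stock(page))  # True
```

Note that this only works if the fetched HTML is the real product page, which is exactly where the scenario goes wrong below.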

In my tests it was always FALSE because it didn't find the phrase I wanted, yet when I checked the page manually it was still out of stock. So I tried to send myself an email with the function:


to see what the crawler was seeing. The result was the following: "Please enable JS and disable any ad blocker".

So, according to my analysis, the robot fetching the web page on their side is being blocked on any complicated page that relies on JavaScript, etc. (probably for security reasons). In other words, ASUS tells the robot: "I can't display anything, you don't have a correct/modern browser," or something similar.

Do you think my analysis is correct?
If so, does anyone see a workaround for my small scenario? I don't think it's anything very complicated.

thibault. :upside_down_face:

Hi @akril78,

Interesting - I tried to check this random page, ASUS Vivobook 13 Slate OLED (T3300), and the HTTP module returned all the HTML content.

Best regards, Victor.

The page I tested was this one on my side:

I'm looking for the word "alerter". If the page still contains "alerter", it means "no stock".
If not, there is some stock.

However, my condition returns FALSE on every test.
Can you share your configuration with me?
I'm still a newbie, so maybe I made a mistake. ^^

Hi @akril78,

Ah OK, I see now. I tried to get this page in Make and in Postman (software for working with APIs), and in both cases it returns HTTP error 403 (Forbidden), along with that text about disabling the ad blocker and enabling JavaScript.
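One practical consequence: before testing for the stock phrase, it is worth verifying that the fetch itself succeeded, otherwise the contains() check silently runs against the block page and always comes back FALSE. A hedged Python sketch (the status code and block text come from this thread; the `is_blocked` helper name is mine):

```python
# Banner text observed in this thread when ASUS blocks the request.
BLOCK_MARKER = "Please enable JS and disable any ad blocker"

def is_blocked(status_code: int, body: str) -> bool:
    """Heuristic: treat HTTP 403 or the anti-bot banner as a blocked fetch."""
    return status_code == 403 or BLOCK_MARKER.lower() in body.lower()

# A blocked response must not be mistaken for "marker absent = in stock".
print(is_blocked(403, ""))                           # True
print(is_blocked(200, "<div>Out of Stock</div>"))    # False
```

In Make terms, this corresponds to adding a filter on the HTTP module's status code before the contains() condition.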

Which, in my opinion, means that ASUS doesn't want anyone to parse this page automatically and analyse its contents :slight_smile:
I tried playing with the request headers a bit to simulate a request originating from a browser, but I didn't succeed (and I am not a pro at web scraping).

Maybe someone else could advise how to make this work…



Ohhhh, that sounds completely logical. I didn't think about this at the beginning. You may be right, it's probably restricted, or not completely open to automated tools such as
Thank you, @VictorV, for stopping by my thread ^^

Make integration module was built exactly to solve this type of task.
Check this demo:
Monitoring a website for a piece of data, and alerting via push notifications: scraping for refurbished iPhones

ScrapeNinja has two different scraping engines packed into one module:

  1. A high-performance engine with a real browser TLS fingerprint, which bypasses a lot of basic scraping protections with the lowest possible latency.
  2. A full-blown programmatic Chrome browser, which executes the website's JavaScript.

Thank you. I will give it a try :slight_smile: