How to get new posts from a Facebook group if I'm not an admin? (Public and private groups)

🎯 What is your goal?

I’m trying to trigger a flow when there’s a new post in a Facebook group, but I’m not an admin (just a member). This would be for both public and private groups if possible.

I found this blog that claims it’s possible by sending Facebook group posts to a Make webhook:

It’s a paid solution and they ask for a lot. I’m wondering if there’s any way to do this directly with Make, or using official Facebook integrations, without paying for a third-party tool?

🤔 What is the problem & what have you tried?

Sending Facebook group posts to Make and running an automation on them. I tried checking the official Meta API, but it seems they stopped offering it. Still, I see tools out there that claim to do this, and I’m wondering how they are doing it.


Welcome to the Make community!

Without an official API, the only way is to perform web scraping.

One of the web-scraping methods they have employed is keeping your computer on 24/7 and using a browser extension to auto-refresh the group’s feed and scrape new posts.

You’ll likely have to provide an account that has access to the group, or they will have to create an account and join the private group, to access posts within it.


So you basically need to “visit” the site to get the content. This is called web scraping. It can seem fairly simple, but it gets complex very quickly if you encounter the issues described below.

Incomplete Scraping; No Errors?

1. Anti-Scraping; Anti-Bot Measures

Are you getting no output from the HTTP “Make a request” module? This is because the website has employed anti-scraping measures, detected that the visit was not made by a human, and silently blocked the request by returning no content. Hence, you cannot use normal scraping integrations like the HTTP “Make a request” module to fetch pages from websites like these. This is NOT a Make platform, HTTP, Text Parser, or Regular Expression issue/bug.
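To see what a “silent block” looks like in practice, here is a minimal Python sketch (outside Make, purely illustrative; the status codes and marker strings are my assumptions, not a complete detection method):

```python
def looks_blocked(status_code: int, body: str) -> bool:
    """Heuristic: a scrape attempt was probably blocked if the server
    answers with an anti-bot status, an empty body, or a challenge page."""
    challenge_markers = ("captcha", "access denied", "enable javascript")
    if status_code in (403, 429):   # explicit anti-bot responses
        return True
    if not body.strip():            # the "silent" case: 200 OK but no content
        return True
    return any(marker in body.lower() for marker in challenge_markers)

print(looks_blocked(200, ""))   # an empty 200 response is a silent block
print(looks_blocked(200, "<html><body>Real post content</body></html>"))
```

The point is that a blocked request often does not raise an error at all; the response simply contains nothing useful, which is why the module appears to produce no output.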

Example: Scraping Bee Integration Runtime Error 400

2. Script Tags Do Not Run

Are you getting NO output from the Text Parser “HTML to Text” module? This is because there is NO text content in the HTML! The entire page content you are scraping is likely hosted in a script tag, dynamically generated and placed onto the page by JavaScript running in the user’s web browser (e.g. when the page loads, or when an action is taken, like scrolling).

Make is a server-side runtime environment, so the HTTP modules only fetch the initial page code, and script tags are ignored by the Text Parser “HTML to Text” module because they are not HTML layout elements. Furthermore, the HTTP “Make a request” module does not run any of those scripts, so no content is loaded onto the page. You’ll probably just get a default message telling you to enable JavaScript.
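You can reproduce this effect yourself. The Python sketch below (the sample page and `renderFeed` call are invented for illustration) extracts only the visible text from a JavaScript-rendered page, the same way an “HTML to Text” step would, and the posts inside the script tag never appear:

```python
from html.parser import HTMLParser

# What a JavaScript-rendered page often looks like to a server-side fetch:
# the visible markup is empty and the real content sits inside a <script>.
RAW_PAGE = """
<html><body>
  <div id="feed"></div>
  <noscript>Please enable JavaScript to view this page.</noscript>
  <script>renderFeed({"posts": ["First post", "Second post"]});</script>
</body></html>
"""

class VisibleText(HTMLParser):
    """Collects only text a browser would display, skipping <script>/<style>."""
    def __init__(self):
        super().__init__()
        self.skip = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

parser = VisibleText()
parser.feed(RAW_PAGE)
print(parser.chunks)  # only the "enable JavaScript" fallback text survives
```

Since the scripts are never executed server-side, all you can extract is the fallback message, which is exactly the empty-looking output described above.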

3. Incorrect Regular Expression Pattern

Are you getting the same output as the input when using the Text Parser “Match Pattern” module? Your regular expression pattern may simply be incorrect. Bear in mind that every page is different, so a pattern usually works only for one specific page. You also need to ensure that your pattern is built to handle the raw output from the website. One way of building and testing a regular expression pattern is with regex101.com, a popular tool that I use.
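As a small illustration of how page-specific such patterns are, here is a Python sketch (the HTML snippet and class names are made up; a real page would need its own pattern):

```python
import re

# A fabricated page fragment; real markup will differ per site.
html = ('<div class="post"><p>Hello group!</p></div>'
        '<div class="post"><p>Second post</p></div>')

# A non-greedy pattern scoped to exactly this structure. Change the
# class name or nesting on the page and it stops matching entirely.
pattern = r'<div class="post"><p>(.*?)</p></div>'
posts = re.findall(pattern, html)
print(posts)  # ['Hello group!', 'Second post']
```

Note the non-greedy `(.*?)`: with a greedy `(.*)` the first match would swallow everything up to the last `</p></div>`, which is a classic source of “my pattern matches too much” problems.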

Running Page Scripts; Emulating User Input

For web scraping, one service you can use to get content from a page is ScrapeNinja.

ScrapeNinja allows you to use jQuery-like selectors to extract content from elements by using an extractor function. This is way easier than coming up with a valid and robust[1] regular expression pattern!
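ScrapeNinja’s actual extractor functions are written in JavaScript with a cheerio-style selector API; purely to illustrate why selecting by element and attribute beats text-level regex, here is a rough Python equivalent over an invented, well-formed snippet (not a real group page):

```python
import xml.etree.ElementTree as ET

# Invented markup standing in for a scraped page fragment.
snippet = """
<div>
  <article class="post"><h2>Title A</h2><p>Body A</p></article>
  <article class="post"><h2>Title B</h2><p>Body B</p></article>
</div>
"""

root = ET.fromstring(snippet)

# Select by structure and attribute instead of a fragile regex:
# "every <article class='post'>, take its <h2> and <p> text".
posts = [
    {"title": a.findtext("h2"), "body": a.findtext("p")}
    for a in root.iter("article")
    if a.get("class") == "post"
]
print(posts)
```

Selectors survive cosmetic changes (whitespace, attribute order, extra tags inside the post) that would break a character-level pattern, which is why extractor functions tend to be far less maintenance than regex.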

ScrapeNinja can also run the page in a real web browser, loading all the content and running the page-load scripts so it closely simulates what you see, as opposed to fetching just the raw page HTML. It can even perform user actions like clicking on elements on the page!

Example: Grab data from page and url

Some tools that ScrapeNinja provides for free:

Use this to test the scraping parameters on web pages:

Use these to build and test the “extractor function”:

If you need help with the above tools, please start a new topic.

AI-powered Web Scraping

You can also use AI-powered web scraping tools like Dumpling AI.

This is probably the easiest and quickest way to set up, because all you need to do is describe the content you want via a prompt.

The plus side is that such services combine BOTH fetching and extraction of the data in a single module (saving operations), doing away with the lengthy setup and maintenance of the other methods described in the previous sections.

More information; Other methods

For more information on the different methods of web scraping, see my full community blog post here: Overview of Different Web Scraping Techniques in Make 🌐

@samliew
P.S.: investing some effort into the tutorials in the Make Academy will save you lots of time and frustration using Make!


  1. A robust regular expression is one that is reliable, efficient, and handles various potential inputs and edge cases, and is able to fail gracefully. ↩︎


Thank you so much for this info. I see a lot of AI tools that offer scraping; do you think these are capable of scraping Facebook groups? I asked ChatGPT and was told they most likely won’t be able to, since Facebook is extremely strict when it comes to scraping their data, which is why the blog I mentioned shows a high price per group. Do you think that’s true? I’m not super technical, so excuse my arrogance.


Hi @mikeautomation, that’s a valid question, and you aren’t being arrogant.

As I mentioned, the service you linked utilizes a custom-made proprietary browser extension to “look” at the group’s feed and extract content, which means that:

  • Your computer needs to be on 24/7 for it to run/refresh the web browser and fetch new content
  • You need to provide access to the group (i.e. using your account) if the group is private

Hope this makes sense!

@samliew
