How to Build an Automated Web Scraping Workflow with Make.com and Scrapeless

We’ve recently launched an official integration on Make, now available as a public app. This tutorial will show you how to create a powerful automated workflow that combines our Google Search API with WebUnlocker to extract data from search results, process it with Claude AI, and send it to a webhook.

What We’ll Build

In this tutorial, we’ll create a workflow that:

  1. Triggers automatically every day using integrated scheduling

  2. Searches Google for specific queries using Scrapeless Google Search API

  3. Processes each URL individually with Iterator

  4. Scrapes each URL with Scrapeless WebUnlocker to extract content

  5. Analyzes content with Anthropic Claude AI

  6. Sends processed data to a webhook (Discord, Slack, database, etc.)

Prerequisites

  • An Anthropic Claude API key

  • A webhook endpoint (Discord webhook, Zapier, database endpoint, etc.)

  • Basic understanding of Make workflows

Complete Workflow Overview

Your final workflow will look like this:

Scrapeless Google Search (with integrated scheduling) → Iterator → Scrapeless WebUnlocker → Anthropic Claude → HTTP Webhook

Step 1: Adding Scrapeless Google Search with Integrated Scheduling

We’ll start by adding the Scrapeless Google Search module with built-in scheduling.

  1. Create a new scenario in Make

  2. Click the + button to add the first module

  3. Search for “Scrapeless” in the module library

  4. Select Scrapeless and choose the Search Google action

Configuring Google Search with Scheduling

Connection Setup:

  1. Create a connection by entering your Scrapeless API key

  2. Click “Add” and follow the connection setup

Search Parameters:

  • Search Query: Enter your target query (e.g., “artificial intelligence news”)

  • Language: en (English)

  • Country: US (United States)
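
For reference, these fields correspond to a Google-style search request. Here is a minimal sketch of the query the module sends on your behalf (the parameter names `q`, `hl`, and `gl` are assumptions based on common search API conventions; in Make you only fill in the labeled fields above):

```json
{
  "q": "artificial intelligence news",
  "hl": "en",
  "gl": "us"
}
```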

Scheduling Setup:

  1. Click the clock icon on the module to open scheduling

  2. Run scenario: Select “At regular intervals”

  3. Minutes: Set to 1440 (24 hours × 60 minutes, i.e. daily execution) or your preferred interval

  4. Advanced scheduling: Use “Add item” to set specific times/days if needed

Step 2: Processing Results with Iterator

The Google Search returns multiple URLs in an array. We’ll use Iterator to process each result individually.

  1. Add an Iterator module after Google Search

  2. Configure the Array field to process search results

Iterator Configuration:

  • Array: {{1.result.organic_results}}

This will create a loop that processes each search result separately, allowing better error handling and individual processing.
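
Each bundle emitted by the Iterator is one organic result. Judging from the fields mapped later in this tutorial (title, link, snippet), the array looks roughly like this (abbreviated; the exact field set depends on the Scrapeless response):

```json
[
  {
    "title": "Latest Developments in Artificial Intelligence",
    "link": "https://example.com/ai-news",
    "snippet": "A short preview of the page content..."
  },
  {
    "title": "AI Industry Report",
    "link": "https://example.com/ai-report",
    "snippet": "Another result description..."
  }
]
```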

Step 3: Adding Scrapeless WebUnlocker

Now we’ll add the WebUnlocker module to scrape content from each URL.

  1. Add another Scrapeless module

  2. Select the Scrape URL (WebUnlocker) action

  3. Use the same Scrapeless connection

WebUnlocker Configuration:

  • Connection: Use your existing Scrapeless connection

  • Target URL: {{2.link}} (mapped from the Iterator output; your Iterator's module number may differ, and the reference section below uses {{14.link}})

  • Js Render: Yes

  • Headless: Yes

  • Country: World Wide

  • Js Instructions: [{"wait":1000}] (wait roughly 1000 ms for the page to load)

  • Block: Configure to block unnecessary resources for faster scraping
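
Put together, the module issues a request to the WebUnlocker API along these lines (a minimal sketch; the field names here are illustrative assumptions, since the Make module assembles the actual request for you; only the jsInstructions value is taken verbatim from the configuration above):

```json
{
  "url": "https://example.com/ai-news",
  "jsRender": true,
  "headless": true,
  "country": "ANY",
  "jsInstructions": [
    { "wait": 1000 }
  ]
}
```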

Step 4: AI Processing with Anthropic Claude

Add Claude AI to analyze and summarize the scraped content.

  1. Add an Anthropic Claude module

  2. Select the Make an API Call action

  3. Create a new connection with your Claude API key

Claude Configuration:

  • Connection: Create connection with your Anthropic API key

  • Prompt: Configure to analyze the scraped content

  • Model: claude-3-sonnet-20240229 or your preferred model

  • Max Tokens: 1000-4000 depending on your needs

  • URL: /v1/messages

  • Header 1: Content-Type: application/json

  • Header 2: anthropic-version: 2023-06-01

Example request body (copy and paste into the Body field):


```json
{
  "model": "claude-3-sonnet-20240229",
  "max_tokens": 1000,
  "messages": [
    {
      "role": "user",
      "content": "Analyze this web content and provide a summary in English with key points:\n\nTitle: {{14.title}}\nURL: {{14.link}}\nDescription: {{14.snippet}}\nContent: {{13.content}}\n\nSearch Query: {{1.result.search_information.query_displayed}}"
    }
  ]
}
```

  • Don’t forget to replace the module numbers (`14`, `13`, `1`) with the actual module numbers from your own scenario.
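
When you map Claude’s output in the next step, note that the Anthropic Messages API returns the generated text inside a content array of blocks, which is why Step 5 references {{15.body.content[0].text}}. A trimmed example response:

```json
{
  "id": "msg_01...",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Summary: the page covers three key developments in AI..."
    }
  ],
  "stop_reason": "end_turn"
}
```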

Step 5: Webhook Integration

Finally, send the processed data to your webhook endpoint.

  1. Add an HTTP module

  2. Configure it to send a POST request to your webhook

HTTP Configuration:

  • URL: Your webhook endpoint (Discord, Slack, database, etc.)

  • Method: POST

  • Headers: Content-Type: application/json

  • Body Type: Raw (JSON)

Example Webhook Payload:


```json
{
  "embeds": [
    {
      "title": "{{14.title}}",
      "description": "*{{15.body.content[0].text}}*",
      "url": "{{14.link}}",
      "color": 3447003,
      "footer": {
        "text": "Analysis complete"
      }
    }
  ]
}
```
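
The payload above uses Discord’s embed format (hence the embeds array and the integer color). For a generic endpoint such as a database or internal API, a flat payload built from the same mappings works just as well; an illustrative sketch:

```json
{
  "title": "{{14.title}}",
  "url": "{{14.link}}",
  "summary": "{{15.body.content[0].text}}",
  "query": "{{1.result.search_information.query_displayed}}"
}
```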

Module Reference and Data Flow

Data Flow Through Modules:

  1. Module 1 (Scrapeless Google Search): Returns result.organic_results[]

  2. Module 14 (Iterator): Processes each result, outputs individual items

  3. Module 13 (WebUnlocker): Scrapes {{14.link}}, returns content

  4. Module 15 (Claude AI): Analyzes {{13.content}}, returns summary

  5. Module 16 (HTTP Webhook): Sends final structured data

Key Mappings:

  • Iterator Array: {{1.result.organic_results}}

  • WebUnlocker URL: {{14.link}}

  • Claude Content: {{13.content}}

  • Webhook Data: Combination of all previous modules

Testing Your Workflow

  1. Run once to test the complete scenario

  2. Check each module:

  • Google Search returns organic results

  • Iterator processes each result individually

  • WebUnlocker successfully scrapes content

  • Claude provides meaningful analysis

  • Webhook receives structured data

  3. Verify data quality in your webhook destination

  4. Check scheduling: ensure it runs at your preferred intervals

Advanced Configuration Tips

Error Handling

  • Add Error Handler routes after each module

  • Use Filters to skip invalid URLs or empty content

  • Set Retry logic for temporary failures

Benefits of This Workflow

  • Fully Automated: Runs daily without manual intervention

  • AI-Enhanced: Content is analyzed and summarized automatically

  • Flexible Output: Webhook can integrate with any system

  • Scalable: Processes multiple URLs efficiently

  • Quality Controlled: Multiple filtering and validation steps

  • Real-time Notifications: Immediate delivery to your preferred platform

Use Cases

Perfect for:

  • Content Monitoring: Track mentions of your brand or competitors

  • News Aggregation: Automated news summaries on specific topics

  • Market Research: Monitor industry trends and developments

  • Lead Generation: Find and analyze potential business opportunities

  • SEO Monitoring: Track search result changes for target keywords

  • Research Automation: Gather and summarize academic or industry content

Conclusion

This automated workflow combines the power of Scrapeless’s Google Search and WebUnlocker with Claude AI’s analysis capabilities, all orchestrated through Make’s visual interface. The result is an intelligent content discovery system that runs automatically and delivers enriched, analyzed data directly to your preferred platform via webhook.

The workflow will run on your schedule, automatically discovering, scraping, analyzing, and delivering relevant content insights without any manual intervention.