Make + Supabase Testing & Integration Specialist Needed ASAP

Overflow Queue System - Testing, Debugging & Launch Integration (20-30 hour project)

About the Project

I’m launching Overflow, a conscious outreach platform for entrepreneurs, healers, and coaches. I’ve built a queue-based automation system using Make.com + Supabase + Softr and designed the complete architecture for 8 scenarios.

By October 10, I will have built all 8 Make scenarios based on my architecture specs. What I need is an expert to test everything thoroughly, debug integration issues, and get it working reliably for launch.

I don’t enjoy testing and debugging - but you probably do. This is a focused, tactical engagement: make my scenarios work flawlessly with Supabase and Softr.


Specific Deliverables

Testing & Debugging (20-30 hours total)

1. Supabase Integration Testing (8-10 hours)

  • Test all RPC function calls from Make scenarios

  • Verify REST API authentication, headers, and response handling

  • Debug JSON parsing issues (arrays, nested objects, timestamps)

  • Fix data type mismatches (TEXT arrays, TIMESTAMPTZ, UUIDs)

  • Ensure error responses from Supabase are handled properly

  • Test “FOR UPDATE SKIP LOCKED” race condition prevention (see the sketch after this list)

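For context, the claim pattern under test looks roughly like this (a minimal sketch with illustrative names; the real RPC is defined in my architecture spec):

```sql
-- Hypothetical claim function: atomically hand one queued job to a worker.
-- Table and column names are illustrative; see the spec for the real ones.
CREATE OR REPLACE FUNCTION claim_next_job()
RETURNS SETOF processing_queue
LANGUAGE sql
AS $$
  UPDATE processing_queue
  SET status = 'processing', claimed_at = now()
  WHERE id = (
    SELECT id
    FROM processing_queue
    WHERE status = 'queued'
    ORDER BY created_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED  -- concurrent callers skip rows already locked,
                            -- so two Make executions never claim the same job
  )
  RETURNING *;
$$;
```
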
2. Scenario-by-Scenario Debugging (8-12 hours)

  • Run each of the 8 scenarios independently with real test data

  • Fix logic errors, incorrect mappings, missing modules

  • Debug iterator/aggregator configurations

  • Handle edge cases: empty arrays, null values, API timeouts, rate limits

  • Test retry logic and error handlers

  • Verify parent-child job tracking through queue

  • Optimize obviously wasteful operations (if you spot them)

3. Softr Integration (4-6 hours)

  • Connect Softr campaign creation form → Make webhook (Scenario 8)

  • Connect Softr “Run Campaign” button → Make webhook (Scenario 0)

  • Test user signup → Supabase profile auto-creation (see the trigger sketch after this list)

  • Verify campaign data flows correctly from Softr → Supabase

  • Test batch completion → email notification delivery

  • Ensure user sees correct data in Softr dashboard after jobs complete

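For reference, profile auto-creation follows the standard Supabase trigger pattern; here's a minimal sketch (column names are illustrative; the actual trigger is already in the project):

```sql
-- Standard Supabase pattern: insert a profile row whenever a user signs up.
-- Column names are illustrative; the live trigger is in the Supabase project.
CREATE OR REPLACE FUNCTION public.handle_new_user()
RETURNS trigger
LANGUAGE plpgsql
SECURITY DEFINER
AS $$
BEGIN
  INSERT INTO public.profiles (id, email)
  VALUES (NEW.id, NEW.email);
  RETURN NEW;
END;
$$;

CREATE TRIGGER on_auth_user_created
  AFTER INSERT ON auth.users
  FOR EACH ROW EXECUTE FUNCTION public.handle_new_user();
```
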
4. End-to-End Testing (2-3 hours)

  • Complete user journey: Signup → Create Campaign → Run Campaign → Receive Leads → Get Email

  • Test with 2-3 different campaign configurations

  • Simulate 2-3 concurrent users running campaigns (basic load test)

  • Verify cost tracking, monthly limits, and usage counters work

  • Document any remaining bugs or edge cases for me to address later

5. Handoff (1 hour)

  • Create bug list (anything you couldn’t fix or requires my input)

  • Write basic troubleshooting notes (common failure points, how to check logs)

  • Quick Loom video (10-15 min) walking through what you tested and any gotchas


What You’re Working With

Tech Stack:

  • Make - 8 scenarios (I build these, you test them)

  • Supabase - PostgreSQL with REST API, RPC functions, triggers (95% complete)

  • Softr - User frontend with forms and dashboard

  • Apify - Instagram scraping (used in Scenarios 2-3)

  • OpenAI API - GPT-5 & GPT-5 mini (used in Scenarios 1 & 4)

  • SendGrid/Gmail - Email notifications

The Scenarios (numbered 0-8):

  0. Campaign Initiation Webhook (validates limits, creates queue job)

  1. Generate Search Queries (GPT-5 mini)

  2. Scrape Instagram Posts (Apify)

  3. Scrape Instagram Profiles (Apify)

  4. Enrich Profiles & Generate Messages (GPT-5)

  5. Job Timeout Monitor

  6. Monthly Usage Reset

  7. Weekly Cleanup

  8. Campaign Creation from Softr

Architecture Pattern:

  • Queue-based job processing with Supabase processing_queue table (sketched below)

  • Jobs spawn child jobs (parent_job_id tracking)

  • Batch completion detection triggers email notifications

  • User concurrency limits (max 1 active campaign per user in Beta)

  • Monthly enrichment limits with soft-stop warnings

  • All scenarios use Supabase RPC functions for complex queries

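Conceptually, the queue table looks like this (illustrative DDL only; the live schema, with its real columns and indexes, is in the Supabase project):

```sql
-- Illustrative sketch of the queue table; the actual schema is in the project.
CREATE TABLE processing_queue (
  id            UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  parent_job_id UUID REFERENCES processing_queue (id),  -- child jobs point at their spawner
  user_id       UUID NOT NULL,
  job_type      TEXT NOT NULL,                   -- e.g. 'generate_queries', 'scrape_posts'
  status        TEXT NOT NULL DEFAULT 'queued',  -- queued | processing | done | failed
  payload       JSONB,
  created_at    TIMESTAMPTZ NOT NULL DEFAULT now(),
  claimed_at    TIMESTAMPTZ
);

-- Batch completion = a parent job with no children left in 'queued' or 'processing';
-- detecting that state is what triggers the notification email.
```
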

What I Provide You

On Day 1 (October 10):

  • All 8 Make scenarios built and configured

  • Complete architecture documentation (10,000+ word technical spec)

  • Access to: Make organization, Supabase project, Softr workspace

  • Walkthrough call (90 min) where I explain what each scenario does

  • Test data: sample campaigns, mock Instagram profiles, etc.

During the project:

  • Daily availability for quick questions (Slack/async)

  • 1-2 debugging calls if needed (30 min each)

  • Fast turnaround on any “why did you build it this way?” questions


Requirements

Must-Have:

  • Make debugging experience - you’ve fixed broken scenarios before, not just built from scratch

  • Supabase/PostgreSQL skills - REST API, RPC functions, troubleshooting database connection issues

  • API integration debugging - comfortable reading API docs, testing endpoints, fixing auth issues

  • Methodical testing approach - you test systematically, document what you find, don’t skip edge cases

Nice-to-Have:

  • Softr experience (form webhooks, data display blocks)

  • Experience with queue systems or job processing architectures

  • Worked with Apify or similar scraping tools

  • OpenAI API integration experience

Working Style:

  • Independent - I give you access, you run with it

  • Clear communicator - daily updates on what’s broken, what’s fixed, what’s blocked

  • Detail-oriented - this needs to work for real paying customers

  • Problem-solver - when something breaks, you trace it through the whole stack


Timeline

October 10 (Day 1) - Kickoff

  • 90-min video call: I walk you through all 8 scenarios

  • You get access to everything

  • You create testing plan (which scenarios first, test data needed)

October 10-15 (Days 1-6) - Testing & Debugging Sprint

  • You systematically test each scenario

  • Fix bugs as you find them

  • Daily async updates: “Scenario 2 works. Scenario 3 has JSON parsing issue - fixed. Scenario 4 needs your input on X.”

  • 1-2 quick calls for complex debugging (optional, as needed)

October 16-17 (Days 7-8) - Integration & Final Testing

  • Softr → Make → Supabase fully connected

  • End-to-end user journey tested and working

  • Load testing with simulated concurrent users

  • Handoff call + bug list + troubleshooting notes

October 17 - Launch Ready

  • All critical paths working reliably

  • Any minor bugs documented for me to fix post-launch

  • System ready for beta customers


Budget

$2,000 - $2,500 for 20-30 hours

Pricing Structure Options:

Option A - Fixed Price: $2,250 for the whole project

  • Covers up to 30 hours of work

  • You take the risk if it goes over (unlikely if my scenarios are well-built)

  • I take the risk if it finishes early (you still get full payment)

Option B - Hourly: $80-100/hour, capped at 30 hours

  • Track actual time spent

  • Invoice weekly or at completion

  • If you finish in 20 hours, I pay $1,600-2,000

  • If it takes 30 hours, I pay $2,400-3,000

I’m flexible - tell me your preference.


Ongoing Support (Optional)

After launch, I’d love to keep you on retainer:

$320-500/month (4-5 hours)

  • Bug fixes and troubleshooting

  • Performance monitoring

  • Minor feature additions

  • Emergency support if something breaks

Major product upgrades (new platforms, big features) would be scoped separately.


To Apply

Please send:

1. Debugging Experience

  • Describe 2-3 times you inherited/debugged someone else’s Make scenarios

  • What was broken? How did you figure it out? How long did it take?

  • Bonus: Show me a particularly gnarly bug you fixed

2. Technical Background

  • Years with Make (and types of scenarios you’ve tested/debugged)

  • Supabase or PostgreSQL experience (REST API, debugging connection issues)

  • Experience testing integrations between multiple tools

  • Softr or similar no-code frontend tools (nice-to-have)

3. Rate & Availability

  • Your hourly rate (or fixed price if you prefer that model)

  • Confirm you can start October 10

  • Confirm you can commit 20-30 hours over 8 days

  • Your typical working hours (for scheduling calls)

4. Testing Approach (Shows your methodology)

  • Given 8 interconnected scenarios, how would you approach testing them?

  • What would you test first? How do you prioritize?

  • How do you track what you’ve tested vs. what still needs testing?

  • How would you test the queue system without spamming real Instagram accounts?

5. Communication Style

  • How do you prefer to give updates? (Slack? Loom videos? Written summaries?)

  • What do you do when you’re blocked and need my input?

  • How do you document bugs so I can understand them later?


Selection Timeline

  • Applications reviewed: October 7-8

  • Interviews: October 8-9 (45 min video calls - show me how you’d approach testing one of my scenarios)

  • Selection: October 9 evening

  • Kickoff: October 10, 9 AM PT (or your preferred time)


Why This Matters

Overflow helps conscious entrepreneurs (therapists, coaches, healers, artists) grow sustainable practices without manipulative marketing. Traditional lead gen feels soul-crushing to them; Overflow enables authentic, values-aligned outreach at scale.

I’ve validated the concept with beta testers. The architecture is sound. I just need someone to ensure the implementation is rock-solid before real customers depend on it.

If you love finding bugs, fixing integrations, and ensuring systems work reliably - this is your project.


Questions?

Reply with any questions about scope, timeline, or the technical stack. I’m responsive and want you to have full clarity before applying.

Let’s make this thing bulletproof.

— Joseph Arnold
joseph@overflowthrive.com (I haven’t set up my website yet, but my Overflow email works)

Founder, Overflow


P.S. - If you can genuinely start October 8-9 instead of October 10, I’d bump the budget to $2,750 to move faster. Just let me know in your application.

Hello @jarnold84

I have sent you an email as requested and look forward to your response. You can check out our website and Upwork profile, or book a consultation call.

Thanks & Regards

Chhaya Choudhary

I’m writing to express strong interest in your Make + Supabase Testing & Integration project.

This kind of work — digging into existing scenarios, finding edge-case bugs, and making integrations rock-solid — is exactly what I specialize in. With hands-on experience debugging complex workflows in Make and integrating with Supabase/PostgreSQL, I’m confident I can contribute immediately and help ensure the system is launch-ready by October 17.


1. Debugging Experience

Case 1 – Broken Webhook Mapping in Make (E-commerce Toolchain)

  • Issue: Webhooks from a Stripe endpoint failed to trigger correct data flows in a Make scenario.

  • Fix: Identified inconsistent field mappings, incorrect timestamp formatting, and broken filter conditions.

  • Result: Rebuilt the scenario using iterators and filters to dynamically handle various Stripe event types. Resolved the issue in ~4 hours.

Case 2 – JSON Parsing & Nested Object Errors (CRM → Supabase Sync)

  • Issue: Complex nested data was being misparsed due to incorrect use of get() functions in Make and JSON operations.

  • Fix: Implemented parsing modules with regex and custom functions, restructured the scenario to handle nested array responses and UUID fields.

  • Result: Data synced reliably into Supabase. Traced and resolved within 6 hours.

Bonus Fix – Race Condition in Parent-Child Job Queue

  • Diagnosed a concurrency issue where multiple jobs were processed simultaneously due to missing locking logic in the Supabase RPC layer.

  • Suggested and helped implement FOR UPDATE SKIP LOCKED on the processing queue, resolving random duplication and job overwrites.


2. Technical Background

  • Make: 3+ years experience with scenario debugging, webhook integrations, API chaining, iterators/aggregators, error handlers.

  • Supabase/PostgreSQL: Strong understanding of REST API, RPC functions, triggers, schema management, and JSONB types.

  • Integration Testing: Extensive work across OpenAI, Airtable, Stripe, Notion, Apify, and custom REST APIs.

  • Softr: Used in client projects to handle form submissions and dashboard visualizations. Familiar with webhooks and data display blocks.


3. Rate & Availability

  • Rate Preference: Fixed price – $2,250 for the full scope.

    • Covers 20–30 hours of focused work, including testing, debugging, and handoff.

  • Availability:

    • Yes, I can start October 10

    • Yes, I can commit 20–30 hours over the 8-day sprint

    • My typical working hours: 9 AM – 6 PM CEST (flexible for calls)


4. Testing Approach

Here’s how I would approach testing the 8 interconnected scenarios:

Step 1: Planning

  • Review your technical spec, walkthrough notes, and schema documentation.

  • Create a testing matrix: scenarios (rows) × test layers (unit, integration, edge cases, error paths).

Step 2: Prioritize by Dependency

  • Start with scenarios that trigger others (e.g. Scenario 0 and 1).

  • Identify shared RPCs or queue logic to reduce duplicate debugging.

Step 3: Scenario-by-Scenario Testing

  • Run real test data with structured logs enabled.

  • Validate each RPC: headers, responses, error handling, race conditions.

  • Document all test inputs, expected/actual results, and fixes made.

Step 4: Simulated Concurrency

  • Use Make’s scheduling and delays to simulate 2–3 concurrent campaign launches.

  • Monitor job locks and Supabase logs to verify proper queue behavior (see the two-session sketch below)

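As a concrete example, SKIP LOCKED behavior can be sanity-checked directly in SQL with two database sessions before involving Make at all (a sketch, assuming a queue shaped roughly like your processing_queue table):

```sql
-- Session A: claim a job but leave the transaction open, holding the row lock.
BEGIN;
SELECT * FROM processing_queue
WHERE status = 'queued'
ORDER BY created_at
LIMIT 1
FOR UPDATE SKIP LOCKED;
-- (do not COMMIT yet)

-- Session B (separate connection): should return the NEXT queued job,
-- not the row Session A is holding. If it returns the same row, locking is broken.
SELECT * FROM processing_queue
WHERE status = 'queued'
ORDER BY created_at
LIMIT 1
FOR UPDATE SKIP LOCKED;
```
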
Step 5: Final E2E Testing

  • Walk through full user journey (signup → campaign → leads → notification) with 2–3 campaign configs.

  • Verify dashboard accuracy in Softr.

  • Test failure and retry paths, log all known edge cases.


5. Communication Style

  • Daily updates via Slack with a bullet summary: “What’s working, what’s broken, what’s next.”

  • Use Loom for any visual debugging walk-throughs (especially UI-based scenarios).

  • Maintain a shared doc with:

    • Bug list

    • Scenario notes

    • Testing coverage checklist

    • Troubleshooting tips for handoff

When blocked, I’ll:

  • Clearly outline what’s blocking me

  • Share current logs/output

  • Propose a possible fix or ask targeted questions


I’m excited by both the mission behind Overflow and the technical challenge in this project. I’d be happy to jump on a call to discuss how I’d handle testing a specific scenario or dive deeper into any past projects.

Thanks for the detailed brief — it’s clear you’ve put serious thought into the system design and collaboration process.

Looking forward to hearing from you.

Hi Mital,

Thanks for your detailed reply! Here’s my scheduling link: https://zeeg.me/soulforcearts/overflow
Please schedule a call at your earliest convenience. I’d like to make a hiring decision by Friday.
Joseph

Hi Joseph, I can tell you’ve put serious thought into this system. Most projects I see on Make are vague or half-architected; yours is clearly built on solid logic, with defined data flows, RPC functions, and edge-case handling already scoped. You don’t need someone to rebuild it; you need someone who can think like an engineer, test like a QA, and debug like a detective.

You mentioned you don’t enjoy debugging; I do. It’s what I do best. I’ve stepped into dozens of half-finished Make systems where scenarios failed due to nested JSON mishandling, data type mismatches, or silent API failures. One client had 12 broken scenarios syncing Supabase with HubSpot; I rewrote the parsing logic, normalized all timestamps and UUIDs, and reduced error frequency from 40% to under 2% in a week.

For Overflow, here’s how I’d approach testing:

  1. Supabase RPC Validation First: verify headers, auth, data types, and REST returns using real campaign payloads.

  2. Scenario-by-Scenario Testing: simulate each Make flow independently, documenting all edge cases (arrays, nulls, timeouts).

  3. Queue & Concurrency Testing: simulate multiple users using test data to ensure SKIP LOCKED works properly and no duplicate jobs spawn.

  4. Softr Integration & End-to-End Run: test the user flow from campaign creation to completion, confirming usage counters and notifications fire correctly.

You’ll get daily Slack updates summarizing test results (“Scenario 2 - fixed JSON nesting; Scenario 4 - Apify timeout caught, retry works”). I document every fix in a shared sheet and can add short Looms for visual explanations.

I can start October 10, commit 20–30 focused hours that week, and deliver a bulletproof system ready for launch. As for the rate, we can go with your fixed-price option at $2,250; I’m flexible.

Want me to outline exactly how I’d structure the test plan for Day 1–3, so you can see how I’d get traction fast?

We can discuss this further on a call here.

Best,

Philip

lol. You’re making your decision on Friday, and your calendar is already fully booked! I can only book a meeting for next week.

Hi Joseph,

If you are still on the lookout for someone, I would be happy to help.

I am comfortable working through complex Make scenarios with Supabase and external APIs, and I take a structured, thorough approach to debugging, making sure everything runs reliably end to end.
You can reach me by email here.

Colin

Hi Colin - this position has been filled. Thanks for your interest!
