Make + Supabase Testing & Integration Specialist Needed ASAP
Overflow Queue System - Testing, Debugging & Launch Integration (20-30 hour project)
About the Project
I’m launching Overflow, a conscious outreach platform for entrepreneurs, healers, and coaches. I’ve built a queue-based automation system using Make.com + Supabase + Softr and designed the complete architecture for all 8 scenarios.
By October 10, I will have built all 8 Make scenarios based on my architecture specs. What I need is an expert to test everything thoroughly, debug integration issues, and get it working reliably for launch.
I don’t enjoy testing and debugging - but you probably do. This is a focused, tactical engagement: make my scenarios work flawlessly with Supabase and Softr.
Specific Deliverables
Testing & Debugging (20-30 hours total)
1. Supabase Integration Testing (8-10 hours)
- Test all RPC function calls from Make scenarios
- Verify REST API authentication, headers, and response handling
- Debug JSON parsing issues (arrays, nested objects, timestamps)
- Fix data type mismatches (TEXT arrays, TIMESTAMPTZ, UUIDs)
- Ensure error responses from Supabase are handled properly
- Test “FOR UPDATE SKIP LOCKED” race condition prevention (see the SQL sketch after this list)
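For context on that last item: the race-condition prevention is built around a dequeue RPC roughly like the sketch below. This is illustrative only - the real function name and columns are in the spec you’ll receive, so treat it as an assumption about shape, not the actual code.

    -- Hypothetical dequeue RPC: claims the oldest pending job so that two
    -- Make runs polling at the same moment can never grab the same row.
    create or replace function claim_next_job()
    returns setof processing_queue
    language sql
    as $$
      update processing_queue
      set status = 'processing', started_at = now()
      where id = (
        select id from processing_queue
        where status = 'pending'
        order by created_at
        for update skip locked
        limit 1
      )
      returning *;
    $$;

Testing it means triggering two scenario runs at once and confirming they claim different jobs (or one claims nothing) instead of both processing the same row.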
2. Scenario-by-Scenario Debugging (8-12 hours)
- Run each of the 8 scenarios independently with real test data
- Fix logic errors, incorrect mappings, missing modules
- Debug iterator/aggregator configurations
- Handle edge cases: empty arrays, null values, API timeouts, rate limits
- Test retry logic and error handlers
- Verify parent-child job tracking through the queue
- Optimize obviously wasteful operations (if you spot them)
3. Softr Integration (4-6 hours)
- Connect the Softr campaign creation form → Make webhook (Scenario 8)
- Connect the Softr “Run Campaign” button → Make webhook (Scenario 0)
- Test user signup → Supabase profile auto-creation (see the trigger sketch after this list)
- Verify campaign data flows correctly from Softr → Supabase
- Test batch completion → email notification delivery
- Ensure users see the correct data in the Softr dashboard after jobs complete
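On the signup item above: profile auto-creation in Supabase is typically done with a trigger on auth.users along these lines. I’m sketching the usual pattern here - the actual table and column names in my project may differ, so verify against the real schema.

    -- Assumed pattern: copy each new auth user into a public.profiles row.
    create or replace function public.handle_new_user()
    returns trigger
    language plpgsql
    security definer
    as $$
    begin
      insert into public.profiles (id, email)
      values (new.id, new.email);
      return new;
    end;
    $$;

    create trigger on_auth_user_created
      after insert on auth.users
      for each row execute function public.handle_new_user();

The test is simple: sign up through Softr, then confirm the matching profiles row exists before the dashboard tries to read it.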
4. End-to-End Testing (2-3 hours)
- Complete user journey: Signup → Create Campaign → Run Campaign → Receive Leads → Get Email
- Test with 2-3 different campaign configurations
- Simulate 2-3 concurrent users running campaigns (basic load test)
- Verify cost tracking, monthly limits, and usage counters work
- Document any remaining bugs or edge cases for me to address later
5. Handoff (1 hour)
- Create a bug list (anything you couldn’t fix or that requires my input)
- Write basic troubleshooting notes (common failure points, how to check logs)
- Record a quick Loom video (10-15 min) walking through what you tested and any gotchas
What You’re Working With
Tech Stack:
- Make - 8 scenarios (I build these, you test them)
- Supabase - PostgreSQL with REST API, RPC functions, triggers (95% complete)
- Softr - User frontend with forms and dashboard
- Apify - Instagram scraping (used in Scenarios 2-3)
- OpenAI API - GPT-5 & GPT-5 mini (used in Scenarios 1 & 4)
- SendGrid/Gmail - Email notifications
The 8 Scenarios:
- Scenario 0 - Campaign Initiation Webhook (validates limits, creates queue job)
- Scenario 1 - Generate Search Queries (GPT-5 mini)
- Scenario 2 - Scrape Instagram Posts (Apify)
- Scenario 3 - Scrape Instagram Profiles (Apify)
- Scenario 4 - Enrich Profiles & Generate Messages (GPT-5)
- Scenario 5 - Job Timeout Monitor
- Scenario 6 - Monthly Usage Reset
- Scenario 7 - Weekly Cleanup
- Scenario 8 - Campaign Creation from Softr
Architecture Pattern:
- Queue-based job processing with a Supabase processing_queue table (sketched below)
- Jobs spawn child jobs (parent_job_id tracking)
- Batch completion detection triggers email notifications
- User concurrency limits (max 1 active campaign per user in Beta)
- Monthly enrichment limits with soft-stop warnings
- All scenarios use Supabase RPC functions for complex queries
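To make the pattern concrete, the queue table looks roughly like the sketch below. Column names here are illustrative assumptions - the authoritative schema is in the architecture doc you’ll get on Day 1.

    -- Illustrative shape of the processing_queue table (not the authoritative schema).
    create table processing_queue (
      id uuid primary key default gen_random_uuid(),
      user_id uuid not null references auth.users (id),
      campaign_id uuid not null,
      parent_job_id uuid references processing_queue (id),  -- child jobs point at their parent
      job_type text not null,                   -- e.g. 'scrape_posts', 'enrich_profiles'
      status text not null default 'pending',   -- pending / processing / completed / failed
      payload jsonb,
      created_at timestamptz not null default now(),
      started_at timestamptz,
      completed_at timestamptz
    );

Batch completion detection is then a matter of checking whether every child of a given parent_job_id has reached 'completed' before the notification email goes out.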
What I Provide You
On Day 1 (October 10):
- All 8 Make scenarios built and configured
- Complete architecture documentation (10,000+ word technical spec)
- Access to: Make organization, Supabase project, Softr workspace
- Walkthrough call (90 min) where I explain what each scenario does
- Test data: sample campaigns, mock Instagram profiles, etc.
During the project:
- Daily availability for quick questions (Slack/async)
- 1-2 debugging calls if needed (30 min each)
- Fast turnaround on any “why did you build it this way?” questions
Requirements
Must-Have:
- Make debugging experience - you’ve fixed broken scenarios before, not just built from scratch
- Supabase/PostgreSQL skills - REST API, RPC functions, troubleshooting database connection issues
- API integration debugging - comfortable reading API docs, testing endpoints, fixing auth issues
- Methodical testing approach - you test systematically, document what you find, don’t skip edge cases
Nice-to-Have:
- Softr experience (form webhooks, data display blocks)
- Experience with queue systems or job processing architectures
- Worked with Apify or similar scraping tools
- OpenAI API integration experience
Working Style:
- Independent - I give you access, you run with it
- Clear communicator - daily updates on what’s broken, what’s fixed, what’s blocked
- Detail-oriented - this needs to work for real paying customers
- Problem-solver - when something breaks, you trace it through the whole stack
Timeline
October 10 (Day 1) - Kickoff
- 90-min video call: I walk you through all 8 scenarios
- You get access to everything
- You create a testing plan (which scenarios to test first, what test data you need)
October 10-15 (Days 1-6) - Testing & Debugging Sprint
- You systematically test each scenario
- Fix bugs as you find them
- Daily async updates: “Scenario 2 works. Scenario 3 has a JSON parsing issue - fixed. Scenario 4 needs your input on X.”
- 1-2 quick calls for complex debugging (optional, as needed)
October 16-17 (Days 7-8) - Integration & Final Testing
- Softr → Make → Supabase fully connected
- End-to-end user journey tested and working
- Load testing with simulated concurrent users
- Handoff call + bug list + troubleshooting notes
October 17 - Launch Ready
- All critical paths working reliably
- Any minor bugs documented for me to fix post-launch
- System ready for beta customers
Budget
$2,000 - $2,500 for 20-30 hours
Pricing Structure Options:
Option A - Fixed Price: $2,250 for the whole project
- Covers up to 30 hours of work
- You take the risk if it goes over (unlikely if my scenarios are well-built)
- I take the risk if it finishes early (you still get full payment)
Option B - Hourly: $80-100/hour, capped at 30 hours
- Track actual time spent
- Invoice weekly or at completion
- If you finish in 20 hours, I pay $1,600-2,000
- If it takes 30 hours, I pay $2,400-3,000
I’m flexible - tell me your preference.
Ongoing Support (Optional)
After launch, I’d love to keep you on retainer:
$320-500/month (4-5 hours)
- Bug fixes and troubleshooting
- Performance monitoring
- Minor feature additions
- Emergency support if something breaks
Major product upgrades (new platforms, big features) would be scoped separately.
To Apply
Please send:
1. Debugging Experience
- Describe 2-3 times you inherited/debugged someone else’s Make scenarios
- What was broken? How did you figure it out? How long did it take?
- Bonus: Show me a particularly gnarly bug you fixed
2. Technical Background
- Years with Make (and types of scenarios you’ve tested/debugged)
- Supabase or PostgreSQL experience (REST API, debugging connection issues)
- Experience testing integrations between multiple tools
- Softr or similar no-code frontend tools (nice-to-have)
3. Rate & Availability
- Your hourly rate (or fixed price if you prefer that model)
- Confirm you can start October 10
- Confirm you can commit 20-30 hours over 8 days
- Your typical working hours (for scheduling calls)
4. Testing Approach (Shows your methodology)
- Given 8 interconnected scenarios, how would you approach testing them?
- What would you test first? How do you prioritize?
- How do you track what you’ve tested vs. what still needs testing?
- How would you test the queue system without spamming real Instagram accounts?
5. Communication Style
- How do you prefer to give updates? (Slack? Loom videos? Written summaries?)
- What do you do when you’re blocked and need my input?
- How do you document bugs so I can understand them later?
Selection Timeline
- Applications reviewed: October 7-8
- Interviews: October 8-9 (45-min video calls - show me how you’d approach testing one of my scenarios)
- Selection: October 9 evening
- Kickoff: October 10, 9 AM PT (or your preferred time)
Why This Matters
Overflow helps conscious entrepreneurs (therapists, coaches, healers, artists) grow sustainable practices without manipulative marketing. Traditional lead gen feels soul-crushing to them; Overflow enables authentic, values-aligned outreach at scale.
I’ve validated the concept with beta testers. The architecture is sound. I just need someone to ensure the implementation is rock-solid before real customers depend on it.
If you love finding bugs, fixing integrations, and ensuring systems work reliably - this is your project.
Questions?
Reply with any questions about scope, timeline, or the technical stack. I’m responsive and want you to have full clarity before applying.
Let’s make this thing bulletproof.
— Joseph Arnold
joseph@overflowthrive.com (I haven’t set up my website yet, but my Overflow email works)
Founder, Overflow
P.S. - If you can genuinely start October 8-9 instead of October 10, I’d bump the budget to $2,750 to move faster. Just let me know in your application.