What you have here is not a simple Make cleanup job. It is a live intake pipeline tied directly to revenue, and most people struggle with work like this because they treat the symptoms instead of fixing the structure underneath. I am a strong fit because I regularly step into business-critical workflows where incoming data is messy, logic has already been partially built, and the real work is making the system reliable enough that the business can trust it.
The first thing I would do is audit the current scenario from trigger to final board write, so I can see exactly where the breakdowns are happening and whether the issue is in parsing, AI extraction, GraphQL lookup, router logic, or payload formatting. From there:

- I would pull real examples of the inbound emails and inspect the raw body structure, because email automations usually fail at the source when HTML, forwarded content, reply chains, or inconsistent formatting are not handled correctly.
- I would standardize the body handling before touching the AI layer, so the extraction step is working from one clean content format instead of trying to interpret inconsistent input every time (first sketch after this list).
- I would review the Monday board architecture and GraphQL mapping in detail, including board IDs, column IDs, linked items, and expected value types, because Monday integrations break quickly when column payloads are even slightly off.
- I would isolate how client matching is currently being handled, since this is one of the most important control points in the whole flow and a weak lookup strategy can create blocked orders, false matches, or bad board data (second sketch after this list).
- I would tighten the router conditions based on real failure paths, not assumptions, because Make routers often look fine visually while still failing on null checks, condition order, or inconsistent field typing.
- I would restructure the AI extraction step to force predictable output for the exact fields that matter operationally, so the downstream modules are not trying to work off loose text (third sketch after this list).
- I would add validation between extraction and order creation so incomplete or questionable data gets stopped cleanly instead of silently creating bad records your team has to fix later.
- I would clean up the blocked-alert path so it gives you something operationally useful, not just a dead end: exactly what failed and why.
- I would test against multiple real email samples, especially the ugliest ones, because a scenario like this is only valuable if it survives inconsistent client behavior in production.
- I would leave the scenario organized and understandable so you can actually learn from it and extend it later, instead of being stuck with a fragile black box.
- I would also structure the fixes so future additions like supervisor matching, attachment handling, or richer decision logic can be added without rebuilding the whole automation.
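To make the body-normalization step concrete, here is a minimal sketch of the kind of cleanup I mean, written in Python for readability even though in Make it would live in text-parsing operations or a custom code module. The helper names and reply-chain patterns are illustrative assumptions, not pulled from your current scenario:

    import re
    from html.parser import HTMLParser

    class _TextExtractor(HTMLParser):
        """Collects the visible text out of an HTML email body."""
        def __init__(self):
            super().__init__()
            self.chunks = []

        def handle_data(self, data):
            self.chunks.append(data)

    def normalize_email_body(raw_body: str) -> str:
        """Reduce an inbound email (HTML or plain text) to one clean format."""
        # Strip HTML tags if present, keeping only visible text.
        if "<" in raw_body and ">" in raw_body:
            parser = _TextExtractor()
            parser.feed(raw_body)
            raw_body = " ".join(parser.chunks)

        kept = []
        for line in raw_body.splitlines():
            line = line.strip()
            # Cut quoted reply chains and forwarded-thread headers. These
            # patterns are common examples; the real set would come from
            # your actual failing samples.
            if line.startswith(">") or re.match(r"On .+ wrote:", line):
                break
            if re.match(r"-+ ?Forwarded message ?-+", line):
                break
            if line:
                kept.append(line)

        # Collapse stray whitespace so the AI step always sees a stable shape.
        return re.sub(r"[ \t]+", " ", "\n".join(kept))

The exact rules matter less than the principle: every email should hit the extraction step in one predictable format.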
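On the client-matching and board-write side, this is roughly the payload discipline I would enforce. The query shapes follow Monday's GraphQL API as I know it; the token, board ID, and column ID are placeholders, and in the live scenario these calls would sit in Make's Monday or HTTP modules rather than a script:

    import json
    import requests

    MONDAY_URL = "https://api.monday.com/v2"
    API_TOKEN = "YOUR_MONDAY_TOKEN"   # placeholder
    CLIENTS_BOARD_ID = "1234567890"   # placeholder board ID
    EMAIL_COLUMN_ID = "email"         # placeholder column ID

    def _gql(query: str, variables: dict) -> dict:
        resp = requests.post(
            MONDAY_URL,
            json={"query": query, "variables": variables},
            headers={"Authorization": API_TOKEN},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    def find_client(email: str):
        """Match on a stable identifier instead of a fuzzy name lookup."""
        query = """
        query ($board: ID!, $column: String!, $value: String!) {
          items_page_by_column_values(
            board_id: $board,
            columns: [{column_id: $column, column_values: [$value]}],
            limit: 1
          ) { items { id name } }
        }"""
        data = _gql(query, {
            "board": CLIENTS_BOARD_ID,
            "column": EMAIL_COLUMN_ID,
            "value": email,
        })
        items = data["data"]["items_page_by_column_values"]["items"]
        return items[0] if items else None  # None routes to the blocked path

    def create_order(board_id: str, name: str, column_values: dict) -> str:
        """column_values must be JSON-encoded; this is where payloads break."""
        mutation = """
        mutation ($board: ID!, $name: String!, $values: JSON!) {
          create_item(board_id: $board, item_name: $name,
                      column_values: $values) { id }
        }"""
        data = _gql(mutation, {
            "board": board_id,
            "name": name,
            "values": json.dumps(column_values),
        })
        return data["data"]["create_item"]["id"]

If name-only matching is all you have today, moving to a stable identifier like this is one of the highest-leverage fixes available.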
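And on forcing predictable extraction output plus the validation gate, a sketch of the shape I mean. The field names are hypothetical stand-ins for whatever your orders actually require; the point is a strict contract on the AI side and a hard stop before anything reaches the board write:

    # Hypothetical required fields; the real list comes from your operations.
    REQUIRED_FIELDS = ("client_email", "order_type", "quantity", "due_date")

    EXTRACTION_INSTRUCTIONS = """
    Return ONLY a JSON object with exactly these keys:
    client_email, order_type, quantity, due_date.
    Use null for anything not present in the email. No prose, no markdown.
    """

    def validate_order(extracted: dict) -> tuple[bool, list[str]]:
        """Gate between extraction and order creation.

        Returns (ok, problems). Anything not ok is routed to the blocked
        branch with the exact reasons, instead of silently creating a bad
        record someone has to fix later.
        """
        problems = []
        for field in REQUIRED_FIELDS:
            if not extracted.get(field):
                problems.append(f"missing required field: {field}")

        qty = extracted.get("quantity")
        if qty is not None and (not str(qty).isdigit() or int(qty) <= 0):
            problems.append(f"quantity is not a positive integer: {qty!r}")

        return (not problems, problems)

The blocked alert then carries the problems list verbatim, which is what turns that path from a dead end into something your team can act on.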
A few relevant examples from my work:
Cococure AI WhatsApp Automation
This project is relevant because it was not just a chatbot. It was a multi-step, AI-driven business workflow where incoming user messages had to be interpreted correctly, routed through orchestration logic, checked against live availability data, and turned into usable actions without creating operational confusion. I personally handled the product ownership, workflow design, prompt logic, QA process, and coordination across OpenAI, LangChain, Redis, FastAPI, and the messaging layer. That experience maps directly to your project because the hard part is not just calling AI. The hard part is controlling what the AI extracts, validating it, and making sure the next system in the chain can trust the output.
Select Screening Services Platform (https://stage.drugscreening-fe.testyourapp.online/)
This is relevant because I led the build of a structured intake and workflow system where records moved through multiple operational states and bad data could not be allowed to pass loosely between steps. I handled the product structure, backend workflow planning, role-based logic, integration design, and the sequencing of how information entered and moved through the platform. In your case, the same discipline applies: if the order intake is not validated correctly before it hits Monday, the rest of the workflow becomes cleanup work and revenue risk.
AgensyCare (agensy.com)
This project involved building a platform with structured forms, role-based workflows, secure document handling, and operational logic that had to stay dependable as different users entered different kinds of information. I led the architecture direction, feature planning, implementation oversight, and workflow structure across the application and AWS infrastructure. What ties directly into your project is the need to define exactly what clean input looks like, what gets accepted, what gets blocked, and how the system should behave when data is missing or malformed. That is the same kind of thinking your automation needs.
I am available for a short call to review the existing scenario and give you a grounded read on what needs to be fixed first.
A few questions I would want answered up front:

- How many real order email variations are you dealing with right now, and do you already have examples of the ones that fail most often?
- Are you matching clients in Monday by name only, or do you already have a more reliable identifier in place?
- Which fields are truly required before an order can be created without someone on your team having to manually fix it later?
- Do you want this engagement focused on stabilizing the current scenario first, or do you want cleanup plus expansion planning at the same time?
Brandon
brandon@bluegrass-media.com
501-733-1465