Email marketing best sending time optimization

Hi Make Community,

I’m working on an email-marketing workflow and need your advice on feasibility and setup. Here’s my scenario:

Background

  • I have a Google Drive folder full of .csv files.
  • Each file represents a different email campaign (id_message).
  • File structure:
    id_list,id_message,group,activity,email,datetime_last_open,total_opens,total_clicks
    8,50572,,,marco.mancini8763@example.com,2025-06-05 20:04:09 +0000 UTC,1,4
  • For each contact I track their last open timestamp.

Goal

Every time a new campaign file lands in the folder, I want Make.com to:

  1. Trigger on the new .csv.
  2. Re-aggregate all customer open times across campaigns.
  3. Cluster contacts into time-of-day segments (e.g., morning, afternoon, evening) to define each person’s optimal send window (a rough sketch of what I mean is just below this list).
  4. Leverage an AI agent (instead of a simple rules-based approach) to detect and adapt to evolving patterns.
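
To make steps 2 and 3 concrete, here’s the kind of aggregation I have in mind, sketched in plain Python (the bucket boundaries, folder path, and timestamp handling are placeholders; inside Make this would of course be modules and a data store rather than a script):

    import csv
    from collections import Counter, defaultdict
    from datetime import datetime
    from pathlib import Path

    # Placeholder bucket boundaries -- all timestamps in my files are UTC for now.
    def bucket(hour: int) -> str:
        if 5 <= hour < 12:
            return "morning"
        if 12 <= hour < 18:
            return "afternoon"
        return "evening"

    def aggregate(folder: str) -> dict[str, str]:
        """Return {email: most frequent open-time bucket} across all campaign CSVs."""
        counts: dict[str, Counter] = defaultdict(Counter)
        for path in Path(folder).glob("*.csv"):
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    ts = row["datetime_last_open"]
                    if not ts:
                        continue  # contact never opened this campaign
                    # Sample format: "2025-06-05 20:04:09 +0000 UTC"
                    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S %z UTC")
                    counts[row["email"]][bucket(dt.hour)] += 1
        return {email: c.most_common(1)[0][0] for email, c in counts.items()}

    print(aggregate("./campaign_csvs"))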

Questions

  1. Is this automation feasible entirely within Make.com?
  2. Which modules/integrations would you recommend? (e.g., Google Drive “Watch files,” data stores, AI/ML services)
  3. How would you structure the scenario to:
     • Parse and merge new CSV data with historical results
     • Call out to an AI inference step for clustering
     • Store & update cluster assignments incrementally

Thanks in advance for your insights—looking forward to your recommended architecture or sample scenario!

Hi, and welcome to the Community!

That’s a great use-case, and certainly feasible in Make.

I’d use the Google Drive “Watch Files in a Folder” module as the trigger.

I’d then experiment with using an LLM (e.g. ChatGPT, Gemini) to analyze the CSV file. I’d suggest trying out different prompts and keeping an eye on the number of tokens used, which relates directly to cost. Depending on cost and volume, you might even want to try different LLMs.
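
If you want to pin down the prompt and get a feel for token cost before wiring anything up, a few throwaway lines against the LLM’s API are enough. Purely as an illustration, here’s a sketch using OpenAI’s Python SDK (the file name, model, and prompt wording are all placeholders):

    from pathlib import Path
    from openai import OpenAI

    csv_text = Path("campaign_50572.csv").read_text()  # one exported campaign file

    prompt = (
        "Each row below is a contact with their last email-open timestamp.\n"
        "Group the contacts into morning / afternoon / evening send windows "
        'and return JSON of the form {"email": "window"}.\n\n' + csv_text
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)
    print("tokens used:", response.usage.total_tokens)

Once the prompt behaves the way you want, the same wording can go straight into the LLM app’s module in the scenario.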

I’d then automate that in a simple three-step Make scenario: the Drive trigger, the relevant LLM app, and an email or IM app to send the results.

If it seems beyond the capability of a simple LLM prompt, you might want to look at each task within the analysis and build those as separate scenarios, invoking them from a Make AI Agent as tools.
