OpenClaw-like Life Assistant

I spent some time mapping out how to replicate OpenClaw’s functionality in Make.com. The overall architecture stays the same: a UI layer, an application layer where the agents run, and storage for the database, memory, and instructions.

I’ve now built the full application in Make.com. Instruction files are currently stored on my personal Google Drive, authentication is handled through Make.com modules (which makes the setup safer to use), and I’m using Firestore (NoSQL) as the database for memories.

Current setup

Google Docs (separate files)
  • System prompt
  • Heartbeat prompt
  • Model ID to use
  • Memory backup

Firestore
NoSQL database that stores memories and every interaction

Scenarios

Engine (triggered on demand)
Triggered on demand by the other scenarios. This is one large Make AI Agent with tool access to TickTick, Gmail, Calendar, Firestore, and Google Docs; all agent work routes through this engine.
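Conceptually, the engine is an agent with a tool dispatch table. A minimal sketch in Python (the tool names and stubbed handlers are illustrative; the real routing happens inside Make.com's AI Agent module, not in code):

```python
# Illustrative sketch of the engine's tool access. Handlers are stubs
# standing in for Make.com modules (TickTick, Gmail, etc.).

def search_tasks(query: str) -> str:
    # Stub for the TickTick module.
    return f"tasks matching {query!r}"

def read_inbox(limit: int) -> str:
    # Stub for the Gmail module.
    return f"latest {limit} emails"

TOOLS = {
    "ticktick.search": search_tasks,
    "gmail.read": read_inbox,
}

def run_tool(name: str, **kwargs) -> str:
    """Dispatch a tool call the agent has decided to make."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)
```

In Make.com the equivalent of `TOOLS` is the set of modules wired into the AI Agent; the agent picks the module, and the scenario executes it.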

Assistant (triggered instantly on new Telegram messages)
Triggered instantly on new Telegram messages. I originally tried Discord, but it doesn’t have an instant trigger for new messages.
When a message comes in, I check for attachments. Right now I’m only supporting images, but I plan to expand to other file types. OpenAI handles the OCR so the agent can understand the file, then I send the instructions plus file details to the engine. I also pull the system instructions and model ID from Google Docs and pass those to the engine.
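The per-message flow can be sketched as a small pipeline. Everything here is hypothetical naming on my part (the `photo_id` field, the stubbed OCR call); in the real scenario OpenAI does the OCR and Make.com modules pull the prompt and model ID from Google Docs:

```python
def ocr_image(file_id: str) -> str:
    # Stub: in the real scenario, OpenAI extracts text from the image.
    return f"[ocr text for {file_id}]"

def build_engine_payload(message: dict, system_prompt: str, model_id: str) -> dict:
    """Assemble what the Assistant scenario forwards to the engine."""
    parts = [message.get("text", "")]
    if message.get("photo_id"):          # only images are supported today
        parts.append(ocr_image(message["photo_id"]))
    return {
        "instructions": system_prompt,   # pulled from Google Docs
        "model": model_id,               # pulled from Google Docs
        "input": "\n".join(p for p in parts if p),
    }
```

The point of the sketch is the ordering: attachments are turned into text before anything reaches the engine, so the engine only ever sees plain instructions plus input.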

Heartbeat (scheduled every 4 hours)
A scheduled scenario that runs every 4 hours to check tasks, email, calendar, etc., and keep me updated. The instructions for this are stored in a Google Drive file; for more privacy, they could be stored in Firestore or MongoDB instead and retrieved from there. The heartbeat also checks whether the engine and chatbot scenarios are active and running, and whether any errors have occurred.
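The health-check half of the heartbeat reduces to a simple aggregation. A sketch (the inputs here are assumptions; in practice the status flags would come from Make.com's own scenario-status data):

```python
def heartbeat_report(scenario_status: dict[str, bool], errors: list[str]) -> str:
    """Summarize scenario health for the 4-hourly heartbeat message.

    scenario_status maps scenario name -> is it active and running.
    errors is a list of recent error messages, if any.
    """
    down = [name for name, active in scenario_status.items() if not active]
    lines = []
    if down:
        lines.append("Inactive scenarios: " + ", ".join(down))
    if errors:
        lines.append(f"{len(errors)} recent error(s)")
    return "\n".join(lines) if lines else "All scenarios healthy"
```

Only the summary string is sent onward; the agent decides whether it is worth notifying me about.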

Email triage (every 15 minutes)
Runs every 15 minutes and picks up new emails. Each email's content is passed to the agent for triage, with the goal of maintaining inbox zero. The agent either archives the email, or creates a task and then archives it. If it can't triage confidently, it leaves the email in the inbox and creates a task for me to review it.

All scenarios route into the engine and share the same thread ID. I'm currently using the current date and hour as the thread ID, so a new thread is created each hour. That keeps context available within the hour while keeping the agent's context relatively lean.

This also means that after an email triage run or a heartbeat run, I can still chat with the agent about what just happened, ask follow-up questions, or trigger follow-up actions.

I’ve kept the system prompt and model ID in Google Docs because I want the agent to be able to update prompts or model IDs. I can chat with it and ask it to update the prompt or model based on what we learn together.

Enhancements planned

  • Browser automation
  • Support for other file types uploaded to Telegram
  • Using OpenRouter instead of OpenAI to reduce costs and access more models