How to host an LLM to process sensitive data?

Hi everyone,
I’m planning to build some automations to process personal data for my clients—for example, an automated email responder. I’d like to know if it’s possible to train and host an LLM on a server and use it to process this data.
In the best case, each client would have their own secure, personalized LLM trained on their data. Is this feasible? Would I need to self-host a separate instance for each client, or is there a way to manage multiple clients efficiently? Also, which Make module would be best suited for this setup?
I’d really appreciate any insights or recommendations

Hey Simone,

For the Make-related part of your question: you can expose an API on the server and either build a custom Make app to access it or use the generic HTTP module. Which one you choose depends on the complexity of the API and how you want to connect to it.
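To make that concrete, here is a minimal sketch of what such a server-side endpoint could look like, using only the Python standard library. The route `/generate`, the payload fields, and the placeholder reply are all hypothetical; the actual model call would replace the stub in `build_reply`. Make's HTTP module would then POST JSON to this endpoint.

```python
# Hypothetical sketch of an endpoint Make's HTTP module could call.
# The route, payload shape, and reply are placeholder assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def build_reply(payload: dict) -> dict:
    """Turn an incoming request payload into a response dict.
    A placeholder string stands in for the real LLM inference call."""
    question = payload.get("question", "")
    answer = f"LLM answer for: {question}"  # replace with real model output
    return {"answer": answer}

class LLMHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/generate":  # hypothetical route name
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(build_reply(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Make's HTTP module would POST to http://your-server:8080/generate
    HTTPServer(("0.0.0.0", 8080), LLMHandler).serve_forever()
```

If the API ends up with more than one or two endpoints, a custom Make app with proper connection handling is usually nicer to maintain than a pile of raw HTTP modules.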

About the AI itself: sure, why not. I suppose both approaches would work. One LLM that accesses each client's individual database to process their requests, or one LLM per client. The second approach definitely can't cross-contaminate or mix up client data, but the first is much more easily scalable, since you just add a new data set when a new client is onboarded.
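For the single-LLM approach, the isolation boils down to making sure a request can only ever reach the dataset tied to the credential it authenticated with. A rough sketch of that routing step (all key and store names here are made-up placeholders):

```python
# Hypothetical sketch: one LLM service, per-client data isolation.
# A client's API key maps to exactly one data store, so a request
# can never fall through to another client's data.

CLIENT_DATASETS = {  # placeholder keys and store names
    "key-client-a": {"store": "client_a_db"},
    "key-client-b": {"store": "client_b_db"},
}

def resolve_dataset(api_key: str) -> str:
    """Return the data store for this client's key.
    Fail hard on an unknown key rather than falling back to any
    default store, so there is no path to another client's data."""
    try:
        return CLIENT_DATASETS[api_key]["store"]
    except KeyError:
        raise PermissionError("unknown client key")
```

Onboarding a new client then means adding one entry to that mapping, rather than standing up a whole new model instance.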
