I would be very grateful for any ideas on how to integrate my D-ID agent (using its API key) with Millis AI's voice/brain (using its API key). I am not familiar with make.com and would like to integrate the two services as simply as possible, so that the avatar, with its lip-sync capabilities, can listen and speak using Millis's brain and low-latency voice.
I am also not familiar with webhooks or WebRTC. Ideally, the avatar would appear on my screen the way it does with D-ID's "picture in picture" feature. Thank you!