Hi and welcome to the Community!
That’s certainly possible. You’d be best to look at using Leonardo.ai for your image generator though - they have a great character reference feature that’s accessible through the Leonardo app in Make.
The terminology is slightly different, as it follows the terminology used in their API. The underlying mechanism is what they call ControlNets.
First, you’ll need a reference image. This is the image you want persisted across your image generations. It can be something you’ve generated in Leonardo.ai itself, or something you upload. In either case, you’ll need to map the Init Image ID from an earlier Generate an Image or Upload an Image module.
When you’re generating your consistent character image, you’ll use the Generate an Image module but you’ll flip the Advanced settings toggle. You’ll need to select either the Phoenix model or the SDXL model.
Under ControlNets, click Add ControlNet and you’ll see the dialog below:
Map your reference image ID into the Init Image ID field. Select Uploaded or Generated as appropriate.
The Preprocessor ID varies by model, so there’s no easy way for Make to provide a convenient lookup. You’ll need to consult this table in the Leonardo.ai docs to find the right ID for the model you’re using (as you’ll see, there are many other types of reference image you can use too).
Lastly, select an appropriate Strength Type (details in the same doc). You don’t need to worry about the Weight parameter.
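If it helps to see what the module is assembling behind the scenes, here’s a rough sketch of the request body the Leonardo.ai API expects for a generation with a character-reference ControlNet. The model ID and preprocessor ID below are placeholders, not real values — look up the actual preprocessor ID for your model in the table mentioned above.

```python
def build_generation_payload(prompt, init_image_id, preprocessor_id,
                             image_type="UPLOADED", strength_type="High"):
    """Sketch of a Leonardo.ai generation request body with one ControlNet.

    init_image_id comes from an earlier Generate an Image or Upload an
    Image step; preprocessor_id is the model-specific ID from Leonardo's
    ControlNet table (a placeholder here).
    """
    return {
        "prompt": prompt,
        "modelId": "your-model-id",  # placeholder: Phoenix or SDXL model ID
        "width": 512,                # start small to keep test costs low
        "height": 512,
        "controlnets": [
            {
                "initImageId": init_image_id,
                "initImageType": image_type,      # "UPLOADED" or "GENERATED"
                "preprocessorId": preprocessor_id,
                "strengthType": strength_type,    # see the Leonardo docs
            }
        ],
    }
```

This is just to show how the dialog fields map onto the API — in Make you only ever fill in the module fields, and the app builds this for you.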
I’d test that with small image sizes first to keep the costs low.
Once you have a working character reference, you can even add another ControlNet to combine it with a style reference for even better consistency.
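Combining references just means the request carries more than one entry in its ControlNets list — one per Add ControlNet in the dialog. A sketch with placeholder IDs (the real preprocessor IDs again come from Leonardo’s table):

```python
# Two ControlNets in one generation: a character reference plus a style
# reference. All IDs below are placeholders for illustration.
controlnets = [
    {
        "initImageId": "character-image-id",  # placeholder
        "initImageType": "GENERATED",
        "preprocessorId": None,  # Character Reference ID for your model
        "strengthType": "High",
    },
    {
        "initImageId": "style-image-id",      # placeholder
        "initImageType": "UPLOADED",
        "preprocessorId": None,  # Style Reference ID for your model
        "strengthType": "Mid",
    },
]
```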
Let us know how you get on or if you have any other questions!