Comparing DALL-E 3 Results: Direct Prompting vs. Make Method Accuracy

I am new to Make, and I created a simple scenario that grabs a prompt generated through a template in Google Sheets. I am noticing that the results are extremely different when the same prompt is used through my Make scenario versus when I enter it directly into ChatGPT. My results through Make are much less accurate than those from the direct prompt.

The prompt is:
“A pair of Air Jordan 11’s featuring a custom color scheme of deep navy blue (#0C2340), vibrant orange (#E87722), and crisp white (#FFFFFF). The shoes are displayed on an Outdoor Basketball Court, emphasizing their bold and dynamic design. The image is in a square format, perfect for Instagram, and is a product like shot where the shoes are not being worn. The photo is taken using a 35mm Wide Angle lens with an aperture setting of 1.8 and under High-Key Lighting. The shoes are in sharp focus and a natural perspective”

The key difference between the two results is that the Make scenario never produces an image of the actual Air Jordan I am requesting in the prompt, while the direct ChatGPT prompt always returns the correct version of the shoe. Why would that be?

Make.com scenario results: [screenshot]

Direct ChatGPT-4 prompt results: [screenshot]

Google Sheets "Get a Cell" output: [screenshot]

OpenAI "Generate an Image" settings: [screenshot]

It seems that ChatGPT uses a different LLM than the one exposed through their API. That is more an OpenAI issue than a Make one.
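One way to narrow this down is to send the exact same Google Sheets prompt straight to the Images API, bypassing Make entirely. Below is a minimal sketch assuming the official `openai` Python SDK and an `OPENAI_API_KEY` environment variable; the `revised_prompt` field in the response shows how DALL-E 3 rewrote the prompt before generating, which is often where the ChatGPT and API results diverge:

```python
# Minimal sketch: call DALL-E 3 directly with the same prompt used in the
# Make scenario, so the output can be compared against ChatGPT's result.
# Assumes the official `openai` Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

prompt = (
    "A pair of Air Jordan 11's featuring a custom color scheme of deep navy blue "
    "(#0C2340), vibrant orange (#E87722), and crisp white (#FFFFFF). "
    "The shoes are displayed on an Outdoor Basketball Court ..."  # rest of the sheet prompt
)

response = client.images.generate(
    model="dall-e-3",      # make sure the Make module points at the same model
    prompt=prompt,
    size="1024x1024",      # square format, as the prompt asks for
    quality="standard",
    n=1,                   # DALL-E 3 accepts one image per request
)

print(response.data[0].url)             # hosted URL of the generated image
print(response.data[0].revised_prompt)  # DALL-E 3's internal rewrite of the prompt
```

If the direct API call reproduces the wrong shoe, the problem is on OpenAI's side rather than in the Make scenario; comparing `revised_prompt` against the original sheet text should show what was changed before generation.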


Interesting… I was hoping to avoid using MidJourney because of the added scenario complexity and cost, but the results from the OpenAI model are not close enough for the intended purpose.