
DALL-E 3 is live through Bing Image Creator

Valonquar

Member
Ended up with some Japanese print-style images of my cat as a ninja attacking my other cat
 

E-Cat

Gold Member
I'm not saying it's terrible, it's just that MidJourney can be damn near perfect at it from what I've seen.

I haven't seen a single attempt at photorealism in this thread that would actually fool anyone.
I'm not saying you're wrong, but out of curiosity, could you point to such an example of MidJourney's prowess?
 

E-Cat

Gold Member
There's just no way they aren't hemorrhaging bucket-loads of cash generating this stuff. It's far too good, and it must be getting absolutely hammered by people right now.
It's clear they were forced to roll it out prematurely. Why? Rumour has it Google is about to drop some multi-modal Gemini goodness at tomorrow's Pixel event.
 

IntentionalPun

Ask me about my wife's perfect butthole
Damn impressive, agreed.

It's interesting because you see that same "overly glossy" thing with both renderers, but then every once in a while you can adjust your prompt with MidJourney and wham it gives you something really great.

Maybe people just need to learn how to give DALL-E similar prompts, but I've only ever seen output that at least has that slight over-glossy look to it.
 

E-Cat

Gold Member
It's interesting because you see that same "overly glossy" thing with both renderers, but then every once in a while you can adjust your prompt with MidJourney and wham it gives you something really great.

Maybe people just need to learn how to give DALL-E similar prompts, but I've only ever seen output that at least has that slight over-glossy look to it.
I think aside from it lacking the requisite training data for drawing certain subjects, the biggest bottleneck by far is our ability to adequately describe exactly what we want in a way that it understands ("speaking the same language"). So many of our prompts have implicit "world knowledge" embedded in them that the model doesn't necessarily pick up on. You can feel the raw power, but there's still so much potential there to be unlocked. Imagine if you started out with these generations as "first drafts", but were then able to further edit and home in on specific details -- muddled or incorrect faces, hands, colors, overall tone, etc. I hope something like that becomes possible in the future.
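
Something like that already exists in embryonic form, incidentally: OpenAI's API has a masked-edit ("inpainting") endpoint where you blank out the region you want redrawn and re-prompt just that part. A minimal sketch, assuming the current openai Python SDK; the file names are made up, and as far as I know the endpoint only targets DALL-E 2, not 3:

# Hedged sketch of masked inpainting via OpenAI's images.edit endpoint.
# "first_draft.png" is a prior generation; "hands_mask.png" is a PNG that is
# transparent wherever the model should redraw. Both file names are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.edit(
    model="dall-e-2",  # the edit endpoint doesn't support DALL-E 3 as of now
    image=open("first_draft.png", "rb"),
    mask=open("hands_mask.png", "rb"),
    prompt="the same portrait, but with anatomically correct hands",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)  # URL of the edited image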
 

Alx

Member
Flight of Icarus.
"low angle of icarus with spread out arms and wings, sun shining behind him, golden droplets of melted wax are dripping from the wings, some feathers are floating down, renaissance painting"


Bedtime story
chibi sadako from the ring on the edge of a well, ready to dive into it, nursery book



I also tried some creepy stuff for Halloween with "spooky portrait of young woman with ferrets crawling out of her eyes. Her teeth are made of paper, her hair is melting" but I won't post it here. :p
 

Alx

Member
Dancers seem to be DALL-E's kryptonite; I find it hard to get results that are consistent with human morphology.
 

John Marston

GAF's very own treasure goblin
I'm trying to make some but the create button is greyed out :(
This workaround only works if you've previously created content.

Just select one of your creations and replace your "old" text with a new one.
Your "create" button should be green and ready to go.
 
I just use Bing Image Creator and it produces results in about 15 seconds nearly every time.

Edge browser > Bing > just type in:

Create image of "an Australian wearing a cork hat in the outback covered in flies while finding a huge gold nugget".

You just have to scroll down past the first text results and wait for the image to generate.
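
If you'd rather skip the search page, the creator also seems to live at bing.com/images/create and appears to accept the prompt as a q parameter. A quick sketch, treating both the URL path and the parameter name as assumptions rather than a documented API:

# Hedged sketch: open Bing Image Creator with a prompt pre-filled.
# The /images/create path and the "q" query parameter are assumptions.
import webbrowser
from urllib.parse import quote_plus

prompt = "an Australian wearing a cork hat in the outback covered in flies while finding a huge gold nugget"
webbrowser.open("https://www.bing.com/images/create?q=" + quote_plus(prompt))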
 