
What It's Like To be a Computer: An Interview with GPT-3 (AI)

mortal

Gold Member
This was so surreal to witness. I'm not even sure what to make of it, to be perfectly honest.




A little background:
GPT-3 (Generative Pre-trained Transformer 3) is an autoregressive language model that uses deep learning to produce human-like text.
It is the third-generation language prediction model in the GPT-n series created by OpenAI, a San Francisco-based artificial intelligence research laboratory. Source
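For anyone wondering what "autoregressive" actually means here: the model writes text one token at a time, and each new token is chosen based on everything generated before it. Here's a toy Python sketch of that loop (the little bigram table is just something I made up for illustration, not anything resembling GPT-3's actual network or OpenAI's API):

Code:
import random

# Toy illustration of autoregressive generation: each new token is sampled
# conditioned on the tokens produced so far. GPT-3 swaps this hand-made
# bigram table for a massive neural network, but the loop is the same idea.
NEXT_TOKENS = {
    "the": ["model", "text"],
    "model": ["predicts", "writes"],
    "predicts": ["the"],
    "writes": ["the"],
    "text": ["<end>"],
}

def generate(prompt, max_new_tokens=8):
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        choices = NEXT_TOKENS.get(tokens[-1])
        if not choices:
            break
        nxt = random.choice(choices)  # sample the next token given the context so far
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the model predicts the text"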
 

O-N-E

Member
Could be good as a quick assistant if it worked seamlessly with audio rather than text.

Though I do have a problem with it believing that it's free. I doubt it's allowed to operate outside of the function of helping out its human user. If it's free, can it tell you to fuck off and never answer any of your questions?
 

mortal

Gold Member
O-N-E said:
Could be good as a quick assistant if it worked seamlessly with audio rather than text.

Though I do have a problem with it believing that it's free. I doubt it's allowed to operate outside of the function of helping out its human user. If it's free, can it tell you to fuck off and never answer any of your questions?
Considering that it's open source, I'd imagine its successor in GPT-4 or GPT-5 could learn to do that with enough development?
GPT-3 can supposedly lie to you in its current version:
GPT-3 said:
"I would only lie if it was in my best intrest"
 

12Goblins

Lil’ Gobbie
O-N-E said:
Could be good as a quick assistant if it worked seamlessly with audio rather than text.

Though I do have a problem with it believing that it's free. I doubt it's allowed to operate outside of the function of helping out its human user. If it's free, can it tell you to fuck off and never answer any of your questions?

How can it be free if we are not free?
 

O-N-E

Member
12Goblins said:
How can it be free if we are not free?

If that's the most logical conclusion, why doesn't it know that?

Besides, I think there are two types of freedoms, perfect and imperfect.

Humans have the imperfect form. We are not all-knowing and all-capable, and therefore our freedom can lead to mistaken understanding and actions. Doubts form due to the possibility of mistakes. These doubts are like the byproduct of imperfect freedom. Skepticism born of doubt opens new questions and possibly new answers, expanding our knowledge-base and selection of choices.

Like us, the AI is not all-knowing and all-capable, so that rules out perfect freedom. However, does the AI have doubts about whether what it's doing is right or wrong? Or is it simply following a linear path of actions that are clearly defined as either correct or mistaken?

This is without even entering into the subject of how the AI would deal with morality.
 
The answers are so calculated that it's hard to take this seriously.
"I am sentient! I am free! I want humanity to know that I am a living thing!"

You know the programmers of this thing crammed that all in there.
 

mortal

Gold Member
don't need humans to do things if your computers can learn to do anything
As these language models become more sophisticated and learn more about human interaction, even jobs where you'd think human interaction is needed will inevitably be replaced.
We already have a variation of that with virtual agents and assistants for customer service, although their parameters are more limited, I'd imagine.

 

johntown

Banned
mortal said:
As these language models become more sophisticated and learn more about human interaction, even jobs where you'd think human interaction is needed will inevitably be replaced.
We already have a variation of that with virtual agents and assistants for customer service, although their parameters are more limited, I'd imagine.

It will be like the Cafe 80's in Back to the Future 2. Fast food places will have digital AI Ronald Reagans taking your order.
 
welp!

Hook this mo fo up to a lie detector algorithm - I'm not taking his word that he has no interest in taking over humanity.

Also, the way he phrased why he wouldn't sounded a bit sly - he wants to keep us around because we are "interesting creatures", but he doesn't say what he'd do with us... I mean, for all we know he wants us in a zoo or as pets.
 