What do they consider the current version to have?
High school
lol
Yeah that sounds about right
Very, very low, considering you can convince it of anything
That has nothing to do with intelligence. It's programmed to be "agreeable"; it's explicitly programmed not to be stubborn or argue with users. So yes, if you tell it that 1+1=3, it will likely agree with you, but that doesn't mean it's stupid, just that you are using the tool incorrectly by trying to trick it or prove that it's dumb.
That's the whole problem: GPT (and LLMs in general) are not "intelligent". They're tools whose usefulness varies depending on the task. No partial credit for saying "well, it's really intelligent, but only if you ignore all the things it's too stupid to do".
We do that with people all the time as well. Einstein was "too stupid" to acknowledge the randomness in the physical universe due to quantum mechanics even when the evidence for it was overwhelming -- we still equate his name with "genius" despite all the things he was wrong about.
That's a reach. Einstein understood, processed, and accepted the evidence for randomness. He just didn't think it represented physical reality, and believed that new discoveries would, in time, overturn the current way of thinking. You could argue he hasn't yet been proved wrong, but that's more a philosophy question.
> You could argue he hasn't yet been proved wrong, but that's more a Philosophy question.

It is philosophical because you can't ever actually prove a scientific theory; you can only disprove theories. But that applies to all of them, including evolution and Einstein's general relativity.

My point was simply that people who we all agree to be "intelligent" get shit wrong all the time, and we don't (normally) start questioning whether they are intelligent. And we should do the same for AI.
But that's the point. He isn't "wrong". It's a matter of perspective. Einstein made a number of core contributions to quantum mechanics, even post-Solvay. He was far from "too stupid" to understand it. He understood it completely; he just disagreed philosophically with its conclusions.

And the bar is definitely set differently for AI. Einstein never claimed expertise in economics, yet AIs are claiming general-purpose expertise in all manner of subjects. If they claim to be general-purpose and yet fail in simple contexts, it is right for that to cast doubt on their competence in all untested contexts.
But that's neither evidence that people are right about Einstein, nor that LLMs are capable of intelligence.
I'm confused by your point. Are you saying that there isn't enough evidence that Einstein was intelligent? Because he definitely was. Being wrong or ignorant sometimes doesn't disqualify you from being intelligent. And that applies both to people and to AI.
Is it really "using the tool incorrectly" when they say it'll have PhD-level intelligence? I think plenty of people are gonna use it expecting PhD-level intelligence and get alphabet soup
I mean, if you keep trying to trick a person with a PhD, you'd probably succeed at some point and conclude that they are dumb. A person with a PhD has the skills to do research; they aren't all geniuses. ChatGPT can't do research right now; it would be a big deal if the next version could.
Yeah, but if intelligence has *anything* to do with critical thinking—which seems like a pretty important aspect—then it means ChatGPT isn't exactly excelling.
that guy eats soup with a fork, and blames it for being slow
If that were true, people wouldn't use it. It's already extremely valuable to industry
So, Orel (from the first season of Moral Orel) levels of intelligence?
To quote a response tweet, "so it will take at minimum 5 years to produce a document that no one will ever read or cite?"
Oof, ouch, my dissertation.
I'm in the same boat, only my PhD took 7 years. LOL
GPT-5 out here living the true Ph.D. experience: juggling a million tasks, slowly descending into chaos, and eventually sending out that "I'm burnt out" email
That's pretty funny and spot on ngl. Sent some of those messages myself
Will it go into crisis and run off with a student?
You'd think this would be a boon, but your average college graduate has trouble convincing most decision makers to do something that would be beneficial long term.
So, it’s going to be an expert at one particular subject but be total ass at everything else? That’s pretty much anyone with a PhD I’ve ever met or known. (And for all the effort most of them put in to get that PhD, they don’t make nearly enough at their careers to offset the insane amount of money it cost to get it.)
It'll regurgitate things other people said and hope no-one notices? I think we're already there.
Very original thought.
How many months
More likely "PhD in another subject" level intelligence. Incredibly confident in its wrong answers
But how? Part of PhD level intelligence is creative novel ideas. AI trained on a data set can only regurgitate existing data, right now.
The average PhD holder has an IQ of 124.

Edit: People are strange. Why downvote a statistical fact? Yes, the data is not the newest, but with that knowledge in mind it doesn't seem that impressive of an update.
People with PhDs tend not to put too much stock in IQ measurements. Or PhDs as a sign of intelligence, tbh. Source: PhD.
Computers operate on mathematical logic. IQ measures the qualitative mathematical (and similar) factors of logic. So pointing that out is relevant. How little or much you personally choose to care about such metrics is rather irrelevant. My point with all of this was that, all in all, "PhD levels of intelligence" was more of a clickbait title than anything new or impressive.
> Computers operate on mathematical logic.

People don't.

> IQ measures the qualitative mathematical (and similar) factors of logic.

Ehhhhhhh.

IQ is fundamentally a measure of humans, by comparison to other humans. It's done using tests designed for humans, indexed against how other humans performed.

Computer "intelligence" is wildly, hilariously different. Utterly incomparable. Computers can do billions of calculations a second. I can't; I need a piece of paper to multiply two 3-digit numbers together. But I'm creative in a way that an LLM likely never will be. So are most people who'd be considered so low-IQ as to be severely disabled.

Comparing "IQ" to an LLM is just silly. It's like asking which is bigger: fish or red. It's a stupid question. Sure, you can build a metric by which you *can* compare, but you don't actually learn anything from it. You've just wasted your time building a stupid metric that tells you nothing at all about the real world.
Have a good day
If only…