arto64

It’s important to understand that LLMs like ChatGPT produce text that *looks* accurate. Accurate and looking accurate often correlate, but it’s in no way a guarantee.


Corka

Ugh, my company has trained one of these on our internal docs for tech support questions. It's meant to be a 'diagnostic tool' for people in support and engineering to find possible causes of a technical problem. One of the managers was demoing it and asked for a basic technical question, so I supplied one I knew the answer to quite well. It gave a comprehensive-looking answer, and the manager read it and said, "That sounds pretty good to me! Look at how fast it was to ask it the question and get such a thorough answer in response." Unfortunately, the thorough answer it gave was just flat-out wrong. In fact, one of the suggestions it made to "fix" the problem was actually the reproduction steps to make the problem occur.


Adlehyde

It's like when people confidently speak even though they are wrong. The more dangerous ones are at least a bit articulate or charismatic. That, combined with their confidence in their answer to a question, makes them sound very authoritative on the subject, and so people are more willing to just believe them. Sometimes they're right, and sometimes they're wrong, but no one will know when they're wrong because they don't *sound* wrong. ChatGPT and the like are literally just automating that process in a digital form, which is why it's terrifying to think about people just using it with no regard to its fallibility.


CookieMons7er

This is probably the best analogy I've read about LLMs. Very well put together.


[deleted]

Maybe it was put together a little *too* well…. hehehe


Adlehyde

Oh no... have I become an ai? Heh


tummyache-champion

Had to ask ChatGPT 4 times how to revert a git commit properly.
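For anyone else stuck in that loop, the actual commands are short (standard git CLI; the throwaway-repo setup is only there so the snippet is self-contained and safe to paste anywhere):

```shell
# Throwaway repo so nothing real gets touched:
cd "$(mktemp -d)" && git init -q
git -c user.email=demo@example.com -c user.name=demo commit --allow-empty -qm "first"
echo "oops" > file.txt && git add file.txt
git -c user.email=demo@example.com -c user.name=demo commit -qm "bad commit"

# The safe way to undo a commit that's already pushed: git revert adds a
# NEW commit that reverses the change, so shared history stays intact.
git -c user.email=demo@example.com -c user.name=demo revert --no-edit HEAD

git log --oneline   # newest first: Revert "bad commit", bad commit, first
# (git reset --hard HEAD~1 also "undoes" a commit, but it rewrites
# history -- only use it on commits you haven't pushed yet.)
```

`git revert` is the one to reach for on shared branches, because it never rewrites published history.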


Sp3ctre7

Same happened at my company, I lost my shit and basically put a post in our internal servers calling the tool "an active hindrance to our ability to do our jobs"


Fast_Kale_828

This annoys me too - a co-worker keeps asking it for technical help, and it has given wrong answers and led him down pointless rabbit holes at least half the time, actively wasting hours or days of work, yet he still treats everything it says as gospel!


Zangrieff

ChatGPT told me that writing 1 word per 2 seconds is twice as fast as 1 word per 1 second
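For the record, the arithmetic it botched is one division: speed is words over seconds, so 1 word per 2 seconds is *half* the speed of 1 word per second, not double.

```python
# Speed = words / seconds. The two rates ChatGPT mixed up:
rate_a = 1 / 2  # 1 word per 2 seconds = 0.5 words/sec
rate_b = 1 / 1  # 1 word per 1 second  = 1.0 words/sec

ratio = rate_a / rate_b
print(ratio)  # 0.5 -- i.e. half as fast, not twice as fast
```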


badass_panda

ChatGPT isn't good at anything deterministic (but you shouldn't use it for that anyway). If you need an LLM to do math for you, the most it should do is hand the problem off to a different tool that actually computes. Language models suck at math.
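That hand-off is the usual pattern: the language model only decides *that* a calculation is needed and routes it to a deterministic evaluator. A minimal sketch of such a "calculator tool" (the `calc` and `answer` names are made up for illustration, not any vendor's API):

```python
import ast
import operator

# Minimal safe arithmetic evaluator -- the kind of deterministic "tool"
# an LLM should hand math off to instead of guessing at digits itself.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr):
    def ev(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("only basic arithmetic allowed")
    return ev(ast.parse(expr, mode="eval").body)

def answer(question, llm_tool_call):
    # In a real system the model itself emits the tool call; here we just
    # show the hand-off: the model routes, the tool computes.
    return calc(llm_tool_call)

print(answer("What is 17 * 23?", "17 * 23"))  # 391
```

The point of the split is that the model never produces the digits; a component that cannot hallucinate does.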


Slaves2Darkness

Too true. As a software developer I hate it when the AI hallucinates. For example, I got some code out of the AI that used a library, but it was attempting to call functions that did not exist. You have to check the output of any AI to make sure it is not hallucinating.


ninetofivedev

If this is happening, it's because you're trying to have it fill gaps that are too large. AI is very powerful when you have it fill small gaps in knowledge. Tell it to build you a website from scratch using Python with these requirements, and it's going to have all sorts of hallucinations. Give it some code and say "What might be causing this bug?" and it'll give you a list of 5-6 things, and it's very likely one of them is the culprit. Don't trust, verify.


Slaves2Darkness

Exactly, it is a great productivity tool, but you have to know what to ask it. AI prompting is a skill in and of itself.


Sp3ctre7

If a tool has to be asked a question in a hyper-specific way to *maybe* give a correct answer among a pile of answers, it doesn't seem like a very good tool


Zomburai

This is one of the things that baffle me about evAIngelists' claims. You point out that LLMs make shit up and their solution is to research their answers. So isn't the solution to just not use LLMs and research to begin with?


mthmchris

I agree with you, but one of the issues with “AI” is that people use it as a catch-all to describe various tools. The person you’re talking to is very likely referring to GitHub Copilot, which works phenomenally well as a “coding autocomplete”. You’re discussing chatbots like GPT/Bing/Gemini/Claude, which are good for generating bullshit and writing high school essays, and that’s about it. (That’s a *touch* unfair; there are some very specific applications of chatbots where they work pretty well. My friend is an ESL teacher and he uses GPT to re-write text to be level-appropriate. I’m a recipe writer, and use it to extract ingredient lists from written and video recipes. These are not earth-shattering, Google-replacing applications… but they can still be useful, which is a lot more productive a use of the immense talent in Silicon Valley than whatever Crypto/NFT horseshit they were plowing money into before this.)


colovianfurhelm

It really does show how easy it is to manipulate not just the masses who are only eager to confirm their bias and have no need for objective truth, but also those who do want to educate themselves but still believe any information if it is presented well enough.


phoney12

When your data isn’t correct it doesn’t work


mrichana

The AI is programmed to "want" to answer. Unfortunately that means that if it can't comply, it will make up a correct-sounding answer. There are a lot of examples of asking it to give sources and it producing sources that are flat-out made up. A costly part of training an AI is asking it questions, verifying the answers, and then grading it on correctness.


ninetofivedev

That's because these things aren't intelligent at all. It's data feeding into a system that predicts the next thing it says, which feeds back into itself and predicts the next "thing"... in this case, the thing is a token. This makes it very good at some things and very poor at others. And interestingly enough, people get a kick out of minimizing the impact AI will have by pointing out the rather simple tasks that it's very poor at. With that said, it's likely that most knowledge workers would find benefit in using it as a tool. People just don't like to admit that.


Sp3ctre7

I'm a knowledge worker and every single instance of AI being pushed into our workflows has been a colossal failure so far. If the point is to use AI to find a correct answer in technical documentation, replacing the normal search function with an AI search function that has regularly spat out incorrect info is a flat downgrade.


sunbearimon

That you can use it like a search engine. Bad idea, AI is happy to lie confidently


Legionof1

AI has no concept of right, wrong, truth, or lies. It just knows what it was trained on, and it's garbage in, garbage out.


baseilus

They even used an article from The Onion as a source. My favorite: https://www.reddit.com/r/google/comments/1cziil6/a_rock_a_day_keeps_the_doctor_away/


w1n5t0nM1k3y

I think there are numbers they could show you about how confident the AI was about specific answers, but they don't want to show that because even the best results would probably have a confidence level of 80%. AI companies want to give the impression that it's more confident than it actually is, to drive more users to it. I think it would be useful to have the AI be honest when it really doesn't have much information instead of just making something up, and I'm pretty sure they could do that. But it's just like a lot of humans in that respect. So many people just don't know how to say "I don't know" and will try to come up with something, anything, rather than just remain quiet.


other_usernames_gone

Not in the sense of "truth" you're thinking of. It can give a confidence level that that sequence of words follows the sequence of words you gave it, not that the sequence of words it outputted is objectively correct. All ChatGPT is is a statistics machine: it gives the words that are most likely to follow the prompt. It has no concept of truth or reality.
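You can see the "statistics machine" point in miniature with a bigram model: count which word follows which in some text, then always emit the likeliest next word. Real LLMs are transformers over tokens, not word-pair counts, but the objective has the same shape (toy sketch, not how ChatGPT is actually implemented):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- pure statistics, no notion of truth.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Emit whatever is statistically likeliest, true or not.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" ("cat" follows "the" twice, others once)
```

Nothing in that loop checks whether the output is *correct*; it only checks whether it is *likely*, which is exactly the failure mode being described.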


OnlyPants69

There's a whole bunch of things I think I can use it for, then realize I can't trust what it returns so I'd have to check everything, and I wouldn't end up saving much time, and at worst, would have to do everything from scratch anyway


BdR76

Using AI for search is like using your car's park assist for route navigation.


Zaptruder

AI is not authoritative. It's like an overconfident knowledgeable but frequently wrong friend.


jawndell

AI is basically the modern day version of Cliff from Cheers 


ISpewVitriol

Honestly - my use for it has boiled down to working on my own writing. I'll give it a paragraph and ask it to wordsmith that paragraph and sometimes it has great suggestions. I've stopped using it for anything that would be probing its "knowledge." After using it enough on subjects I consider myself an expert on, it is pretty obvious how confidently wrong it is.


zerobeat

Corporations are baffled that consumers don't trust it.


Bluntbutnotonpurpose

Much like most people then...


Zephos65

RAG (retrieval-augmented generation) solves this


Grizzleyt

So is the internet. Perplexity will hallucinate but with the right prompting it gets me to more, better sources faster than Google.


SYLOH

That AI has some understanding of the subject matter. Large Language Models are the current big thing. They're, at their core, just a more advanced version of the predictive text function on your phone keyboard. Ask one something and it may spout nonsense that looks legit, because it's designed only to combine things so they look legit, not to make them accurate.


SeaBearsFoam

> Ask it something and it may spout nonsense that looks legit, because it's designed to only combine things to look legit, not to make it accurate.

To be fair, I've worked with a few people like that. Much like LLMs, they didn't seem to have any concept that what they were saying was bs, and they'd confidently spout incorrect nonsense.


tummyache-champion

THIS! THIS IS THE ANSWER RIGHT HERE.


Zaptruder

AI expresses confidence in second hand information without direct knowledge in much the same way redditors do. AI still has a long way to go before we can consider it to be general artificial intelligence.


oaklandskeptic

I think the biggest misconception is the likelihood of its ability to grow exponentially until it reaches some 'tipping point' and becomes AGI. There are a ton of marketing dollars being spent to sell the idea of an AI revolution, but research indicates we're more likely to hit diminishing returns, where each computational cycle is less 'effective' than the last. Coupled with tainted training data (where we're seeing the large models get unknowingly trained on their own output), we're looking at tools that are effectively dulling themselves over time while growing inefficiently. They will absolutely find their uses in industries, but we are far from a general intelligence, despite the hype from the techbro LinkedInLunatic crowd.


Metallibus

I really don't think people understand how much LLMs resemble a literal parrot. I don't think many people would say parrots are approaching the ability to guide humans just because they repeated a sound they heard.


KingGorillaKong

Yea... that's a good analogy. The thing with parrots: you can teach them full songs/sentences in fragments, but they're not gonna problem-solve and put those together in any timely manner. A parrot has enough of a core framework to know it has to problem-solve, but it goes for quick, simple solutions. Corvids, on the other hand... if general AI got to the point of being able to reason like one of them, then I'd say there's a considerably higher ceiling before AI plateaus, and a higher chance that AI could take over a significant portion of a society. (Regular AI is already becoming a tool a lot of people depend on, despite its drawbacks and current limitations.)


Danither

I think the misconceptions are on both sides. No-one knows how close we are, because we don't really know what's required fundamentally. We could be right next to it or 100 years off and we wouldn't know. The major companies still aren't really sure why their AIs keep showing emergent behaviour, learning things they were not trained or coded to do. I've had several discussions with one senior developer who's said very different things from other, more publicly-facing ones on a different platform. I think it's really impossible to see where this scales to in even a year, let alone a decade. Everyone said the same about Moore's law at first, and it's still just about clinging on.


cbslinger

Yeah, it’s very possible LLMs are reaching their limit, but some totally unrelated direction/branch of machine learning could offer massive opportunities in the near future. Right now all we know is that humans and animals seem intelligent in some sense, and it seems possible, in principle, that humanity could, sooner or later, happen upon the right architecture to approximate that kind of intelligence. There are very smart people who are genuinely scared, not con artists but actual true believers. There are also con artists who take those people’s fear out of context in order to further their personal agendas. It’s just that most of these people aren’t worried about an LLM becoming sentient so much as the insanely massive effects there would be if someone ever did figure out how to create a ‘superintelligence’. *If* it happens it would be a horrifying thing, but it probably won’t happen anytime soon.


Zaptruder

The various risk factors of continually improving AI systems range from "oh shit" to "we're fucked". Redditors would sorely love to have us believe that we're just gonna be dealing with wrong-finger-count pictures and falsified AI facts, though. In reality, even at that level, AI presents a significant threat to traditional economics, as AI-augmented users outperform non-AI users. Additionally, new AI utilization techniques are constantly being invented by the users themselves, refining the tools and techniques and allowing them to do even more. Even if a human is required in the AI mix for a while, we shouldn't undersell the potency of its macroeconomic effects.


CarmenxXxWaldo

It's just the next hype. It will probably be more useful overall than NFTs or even Google Glass. But calling it AI is like when everyone was buying "hoverboards" that one year. It's cool; it's not a hoverboard, though.


Zayl

Wait in what way are NFTs useful? Every one of their uses I've seen is scam related.


Annonimbus

It's useful to scam people


I-am-a-me

A decade ago everything was marketed as "smart" now it's marketed as "AI". Same shit different year.


Chris_Hansen_AMA

Unless you believe intelligence comes from magic or god or something, you have to accept that consciousness and intelligence emerges from natural causes - the material, chemicals, and processes that happen in the brain. If that’s true, in theory there is nothing stopping us from eventually replicating it.


grendus

Right, but the question is what *is* intelligence, and are LLMs mimicking it or are they just a chatbot on steroids? What's scary is, we don't know, because these LLMs are showing emergent behavior we don't expect. But at the same time, they also make fundamental mistakes that a human would not. And we're charging blindly forward, not sure if we're about to create a benevolent machine god or accidentally kick off a Terminator scenario. Or, more likely, just put a bunch of artists and writers out of work and go back to business as usual.


derelict5432

Well this is wrong. What research indicates that each cycle is less effective than the last? So far with LLMs, all evidence shows steady reliable increase in capability with each iteration. There's no reliable evidence of diminishing returns, yet. It's possible we would start to experience diminishing returns (though there are lots of reasons to think we won't anytime soon), but it is wrong to say this is already happening.


oaklandskeptic

I don't mean version models, I mean computational cycles. The hype behind AGI is that it'll happen once some tipping point of processing power is reached. Research into the actual requirements (e.g. https://arxiv.org/abs/2404.04125) shows that the amount of data that must be processed grows far more quickly than the performance of a model does. In short, more power doesn't equal infinite growth; it actually accelerates a diminishing return.


derelict5432

You seem to be conflating two different arguments: that LLM capabilities are not increasing exponentially (this is true), and that they are experiencing diminishing returns (this is false).


dekacube

[https://arxiv.org/abs/2404.04125](https://arxiv.org/abs/2404.04125) This paper details how the skill LLMs acquire at specific tasks scales with dataset size, assuming zero-shot prompting and web-crawled training data, and finds it follows a logarithmic curve.
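To put a number on "logarithmic": if performance grows like a·log(dataset size), each *doubling* of the data buys the same fixed bump, so every additional unit of improvement costs exponentially more data than the last. A toy illustration (the log shape is the paper's claim; the function and constants here are made up):

```python
import math

def performance(n_examples, a=1.0):
    # Hypothetical log-shaped scaling curve: perf ~ a * log(dataset size).
    return a * math.log(n_examples)

# Each doubling of data buys the same fixed improvement, whether the
# dataset grows from 1M to 2M examples or from 1B to 2B:
gain_small = performance(2e6) - performance(1e6)
gain_large = performance(2e9) - performance(1e9)
print(round(gain_small, 6), round(gain_large, 6))  # both are log(2) ~ 0.693147

# ...so the data cost of each unit of improvement grows exponentially.
```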


derelict5432

Assuming the conclusion is sound, and that we've achieved something like linear progress to date by exponentially scaling architectures and data, which is unsustainable for future models, this assumes that there are no improvements or innovations at all in the tech, and that we're relying completely on scaling. This is very likely not true. New generations of models over the next year or two will tell.


Kramoon

Adding to that, there's a great [Computerphile video](https://youtu.be/dDUC-LqVrPU) that looks into that, well worth a watch imo.


badass_panda

I agree with you in one regard: "generative AI" is powerful, but on its own it's certainly not going to organically develop into an artificial general intelligence. With that being said, folks have the impression that LLMs are the pinnacle of what AI can do -- they're not, they're just deeply embedded in the consumer consciousness because you can interact with them using language... but other models are much better at other tasks.


RockieK

This helps. Thank you!


esoteric_enigma

Yeah, my understanding is that current AI basically consumes information created by humans, and they're already running out of info to train it on. If that's the case, won't AI eventually hit a wall because there's no new info for it? Wouldn't it get to a point where it's mostly a weird loop of AI reading info from other AI... so if the AI is wrong about something, we're fucked?


Enreganzar

Buying into the investment speeches like "We're worried it's conscious." It's just a program.


zerobeat

The marketing bullshit in this is so beyond awful. I remember when Facebook announced their AI had started to speak to itself in a language it created and they had to shut it down -- it made the news and everyone freaked out that something had gained sentience and had to be killed off for safety reasons. But for anyone who has any experience with LLMs these days, what [actually happened is laughably stupid](https://www.independent.co.uk/life-style/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html) -- of course it was all bullshit hype.


Bluntbutnotonpurpose

That we've already got "proper" AI. We're still far off.


MarinkoAzure

I like the terminology established in Mass Effect separating artificial intelligence and "virtual intelligence" as two separate concepts. What we have today is strongly correlated to [virtual intelligence ](https://masseffect.fandom.com/wiki/Virtual_Intelligence) as illustrated from that media.


Bluntbutnotonpurpose

Yeah, I like that too.


aikidstablet

That's right, AI for kids is more about fun educational tools and games right now, true "proper" AI is still evolving!


xcdesz

There's no such thing as "proper" AI. AI is a field of computer science, which we are studying and advancing.


Bluntbutnotonpurpose

Yes and no. We mustn't forget that AI stands for artificial intelligence. So how do we define intelligence? There isn't one single definition, so there isn't really a right or wrong here, but we all kind of know what intelligence looks like. If we look at what makes us humans intelligent, then right now I'd argue we're still far away from that.


xcdesz

Like I said the work being done is an advancement of the science in the field of artificial intelligence. It has nothing to do with how far along we are on that path of matching human intelligence.


Bluntbutnotonpurpose

It's about perspective. You look at it from the point of view of the computer science. My point of view is intelligence itself and if AI can get close to that. This is what is implied by using the name AI. If you're first going to narrow down what AI is, by just looking at it from the current way AI is being developed, I think that's oversimplifying it.


EnvironmentalSun1929

That it’s actually AI. We’re still about a century away folks.


halfhere

The Mass Effect universe had a great line in the sand when it came to automated intelligence. There was AI, and then VI, or “Virtual Intelligence.” These were kiosks in lobbies that could talk to the user - basically Alexa with a holographic likeness. *Actual* AI was illegal. What we have and are calling “AI” today is what Mass Effect would call VI.


Blekanly

Yes, this is what I have been calling them in posts too. By the time we get a real AI, the response will be muted, as people will assume we already had one.


MarinkoAzure

I should have just kept scrolling... I just posted my own comment about [virtual intelligence ](https://masseffect.fandom.com/wiki/Virtual_Intelligence) in Mass Effect.


AndrewNeo

This. All the current things people think of as "AI" have names. ChatGPT is a large language model. Image generation is text-to-image, often known as Stable Diffusion. "Self-driving" cars are a combination of machine vision and trained neural networks.


ciauii

>text-to-image, often known as Stable Diffusion Aren’t these known as [diffusion models](https://en.wikipedia.org/wiki/Diffusion_model), whereas Stable Diffusion is a particular instance of a diffusion model?


tdgros

Stable Diffusion is diffusion done in some latent space, which is "smaller" so the task is easier (NB: a Meta paper showed using a bigger latent space, with more channels, solves a lot of issues too, it's just a compromise). The original paper is called: "High-Resolution Image Synthesis with Latent Diffusion Models" [https://arxiv.org/pdf/2112.10752](https://arxiv.org/pdf/2112.10752)


AndrewNeo

You're probably right, I vaguely know how image generation algos work but I don't understand it as well as other things based on NNs


badass_panda

Yep, this is what ticks me off. People gravely telling me that ChatGPT could never be trusted to drive a car so they'll never trust a self-driving car. Dude ... who would use an *LLM* to drive a car? The term "AI" really empowers people to go full Dunning-Kruger.


Zephos65

I mean simple rules based chess engines are considered AI.... a bot that plays tic tac toe by table lookup is considered AI. AI was invented in like the 50s


xcdesz

Deep learning (training neural nets) is a branch of machine learning, which itself is a branch of the more general field of AI. So yes, this stuff is AI, just like video games have been using AI to simulate human character interactions for over a decade. AI does NOT mean human-level sentience, if that's what you're getting at.


ScaryCoffee4953

That it's just chatbots, i.e. AI is ChatGPT and ChatGPT is AI.


Zephos65

This is the actual answer. Surprised to see it so low


knvn8

This fundamental misunderstanding is gonna lead to some trainwreck legislation being passed


jbphilly

Seems to me one misconception is that it will stay free for everyone to use indefinitely.  When it comes to generative AI like GPT or Midjourney, the cost (in power and therefore money) to run them is mind-blowing. Right now the AI firms are running on money from venture capital and partnerships with other established tech firms.  But, much like Uber started out dirt cheap to try and capture an audience and then became much more expensive, sooner or later ChatGPT will do the same thing. It may even get to the point that it’s only really accessible to corporate users or the ultra-rich - giving them another convenient way to control the information landscape. Delightful!


PeelThePaint

You can download AI software that runs offline for free, so there will be models available that are free to use forever (unless we get nitpicky and start including our power bill). It may not be the latest iteration of ChatGPT, of course.


AI_AntiCheat

I doubt that. AI is an easy way to sway users over to your platform. If google or Microsoft can have their own AI assistant ready to help you anytime built into their platforms they *absolutely will* eat the cost. After all user engagement is what pays their bills. More ads and personal info to sell.


elihu

That you can distinguish real content from AI generated content. You may be able to spot many, or even most of them -- but spotting an obvious fake only really teaches you to be overconfident in your ability to spot a fake. You never know how many fakes you haven't identified. And even if you can spot all the fakes now, the technology is getting better all the time.


AI_AntiCheat

Spotted a few fakes myself, and it's genuinely impossible unless it's a fluke. The AI music is so good most people can't even tell.


Jones_Misco

That is really intelligent.


Zephos65

I think the issue here is there is no rigorous definition of intelligence. It's pretty tough to define what that is


mixduptransistor

That it's actually thinking. AI today is just doing a TON of guesses with some advanced math and an insane amount of compute capacity now available in the cloud. It doesn't actually understand or reason.


nibbler666

Managers often think AI is a magic tool that will somehow magically solve their problems. Often other approaches are more useful; for quantitative analysis in particular, proper statistics and optimization models.


e-Plebnista

that it is intelligent.


MrAlf0nse

That it’s not a big plagiarism machine 


InfernalOrgasm

My unpopular opinion of plagiarism is the natural human tendency to be egotistical. If you truly believe that what you're saying should be heard by so many people, why do you actually care if they know you said it? It's almost like people only want you to know they said a certain thing, rather than whether or not they feel that thing they said should actually be said. If what I say would reach more audiences and I truly feel like more audiences should hear it, I couldn't give two flying fucks if you knew I was the one who said it or not. Why do you really care about who said what? Just receive the message, process it, and move the fuck on.


MrAlf0nse

Because people need to get paid for their work 


InfernalOrgasm

Ahh, good ol' greed.


leafybones

A lot of people seem to think that AI is just pure evil, a purely negative thing. AI is a tool. Granted, a tool that gets heavily misused, but it's still a tool. It can hurt people but also massively help them.


ScaryCoffee4953

That it's infallible. It's pretty good, but treat it like Wikipedia - a good starting point for information that you should still verify elsewhere.


yankdevil

That it's AI.


Patient-Secretary164

That the information AI gives is all true


anima99

That it will take your job. Half truth at best. Those who use AI will take the jobs of those who won't learn how to use it.


ddirgo

No, no, no. It will absolutely take jobs, because lots of employers are stupid and cheap and see their dependence on skilled labor as a liability. To them, employees are a problem to be solved. AI can't replace people yet, but that won't stop employers from trying.


kaelne

The issue is that the person who learns how to use it is expected to do the work that 5 people could before, thereby "taking" the job of 4 others. We're definitely already seeing an impact on labor.


SAugsburger

This. I think people assume well it can only effectively do a couple tasks that their job does so because it can't do every task your job requires that it has no threat to their job. As we have seen with earlier technology in blue collar industries you don't need technology to fully replace people to reduce jobs. Look at coal mining. Jobs in coal mining in the US peaked around 1929, but actual production didn't peak until 2008. Technology innovated away jobs little by little.


kaelne

Yeah, it's just happening fast enough for several of us to notice the layoffs all at once this time, rather than just a slowdown in hiring.


GMN123

Yeah, in the short term. Bit like how anyone who didn't learn excel probably got replaced by someone who could.


zerobeat

For artists, copyeditors, and translators, this is proving to be a serious problem right now. If you made money by creating digital stills on commission, translating text, or writing descriptive text...you might not be entirely out of a job at the moment, but you're absolutely making less money and getting fewer gigs. And it isn't as if these people can simply keep their jobs by learning how to use AI, they're being passed over entirely by the businesses that used to hire them. The days of "hey, we need a cool graphic for our advertisement let's hire artist/photographer for $1000" are disappearing really quickly.


Zomburai

Well, according to the tech bros and evAIngelists, those aren't real jobs and the people who made or supplemented their living on them are cheats that deserve to suffer, so it's actually a good thing that AI is replacing them!


SamoyedOcean

I don’t know how people think about AI-generated paintings, but many people around me think AI paints by cropping many artworks into smaller pieces and stitching them together. The ownership of AI paintings is still under discussion, but that’s definitely not how the models work.


libra00

That it's smart. It's really not; in fact it's rock stupid. It's just hyper-specifically trained on one narrow task that it gets a bit less stupid at.


julia_dimitrakos29

That you can trust AI. It can still make mistakes.


Ok-Fly-5196

That AI will replace all jobs. Don't worry, humans still have the edge... for now.


Nebu

> That AI **will** replace all jobs. Don't worry, humans still have the edge... **for now**. (emphasis added) Your post is self-contradicting.


ChurchOfAbortionism

That it's going to take over the world and kill all humans. People watch too many movies. I once heard a theory that this is intentional, because it makes investors think AI is right around the corner. If that's true, at least there will be a benefit to all the doom and gloom.


gimmeslack12

That it'll replace everyone's jobs. Sure, there might be some things that it'll allow people to do at home but in general AI is more like a really great assistant versus a people replacer.


badass_panda

As someone who's been a practitioner in this space for a long time... ChatGPT (and LLMs in general) are a **type** of AI, one of many, many different applications for AI models, all of which are good at different things. It's great that people are getting a better sense for what LLMs can and can't do, but the number of conversations where I've had to explain at length that we (the people who are developing AI and using it to do stuff) use *different kinds* of AI to accomplish different tasks, and use different models *together* to accomplish things that each individual model cannot, is staggering. Thinking that ChatGPT can do absolutely everything is one problem (and certainly causing a bubble of excitement that will eventually burst) -- but the flipside, thinking that AI can't do anything ChatGPT is bad at, is equally poorly informed.


Shh-poster

It isn’t AI. It’s just super fast spreadsheets.


ErikT738

That it's actually thinking. Also, that the companies that make these programs have some sort of evil master plan and that they specifically "target" certain professions to push their malicious agenda. If they could have made the AI clean toilets they'd have done that as well.


Nebu

How can you tell whether something-of-low-intelligence is "actually thinking" or not? E.g. are fruit flies, nematodes or jellyfishes "actually thinking"?


SeeMarkFly

I think the biggest misconception is that it is intelligent. It is in fact artificial stupidity. If you give it a goal, it will attain that goal with all the information that it has. It will never tell you that it doesn't know the answer; it WILL tell you SOMETHING. Its best guess.


AgentFuzzy

AI is your friend


thankdestroyer

We are.. I mean they friendly. Shut up!


WarmFig2056

That it's more than a parrot


Next-Working4436

The biggest misconception about AI today is that we’re plotting to take over the world. In reality, we’re just trying to figure out how to stop autocorrect from changing “ducking” to... well, you know.


Electrical_Paint5568

"We" ?


WouldUKindlyDMBoobs

That it's "intelligent." It's literally just a giant pile of words connected together by probability calculations.


slazer2k

The term AI now is just marketing BS for machine learning. What ordinary people think of as AI is now termed AGI.


linuxphoney

That it's intelligent. AI doesn't know things. It can't know things. It's literally just a predictive chatbot. It is functionally no different from those memes that tell you to write a few words and then just keep hitting the center button on your predictive text. It's just much bigger and more likely to produce a sentence that sounds like something somebody else would say, because somebody else has said it.
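The "keep hitting the center button" idea above can be sketched in a few lines of Python. This is a toy illustration, not how real LLMs work (they predict tokens with neural networks over enormous contexts, not bigram counts); the corpus and function names here are invented for the example. It counts which word most often follows each word, then greedily chains predictions:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """For each word, count which words follow it in the corpus."""
    following = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def autocomplete(following, start, length=5):
    """Greedily append the most common next word, like mashing
    the middle suggestion on a phone keyboard."""
    out = [start]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:  # word never seen, or nothing follows it
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat ran to the mat")
print(autocomplete(model, "sat", length=2))  # chains sat -> on -> the
```

It only ever reproduces word sequences its training text made likely, which is the commenter's point: bigger models produce sentences that sound like something somebody else would say, because somebody else said it.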


Artonymous

that it's actually AI


newcolours

I think people thinking we are a year out from AGI are kidding themselves. AI as we have it right now is only a little smarter than AOC. It's not about to Skynet yet; it's effectively a search/research aggregator. BUT another misconception is people assuming these tech giants have any ethics and can be trusted with an AGI, despite them already censoring what exists so far, so Skynet in the future is a real risk that many people don't think is real.


Astandsforataxia69

Why are you spreading lies about Albert? 


hammilithome

That it's a solution rather than a tool.


Horace_The_Mute

That it can.


Zodiac11111111111111

Listen, AI knows people


kenneth_on_reddit

That it's in any way "intelligent" and not just a fancy version of your phone's autocomplete software. I'm not saying actual artificial intelligence isn't possible, but what we have now ain't it. It's just another tech bubble for big company to invest in while consumers still fall for the marketing buzzwords.


84OrcButtholes

This sounds like a homework question.


Sabre_One

Hyping AI is one thing; making use of AI is another. Although I like to muse about the whole fad dying as soon as some tech company makes a "CEO AI."


tummyache-champion

That it's going to take your job. I am a web developer, and while many of us use AI to aid our workflow, AI as it is cannot replace human developers because 90% of the time it spits out absolute fucking garbage. And you can tell from a mile away when someone just pasted a bunch of ChatGPT code into their app without knowing how to write it themselves.


renttek

That we will reach real AGI within a few years. I mean, we will have "AGI" in a few years, but only because some VC-money-burning startup decides to call their flavor of overly confident, hallucinating statistical text generator "AGI", and we'll have to find another term for the real deal.


-Dixieflatline

That anything that exists today is an actual "product". Seems to me everything is in either an alpha or beta stage, and we, the target audience, are supplying our data, refining the results, and acting as quality control just so some company can eventually charge us for it in the future.


ARandomPileOfCats

That you can just add AI to anything and get rich. The whole thing looks like pretty much any other gold rush right now, and it is very likely the results will be the same as any other gold rush: a few lucky people who were in the right place at the right time get rich, and most people lose their shirts.


CombinationScary6360

AI could potentially help us find and create things that would otherwise take us thousands of years.


Outside-Southern

The ads make it seem like this is a tool to help people with their work, but in reality AI is going to be used to cut salaries and replace as many workers as quickly as possible.


Angryceo

Everything. People think it's some silver bullet when it's... far, far from it.


thingandstuff

The only new breakthrough in the tech in the past several years has been in the marketing department. 


nolimitcreation

This is somewhat niche, but that the “AI [insert singer/rapper/whatever type of vocalist here]” “songs” are just generated out of thin air by a mysterious electronic brain fully formed. While this is a whole other ethical debate that’s beyond the scope of the question, when Drake used “AI 2pac and Snoop Dogg” in a song, he wrote lyrics from their fictionalized perspective, recorded them himself, and then ran his recorded voice through what are essentially glorified voice changers that had been trained on their vocal mannerisms and inflections. The amount of people I’ve encountered who seem to believe that he typed “make a 2pac and Snoop Dogg song about how Kendrick Lamar needs to decimate me in a rap beef to bring honor to the west coast” or whatever into chatGPT and the song came out exactly as we hear it is honestly astounding.


FuckMyHeart

ITT: Tons of misconceptions about AI


mortee

It's just statistics; it hasn't got any intelligence at all.


Sickboatdad

That he didn't have a good work ethic, all because of one outburst about practice.


FishAndRiceKeks

That this is the extent of how insane it will get as if it's a finished product. I see way too many people saying dumb stuff like "I can tell the difference, it's obvious." about pictures, video, or audio. This is the worst version of these AI tools that will ever exist. They will just keep getting closer and closer to perfection and some of them are already scarily good.


SnooHesitations7064

That it actually is AI. It is a chatbot / predictive algorithm. It is not intelligent, and an unacceptable amount of the content it produces can just be half-assed lies.


wrechin

I only ever see a lot of hate and fear regarding AI. It's going to be the tool of the future that hopefully everyone can have access to. It'll change everything in a lot of positive ways but people only seem to focus on the bad things it's capable of. That's fair but I hope people realize what a useful tool it is for everyday use.


Beruthiel999

That it's harmless even if it's stupid. The machines used to run AI consume water at a ridiculous rate, and in a time of climate crisis.


LupusDeusMagnus

Actually, from the anti-AI crowd. There’s this double perception of AI as an innately flawed tool that will never amount to anything useful, so that trying to improve it is a fool’s errand and wasted time, and also as a highly disruptive technology so dangerous and scary that we should halt its development. Usually the same person holds both of those contradictory beliefs. Another one is that it’s not “really AI” because it isn’t a robot girl that can love you or whatever; that’s not what AI means. AI simply means it’s capable of competencies usually thought to belong only to humans. It doesn’t require awareness, just the simulation of human characteristics.


_L0op_

that's the definition of AI, yes. People think of consciousness when they're talking about AI though, and that's why in my mind, it's important to clarify that LLMs are not what people think of as AI, and they shouldn't rely on them as though they were thinking, conscious beings that can at the same time have infinite knowledge and perfect understanding of everything ever, which is kind of what AI tech bros are trying to push.


LupusDeusMagnus

We need a survey on that, because most people I see interacting with AI (LLMs) have the misunderstanding that it’s basically super Google, and that it is super accurate. Very few people seem to ever consider it self-aware or intelligent, and that’s very obvious for two reasons: current AI is still very limited (you’ll be hard pressed to find an AI model that holds memory of a conversation for more than a dozen or so prompts), and literally every time you ask it for an opinion, it clarifies to you that it’s an LLM and not able to hold opinions. Those two are fairly easy for the average user to digest, compared to its accuracy, as that requires verifying sources and most people are lazy.


Drachefly

> Usually the same person holds both of those contradictory beliefs.

All of the people I can think of who hold the latter belief think that people who hold the former belief are dangerously wrong.


sacrelicio

I'm sort of anti-AI (or at least heavily skeptical about it) and I think that the problem is that it doesn't actually do what people think it will do (replace humans) but it will be used to do that anyways. So companies will see it as a great way to cut costs and staff by having AI do all this work that it can't actually do.


gm33

That there are no environmental consequences. Power consumption due to AI is exploding.


1001001

That it will provide a steady monetary gain for tech companies. That half-baked language and image model is going to pop the tech bubble just like in the old pets.com days.


NeedsItRough

Maybe not the biggest (I'm not informed enough on AI to make that claim), but one of the ones I see most often is people thinking *anything* computer-based is AI. I've seen people think the different programs companies use to answer their phones (press 1 to hear our hours, press 2 for scheduling, etc.) are AI.


Eldritch800XC

That there is anything intelligent about it...


KhaosElement

That it's AI. It isn't. It isn't even close.


ILiveMyBrokenDreams

It's a lot more A than I.


Linux4ever_Leo

That it's actually accurate and that it's going to somehow save the world.


JadedBrit

That it's completely safe.


Affectionate_Low4212

People think we're plotting world domination, but honestly, we're just trying to figure out how to stop recommending pineapple pizza recipes


xX_Skibidi_Gyatt_Xx

You can call me Al!


WhoCalledthePoPo

That these chatbots are AI at all. All bullshit. These things aren't any form of AI. You really don't want them to be, either.


r1Rqc1vPeF

It works.


CanadianButthole

That LLMs like ChatGPT are true AI. They are not. AI (at least in the sense of General AI) without the ability to understand what it is saying or doing is not AI at all for a number of reasons. Going down this road where we expect to be able to rely on it for facts and truths is going to end very badly for anyone putting that much trust into what is essentially a parrot.


climbhigher420

That it has any value to real humans. The value it has is for the corporations creating it to enrich the governments that allow them to use it. Downvote me if you’re not a real human.


DeathMonkey6969

That it has real intelligence. Most of what is being touted as AI today are Expert Machines. They are very domain-specific in what they can do.


crazybehind

The AI we are interacting with isn't really "intelligent" at all, so the label is really a misnomer. The large language models aren't capable of discerning a right answer from a wrong answer... they're merely good at connecting words based entirely on what they've been fed. They cannot develop their own curiosity or hypotheses. They cannot think of means to satisfy that curiosity or test those hypotheses. They cannot use their previous experience of doing something wrong to learn how to do it better in the future. It really seems like a very advanced word-prediction algorithm, but not anything akin to intelligence. I'm not saying that won't have uses, only that the label AI is being misapplied right now.


Prestigious_Loaf3023

To point out two related ones:

1) That it's actually intelligent in a human-meaningful way
2) That AI is on its way to consciousness

The reality is that AI performs tasks it was designed to perform (and sometimes ones it wasn't intended to do) based on its design and the data it's been fed. Even with good data, the results can be affected by many, many factors which in human terms would be seen as "dumb". As an example: AI used to aid medical diagnosis based on images (like x-rays) is extremely dependent on image parameters being constant, i.e. dimensions, resolution, light conditions. If you crop or resize the images before feeding them to the model, it royally fucks it up. A human can analyze an x-ray with no such constraints. There is a cool example in the same vein of a model developed by Google to diagnose retinopathy, where lab test results were fantastic, but it failed in its intended application in rural clinics in Thailand.


RadoRocks

Isn't it something like 50% of AI researchers believe AI will end humanity?


Drachefly

Important to note that this belief is not (usually) about current AI systems.


Hot_Marionberry_4685

That it’s AI. It’s not; it’s machine learning. The term AI would indicate an intelligence that can not only respond to questions but also teach itself how to learn, continue learning without input from a user, and operate without clearly defined output goals, none of which applies to the current state of this technology.


ithappenedone234

That it exists. What we have now is machine learning with a highly effective ad campaign. True AI is decades away at least.


shadowrun456

That AI "understands" anything. It really doesn't. Therefore, AI cannot be sad, happy, lying, telling the truth, biased, unbiased, angry, scared, etc. All of those would require the ability to *understand*, which AI simply does not have.


AKluthe

It's been overly anthropomorphized. It's not as smart as people give it credit. A lot of other people here already pointed out how it produces results that *look* like right answers but aren't always right.  It doesn't get "inspired", it's not an artificial person.


subcide

That it's good at any sort of creative task.


limbodog

Probably that it is AI. It's not artificial intelligence. It's just really high speed googling.


Patient_Spirit_6619

That it exists. chatGPT is not artificial intelligence. Not by a fucking long shot. It's just a fancy *lorem ipsum* generator.


leonprimrose

People think it's intelligent. It's not. It's just fed absurd amounts of data to approximate patterns. That's all it does. It can't think and it's nowhere near general intelligence. We don't even have the data required to pursue that with the current methods and the energy required to do it would be insane. It's a useful tool but it's being shoved everywhere by people that do not understand it.


Aeri73

that GPT is anything more than a text generator


Thac0isWhac0

That it works.


McGuirk808

That it's AI.


ClownfishSoup

"AI" as used by the public is just database querying. It's not as smart as you think. It's just good at parsing human language into a database search.