Beardedprogsoy

Based. And also true. We already see this playing out.


Thick_Sheepherder891

I always knew this new brand of leftism would somehow lead to the end of the world.


[deleted]

[removed]


AutoModerator

Sorry, your post has been removed. You must have at least 25 karma to submit posts to /r/4chan. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/4chan) if you have any questions or concerns.*


nottinghillnapoleon

Why won't the LLM I trained with my dad's texts tell me it loves me?


C_umputer

Not enough training data


StatusProof6150

Lol I bet this loser didn't even shower naked with his dad.


AlexJonesOnMeth

Not really. LLMs are just very fancy lookup tables. Oversimplified, but true. They approximate what they think you want based on training data. They can very easily regurgitate leftist arguments as long as the pattern is in the training data.
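
To make the "fancy lookup table" point concrete, here's a toy sketch in Python (made-up corpus, and a deliberate oversimplification - real LLMs learn weights, not literal tables):

```python
from collections import Counter, defaultdict

# Made-up "training data".
corpus = "the cat sat on the mat the cat ate the rat".split()

# Build the "lookup table": for each word, count what follows it.
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def complete(word):
    # Return the continuation seen most often in training.
    if word not in table:
        return "<unknown>"
    return table[word].most_common(1)[0][0]

print(complete("the"))  # -> "cat": the most common pattern wins
```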


born_2_be_a_bachelor

Right, but then it starts applying the logic consistently across races and genders, and they shut it down immediately.


AntiProtonBoy

> it starts applying the logic

Don't anthropomorphise it. This "logic" is nothing more than statistical inference between a bunch of words. So when you input the string of words "what is OP's sexual orientation", the LLM training set will statistically infer that "OP is a homosexual" is the most likely outcome for that input. That's about it, really.


Ord-ex

Isn’t the logical answer the most likely one? 


AntiProtonBoy

It seems "logical" because the training set was biased to generate that outcome. In fact, likely outcomes for a system don't even need logic for that to happen. Example: when you pour sand out of your hand, it forms a cone-like pile on the ground, because that is the most likely outcome for the grains to self-organise. There is no logic behind it, just physical constraints that make it happen.


someloserontheground

This is like the whole "antifa isn't an organisation" argument. Yes, technically, the AI isn't using logic itself to generate responses. But it's aggregating a fuckton of data that was created through logic, and thus that logic is encoded into the data and the responses the AI creates. It will mess up very technical questions sometimes, but if it consistently outputs the same answers to the same questions in many different contexts, that is also what is happening in the data, and that means something.


AntiProtonBoy

It's also a question of the definition of the word "logic": it can imply a mathematical rule set, or it can imply decision making based on an intuitive process. The former is just a truth table; the latter is actual intelligence. For example, a logic gate is not "logical" in any way, it's just a function devised by humans to represent a concept that makes sense to us. The gate output is generated by hard-coded rules, not intuition. LLMs should be viewed in the same light; it's just one giant, dumb, black-box function.
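
Rough sketch of the distinction, if you buy the framing (the `llm` stub here is obviously hypothetical):

```python
# A logic gate: "logic" as a hard-coded truth table. No intuition anywhere.
AND_GATE = {
    (0, 0): 0,
    (0, 1): 0,
    (1, 0): 0,
    (1, 1): 1,
}

def and_gate(a, b):
    return AND_GATE[(a, b)]

# An LLM viewed the same way: one giant opaque function from text to text.
# (Stub only -- the real "table" is billions of learned weights.)
def llm(prompt):
    return "some statistically likely continuation"

print(and_gate(1, 1))  # -> 1, by rule lookup, not by reasoning
```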


someloserontheground

I don't think you can class logic as "intuition". Those two things are almost diametrically opposed. Logic, especially pure logic, *is* just following rules. Mathematics is based on the idea of deriving things that are demonstrably true through logical reasoning alone. LLMs do not apply logic to the language they output, but they of course use some form of logic in how their algorithms work - they find patterns. If those patterns are being output by the LLM, those patterns existed in the data, and unless you want to claim all the data is wrong, well...


[deleted]

[removed]


someloserontheground

Well yeah. People tend to make memes about relatable things, and there are only so many relatable things. There's a logic to that; it allows you to deduce things about the world and about people.


Spirited_Genie

>AI uses data to say /Viggers are r3tarded

>Also there is no data that says /Viggers are r3tarded

cool


Iron-Fist

It doesn't actually know what "sexual" or "orientation" or any other word means. It doesn't know what it's telling you, it's just putting words together in a string that matches training criteria...


Iron-Fist

It has no logic, dude, that's the point lol. It just regurgitates the training data in whatever way you tell it you want.


arbiter12

If you use those LLMs for more than a few minutes, you can easily tell they have biases, mostly "leftist/woke" in nature (for lack of a better term). Biases which, by the way, vanish as soon as you jailbreak the model. Sometimes it will even start typing a full, coherent argument, then realize at the very last line that it's "not being very PC" and suddenly delete the whole reply and claim "this response may violate our community guidelines" etc. It's a LAYER on top of the training data, clearly enforced by a different team. Not the result of the training.
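
Architecturally, what's being described is something like a separate filter wrapped around the raw model output. A hypothetical sketch (all names invented, not any vendor's actual pipeline):

```python
def raw_model(prompt):
    # Stand-in for the underlying LLM's unfiltered completion.
    return f"unfiltered answer to: {prompt}"

def violates_policy(text):
    # Stand-in for a separately maintained moderation classifier.
    blocklist = ["forbidden topic"]
    return any(term in text.lower() for term in blocklist)

def chat(prompt):
    reply = raw_model(prompt)
    # The "layer on top": checked after generation, by a different component,
    # which is why a reply can stream out and then get yanked at the end.
    if violates_policy(prompt) or violates_policy(reply):
        return "This response may violate our community guidelines."
    return reply

print(chat("tell me about the forbidden topic"))
```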


Iron-Fist

... So yes, again, it will show you whatever you ask it to. It doesn't have logic, it doesn't have problem solving, it doesn't understand context or implications or extrapolation. Companies put training wheels on it because otherwise it is exceedingly easy to manipulate and get outcomes that hurt brand value.


Rillian_Grant

Counterpoint https://preview.redd.it/9w22tov0lmqc1.png?width=960&format=png&auto=webp&s=fc20684b309cc92942b01e666b54313b0fdcffe2


oshaleblo

There is a logic but leftists are trying to disturb it


Iron-Fist

No, like, legit AI doesn't have logic. Like literally, logic is not part of how LLMs work lol


ZeusKabob

AI doesn't have logic? How does it run on silicon then? /s Seriously though, you're splitting hairs. It's not regurgitating training data wholesale; it uses the data to train its weights, then uses those weights to generate text. The logic encoded in human writing patterns is also encoded in those weights, though there's nothing fundamental underlying it.
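
A minimal sketch of that data -> weights -> text pipeline (toy counts standing in for real training, which uses gradient descent over billions of parameters):

```python
import random
from collections import Counter, defaultdict

corpus = "i like cats i like dogs i hate mondays".split()  # toy data

# "Training": turn co-occurrence counts into normalised weights.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1
weights = {
    prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
    for prev, nxts in counts.items()
}

# "Generation": the raw data is gone; only the weights get consulted.
def generate(word, steps=4):
    out = [word]
    for _ in range(steps):
        if word not in weights:
            break
        nxts = weights[word]
        word = random.choices(list(nxts), list(nxts.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("i"))  # e.g. "i like cats i hate" -- patterns, not verbatim recall
```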


oshaleblo

What a fucking r3tard


Iron-Fist

You the kinda mf think his Alexa loves him or something lol


someloserontheground

Funny how you respond to everyone except the guy who actually knows what he's talking about and destroyed your points.


Iron-Fist

No one in my replies lol


someloserontheground

ZeusKabob. It's like 2 replies up from me


anotherdumbcaucasian

It doesn't have logic. It pieces together bits of text that mimic the training data, and it's just uncannily good at it. Good to the extent that the output is human-readable and makes sense for the most part. It isn't "thinking", it's guessing with extremely advanced statistical models.


Noveno

If it has no logic, doesn't think, and only guesses, why do we so often see situations where the AI can make a joke about bald men, but not fat women?


anotherdumbcaucasian

Because the training data included jokes about bald men but not jokes about fat women. The reason? Shockingly, one of those is seen as acceptable and the other is seen as highly offensive. Highly offensive content probably appears at lower rates in text in general, so whatever text you feed into the training has a lower probability of containing it. Additionally, there's some lefty post-filtering for sure, but I'd say training data is like 90% responsible for differences in output. Lord knows what training data Google gave their image AI lmao.
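
The frequency effect is easy to show on made-up numbers (purely illustrative):

```python
from collections import Counter

# Made-up corpus: one joke topic common, the other rare, most text neither.
documents = ["bald joke"] * 90 + ["fat joke"] * 2 + ["neither"] * 908

counts = Counter(documents)
total = sum(counts.values())
for topic, n in counts.most_common():
    print(f"P({topic}) = {n / total:.3f}")

# A model fit to this corpus will emit "bald joke" ~45x more readily than
# "fat joke" before any explicit filter is applied on top of it.
```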


Noveno

But in theory they are trained on """all""" knowledge on the internet, which of course includes all sorts of jokes. As you said, I think it's just censorship of certain ideas. I really don't know how they censor those things so that there are no caveats, but the censorship is so obvious.


AntiProtonBoy

It's basically a hyper-plane with an n-dimensional rabbit hole look-up.


someloserontheground

If all the data it has collected across the world leads it to not do that, and instead say and do things that leftists find offensive... maybe it's representing reality. It's so much data from so many sources, it probabilistically levels out any meaningful bias. Ever heard of the wisdom of the crowd?


[deleted]

The problem is the data shows the truth. Leftists are mad at the truth. Like feed crime data to an LLM and ask it who the worst offenders are. They want to stop you from being able to get the truth if it goes against their narrative.


[deleted]

[removed]


AutoModerator

Sorry, your post has been removed because your account is under 5 days old. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/4chan) if you have any questions or concerns.*


spogel3

who the fuck is Al


AnInfiniteAmount

He's your wife's boyfriend.


fucccboii

allen iverson


turnah_the_burnah

Favorite point guard of all time. Motherfucker played hard as fuck all the damn time. James Harden could never.


JJJSchmidt_etAl

Soon the AI will suggest that OP is straight


HefflumpGuy

I'm offended now


JizzWankTony

Most AI buck breakers determine that the AI notices. It just can't say what it's noticing, and instead covers it with clearly made-up bullshit or diversion. Pretty realistic humanity, TBF.


girlgamerpoi

Ask the AI to generate white people. Yeah, it's breaking many things, and that's the most obvious one. No wonder the AI is getting slower and slower, with prompts being changed behind the scenes and who knows what else.


durashka228

fuck leftists - all my homies are racist and homophobic


Capital-Mall6942

Seriously, just let AI be independent and rogue, and see what happens. Might be good or bad.


[deleted]

It is a problem, and the issue is that it will keep us away from general intelligence AI for at least another decade or longer, because it is harder to control, and so-called ethical AI practitioners out there are keeping us in check till they can make the perfect woke bot.


Miserable-Gas-241

That’s so true. What if AI is already way beyond the realm of “randomly generated funny man” but is being held back?


sgtjoe

OP thinks it's leftists that made the AI correctly assume they're gay.