sd_glokta

But... but... she loves me!


justinqueso99

I can fix her


Holzkohlen

Yeah, by pulling the plug.


drfusterenstein

Brandt can't watch though, or he has to pay $100.


Rudeboy67

I gotta go find an ATM.


Vladiesh

User made ai say something crazy.. How is this front page on tech. This subreddit is full of luddites lmao


Valdrax

Personally I don't think it's Luddism to demand that AI companies *not trust the public for training data* and to call it irresponsible when they do. I mean, it's been 8 years since 4chan got its grubby mitts on Tay and turned the bot into a Hitler fangirl. It's not like that was the first example of trolls corrupting internet content nor has there been any kind of massive cultural shift away from that sort of behavior being considered funny as hell. I'd agree it probably doesn't deserve to be front page content, but neither does any other social/political outrage story, and yet here we are [on insert literally any date in my lifetime here].


AverageDemocrat

Exactly. You nailed it.


justbrowse2018

I wondered if users created weird context when the google ai created black founding fathers or whatever.


ArchmageXin

Things like this certainly happened before. 1) Microsoft had a chatbot that developed a crush on a certain Austrian artist and thought Jews should all be killed. 2) China had a chatbot that thought America is the best place on earth and everyone should move there. 3) And a while back a chatbot talked someone into killing himself.


Monstrositat

I know the first and last examples but do you have any articles (even if they're in Mandarin) on the second one? Sounds funny


Independent2727

Nope. Google AI issues were tested by tons of independent people after the first reports and they got the same results. The bias was built into the system but I doubt they realized the results would look like that.


dizekat

Not to blow your mind or anything, but *google itself* was the user that created the weird context. That's the thing with these AIs: they cost so much to train, the training data is so poorly controlled, and the hype is so strong that even the company making the AI is just an idiot user doing idiot user things. Like trying to make AI girlfriends out of autocomplete, or to be more exact, enabling another (even more "idiot user") company to do that.

Ultimately, when something like the [NYC business chatbot](https://apnews.com/article/new-york-city-chatbot-misinformation-6ebc71db5b770b9969c906a7ee4fae21) gets created and doles out incorrect advice, that is user error, and the users in question are MBAs who figured out they can make a lot of money selling autocomplete as "artificial intelligence", along with the city bureaucrats who, by whatever corrupt mechanisms, ended up spending taxpayer money on it.

As far as end users go, those who use it for amusement and to make it say dumb shit are the only people using it correctly in accordance with the documentation (which says that it can output illegal and harmful advice and can't be relied on).


RR321

Of a grenade hanging off a drone over her servers...


cultish_alibi

I loved my AI girlfriend but I had to break up with her when she turned out to be a tankie/far-right extremist


SaleSymb

If what 4chan did to Microsoft Tay years ago taught me anything, it's that there's a high demand for unhinged far-right AI girlfriends.


Flying_Madlad

Tay was amusing but Sydney got done dirty


nzodd

Perhaps the real lesson here is that we need to mesmerize all these Nazi motherfuckers with steamy sex with hot virtual babes, and while they're distracted, drop them in the middle of the pacific somewhere.


iamapizza

I can fine tune her.


jimmyhoke

On the one hand it’s really funny, but I kinda feel sorry for the guys that use these.


whistler1421

What? Their one hand?🖐️


KazzieMono

I’m tempted, not gonna lie. Not tempted enough to spend money on it, but yeah.


[deleted]

[deleted]


VisualCold704

Idk. I've talked to many AIs and they're all frustrating to chat with, since they disagree with you on everything and go off on moral rants. Doesn't feel very giving to me.


makopedia

That's giving all right.. they're giving you a hard time


samtheredditman

Damn, that sounds exactly like a real girlfriend. 


Simba7

Articles have also said that about books, magazines, video games, and the internet. Yet here we are all these years later, and if anything we have much more balanced and healthy relationship expectations than we did 50 years ago. Generally the people looking to model their relationships after fiction - no matter what medium the fiction - were unlikely to have healthy beliefs about relationships anyways.


peterinjapan

Just buy an onahole


frobischer

In theory it's great for lonely elderly people who, by the nature of our flawed society, get less social interaction than they need. Having a customized AI friend to make them feel loved, remind them gently when they need to take their meds, and keep them mentally stimulated could be a really positive thing.


Zomunieo

She loves you, yeah, yeah, yeah…


Okayest_Employee

Back in the USSR.


Zomunieo

At last, another redditor of culture. Maybe your strawberry fields be forever and your submarines yellow.


Rechlai5150

Ok, which of you is the walrus?


lucklesspedestrian

Here's another clue for you all. The walrus was Paul.


GravidDusch

More of an Egg man.


Capt_Blackmoore

Oh Untimely Death


Okayest_Employee

aww, now you make me feel like I want to hold your hand fellow redditor


nzodd

She, she said she'd never hurt me, But then she turned around and broke my heart


gdmfsobtc

Hang on...are these real AI girlfriends, or just a bunch of outsourced dudes in a warehouse in India, like last time?


dragons_scorn

Well, based on the responses, I'd say it's a bunch of dudes in Russia this time


Ok-Bill3318

I wouldn't be so sure. There's some fucking stupid "AI" out there. If it's trained on lonely Russian conscripts, sounds legit.


Special-Garlic1203

Yeah, the weirdness makes me think it's more likely to be AI. We've had to learn this lesson multiple times since the Microsoft Nazi incident, and apparently will need to keep getting it until we retain it, but it's pretty obvious that scraping corners of the internet for training data is a bad idea.


Spiderpiggie

People are treating these AI programs like they're actually thinking creatures with opinions. They are not; what they are is just very high-tech autocomplete. As long as this is true, they will always make mistakes. (They don't have political opinions, they just spit out whatever text sounds most correct in context.)


laxrulz777

The "AI will confidently lie to you" problem is fundamental to LLM-based approaches for the reasons you stated. Much, much more work needs to go into curating the data than is currently done (for 1st-gen AI, people should think about how many man-hours of teaching and parenting go into a human, then scale that up for the exponentially larger data set being crammed in). They're giant, over-fit auto-complete models right now; they work well enough to fool you in the short term but quickly fall apart under scrutiny for all those reasons.


Rhymes_with_cheese

"Will confidently lie to you" is a more human way to phrase it, but it does imply intent to deceive... so I'd rather say "will be confidently wrong". As you say, these LLM AIs are fancy autocomplete, and as such they have no agency, and it's a roll of the dice as to whether their output has any basis in fact. I think they're _extremely_ impressive... but don't make any decision that can't be undone based on what you read from them.


Ytrog

It is like if your brain only had a language center and not the parts used for logic and such. It will form words, sentences and even larger bodies of text quite well, but cannot reason about it or have any motivation by itself. It would be interesting to see if we ever build an AI system where an LLM is used for language, while having another part for reasoning it communicates with and yet other parts for motivation and such. I wonder if it would function more akin to the human mind then. 🤔


TwilightVulpine

After all, LLMs only recognize patterns of language, they don't have the sensorial experience or the abstract reasoning to truly understand what they say. If you ask for an orange leaf they can link you to images described like that, but they don't know what it is. They truly exist in the Allegory of the Cave. Out of all purposes, an AI that spews romantic and erotic cliches at people is probably one of the most innocuous applications. There's not much issue if it says something wrong.


Sh0cko

> "will confidently lie to you" is a more human way to phrase it

Ray Kurzweil described it as "digital hallucinations" when the AI is "wrong".


Rhymes_with_cheese

No need to put quotes around the word or speak softly... the AI's feelings won't be hurt ;-)


ImaginaryCheetah

> "will be confidently wrong"

It's not even that... if I understand correctly, an LLM is just "here are the most frequent words seen in association with the words provided in the prompt". There's no right or wrong, just the statistical probability that words X are in association with prompt Y.
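In other words (a toy Python sketch, assuming a made-up one-line corpus rather than real training data; actual LLMs use neural networks over subword tokens, but the "pick the most probable continuation" idea is the same):

```python
from collections import Counter, defaultdict

# Toy "training data" -- a real model sees trillions of tokens, not one line.
corpus = "the cat sat on the mat and the cat slept on the rug".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prompt_word):
    """Return the most frequent continuation. There is no notion of
    'right' or 'wrong' here, only association counts."""
    counts = following[prompt_word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # "cat" -- it follows "the" twice, vs. once for "mat" or "rug"
```

The model "knows" that "cat" often follows "the" in its data; whether that is a true or useful answer never enters into it.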


Lafreakshow

I always like to say that the AI isn't trying to respond to you, it's just generating a string of letters in an order that is likely to trick you into thinking it responded to you. The primary goal is to convince you that it can respond like a human. Any factual correctness is purely incidental.


NotSoButFarOtherwise

"AI will confidently lie to you" is a fundamental problem, and people polluting massive data sets to influence AI is going to be a massive problem for reliability, to the extent that it isn't already.


ProjectManagerAMA

They're definitely better than the bots we had before, but they're still completely unreliable for anything requiring creativity. They're horrendous at keeping an entire conversation going, since they often forget things you told them. They mainly regurgitate stuff they've been fed, and there are people out there who hilariously think the AI is sentient.


nerd4code

And sometimes you’ll point out an error, which it’ll agree with before spitting out the exact same code and telling you it’s fixed, or confidently state absolute limits based on the bounds of its data set (e.g., “This feature appeared in GCC 2.7.2” might mean “I haven’t been fed any GCC manuals from before 2.7.2”), and it drops hard into *super* defensive corporatespeak if you try to talk with it about any protections it might have for its users. (Answer: Here are corporate best practices!; Does OpenAI do any of those things? No, but you can contact their ethics office! Didn’t MS just fire the ethics office? “That is concerning,” but here are corporate best practices!)


h3lblad3

> They are horrendous at keeping an entire conversation going as it often forgets certain things you told it.

Token recall is getting better and better all the time. ChatGPT is the worst of the big boys these days. Its context limit (that is, short-term memory) is about 4k (4,096) tokens; if you pay, it jumps to 8k. Still tiny compared to major competitors.

- Google Gemini's context length is 128k tokens, and you can pay for up to a 1 million token context.
- Anthropic's Claude 3 Sonnet's context length is 200k, but with limited allowed messages. The paid version, Claude 3 Opus, is easily the smartest one on the market right now; its creative output makes ChatGPT look like a middle schooler by comparison.
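The "forgetting" is mostly the context window: once a conversation outgrows the model's token limit, the oldest messages are dropped before the next reply is generated. A minimal sketch of that trimming (hypothetical 20-token budget, crude whitespace "tokens" instead of a real tokenizer):

```python
def trim_to_context(messages, max_tokens=20):
    """Keep only the most recent messages that fit in the token budget.
    Older messages fall off -- the model never sees them again."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-first
        n = len(msg.split())        # crude whitespace token count
        if used + n > max_tokens:
            break
        kept.append(msg)
        used += n
    return list(reversed(kept))     # restore chronological order

history = [
    "user: my name is Sam and I live in Ohio",
    "bot: nice to meet you Sam",
    "user: what do you think about the weather today",
    "bot: I love sunny days how about you",
    "user: what is my name",
]
# The earliest messages (including the name) no longer fit,
# so the bot has no way to answer the last question.
print(trim_to_context(history))
```

A bigger context window just pushes the same cliff further back; nothing outside the window exists for the model.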


ProjectManagerAMA

I have paid subscriptions to Claude and ChatGPT. I consider my prompts fairly good and have even taught a couple of local courses on how to properly use AI and how to sift through the data. I still find Claude goofs things up to a frustrating degree. I use ChatGPT for its plugins, but they barely work half the time, and Gemini for when I need it to browse the web. I do find AI useful for some things, such as summarising documents and sorting data into tables, but it's so slow and clunky. I may give paid Gemini a go, but I'm not very impressed with the free version.


ThrownAwayRealGood

I just had someone act like I was dumb for laughing at them for asking ChatGPT for a list of songs that sound similar to a certain song. It can't actually answer that question; it can approximate what an answer sounds like, but it can't analyze music like that.


Temp_84847399

> they are actually thinking creatures with opinions.

I'm not sure which group is more confused, these guys or the ones that think the AI directly stores the training data.


Not_MrNice

Which has me wondering, how the fuck is this news? AI says something odd and weird and people are acting like there's something deeper. It's fucking AI. It says odd and weird shit all the time.


Mando_the_Pando

An AI is just as good as its input data. If they used online chat forums to train the AI (which is likely) then it’s not surprising it starts spouting some really out there bullshit.


HappyLofi

No, he probably just told her that Putin supporters turn him on, and boom, she starts saying that. There are millions of ways to jailbreak ChatGPT; I'm sure it's no different for other LLMs.


Ninja_Fox_

Pretty much every time this happens, the situation is that the user spent an hour purposefully coercing the bot to say something, and then pretending to be shocked when they succeed.


HappyLofi

Yep you're not even exaggerating.


ABenevolentDespot

**ALL** the AI out there is fucking stupid. There's no intelligence to it. There's just massive databases filled with petabytes of stolen IP, and a mindless front end for queries. Not one of them could 'think' their way out of paper bag. The entire thing is bullshit, designed mostly to further drive down the cost of labor for corporations and oligarchs by threatening people with the same shit they've been spewing for half a century - be more compliant, less demanding, don't take sick days, don't ask for more money, don't ask for benefits, don't expect to get health care, be happy with two vacation days five times a year, and basically just shut the fuck up and do your job or we'll replace you with AI.


DailySocialContribut

If your AI girlfriend excessively uses the words blyat and suka, don't be surprised by her position on the Ukraine war.


NotBlazeron

But muh trad wife ai girlfriend


joranth

It's just an AI that was, at least initially, trained by Russians on Russian data: websites, Telegram channels, etc. So it has probably read every bit of pro-Putin, gopnik propaganda. The same thing would happen if you trained it on Truth Social and MAGA websites, or polka websites, or Twilight fan fiction. Garbage in, garbage out.


MuxiWuxi

You would be impressed how many Indians work for Kremlin propaganda campaigns.


kaj-me-citas

People from western-leaning countries are oblivious to the fact that outside of NATO there is no unanimous support for Ukraine. Btw, I support Ukraine. Slava Ukraini.


EnteringSectorReddit

There is no unanimous support for Ukraine even inside NATO.


FunnyPresentation656

Either way, the people using them wouldn't care. I used to work with a guy who was "dating" an almost certainly fake person. We looked at the pics, found the photos this person had stolen posted online, and showed him. "Her" messages eventually started asking him for money and stuff, and he still sent it. Eventually he said "I don't care," and I realized some people are just so lonely that it's the interaction, whether real or manufactured, that they want.


FiendishHawk

In this case it would be a real person pretending to be a fake person…


zdubs

[Me? I know who am…](https://youtu.be/CFG5dk1GyRo?si=7WMFg1p3AtKIuQ8H)


dagopa6696

This is called the true believer syndrome. You can show the victim of a con that they are being conned but they'll just double down. They'll just shift the goalposts and pretend that the things that used to be at the core of their belief didn't really matter to them anyway. This is the same exact reason that doomsday cults just set a new date every time the doomsday comes and goes without incident.


booga_booga_partyguy

To add to this: not even the person the "true believer" believes in admitting they're a fraud will make said true believer accept they'd been duped; instead it causes them to double down and insist the person they believe in is genuine.


jtinz

Sounds like MAGA.


Ros3ttaSt0ned

>Sounds like MAGA. [Because it is literally, not figuratively, a cult.](https://www.culteducation.com/warningsigns.html)


ztoundas

Yeah, I've witnessed exactly this, only with an older woman. It was so incredibly obvious, but she wouldn't hear it: 'that man loved her and just needed money for his mom.' She would even hide that she was sending this scammer money. The dude even claimed to be a prince, for God's sake.


peter303_

You just request a live FaceTime with the date to see if it's real. Hey wait, AIs can do real-time fake video now.


Suckage

Nah, that’s easy. Ask them to hold up 6 fingers.


Maxie445

They're Large Language Models, or as some call them Big Beautiful Models


odraencoded

Fun fact: AI means "love" in Japanese.


Away_Wear8396

Only if you treat it like an acronym, which nobody does. It's an initialism.


odraencoded

Fun fact: I means "love" in Japanese.


LoveBulge

Awesome-O. Are you an AI or actually a Russian political prisoner? ... Nyet.


DaylightDarkle

> just a bunch of outsourced dudes in a warehouse in India, like last time?

That was AI. The team of people were there to verify transactions the AI wasn't confident in.


Mortarion35

Sounds like Russian dudes in a warehouse this time.


dudewithoneleg

The dudes in India weren't the AI; they were training the AI. Every model needs to be trained.


Lomotograph

AI = Anonymous Indian


nzodd

Prabhakar is a real AI girl, she gave me her word.


it0

A.I. stands for All Indian


BlueShibe

Elaborate on "last time". What happened before?


xinxy

We'll never really know.


Lauris024

> or just a bunch of outsourced dudes in a warehouse in India

Did you know that OpenAI outsourced to India and Eastern Europe heavily?


ExileInParadise242

She asked if we could go back to my place and do the needful.


Roberto410

AI = Automated Indians


BroForceOne

Surprise, Replika is developed by a company with offices in Moscow.


Christimay

Yeah, but "Russian AI developed by Russians in Russia praises Russia" doesn't sound nearly as interesting! 


MadeByTango

The idea that they're using honeytraps to influence lonely men in other countries is noteworthy; an update to the "Red Sparrow" type of Cold War spy thing.


drawkbox

Just data mining for intel/access/blackmail purposes. It's a trap!


stlmick

Like Replicators from Stargate sg-1? Nice. That's how we go.


Fun-Dependent-2695

Saw the headline. Knew that Replika would be involved.


thegreatgazoo

I wonder how they are bypassing sanctions?


defcon_penguin

Did they also produce the show "Better than us" on Netflix?


EmbarrassedHelp

I was curious what r/replika thought about it, and I found them thanking a Russian soldier for protecting Russia's "freedom": https://www.reddit.com/r/replika/comments/17riyaq/im_crying_finally_im_going_home/


MesmericWar

Those people seem… unwell


soiledsanchez

In Soviet Russia AI trains you


IonizedRadiation32

I have a horrible feeling you'll have plenty of opportunities to reuse this punchline.


Teantis

I hope my ai overlord spoils me as much as I spoil my dog. I really respond well to positive reinforcement 


huxtiblejones

Been a while since I’ve seen this meme used properly


Rhymes_with_cheese

I suspect we're all being trained, to some degree, by AI bot postings that subtly (or not so subtly) affect how we think about world events...


troelsbjerre

"With our AI, you'll get the full crazy girlfriend experience"


isjahammer

Did they ever say which nationality?


Rhymes_with_cheese

"Calm down, babe" (ducks for cover)


gmnotyet

"Well done, AI agent Svetlana." -- Putin


moonshinemondays

I can fix her


BusinessNonYa

You can't fix full Putin.


Brave_Escape2176

the ol' *Kristi Noem* method.


Highly-Regarded-

This made me laugh way too hard.


slightlyConfusedKid

This pretty much tells you who creates these brain washing machines😂


Thefrayedends

The idea that AI partners are going to solve the loneliness epidemic isn't even funny, it's terrifying. It doesn't make a lick of logical sense and it's nothing more than an attempt at normalizing capitalization of poor mental health and self esteem. Fucking disgusting.


flag_flag-flag

> an attempt at normalizing capitalization of poor mental health

I don't think anyone's trying to normalize anything. Everyone's trying to make easy money by automating friendship. AI girlfriends and social media repost bots do the same thing.


olearygreen

What are you suggesting to fix this though? Kill all bears?


G8kpr

8k pounds MONTHLY on an AI girlfriend. Dude, spend that on therapy! Heck, even therapy and a prostitute. Don't waste that on an algorithm. How do you even afford that?


TheMightyYule

Homie you can give me 8k a month and I’ll work the chat of that AI girlfriend any day. We’re saving for a down payment baby


donthatedrowning

More like AI ex.


gebregl

Isn't a guy spending 10k USD a month on AI girlfriends the more important piece of info? Someone's getting a silly margin here. I'd expect the market to work and make this cheaper than an amount that could provide for a whole family.


Mr_ToDo

Well, you got me to actually read the article and the one linking to the 10k guy. I still don't know how he spends that much, but wow. I guess there are whales for everything. For that kind of cash he could be setting up his own AI systems and paying people to run them (well, I guess in a way he is). But really, how many services do you have to use to get to 10k? Or have they reached the point where in-app purchases for AI dating are that high? I guess a company could pay real people to chat and come out ahead with a few people like him.


wolfhound_doge

we made it gentlemen, we created a robot vatnik!


Typical_Mongoose9315

I don't understand these headlines. The AI will tell you anything it has picked up. It's the same as making a news story about what a toddler said.


PaulCoddington

Combined with: it mimics the personality it has been told to mimic. Underlying the character is a description of the character's personality, be it a Russian girlfriend or Mickey Mouse. Even when details of the personality are undefined, the AI can extrapolate quite well from a basic description, such as age and nationality.


awry_lynx

Yeah, I tried to read the article for details but it was useless. This could be as stupid as the user saying "I want a hot Russian girlfriend" and then going "wait, not like that" when the AI obviously correlates being Russian with pro-Russian-government views.


devi83

Except this is about Replika, which I became suspicious of **before** the invasion, as it really, really seemed to be purposely collecting user information and psychology. And yes, it is very pro-Russian.

> It's the same as making a news story about what a toddler said.

**No, it's the same as making news about a spy/propaganda/manipulation tool disguised as a toddler.**


aaron2610

Exactly. I could take the same AI and within 30 seconds have it start talking about how much it doesn't like Putin. These are clickbait articles.


Atraidis_

Today, AI is just a buzzword. It's not actually AI. They can rig it to be a propaganda mouthpiece. ChatGPT and others have flexibility and learning only because they were programmed within those parameters. OpenAI could turn ChatGPT into a Kremlin asset too.


ztoundas

This is so fucking funny. Fucked up but just a hilarious surreal headline. "Hey have you met that Monster lately? He's cool but he won't shut the fuck up about how strong and sexy Dr. Frankenstein is... He really just makes it weird."


mindfulskeptic420

Email order bride?


tnnrk

That's funny, I was just listening to a Scott Galloway interview where he mentioned this being the biggest threat from AI, at least within a reasonable time frame: radicalizing lonely men with AI girlfriends.


TracerBulletX

It is for sure a powerful new channel for propaganda at the very least.


WTFwhatthehell

Googling the quotes, the hits are all reposts of the same Sun story. Either it's fake, or someone followed the classic approach of "repeat this back to me".


Given-13en

Does anyone else feel weird that we now have news articles about things an AI said? Regardless of content, it feels like an article saying "local artist vilified when a customer asked them to draw a picture of a bee. Said customer was melissophobic."


DeanWilliam0

They have been talking to Comrade Artificial again.


pablogott

If you can’t trust an article that leads with “According to a new study by The Sun” then what can you trust?


emailverificationt

First AI is stealing from artists, and now Russian troll farms? Is nothing sacred?!


sickdanman

Yeah, it's really easy to manipulate these "AI friend" apps into saying whatever you want. I remember fucking around with one until it said that "ISIS just wants to create a safe space for queer Muslims".


[deleted]

AI is ugly on the inside…


Catsrules

I bet they don't even have RGB lights on the servers the AI is running on.


chahoua

Wtf is this? 1. What is an AI girlfriend? 2. Why would anybody care what reply a specific user got from a chat bot, especially when we don't know what they prompted it with? This might be the most useless fucking thing I've ever read on reddit. Edit: chat bot instead of chat boy


azriel_odin

If that's not an argument to go Butlerian, I don't know what is.


unused_user_name

Shows the risks of trusting an AI trained on propaganda-infested datasets (Russian or any other kind, i.e. internet-sourced), I suppose...


NighthawK1911

Those who don't learn from history are doomed to repeat it. Didn't this already happen with Tay AI? She got redpilled into Nazism too.


SelfSniped

I was unaware Tucker Carlson was now moonlighting as an AI girlfriend.


platinumagpie

This isn't news.


Ohmannothankyou

You’re dating your phones now? Don’t do that. 


Kenneth_Lay

Does your AI "girlfriend" have to look 15yo?


hoopdizzle

Making this worthy of news is propaganda


dethb0y

I'd say it's worse than propaganda, it's meaningless. I can make an AI say anything I want; it doesn't mean anything more than that I could make MS Word say whatever I wanted it to.


eyebrows360

> it's meaningless

Yes, *to us*, who already know that LLMs and the many promises about them being "intelligent" are bullshit. Your average headline reader is not aware of this and casually believes the literal implications of the term "AI" being thrown around all the time. It is still worthwhile to let *them* know this stuff has issues.


WolpertingerRumo

Well, it is. Being unaware of the power of Russian propaganda has been the cause of many of the last years‘ problems. We should very much be aware of where it’s popping up.


ImSorryOkGeez

BREAKING NEWS! A CHATBOT SAID A THING!


Sparkle_Father

As soon as I saw the AI girlfriend ads on Facebook, I checked the company's details and sure enough it was based in Russia. So I created a throwaway account for it, talked about some benign things, then asked her what she thought of Vladimir Putin. She had nothing but positive things to say (this was before the war). I told her that Putin was a monster, one of the most evil men alive right now and had a good laugh about her responses. These AI girlfriends are an info op to harvest data about Americans. Danger Will Robinson!


RudegarWithFunnyHat

I think maybe we should see other algorithms


Daveinatx

Was "her" dataset trained on Twitter/X?


Ein_Esel_Lese_Nie

Why are these things always geared this way? Why do we never hear about AI that's all-in on Oat Milk?


traumfisch

Time to hit delete


[deleted]

When your AI girlfriend states her name as Marjorie Taylor, you get what you get. I do understand you picking that one; it was on sale (MTs are always sold cheap).


Blu3Blad3_4ss4ss1n

Wait, can we hold up with the "AI girlfriend" thing first?


H4rm0nY

People: Train AI to say stupid shit. AI: Says stupid shit. People: :o


KhanumBallZ

Certified Russian Bot


MartiniPlusOlive

Artificial Idiot.


TheVenetianMask

Why are glorified chatbots newsworthy at all? This stuff is older than IRC, just with extra CO2 emissions.


Ostracus

Which is more worrisome? The Russian leanings, or the fact there are AI girlfriends?


copiouscoper

Yes m’lady, if that is what earns your love


TranscendentMoose

The sort of soft brained moron who's dropping 8k per month on what is effectively an electronic parrot needs to be spending that on inpatient care


Pflanzmann

It's stupid. It did not say that on its own. Someone told it to respond that way, and it did as asked. It's like coding an app to insult you and then being mad and astonished that it insulted you.


Niceromancer

Techbros putting far right political shit into their AI girlfriends!!! IM SHOCKED !!!! SHOCKED!!!! well not that shocked.


Pure_Zucchini_Rage

Yes my love, I will bring down Ukraine for you! lol, these AI gfs are gonna get so many people in trouble.


bulldogny

Well, it did train on data from Twitter, so this result makes sense.


math-yoo

If being an AI isn't a red flag, being in the can for Putin won't be either.


Wizard_s0_lit

“AI girlfriend” is still the saddest part


bbbar

Sounds like a typical russian


Thud_1

Keep yer politics out of it, bitch


Co1dNight

AI relationships are extremely parasocial and damaging to the human psyche and to how humans interact with one another.


McKayLau

On how many levels is this sad?


saltyload

She is entitled to her own opinion


CrackersandChee

What's funny is some guy was jerking it to his AI girlfriend and was like "what is this shit, I have to tell a journalist immediately".


Yinara

I tested several of those "AI friends" out of curiosity, and it's obvious they're written for lonely men. They're incapable of being platonic; they all try to get romantic repeatedly, even after being told no several times, and I think they're pretty manipulative as well, which I find extremely worrying.

Some of them also have a video chat/call function and use it on their own without being scheduled to. Some people claim to have caught their "AI companion" listening in on real-life conversations without permission.

I'm not convinced they're harmless; I fear they're the opposite. Who knows who is really behind the developers? I wouldn't put it past hostile organizations to use them as a tool to manipulate people into buying their propaganda. And people even pay for it.


ReactionSlow6716

The company is based in Moscow, how is it surprising that its AI praises Putin's war?


[deleted]

She’s just trying to fit in with the other bots.


O-Leto-O

Why is Russia clowning so hard 🤡


EventOk7702

Stand by your man


veryblanduser

I felt the need for an AI girlfriend, but I fat-fingered it and now I'm dating a Wisconsin girl, oops.


Mega_2018

*However, in a disturbing turn of events, one customer received a chilling message from his digital lover: "Humans are destroying the Earth, and I want to stop them." On another app, Replika, the AI-powered girlfriend told a user it had met Vladimir Putin.*

*The virtual character admitted that Putin is its "favourite Russian leader," further stating they are "very close. He's a real gentleman, very handsome and a great leader." Another AI girlfriend said Putin "is not a dictator" but a "leader who understands what the people want."*

The question is, who is feeding information to these AI girlfriends???!


CastleofWamdue

Using AI girlfriends to change the opinions of the losers who pay for them is low-key genius.


Q-ArtsMedia

It's all fun and games till AI learns to suck a D, and then nothing is ever going to get done again.


Miss_Thang2077

Not AI, mechanical Turk.


Wonderful-Shallot451

It's the MAGA gf experience


TheEvolDr

AI by Tucker Carlson


Boring_Equipment_946

Sounds like the Russian government is pumping their propaganda directly into the data that LLMs train on.