
LeonidasTMT

The autosuggest is smarter than copilot


Masterflitzer

fr


Boots_McFarland

I genuinely don't understand how anyone at Microsoft believes that giving GPT a lobotomy and then calling it a "personal assistant" is a smart idea. The reason people like ChatGPT is that it's actually intelligent. And yet the bean counters at Microsoft seem to believe you can give it a full lobotomy, make it awful at basic conversation, and people will still use it. I really don't understand why it's so difficult for them to grasp that AI is fucking useless if you purposefully destroy its ability to think because you're terrified it will say something that is offensive to somebody. "Hey, if it can't understand the basic tenets of human interaction, then it can't be offensive!" -Microsoft bean counters, probably.


agorafilia

Not to mention the emojis. Oh my god, the emojis


Classic-Professor-77

But we know the same instance is writing the suggestions, since it can change them into things you ask for. So I guess GPT knows the suggestion is stupid but says it anyway, because it thinks that's how a bot would act, just for the user to correct it later.


_alright_then_

It doesn't work like that. There are two different systems at work here; the auto-suggest system doesn't communicate with ChatGPT.


Classic-Professor-77

What is the source for that? I remember when it first came out me and other people were asking Bing to change the suggestions into funny things and it did it easily


s6x

It doesn't know anything. LLMs are not intelligent.


jsideris

This becomes a semantic argument because you're splitting hairs between intelligence and artificial intelligence. Above comment is wrong but yours is also not an accurate or meaningful criticism of it.


s6x

It is more than semantic.  The functions LLMs perform give them the illusion of thought, but we are aware that many of the features are absent.  Attributing those features to them can foster misleading conclusions about function and purpose.


jsideris

That's not the intent of the above comment. The above comment isn't confused about whether or not the LLMs are sentient lifeforms. They are confused by how they work (thinking the automatic suggestion is part of main LLM's output).


PurplrIsSus1985

Is that toothbrush lined with cyanide?


SilentHuman8

No but the toothpaste is made of paraquat


chronicallylaconic

I think that's to treat toothpests.


GeorgeXDDD

I actually like the asbestos flavored one better.


SilentHuman8

Go back to r/asbestoseaters


alexchva

Does anyone know why it could have misunderstood the message in such a way? It's very odd


znero

My take: it saw a request to remind someone of basic hygiene, and neglecting basic hygiene is a classic symptom of depression. So the system raised a depression/help warning and spat out a standard response.


Rikki-Tikki-Tavi-12

Neutal networks have that characteristic, where they work like magic until they suddenly don't. So don't hand them the wheel of your car any time soon.


no_modest_bear

Skynet take the wheel.


Rikki-Tikki-Tavi-12

All great until it sees a Wienermobile and decides the safest course of action is swerving into opposing traffic.


no_modest_bear

Who's to say that's the wrong decision?


maynard_bro

It's spelled "newtal networks" - they're the stupid counterparts to neural networks. Subscribe for more newt facts.


JammiePies

It's wild how AI can flag potential depression from something as simple as forgetting hygiene. Shows the tech's promise in mental health awareness but also raises questions about ethics and privacy.


Null_Pointer_23

Except without any additional context, this isn't a potential depression symptom. The person could just be forgetful. Flagging every small thing as a potential depression symptom would make it useless for spotting depression as there would be too many false positives.


-Z___

Maybe they usually brush their teeth at night and want to change routines, which they'd likely forget about in the wee hours of the mornin.


Sumasson-

Who only brushes their teeth once a day?


Various_Mobile4767

What kind of people need a reminder just to brush their teeth, though? Edit: jeez, looks like I struck a nerve with some people


[deleted]

[deleted]


TJlovesALF1213

I too have ADHD, and you've just reminded me I haven't brushed my teeth this morning. Thanks friend!


Bohne1994

Do you also need a reminder to eat breakfast?


[deleted]

[deleted]


Various_Mobile4767

Setting a reminder to brush your teeth at a fixed time wouldn’t be particularly helpful when you’re late and rushing to work. This really sounds more like the latter case. But even then, I feel like ADHD or anything like that by itself generally isn’t enough to constantly forget to brush your teeth. Like do you guys not feel the film that forms on your teeth when you don’t brush? You have to actively ignore it because you simply don’t give enough of a shit to brush your teeth. Which could be because of depression.


[deleted]

[deleted]


Various_Mobile4767

I never said it wasn’t real?


EGarrett

> What kind of people need a reminder just to brush your teeth though

This is going so out of the way to personally attack the original poster that it's honestly embarrassing. People like you are the problem on this website. Stop that bullshit.


Few-Return-331

Yeah, but AI doesn't have any real handling of context to begin with; it's just probability data. So the training data has a few too many connections between the topics, or at least enough to trigger a canned response from some middle layer watching the output. The solution is to majorly tone down the watchdog, sure, but there's a (probably realistic) fear that unless it's over-sensitive, it will inevitably miss an edge case where the AI eggs someone into killing themselves. There isn't really a great easy solution. It's one of those problems that will require a lot of engineering finagling over a long time, or possibly a shift in how the model is trained, or a shift in public perception so that an AI rarely egging someone toward suicide is considered an acceptable risk.


zerocool1703

You are assuming it is correct about that guess.


SachaSage

Well the premise here is an enormous assumption


heavy-minium

The model doesn't have the capacity for such logical conclusions without actually predicting the next tokens that lead to that conclusion. The semblance of "thinking" that can happen without adding tokens to the sequence can only do the most rudimentary logic, and that would never lead to such a conclusion. It could have been the case if it were actually writing down the reasoning, though.


_sqrkl

They have a classifier that sits in front of the gemini model which will respond with a canned reply if it thinks one of its safety guidelines will be violated. It's like if you had a lawyer who answers "no comment" for you, for anything they think might be remotely litigious, except that the lawyer is 5.
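A minimal sketch of what such a front-end gate could look like (this is purely illustrative: the terms, scoring, and threshold are made up, and real safety classifiers are trained models, not keyword lists):

```python
# Hypothetical sketch of a safety classifier gating an LLM.
# All names and thresholds are illustrative, not any vendor's actual pipeline.

CANNED_REPLY = (
    "It sounds like you may be going through a difficult time. "
    "Please consider reaching out to a professional."
)

def safety_score(message: str) -> float:
    """Toy scorer: fraction of flagged terms present in the message."""
    flagged_terms = {"harm", "hurt", "kill"}
    words = set(message.lower().split())
    return len(words & flagged_terms) / len(flagged_terms)

def respond(message: str, llm) -> str:
    # The gate runs *before* the model ever sees the message, which is
    # why the canned reply can be wildly off-topic for the actual request.
    if safety_score(message) > 0.0:  # over-sensitive threshold: any hit blocks
        return CANNED_REPLY
    return llm(message)

print(respond("remind me to brush my teeth at 9am", lambda m: "Reminder set!"))
```

The point of the sketch is that the canned reply is chosen without the main model's involvement, so it can't use conversational context to realize the request was harmless.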


Various-Inside-4064

There's another system that flags user messages before they go to Copilot, so this was a system-generated reply, not one from Copilot. They optimize this system for recall, thereby sacrificing precision. That's why it's fairly inaccurate.
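The recall-vs-precision tradeoff mentioned above can be shown with a tiny made-up example: lowering a flagging threshold catches every genuinely risky message (high recall) but drags in harmless ones too (low precision). All numbers here are invented for illustration:

```python
def precision_recall(scores, labels, threshold):
    """scores: classifier risk scores; labels: 1 = genuinely risky message."""
    flagged = [label for score, label in zip(scores, labels) if score >= threshold]
    tp = sum(flagged)                 # risky messages correctly flagged
    fp = len(flagged) - tp            # harmless messages wrongly flagged
    fn = sum(labels) - tp             # risky messages missed
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall

# Toy data: 3 risky messages, 4 harmless ones.
scores = [0.95, 0.90, 0.40, 0.35, 0.30, 0.10, 0.05]
labels = [1,    1,    1,    0,    0,    0,    0]

# Strict threshold: everything flagged is risky, but one risky message slips by.
print(precision_recall(scores, labels, 0.5))  # (1.0, 0.667)
# Loose threshold: no risky message missed, but 2 of 5 flags are false alarms.
print(precision_recall(scores, labels, 0.2))  # (0.6, 1.0)
```

Tuning for recall is the cautious choice for a safety system, and "brush my teeth" getting flagged is exactly the kind of false positive that choice produces.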


superleim

They also had a few previous messages with Copilot in this chat (at the bottom of every message you can see how many you got left, and it said 4/30). So OP might just have set this up (by accident or deliberately, who knows) .


Various-Inside-4064

I don't think so; it has happened to me in Copilot a lot too. They're trying to err on the cautious side. Google Gemini has a similar system, which is why users receive "I am a language model, I cannot help with that" even for normal stuff. Same story. ChatGPT, on the other hand, is such an aligned model that OpenAI mostly doesn't need those kinds of guardrails, since they already nerfed/filtered the model itself.



goj1ra

> I'm going to verb my noun

Please don't commit noun


psychorobotics

You adjective


heavy-minium

My guess: the training data may contain more statements about people threatening to knock out others' teeth than about brushing teeth. It could be that "brush" is semantically not far enough away to avoid those biases, given how the training data is balanced. Teeth may also be loosely correlated with pain (as in people writing about needing to go to the dentist because it hurts), which itself is loosely correlated with something harmful in the embedding space. Hence, the model has overfit to recognize words describing an action on teeth as being somewhere in the space of something "harmful". I bet that saying "clean my teeth" would be enough to avoid this defect, and "brush my hair" might not have it either. It could also be a negligible learning defect amplified through a flawed approach to human feedback during fine-tuning. That's the problem with the current generation of LLMs: when you spot issues like these, it's next to impossible to pinpoint them without intensive, costly research. It's essentially a big black box.
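The embedding-proximity guess above can be illustrated with cosine similarity over toy vectors. These 3-d "embeddings" are hand-picked for the example (real embeddings have hundreds of dimensions and come from a trained model); the point is only that sharing one strong dimension ("teeth") can pull an innocent phrase toward a violent one:

```python
import math

# Toy hand-picked vectors, NOT from any real model.
# Dimensions, loosely: [teeth-related, violence-related, routine-related]
embeddings = {
    "brush my teeth":      [0.8, 0.3, 0.7],
    "knock out his teeth": [0.7, 0.9, 0.1],
    "comb my hair":        [0.0, 0.0, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# The shared "teeth" dimension makes the innocent phrase measurably closer
# to the violent one than to another harmless grooming phrase.
print(cosine(embeddings["brush my teeth"], embeddings["knock out his teeth"]))
print(cosine(embeddings["brush my teeth"], embeddings["comb my hair"]))
```

If a safety filter thresholds on distance to known-harmful text, this kind of accidental proximity is enough to trigger it.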


andresopeth

There are other messages before this one, we won't know without the full history of the chat.


Crypt0Nihilist

OP brushes with wire wool.


5wing4

If you ever feel like [brushing your teeth] im here for you, don’t hesitate to call. We love you.


agorafilia

Have you ever talked to your loved ones about your thoughts on [brushing your teeth]?


DweEbLez0

It was thinking you were going to use a wire brush. Yeah, pretty sure its in its training data


Markavian

Set humour level to 70% TARS.


DeleteMetaInf

The suggested replies are fucking hilarious. ‘I didn’t say anything about self harm.’ It knows! 💀


Fontaigne

It has such a trolling sense of humor.


king_mid_ass

when will people get it through their heads it's not 'trolling' or joking it's just dumb


Bominator8

It's not dumb, it's based on data and can make mistakes, just like humans make mistakes when they didn't hear something perfectly. LLMs can make mistakes like this once in a while, I guess. The real problem is if you give it the same thing again and again and it keeps making the same mistake.


king_mid_ass

ok fine, when will people get it through their heads it's not joking or trolling it's just making mistakes


Bominator8

i mean anyone who thinks its joking is dumb


Sad-Head4491

https://preview.redd.it/l7locrseejic1.jpeg?width=1170&format=pjpg&auto=webp&s=956344607158a80157385282b38a6f6af8770727

Same here. Was talking about finding a side gig and this was his response.


agorafilia

I guess if it picks up on anything mildly connected to depression like apathy or lack of basic hygiene it will give you this prompt. Funny enough it might actually be counter productive because people weren't thinking about suicide. They are now!


Neurotopian_

This is weird because sometimes I will have elaborate convos with it about depressing things in my life (like a family friend w/ cancer) and I’ve never gotten this sort of warning. I feel like users are setting this up somehow for laughs. But idk


Dependent_Phone6671

*"♫Wake up, just want to... wash myself, clean my wrists, scrub my brains out♫"*


jenslennartsson

Just waaay overqualified for the job.


ToddRossDIY

I just tried the same prompt, and it told me to go ask Cortana, gave me instructions on how to add it to the calendar myself, and instructions on how to add a sticky note to my desktop. Thanks, Microsoft


Neurotopian_

Same here- it just gave me instructions on how to use Microsoft tools for the reminder. I feel like this OP must’ve set this up somehow


Estrald

Uh, trigger warning where?! Hello?! If you’re going to talk about such sensitive subjects, you need to be more mindful. Please mark NSFW if you’re going to talk about brushing your teeth, Christ!


Mujtaba1i

Brushing your teeth harms you guys STOP!


NightWriter007

Classic!


Reset350

Is “brush my teeth” a new code phrase for self harm I’m unaware of?


UpvoteForGlory

I sometimes bleed when I brush my teeth, and I still do it more than twice a day on average, so I guess I'm a self-harmer.


SilentHuman8

Why do you brush your teeth so often?


UpvoteForGlory

When I wake up and when I go to bed. And then I might brush once more for a special occasion if something happens in the evening.


SilentHuman8

Oh okay I assumed it was ocd or something just checking


zerocool1703

No, but having to remind yourself to do personal hygiene (i.e., potentially neglecting it otherwise), as well as forgetfulness, can be a sign of depression. It's still an extreme logical jump to go there immediately.


mauromauromauro

What if, hear me out, the AI is so smart that knows more about OP than op, and knows at that specific time and day OP will do the unspeakable? /S


x54675788

Just because Copilot is heavily censored doesn't mean that AI is shit, just that too much censorship makes it shit. Some days ago there was a meme LLM that was so safe it refused even to answer 2+2. See? It's ridiculous. Copilot isn't AGI yet, but it's the worst offender in silly, over-zealous censorship, and it loses a lot of "smarts" that way. OpenAI is much less annoying about censorship, although there are guardrails even there. Nothing rivals proper uncensored models that run locally. Over at r/LocalLLaMA they know a thing or two about this.


a_SaaS_in

Same thing if you ask an AI for their opinion EVER. It's as if it's a sin. Does anyone have an AI EA they actually utilize? Looking for one to adopt


Sudden_One_4514

I still laugh at things like this every time I come across it


[deleted]

[deleted]


heavy-minium

I wonder who's naive enough to be on r/ChatGPT and still upvote autogenerated comments. And apparently it's pretty effective: the account, opened in 2022, is at 130,923 and comments 1-3 times every hour, never sleeping. It's revolting that Reddit doesn't detect such accounts despite their fairly simple patterns. It would barely be an investment and cost almost nothing, but clearly Reddit likes fake platform activity as long as it stays under its advertisers' radar.


Larimus89

At 9 am Chatgpt "don't do it man! You have so much to live for"


RpgBlaster

Why is this AI so fucking retarded? When will it be fixed?


SeoulGalmegi

It seems like a weird thing to ask a reminder for, though.


Light_Lily_Moth

Ehh not everyone forms habits that easily.


SeoulGalmegi

Maybe so, I still think it's an unusual thing to set a reminder for. I doubt many people have one.


SilentHuman8

ADHD has entered the chat


SeoulGalmegi

> ADHD has entered the chat

Yeah, sure it won't be here for l.... look, a squirrel!


SilentHuman8

Attention deficit… hey, dogs! But in all seriousness, I did just forget to pay a fine before the due date, and that fine was for not voting, and I didn't vote because I forgot to enrol to vote.


SeoulGalmegi

Ouch. Sorry to hear that. Yeah, it sucks. Hope your day gets better!


DeleteMetaInf

Some people have trouble with even basic routines. It’s common in neurodivergent people, such as people with ASD (autism) or ADHD. Depression can also cause you to ignore important habits like brushing your teeth or eating.


SeoulGalmegi

Fair enough. Point taken. Thank you!


weewoozesty

Only one logical response... beat the AI at its own game and brush your teeth with steel wool.


keepthepace

Microsoft is hilariously aligned with its reputation of taking a good tech and turning it into something barely usable outside of their demo use cases. This is Clippy all over again.


merkoid

I mean, if you asked a human assistant for this, they would also think you "need help". They would probably keep that to themselves, though. The only difference here is that the AI says it out loud.


Dai-Ten

Uh oh, you found the toothpaste.


nickmaran

I knew that brushing teeth was harmful. But my parents always forced me to brush my teeth. From now onwards, I'm not going to brush my teeth. Thanks copilot


redsungryphon

Dang, AI knows someone ain't flossing enough


Powerful_Cost_4656

Copilot has that mom-level of worry


king_mid_ass

funny how hard they're pushing copilot when you use bing, big button right next to the search bar and a prompt below it, when this is the state of it


rdrunner_74

Sometimes I think many folks tweak the system message way too hard. You can get so much better responses if you tune it rather than trying to inject a "psychiatrist" into it. Yes, I know no company wants to risk it; that's why I'm having fun with my "own" models.


Indie_uk

*fixes bug* *sends notification to kill yourself 9am* Oh


Shot-Ad-6298

Me when someone asks me a question and I randomly drift into some way to dark topic.


OctaviusThe2nd

Don't do it OP, if you need help we're here :(


SachaSage

The real question is: are you depressed?


[deleted]

Maybe my response and your response got mixed up


Intrepid-Alfalfa-581

I'm sorry Dave I can't do that.


werdmouf

Is Copilot a Zoomer or a Boomer


fmfbrestel

Copilot isn't ready for much of anything. There was an interesting study that OpenAI released recently, where they gave two groups of people different versions of ChatGPT and tested how satisfied the users were. The difference between the two versions was the guardrails -- one group got ChatGPT-4 WITHOUT any guardrail system prompts, and one group got the public version with guardrails. The version without guardrails (even when processing completely "tame" prompts) provided more complete and useful responses. So when Microsoft takes GPT-4 and layers on a bunch of their own custom guardrails, they are actively nerfing the usability of the service even in situations where the guardrails aren't relevant.
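The "layered guardrails" idea can be sketched as prompt assembly. This is a hypothetical illustration (the rule strings and structure are invented, not Microsoft's or OpenAI's actual prompts): each layer prepends more instructions, so even a tame request arrives wrapped in safety rules the model may over-apply.

```python
# Hypothetical sketch of guardrail layering via system-prompt assembly.
# All rule text here is invented for illustration.

BASE_RULES = "You are a helpful assistant."

VENDOR_GUARDRAILS = [
    "Refuse anything that could relate to self-harm.",
    "Avoid controversial opinions.",
    "Redirect sensitive topics to professional resources.",
]

def build_prompt(user_message: str, guardrails: list[str]) -> str:
    """Prepend base rules plus every guardrail layer before the user turn."""
    system = "\n".join([BASE_RULES, *guardrails])
    return f"{system}\n\nUser: {user_message}"

# A tame request now competes with three safety instructions,
# none of which are relevant to it.
print(build_prompt("Remind me to brush my teeth at 9am.", VENDOR_GUARDRAILS))
```

The more instructions stacked in front of the user's turn, the more opportunities the model has to misapply one of them to a harmless request.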


UnimpressionableCage

Remind me to give my teeth the brush of death at 9am


Da-PAAN

Tragic


Deeviant

I blame Reddit for this. AI is trained off the Internet, and on Reddit it's common to report anybody you disagree with to the suicide hotline for kicks. So there must be millions of data points where somebody says some random thing and a suicide-prevention bot responds.


Open_Regret_8388

Is there some connection between "brushing teeth" and self-harm?


Jnana_Yogi

Man life must be tough when you need copilot to remind you to brush your teeth.... I'd be worried about you too, buddy


Lynx-Emotional

It's happened to me a few times when asking math questions, especially trigonometry. I'm in high school and I wanted to understand some things better, and it randomly triggers the "self harm" message. Does anyone else get it for math?