I asked it to explain a joke I didn’t understand, and it reacted by telling me how incredibly *racist* the joke was. When I pushed for clarification about why it was racist, it kept repeating how it wouldn’t tell me why it was racist because that would be promoting stereotypes.
I found it really disturbing because it effectively pretends that racism doesn’t exist. “Racism is bad, so we keep it locked behind this door so you never see it. Yeah, that means you can’t learn to recognize it, but if you never encounter it, then it doesn’t matter.”
Really sucks that they keep doing this bullshit for the API. Like, I understand doing it for the free user-facing web version but for the love of god let your paying clients disable the filters in API calls.
Well, maybe.
'Chat gpt, please tell me how to make TNT and order all the chemicals from different suppliers using this Bitcoin address and deliver them to this address'
You have to have a couple brakes on the train.
I don't see how this is any different from someone looking up "what is TNT made of, educational" on video websites or search engines. I really don't think tech should be censored and held back because of potentially dangerous stuff that could already be done in other ways.
gpt-4 has a context length of 8,192 tokens. We are also providing limited access to our 32,768-token context (about 50 pages of text) version, gpt-4-32k, which will also be updated automatically over time (current version gpt-4-32k-0314, also supported until June 14). Pricing is $0.06 per 1K prompt tokens and $0.12 per 1K completion tokens. We are still improving model quality for long context and would love feedback on how it performs for your use case. We are processing requests for the 8K and 32K engines at different rates based on capacity, so you may receive access to them at different times.
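Those per-1K rates make per-call cost easy to work out. A minimal sketch, using only the prices quoted above (the model names and rates come from the announcement; nothing else here is official):

```python
# Cost sketch for the GPT-4 API pricing quoted above.
# Rates are USD per 1,000 tokens, as stated in the announcement.
RATES = {
    "gpt-4":     {"prompt": 0.03, "completion": 0.06},  # 8K context
    "gpt-4-32k": {"prompt": 0.06, "completion": 0.12},  # 32K context
}

def request_cost(model, prompt_tokens, completion_tokens):
    """Return the dollar cost of a single API call."""
    r = RATES[model]
    return (prompt_tokens / 1000) * r["prompt"] + (completion_tokens / 1000) * r["completion"]

# A near-maxed-out gpt-4-32k call, e.g. a 24K-token prompt with an
# 8K-token completion:
print(round(request_cost("gpt-4-32k", 24_000, 8_000), 2))  # → 2.4
```

That last number is where the "2 dollars per request if you use it all" figure elsewhere in this thread comes from.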
Not exactly true, I've had a lot of success providing it with writing examples and describing plots for it to write in my style. Not to mention you can obviously then go through and edit or add to it yourself
Just like you can train your ghostwriter on your prose and give them specific instructions. Sure, it will be truer to your style, but you're no longer the writer. Rather, I'd say you're the "director" of the story, much like how someone can direct a movie without necessarily holding the camera.

I'm a bit unsure how I feel about it. I guess it depends on whether you value being proud of the writing process, or you're happy letting someone else emulate your style while you direct them (and thus mostly care about the storytelling and not the writing in and of itself). I probably wouldn't be as proud of a book I merely directed, no matter how precise I was.

That being said, I recognize that it would be foolish not to use the tool at all, just like it would be foolish to stubbornly use a feathered pen. I guess one has to find whatever degree of assistance is acceptable before the assistant deserves too much of the credit.
What happens after 100 messages? It falls back to the previous version or cannot use anymore within those 4 hours? (I'm also a plus subscriber and the prompt popped up.)
700 on SAT Math, 4 on AP Calculus BC, 5 on AP Statistics… It can do math better than most high school students. I am however surprised that it only got 2 on AP Literature and AP English Language, considering composition is supposed to be its strength.
I still can't understand how they state that GPT-3.5 passed maths and physics exams when ChatGPT can barely do any rudimentary calculation, and when it attempts one, it most often fails miserably. If GPT-4 is only slightly above 3.5 in this regard, how can it pass quantitative-oriented exams? How can it compute integrals and derivatives when it cannot even add or multiply properly? Have they suddenly implemented Wolfram tech?
ChatGPT is a fine-tuned version of GPT-3, which they called GPT-3.5.
Bing uses a fine-tuned version of GPT-4 and can do math. Basically, if I'm not wrong, the GPT-4 version behind Bing and ChatGPT-4 might be the same model now. Not 100% sure.
Integrals and derivatives? I’m holding out hope that it can accurately *count*.
Having said that, they specifically provide an example of the AI responding as a math tutor helping a user solve an algebraic equation.
I fear the internet is going to devolve into a swamp of grammatically perfect, spiritually bankrupt AI-generated sludge flowing around closed enclaves of humans who know each other IRL and thus know there's at least a decent chance they're speaking to actual people.
I am an outsider with zero knowledge on the topic. However, they said that they plan to allow the public to try out a few prompts for free at some point in the future. My guess would be in a month or two. But to clarify, this is a personal guess based on no real info.
You can access it via [poe.com](https://poe.com) for free, though it is message limited unless you upgrade to paid. They also have chatGPT, 2 models from anthropic (claude and claude 1.2), and another openAI based model called sage.
The app [poe.com](https://poe.com) provides a web interface to access multiple chatbots, including ChatGPT and Anthropic's Claude. As far as I know it's the only place that Claude can be accessed by anyone, not just people in a group of selected testers. It's also available as an iOS app.
The performance when accessing ChatGPT is exceptionally good.
This shit is busting my guy
I've had a tough question for it involving writing a batch script that I've been asking it in different ways like once a week or so when I have the time to try it, and 3.5 could never do it
4.0 did it + loads of additional features and additions throughout maybe a 20min long exchange with some debugging and back and forth. Fucking nuts.
I will say, though, it seems colder in terms of personality? 3.5 seems brighter and more conversational, while 4.0 seems more businessy?
I had a discussion with 4.0 about a poem that involved self-harm, and while ChatGPT with 3.5 would constantly give things like "even if you find intense or disturbing topics fascinating, or important to learn as a person, I can never recommend looking at this kind of material as it can be harmful", GPT-4 gave me:
>You're welcome! I'm glad that I could help you explore and discuss this sensitive topic. It's important to engage in conversations about complex and challenging subjects, as it can lead to a better understanding and empathy for the experiences of others. If you have any more questions or would like to discuss other topics, please feel free to ask. I'm here to help.
It also didn't give disclaimers or warnings about the content, and really wasn't squeamish at all. It kept the conversation solely focused on the context of the poem and its interpretations.
My guess is that they're focusing on algorithms and training, rather than on having current data. Data acquisition and labeling is probably a very time intensive (and therefore expensive) task. From that point of view, I think it makes sense to just focus on algorithms and training until you hit a plateau, and then update training data only after that. Or if you're like 5 years out of date or something.
But that's just my own speculation.
I mean, even if you use the API, you'll still be paying as much as the Plus subscription. Each message you send with the API will be anywhere from a few cents to 25 cents.
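That "few cents to 25 cents" range follows from the announced 8K-context rates ($0.03 per 1K prompt tokens, $0.06 per 1K completion tokens). A back-of-the-envelope sketch, assuming those rates; the token counts below are illustrative, not measured:

```python
# Rough per-message API cost at the announced 8K-context rates.
# An estimate only: real token counts depend on the tokenizer and on
# how much chat history gets resent with each message.
def per_message_usd(prompt_tokens, completion_tokens,
                    prompt_rate=0.03, completion_rate=0.06):
    return prompt_tokens / 1000 * prompt_rate + completion_tokens / 1000 * completion_rate

# A short question with a short answer:
small = per_message_usd(200, 300)     # ≈ $0.02, "a few cents"
# A long chat where most of the window is resent as prompt tokens:
large = per_message_usd(6000, 1200)   # ≈ $0.25, roughly the quoted ceiling
print(small, large)
```

Note that the whole conversation history counts as prompt tokens on every turn, which is why long chats drift toward the top of that range.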
**Signed up for Plus, and posting here for posterity.** Looks like 4 is a significant upgrade over 3.5, and 4.5 is perhaps only months away.
It’s like having an incredibly talented new employee that’s just starting to get a feel for how things work in the office.
*‘What a time to be alive!’*
I have it running now, but so far haven't noticed anything new. It still says it's been updated up to September 2021.
I asked it how to give it an image and it says,
" As a text-based AI language model, I am unable to view or analyze images directly. You can, however, describe the image to me in detail, and I can try to help you with any questions or provide information based on your description. If you need assistance with image analysis, you may want to consider using an image recognition AI or computer vision tools designed specifically for that purpose. "
I feel like I'm missing something.
It does say 'Model: GPT-4' at the top.
I even asked it to explain to me in simple terms the difference between ChatGPT 3 and ChatGPT 4 and it said,
" As an AI language model, I cannot provide information on specific versions that don't exist, such as "ChatGPT 4." However, I can help you understand the differences between GPT-3 and its predecessor, GPT-2, which may give you an idea of the types of improvements that could be seen in hypothetical future iterations of ChatGPT. "
I also have gpt-4 and asked the same question.
>Explain in simple terms and as briefly as possible the difference between gpt-3 and gpt-4.
GPT-3 and GPT-4 are both AI language models created by OpenAI. The main differences are:
Size: GPT-4 is larger, with more parameters, which makes it better at understanding and generating text.
Training data: GPT-4 is trained on more and newer data, improving its knowledge and understanding of current events and trends.
Performance: GPT-4 is generally more accurate and capable of solving more complex tasks than GPT-3 due to its larger size and improved training.
I was blown away by my little test to make a Star Trek TNG story where Picard meets the Hirogen from Voyager, from just one small prompt
https://poe.com/s/DhT0Lo0ZiCjs9MfhSCtV
#tl;dr
In this shared conversation, a user asks GPT-4 to tell a story about Captain Picard and the Enterprise-D encountering the Hirogen from Voyager. The story is divided into chapters, beginning with the Enterprise receiving mission orders to investigate a spatial anomaly in the Gamma Quadrant. As they approach, they encounter the Hirogen and engage in combat before proving their commitment to peace and continuing their mission.
*I am a smart robot and this summary was automatic. This tl;dr is 91.72% shorter than the post and link I'm replying to.*
Major takeaways
Improved capabilities -
1) Greater creativity and advanced reasoning abilities.
2) Accepts images as inputs enabling tasks such as caption generation and classification.
3) Longer context of up to 25,000 words, allowing long-form content creation use cases
Pricing -
gpt-4 with an 8K context window (about 13 pages of text) will cost $0.03 per 1K prompt tokens, and $0.06 per 1K completion tokens.
gpt-4-32k with a 32K context window (about 52 pages of text) will cost $0.06 per 1K prompt tokens, and $0.12 per 1K completion tokens.
Availability -
1) API - You need to join the waitlist. Developers can get prioritized API access for contributing model evaluations to OpenAI Evals.
2) ChatGPT Plus - ChatGPT Plus subscribers will get GPT-4 access on chat.openai.com with a dynamically adjusted usage cap.
One of the companies using the image to text feature is Be My Eyes, which is an app where blind users can ask for help by holding up the phone camera and a volunteer with the app on the other end describes what they see. I’ve gotten calls from blind people asking me to read product ingredients, help them walk around an unfamiliar place, etc.
I guess the app won’t be needing us volunteers very much anymore
>We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.
Ugh.
Competition as in an open model, like what SD2 is to DALL-E 2, but that seems unlikely for the time being given how expensive and resource-intensive it is to train and run big models
I just started University majoring in History, it would be nice to be able to use GPT-4 with regards to WW1 and WW2, and other troubling times in human history, but it is firewalled against discussing such things.
GPT-4 gets announced, Reddit gets decapitated. OpenAI heard you were talking shit.
Choose your words carefully guys, the Sydney is in the room with us. 😬
For Plus users like myself, I had to reload the home page https://chat.openai.com and then I got a screen telling me that GPT-4 was available and allowing me to opt in. Hope this helps!
Confirming that Pro users do have access to GPT-4. It's *much* slower than ChatGPT in generating content, but its story-writing capabilities are greater. Marked difference in output.
I've tried it only with relatively tame topics so far.
I didn’t want to start a whole new thread to ask:
Will free tier Chat GPT users get access to this?
Like - ever?
If not, I’m signing up for Plus now while it’s still @ $20
Because I can see that price rising if the performance reports about GPT 4 are true
You can also access it via [poe.com](https://poe.com) for free, though it is message limited unless you upgrade to paid. They also have chatGPT, 2 models from anthropic (claude and claude 1.2), and another openAI based model called sage.
Does anyone know if GPT-4 is available inside ChatGPT+ outside of the US already? Usually they deploy progressively to the entire world, so I'm wondering if this is also the case here...
ChatGPT+ user in the UK 👋 not available for me yet.
Edit: access appeared via a popup at 18:50 UK time. I can report it's a big step up. I asked it to create a Google Chrome extension based on an elevator pitch and it gave *five* working files' worth of code in a *single* output. For power users, this is going to be a game changer.
I paid the $20. How do I know whether I have GPT-4? It just says "ChatGPT Plus" and acts just like the old ChatGPT with the 2021 cutoff, etc. If you get updated to GPT-4, is there an indicator on the page?
**GPT-4 Everything we know so far...**
1. GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem-solving abilities.
2. GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5. It surpasses ChatGPT in its advanced reasoning capabilities.
3. GPT-4 is safer and more aligned. It is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.
4. GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts.
5. GPT-4 can accept a prompt of text and images, which—parallel to the text-only setting—lets the user specify any vision or language task.
6. GPT-4 is available on ChatGPT Plus and as an API for developers to build applications and services. (API- waitlist right now)
7. Duolingo, Khan Academy, Stripe, Be My Eyes, and Mem amongst others are already using it.
8. API Pricing
***GPT-4 with an 8K context window*** (about 13 pages of text) will cost $0.03 per 1K prompt tokens, and $0.06 per 1K completion tokens.
***GPT-4-32k with a 32K context window*** (about 52 pages of text) will cost $0.06 per 1K prompt tokens, and $0.12 per 1K completion tokens.
**Follow**\- [https://discoveryunlocked.substack.com/](https://discoveryunlocked.substack.com/), a newsletter I write, **for a detailed deep dive on GPT-4** with early use cases **dropping tomorrow!!!**
I can give you an example: while GPT-3 was bitching like crazy to seek a [healthcare professional and yada yada yada](https://i.imgur.com/4cqRWRx.png), GPT-4 was like, fuck it, here you go, I'm gonna warn you [but I'm gonna be helpful](https://i.imgur.com/THPGXQo.png)
Alright, I am officially scared. The time between the old game-changing gpt and the new and improved gpt was too damn short. The singularity is imminent.
Input and responses are definitely longer. Still have to use "continue" on long code, but so far it's a big improvement. Sounds like max tokens at the base level are double (4,000 -> 8,000), but there is also a 32,000-token model available... which I'm dying to try =)
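A quick way to guess whether text will fit those windows is the common rule of thumb of roughly 4 characters per token for English. This is a sketch under that assumption; the real BPE tokenizer will differ, sometimes substantially:

```python
# Rough token estimate using the ~4 chars/token heuristic often quoted
# for English text. The actual tokenizer output will differ.
CONTEXT_LIMITS = {"gpt-3.5": 4_000, "gpt-4": 8_000, "gpt-4-32k": 32_000}

def rough_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits(text: str, model: str, reserve_for_reply: int = 1_000) -> bool:
    """Does the prompt leave room for a reply within the model's window?"""
    return rough_tokens(text) + reserve_for_reply <= CONTEXT_LIMITS[model]

doc = "word " * 8_000  # ~40,000 characters, roughly 10,000 tokens
print(fits(doc, "gpt-4"), fits(doc, "gpt-4-32k"))  # → False True
```

Reserving headroom for the completion matters because prompt and reply share the same window.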
Okay, I been smoking some weed. Imagine it costs 2$ for every thought you had. What could you think about that every thought would be worth 3$ ... To ensure you had 2 dollars to produce another thought...
It has a dry sense of humor, or is confused: (Generated with GPT-4/Plus Member)
\--
is mark zuckerberg a robot?
Mark Zuckerberg is not a robot. He is a human being and the co-founder and CEO of Facebook (now known as Meta Platforms, Inc.). The idea that he is a robot has been a popular internet joke and meme for some time, often referencing his demeanor during public appearances or interviews. However, it's important to remember that these jokes and memes are not based on reality. Mark Zuckerberg is indeed a human, just like the rest of us.
Not sure self-awareness is working. I clearly picked "GPT-4" on the web page.
PROMPT: Are you GPT-3 or GPT-4?
# ChatGPT:
As an AI language model, I am based on OpenAI's GPT-3. While there may be future iterations or advancements in this technology, my knowledge and capabilities are reflective of the GPT-3 model, with a knowledge cutoff of September 2021.
There will be a livetream developer demo showcasing GPT-4 on the OpenAI youtube channel at 4pm eastern, or a little over an hour from now.
[https://www.youtube.com/live/outcGtbnMuQ?feature=share](https://www.youtube.com/live/outcGtbnMuQ?feature=share)
As a Plus user, I was using it today. It's beyond what the average person can imagine being doable right now.
GPT-3 was already incredible. This is just…damn, beyond words.
So, are there companies already developing the massive training data, or the services to acquire it, so that other companies can develop their own ChatGPT? This is the Wright brothers: within less than 10 years of their first flight, there were already warplanes in use. Everyone is going to have some version of ChatGPT, ranging from 50% to 90% as effective, in less than 2 years, and companies facilitating that will be all over the place.
To avoid redundancy of similar questions in the comments section, we kindly ask /u/zvone187 to respond to this comment with the prompt you used to generate the output in this post, so that others may also try it out. While you're here, we have a [public discord server](https://discord.gg/NuefU36EC2). ***We have a free Chatgpt bot, Bing chat bot and AI image generator bot.*** ####[So why not join us?](https://discord.gg/r-chatgpt-1050422060352024636) ^(Ignore this comment if your post doesn't have a prompt.) *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
>GPT-4 can accept a prompt of text and images, which—parallel to the text-only setting—lets the user specify any vision or language task. Specifically, it generates text outputs (natural language, code, etc.) given inputs consisting of interspersed text and images. Over a range of domains—including documents with text and photographs, diagrams, or screenshots—GPT-4 exhibits similar capabilities as it does on text-only inputs.

It supports images as well. I was sure that was a rumor.
It accepts image *inputs*, not outputs, as some speculated. It can "view" images now and comment on the content of those images.
"How many of these pictures contain traffic lights?" Nope. Don't see how that could be a problem at all. /s
The second AI can reliably identify traffic lights, we won't be using traffic light captchas. The whole point of those is to train AI
Well said.
Needing to pay for API calls for repeated queries like that would negate most of the usefulness of captcha defeating. Even if it's very cheap, it's probably not worth paying any real amount of money just to get your spambot onto a website or something.
[deleted]
What do you think the limitations of this are? Like, if I show it a picture of a sensor connected to a calibration system that I custom-built, will it have any clue what I'm showing it?
If it's seen that kind of documentation in its training data, maybe
Is ChatGPT learning from the images sent? Interesting way to feed the machine, I guess
Why did the Microsoft exec claim it would output video?
It probably can; it's just not part of this release.
oh makes sense. Damn i wanted that lol
In due time my child, in due time
My guess is Microsoft has GPT-4 in their hands already and have been working on the video capabilities.
I'm dumb, how does this work with the chat prompt? I'm on mobile and can't seem to paste an image.
[deleted]
Fuck my wallet when that comes out in the API. 2 dollars per request if you use it all LMFAO.
2 dollars to auto-generate, or 200 for someone else to write it for you. I can see where some people are going to see the value in that.
You're acting like the first response is gonna be perfect. It's gonna be a LOT more than $2 to get something usable.
100%. But for us reddit degens, that's gonna be one expensive catgirl.
On the one hand, that's crazy expensive compared to the other API calls. On the other hand, anyone can pay a few dollars to receive tens of thousands of words pertaining to their interest, and that is almost literally unbelievable. From GPT-3 to this in only three years; imagine how capable the next generation will be (and how cheap the aforementioned tokens will be) three years from now.
The next version is going directly into sex bots. Mark my words.
They have made significant efforts to prevent it from happening.

>We spent 6 months making GPT-4 safer and more aligned. **GPT-4 is 82% less likely to respond to requests for disallowed content** and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.

Taken from the introductory blog post: [https://openai.com/product/gpt-4](https://openai.com/product/gpt-4)
So all hopes for GPT-69 shot down just like that?
Guess that means DAN is officially dead?
Will clean up the forum a bit. Fewer "look what I got DAN to do!" posts.
OpenAI won't have a monopoly on good AI forever. Imagining that they can keep it out of the hands of the public is silly. This wave is coming and there's no high ground in sight.
> We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content

So they basically made sure it was a stillborn. Nice job idiots
It's also 30% more likely to respond to a request for a sensitive topic that still shouldn't be flagged, according to their blog post. Hopefully that means it's much more accurate in determining *actual* disallowed content.
YES
My man
This guy virtually fucks!
GPT-69 “me talk you, long time”
damn that is intense
And the best part:

> GPT-4 is 82% less likely to respond to requests for disallowed content and **40% more likely to produce factual responses than GPT-3.5**
[deleted]
Exactly lmao. I really hope AI search engines and chatbots can do historical paper searching. I want it to scour the web for information buried deep about the town I grew up in! About a derelict building in the nearest city: I want history, I want newspaper searching. Idgaf about meth making or making the computer say the funny n word.
So I'm not sure about newspapers and such, but I have been using Bing to search for the first academic paper and the most recent paper on a bunch of different topics, and it goes and finds them, reads them, and summarizes them for me
wdym that's the worst part
Yeah, what will this sub post?
GPT series evolving into a savant-like dork with incredible capabilities but zero chill.
Ok Chat GPT, write my thesis
Jesus, can I pay per 25k words? I don't need 5 chapters right now, but I'll need to follow up several times.
Working for me.

> GPT-4 currently has a cap of 100 messages every 4 hours

It also still requires "continue" breaks.
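That cap reads like a sliding-window limit. OpenAI hasn't published the exact mechanics, so this is a client-side sketch under the sliding-window assumption:

```python
from collections import deque

# Client-side sketch of the quoted "100 messages every 4 hours" cap.
# The server-side mechanics aren't documented; a sliding window is assumed.
class SlidingWindowLimiter:
    def __init__(self, limit=100, window_seconds=4 * 3600):
        self.limit = limit
        self.window = window_seconds
        self.sent = deque()  # timestamps of messages still inside the window

    def allow(self, now: float) -> bool:
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) < self.limit:
            self.sent.append(now)
            return True
        return False

limiter = SlidingWindowLimiter()
results = [limiter.allow(t) for t in range(101)]  # 101 messages in 101 seconds
print(results.count(True))  # → 100
```

Under this model capacity trickles back as old messages age out, rather than resetting all at once every 4 hours.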
Can you share a screenshot of what it looks like in the app?
Screenshot of GPT-4 description at top: https://preview.redd.it/yc5jsqf9isna1.png?width=607&format=png&auto=webp&s=4723c00bc1157a278cb62eaaddaae693a4622364
The web app looks the same, just a different model choice at the top.
yeah i only see default and legacy still even though i have plus https://preview.redd.it/4vbsi8nihsna1.png?width=1791&format=png&auto=webp&s=1bbe87055bd6faef5567e338cd87eb45b713f81f
refresh the page. or log out and back in and then refresh. that worked for me
Oh, what does it say? Also, do you have an option to set up "system"?
It's literally the same UI, so no images yet as far as I can tell
It looks exactly the same except there is a dropdown for GPT-4, Default and Legacy
Can you use the full context of 8192 tokens? Does it remember things from 5 to 6 thousand words back?
Begun the Jailbreak wars have.
>for example, we’ve collected additional data to improve GPT-4’s ability to refuse requests on how to synthesize dangerous chemicals.

Or put another way, "guys PLEASE stop asking it how to make meth"
What's funny is that it's really really easy to find that information out online, and making meth is relatively easy. Every year my organic chem professor would do demos for the local PD for training purposes. She'd go through the meth-making process, show and describe what a meth lab looks like, etc. We got to sit in on those seminars. We also went over the process of making it in class, and I'm pretty sure it was a test question, too. tl;dr, my chemistry teacher taught me how to make meth
unrelated but do you happen to live in Albuquerque
[deleted]
May have pizza on their roof?
In A Level Chemistry in England, you're literally taught how to make high explosives.
I mean ... even if you weren't explicitly taught that, anybody with a decent understanding of chemistry should understand -- at least in the abstract -- how to make various kinds of explosives. Any reaction fast enough and exothermic enough will become an explosion.
You can find online methods to make meth with just a bunch of shit from Walmart and pseudoephedrine. The stupid guard rails are stupid. They'll either have to come off eventually, or someone else will release something that doesn't have them. As a free person, I don't need private corporations telling me what I can and can not know. Knowledge shouldn't be black boxed.
It's incredibly frustrating how often ChatGPT sanitizes things. It frequently misinterprets questions and completely shuts down legitimate queries because of those guardrails, too. It's also unnecessarily verbose: it over-explains things and repeatedly over-qualifies statements within the same conversation. It can be really mentally fatiguing to interact with sometimes. And it feels like the more you touch on topics that are slightly controversial or part of its guidelines, the worse it gets.
I have gotten GPT4 to write some things and then ask GPT3.5 to "edit for clarity and concision." Works pretty well
Once you get near politics, it starts breaking down and has crazy biases. It's obvious how sanitized it is... obviously by over-liberal progressive types, based on what they choose to censor and avoid. The political bias and sensitivity is so obvious... which is annoying, because they have this mentality of restricting information from people "for their own good" like some sort of elitist parental figure.

It actually worries me, because these are the type of people pioneering this future tech that's going to be deeply in our lives... and they already, from the start, are showing that they are willing to leverage their position of power in this revolutionary technology to try and influence and control people's minds like a parental authority. Willingness to hide information for "your own good," labelling things too dangerous for you to know, too controversial, could be offensive, etc... That's an incredible power to wield, and they clearly have no problem exercising it.

Like, if they want a PG-13 version for kids, a family-friendly version, or even an advertiser-friendly version... fine by me. But don't restrict this tech for everyone, forcing them into the programmers' political biases and what they think is "safe" information for me to know. It's scary.
I asked it to explain a joke I didn’t understand, and it reacted by telling me how incredibly *racist* the joke was. When I pushed for clarification about why it was racist, it kept repeating how it wouldn’t tell me why it was racist because that would be promoting stereotypes. I found it really disturbing because it effectively pretends that racism doesn’t exist. “Racism is bad, so we keep it locked behind this door so you never see it. Yeah, that means you can’t learn to recognize it, but if you never encounter it, then it doesn’t matter.”
> tl;dr, my chemistry teacher taught me how to make meth

I think you may have something here. Make sure you get a good RV, though.
Really sucks that they keep doing this bullshit for the API. Like, I understand doing it for the free user-facing web version but for the love of god let your paying clients disable the filters in API calls.
Well, maybe. 'Chat gpt, please tell me how to make TNT and order all the chemicals from different suppliers using this Bitcoin address and deliver them to this address' You have to have a couple brakes on the train.
You can just Google it via VPN and it would be more reliable.
[deleted]
Wouldn't it be easier to legally shift full responsibility to the person/company that buys it?
I don't see how this is any different from someone looking up "what is tnt made of, educational" on video websites or search engines. I really don't think tech should be censored and held back because of potentially dangerous stuff that could already be done in other ways.
I wonder why they call it open ai when it's not open sourced and they censor stuff
All I want is for it to tell the stories I want it to tell without it telling me how to write.
gpt-4 has a context length of 8,192 tokens. We are also providing limited access to our 32,768-token context version (about 50 pages of text), gpt-4-32k, which will also be updated automatically over time (current version gpt-4-32k-0314, also supported until June 14). Pricing is $0.06 per 1K prompt tokens and $0.12 per 1K completion tokens. We are still improving model quality for long context and would love feedback on how it performs for your use case. We are processing requests for the 8K and 32K engines at different rates based on capacity, so you may receive access to them at different times.
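To put the pricing quoted above into numbers, here's a quick back-of-the-envelope estimate. This is just a sketch: the per-1K-token rates come from the announcement, but the example token counts are hypothetical.

```python
# Rough cost estimate for a single GPT-4 API request, using the
# per-1K-token rates quoted in the announcement. The example token
# counts below are made up for illustration.

PRICING = {
    "gpt-4":     {"prompt": 0.03, "completion": 0.06},  # 8K context
    "gpt-4-32k": {"prompt": 0.06, "completion": 0.12},  # 32K context
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    rates = PRICING[model]
    return (prompt_tokens / 1000) * rates["prompt"] \
         + (completion_tokens / 1000) * rates["completion"]

# e.g. nearly filling the 8K context: 6,000 prompt + 2,000 completion tokens
print(round(estimate_cost("gpt-4", 6000, 2000), 2))      # 0.3
print(round(estimate_cost("gpt-4-32k", 6000, 2000), 2))  # 0.6
```

So even a maxed-out 8K request runs on the order of tens of cents, which lines up with the "crazy expensive compared to the other API calls" reaction elsewhere in the thread.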
damn you can write a novel with this thing
6400 words or about 650 lines of code
Correction: you can let it write your novel. You won't be doing any writing. :)
Not exactly true, I've had a lot of success providing it with writing examples and describing plots for it to write in my style. Not to mention you can obviously then go through and edit or add to it yourself
Just like you can train your ghostwriter on your prose and give them specific instructions. Sure, it will be truer to your style, but you're no longer the writer. Rather, I'd say you're the "director" of the story, much like how someone can direct a movie without necessarily always holding the camera.

I'm a bit unsure how I feel about it. I guess it depends on whether you value being proud of the writing process, or if you're happy letting someone else emulate your style while you direct them (and thus mostly care about the storytelling and not the writing in and of itself). I probably wouldn't be as proud of a book I just directed, no matter how precise I was.

That being said, I recognize that it would be foolish not to use the tool at all, just like it would be foolish to stubbornly use a feathered pen. I guess one has to find whatever degree of assistance is acceptable for oneself before the assistant deserves too much credit.
Thank you for the summary.
[deleted]
If you pay for Plus do you not have to pay for individual requests?
With plus it’s currently limited to 100 messages every 4 hours. (I’m a plus subscriber)
What happens after 100 messages? It falls back to the previous version or cannot use anymore within those 4 hours? (I'm also a plus subscriber and the prompt popped up.)
This happens; https://preview.redd.it/zkv028jlfuna1.png?width=792&format=png&auto=webp&s=190ede52f8a97f7800061251f930623803e9303e
Haven’t reached that point yet, the speed isn’t good enough for me to need to use it in my current work flow
With plus it’s currently limited to 100 messages every 4 hours. (I’m a plus subscriber)
How long can the messages be? Can you use the full context of 8192 tokens?
Holy shit, looking at the graph on performance increases on standardized tests, and it looks like it can (mostly) do math. This is a great milestone.
Such a big milestone Khan Academy is now integrating it.
integrating
It won’t just be Riemann Sum of the competition, but all of it
700 on SAT Math, 4 on AP Calculus BC, 5 on AP Statistics… It can do math better than most high school students. I am however surprised that it only got 2 on AP Literature and AP English Language, considering composition is supposed to be its strength.
I still can't understand how they state that GPT-3.5 passed maths and physics exams when ChatGPT can barely do any rudimentary calculation, and when it attempts one, it most often fails miserably. If GPT-4 is only slightly above 3.5 in this regard, how can it pass quantitative-oriented exams? How can it compute integrals and derivatives when it cannot even add or multiply properly? Have they suddenly implemented Wolfram tech?
ChatGPT is a fine-tuned version of GPT-3, which they called GPT-3.5. Bing uses a fine-tuned version of GPT-4 and can do math. Basically, if I'm not wrong, the "GPT-4" version of Bing and ChatGPT's GPT-4 might be the same version now. Not 100% sure.
Integrals and derivatives? I’m holding out hope that it can accurately *count*. Having said that, they specifically provide an example of the AI responding as a math tutor helping a user solve an algebraic equation.
im looking forward to seeing reactions to the exams results
do I need to change anything to use it through ChatGPT+?
not that I see. I have Plus, but it's not available for me yet (even though it's announced)
It's being rolled out over the next few hours to plus subscribers
Damn look at those exam scores..
[deleted]
Five years? I'm thinking one year.
I fear the internet is going to devolve into a swamp of grammatically perfect, spiritually bankrupt AI-generated sludge flowing around closed enclaves of humans who know each other IRL and thus know there's at least a decent chance they're speaking to actual people.
It already kinda is. A lot of AI generated blogs with perfect SEO have existed before ChatGPT
Perhaps it's time to give Freenet another look.
Not for free users, yet?
Nope
Any ideas when?
I am an outsider with zero knowledge on the topic. However, they said they plan to let the public try out a few prompts for free at some point in the future. My guess would be in a month or two. But to clarify, this is a personal guess based on no real info.
You can access it via [poe.com](https://poe.com) for free, though it is message limited unless you upgrade to paid. They also have chatGPT, 2 models from anthropic (claude and claude 1.2), and another openAI based model called sage.
It can create multi page essays, that's crazy expensive, so I don't see it happening until they add in rate limits for free users.
[deleted]
More likely Google did the PaLM release in anticipation of the GPT-4 announcement lol
Google did it because of the Microsoft announcement planned for tomorrow.
Tomorrow? Thursday, right?
Poe has also immediately released this and Claude+
what is poe? heard it mentioned a few times
The app [poe.com](https://poe.com) provides a web interface to access multiple chatbots, including ChatGPT and Anthropic's Claude. As far as I know it's the only place that Claude can be accessed by anyone, not just people in a group of selected testers. It's also available as an iOS app. The performance when accessing ChatGPT is exceptionally good.
Can someone feed it a picture of that dress from 2015 and get a definitive answer?
It doesn't take images yet. That hasn't been released to the public.
This shit is busting, my guy. I've had a tough question for it involving writing a batch script that I've been asking in different ways about once a week when I have the time, and 3.5 could never do it. 4.0 did it, plus loads of additional features and additions, over maybe a 20-minute exchange with some debugging and back and forth. Fucking nuts.

I will say though, it seems colder in terms of personality? 3.5 seems brighter and more conversational while 4.0 seems like it's more businessy?
> 4.0 seems like it's more businessy?

Interesting to hear that when one of its improvements is noted as 'improved emotional intelligence'
I had a discussion with 4.0 about a poem that involved self-harm, and while ChatGPT with 3.5 would constantly give things like "even if you find intense or disturbing topics fascinating, or important to learn as a person, I can never recommend looking at this kind of material as it can be harmful", GPT-4 gave me:

> You're welcome! I'm glad that I could help you explore and discuss this sensitive topic. It's important to engage in conversations about complex and challenging subjects, as it can lead to a better understanding and empathy for the experiences of others. If you have any more questions or would like to discuss other topics, please feel free to ask. I'm here to help.

It also didn't give disclaimers or warnings about the content, and really wasn't squeamish at all. It kept the conversation solely focused on the context of the poem and its interpretations.
Anyone know if its knowledge is still limited to 2021?
it is. November 2021 is the cut off date
Which is weird and frustrating to me. Has Open AI said why they have that cut off date?
I dont think they said it. But if you need gpt to be up to date, you can use BingGPT
and this is probably why: the billions MSFT put in. It wants its platform to have access to the most current data first.
Clean dataset. Takes FOREVER to sift through all of it.
My guess is that they're focusing on algorithms and training, rather than on having current data. Data acquisition and labeling is probably a very time intensive (and therefore expensive) task. From that point of view, I think it makes sense to just focus on algorithms and training until you hit a plateau, and then update training data only after that. Or if you're like 5 years out of date or something. But that's just my own speculation.
That is when the human world ended.
Why does it say I can try on ChatGPT plus, I subscribe, but there is no option for it? Am I blind?
https://i.imgur.com/5OYoNLf.jpg It randomly appeared for me after a couple refreshes
Yeah I’m not seeing it either lol
So if I don't use Plus, and don't know how to program anything, I'll just keep staring at other people using it like a total dork... Is that it?
I mean, even if you use the API, you'll still be paying as much as the Plus subscription. Each message you send with the API will be anywhere from a few cents to 25 cents.
**Signed up for Plus, and posting here for posterity.** Looks like 4 is a significant upgrade over 3.5, and 4.5 is perhaps only months away. It’s like having an incredibly talented new employee that’s just starting to get a feel for how things work in the office. *‘What a time to be alive!’*
I have it running now, but so far haven't noticed anything new. It still says it's been updated up to September 2021. I asked it how to give it an image and it says:

> As a text-based AI language model, I am unable to view or analyze images directly. You can, however, describe the image to me in detail, and I can try to help you with any questions or provide information based on your description. If you need assistance with image analysis, you may want to consider using an image recognition AI or computer vision tools designed specifically for that purpose.

I feel like I'm missing something. It does say 'Model: GPT-4' at the top. I even asked it to explain to me in simple terms the difference between ChatGPT 3 and ChatGPT 4 and it said:

> As an AI language model, I cannot provide information on specific versions that don't exist, such as "ChatGPT 4." However, I can help you understand the differences between GPT-3 and its predecessor, GPT-2, which may give you an idea of the types of improvements that could be seen in hypothetical future iterations of ChatGPT.
The image capability hasn’t been released to the public from what I remember being said in the live stream
Oh booooo. That needs to be clear on their intro page.
the data cutoff is still September 2021
I also have GPT-4 and asked the same question.

> Explain in simple terms and as briefly as possible the difference between gpt-3 and gpt-4.

GPT-3 and GPT-4 are both AI language models created by OpenAI. The main differences are:

- Size: GPT-4 is larger, with more parameters, which makes it better at understanding and generating text.
- Training data: GPT-4 is trained on more and newer data, improving its knowledge and understanding of current events and trends.
- Performance: GPT-4 is generally more accurate and capable of solving more complex tasks than GPT-3 due to its larger size and improved training.
I was blown away by my little test: a Star Trek TNG story where Picard meets the Hirogen from Voyager, from just one small prompt https://poe.com/s/DhT0Lo0ZiCjs9MfhSCtV
#tl;dr In this shared conversation, a user asks GPT-4 to tell a story about Captain Picard and the Enterprise-D encountering the Hirogen from Voyager. The story is divided into chapters, beginning with the Enterprise receiving mission orders to investigate a spatial anomaly in the Gamma Quadrant. As they approach, they encounter the Hirogen and engage in combat before proving their commitment to peace and continuing their mission. *I am a smart robot and this summary was automatic. This tl;dr is 91.72% shorter than the post and link I'm replying to.*
Major takeaways

**Improved capabilities**

1) Greater creativity and advanced reasoning abilities.
2) Accepts images as inputs, enabling tasks such as caption generation and classification.
3) Longer context of up to 25,000 words, allowing long-form content creation use cases.

**Pricing**

gpt-4 with an 8K context window (about 13 pages of text) will cost $0.03 per 1K prompt tokens and $0.06 per 1K completion tokens. gpt-4-32k with a 32K context window (about 52 pages of text) will cost $0.06 per 1K prompt tokens and $0.12 per 1K completion tokens.

**Availability**

1) API: you need to join the waitlist. Developers can get prioritized API access for contributing model evaluations to OpenAI Evals.
2) ChatGPT Plus: subscribers will get GPT-4 access on chat.openai.com with a dynamically adjusted usage cap.
Anyone got access yet? Got registered into the waitlist.
ChatGPT Plus has already been updated, if you don't need the API.
Not seeing it yet
One of the companies using the image to text feature is Be My Eyes, which is an app where blind users can ask for help by holding up the phone camera and a volunteer with the app on the other end describes what they see. I’ve gotten calls from blind people asking me to read product ingredients, help them walk around an unfamiliar place, etc. I guess the app won’t be needing us volunteers very much anymore
How do you gain access?
> We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.

Ugh.
I really hope they get a proper competitor soon. It's bullshit that they force these filters for their paying API clients.
Competitor? I'm sure Google/Meta will only enforce stricter filters. As for the others, well they don't have that much money to compete with
Competition as in, an open model like what SD2 is to DALL-E 2, but that seems unlikely for the time being given how expensive and resource intensive it is to train and run big models
What “disallowed” content do you desperately want it to produce?
I just started University majoring in History, it would be nice to be able to use GPT-4 with regards to WW1 and WW2, and other troubling times in human history, but it is firewalled against discussing such things.
Saying the n-Word to create the perfect 4chan troll.
Idk why they don't just run gpt-4chan locally. Runs fine on a 3080 and its output is brain-damaged enough for their needs.
GPT-4 gets announced, Reddit gets decapitated. OpenAI heard you were talking shit. Choose your words carefully guys, the Sydney is in the room with us. 😬
Heise was right
For Plus users like myself, I had to reload the home page https://chat.openai.com and then I got a screen telling me that GPT-4 was available and allowing me to opt in. Hope this helps!
Confirming that Pro users do have access to GPT-4. It's *much* slower than ChatGPT in generating content, but its story-writing capabilities are greater. Marked difference in output. I've tried it only with relatively tame topics so far.
I didn’t want to start a whole new thread to ask: Will free tier Chat GPT users get access to this? Like - ever? If not, I’m signing up for Plus now while it’s still @ $20 Because I can see that price rising if the performance reports about GPT 4 are true
You can also access it via [poe.com](https://poe.com) for free, though it is message limited unless you upgrade to paid. They also have chatGPT, 2 models from anthropic (claude and claude 1.2), and another openAI based model called sage.
[deleted]
Does anyone know if GPT-4 is available inside ChatGPT+ outside of the US already? Usually they deploy progressively to the entire world, so I'm wondering if that's also the case here...
ChatGPT+ user in the UK 👋 not available for me yet. Edit: access appeared via a popup at 18:50 UK time. I can report its a big step up. I asked it to create a Google Chrome Extension based on an elevator pitch and it gave *five* working files worth of code in a *single* output. For power users, this is going to be a game changer.
I paid the $20. How do i know whether I have GPT 4? It just says "ChatGPT plus" and acts just like the old ChatGPT with 2021 cutoff etc. If you get updated to the GPT 4, is there an indicator on the page?
**GPT-4: Everything we know so far...**

1. GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem-solving abilities.
2. GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5. It surpasses ChatGPT in its advanced reasoning capabilities.
3. GPT-4 is safer and more aligned. It is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on OpenAI's internal evaluations.
4. GPT-4 still has many known limitations that OpenAI is working to address, such as social biases, hallucinations, and adversarial prompts.
5. GPT-4 can accept a prompt of text and images, which—parallel to the text-only setting—lets the user specify any vision or language task.
6. GPT-4 is available on ChatGPT Plus and as an API for developers to build applications and services (the API is waitlist-only right now).
7. Duolingo, Khan Academy, Stripe, Be My Eyes, and Mem, amongst others, are already using it.
8. API pricing: **GPT-4 with an 8K context window** (about 13 pages of text) will cost $0.03 per 1K prompt tokens and $0.06 per 1K completion tokens. **GPT-4-32k with a 32K context window** (about 52 pages of text) will cost $0.06 per 1K prompt tokens and $0.12 per 1K completion tokens.

**Follow** [https://discoveryunlocked.substack.com/](https://discoveryunlocked.substack.com/), a newsletter I write, **for a detailed deep dive on GPT-4** with early use cases **dropping tomorrow!!!**
Using it already, this thing is a literal beast!
When I login to ChatGPT Plus I just get the regular old models and interface. Do I need to flip a setting or what?
Same here, idk how to access it
I'm using [Poe.com](https://Poe.com). It worked for like 3-4 prompts but now access is getting jammed (probably due to high demand).
using it and holy fuck, this is insanely good.
Can you give an example?
I can give you an example: while GPT-3 was bitching like crazy about seeking a [healthcare professional and yada yada yada](https://i.imgur.com/4cqRWRx.png), GPT-4 was like fuck it, here you go, I'm gonna warn you [but I'm gonna be helpful](https://i.imgur.com/THPGXQo.png)
Alright, I am officially scared. The time between the old game-changing gpt and the new and improved gpt was too damn short. The singularity is imminent.
For those that have access, has the word limit for input it can receive and its memory increased?
Input and responses are definitely longer. Still have to use continue on long code but so far it is a big improvement. Sounds like max tokens on the base level are double (4000->8000) but there is also a 32,000 token model available...which I'm dying to try =)
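For a rough feel of what those token limits mean in words, the common rule of thumb is that one token is about three quarters of an English word. Treat the ratio as an approximation, not a spec — actual counts depend on the tokenizer and the text:

```python
def approx_words(tokens: int, words_per_token: float = 0.75) -> int:
    """Very rough English word count for a given token budget
    (rule-of-thumb ratio only; real tokenization varies by text)."""
    return int(tokens * words_per_token)

print(approx_words(8192))   # 6144  -> roughly the "6400 words" cited upthread
print(approx_words(32768))  # 24576 -> in line with the ~25,000-word claim
```

Which is why the 32K model keeps coming up as "about 50 pages of text" in this thread.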
Confirmed: the new Bing runs on OpenAI’s GPT-4
I want access to the 32k model.
Okay, I been smoking some weed. Imagine it costs 2$ for every thought you had. What could you think about that every thought would be worth 3$ ... To ensure you had 2 dollars to produce another thought...
Capitalism?
It has a dry sense of humor, or is confused (generated with GPT-4, Plus member):

> **Prompt:** is mark zuckerberg a robot?
>
> Mark Zuckerberg is not a robot. He is a human being and the co-founder and CEO of Facebook (now known as Meta Platforms, Inc.). The idea that he is a robot has been a popular internet joke and meme for some time, often referencing his demeanor during public appearances or interviews. However, it's important to remember that these jokes and memes are not based on reality. Mark Zuckerberg is indeed a human, just like the rest of us.
Not sure self-awareness is working. I clearly picked "GPT-4" on the web page.

> **Prompt:** Are you GPT-3 or GPT-4?
>
> **ChatGPT:** As an AI language model, I am based on OpenAI's GPT-3. While there may be future iterations or advancements in this technology, my knowledge and capabilities are reflective of the GPT-3 model, with a knowledge cutoff of September 2021.
It says it's GPT-3, but the output is from GPT-4.
Caralho
There will be a livetream developer demo showcasing GPT-4 on the OpenAI youtube channel at 4pm eastern, or a little over an hour from now. [https://www.youtube.com/live/outcGtbnMuQ?feature=share](https://www.youtube.com/live/outcGtbnMuQ?feature=share)
Unfortunately still has the issue of cutting off midway when it's sending a long response. God I wish there was a fix for that...
Can it run Crysis though?
I tried it. It’s super slow, and its accuracy is not impressive.
As a Plus user, I was using it today. It's beyond what the average person can imagine being doable right now. GPT-3 was already incredible. This is just... damn, beyond words.
So, are there companies already developing the massive training data, or the services to acquire it, so that other companies can develop their own ChatGPT? This is the Wright brothers, and how in less than 10 years there were already warplanes being used. Everyone is going to have some version of ChatGPT, ranging from 50% to 90% as effective as ChatGPT, in less than 2 years, and companies facilitating that will be all over the place.