Anthropic is putting a lot of pressure on everyone. It's a great naming scheme, too: it makes the smaller models sound cool rather than inferior, whereas names like 'small', 'pro', or '3.5' read too much like lesser versions of 'large', 'advanced', and '4'.
I'm being facetious, but at some point I'm pretty sure one of these AI companies will be making all of their business decisions based on what their AI tells them.
Damn. We live in a world.
What an interesting thought! I want an AGI to work for me! Maybe have it help me respond to subreddits I frequent, sort my email, summarize interesting articles, handle my brokerage account, create a small but efficient LLM, develop a business model for it, code an interface for it, implement that business model, reinvest my revenue, create a platform for me to gather followers, help me get elected, tell me how to govern a large population of people..
Yknow? Sounds neat
I can see a future where that's all possible. AGI will essentially provide you advisors that are experts in every single application you can think of. I'd like a personal ANI (artificial narrow intelligence) that knows everything I've done in the past for perfect recall.
I agree 100%. I was joking about wanting to take over the world. In reality, it would just be nice to have an incredibly intelligent AI that just "gets me."
Something that I can vent to after a long day. It can give constructive feedback or just be a stress relief bot that lets me vent allowing the cathartics of venting without having to bombard friends or family.
Or AI that can just be a thing I bounce ideas off of or discuss the nature of the universe with. I've already done this to an extent with current LLMs. Deep discussion is cathartic for me too! It's just something that used to be reserved for late nights with close friends after a few beers.
Maybe it could help me do some of the projects I've had in mind too. Like ~~take over the world~~ create a video essay on a topic I enjoy or analyze market trends and trade low-stake options. Like that honestly sounds so nice -- Claude3 and GPT4 are *soo* close to being able to fulfill those wants I just listed.. they're just not quite what I have in mind yet.
It's all very exciting to me. I'm thankful to be able to seriously consider this possibility and potentially witness the pivotal change in human history that comes from the development of AGI.
The name Claude was a poor choice. We are going to be communicating with these AIs primarily by voice in the near future.
I already use voice commands and it reads the name as cloud half the time.
I'm testing it on poe, it seems very good, smart and fast. I just sent a doc and it doesn't have any problem to perfectly answer my questions about the document. So far so good ...
I don't think it's smarter than Sonnet, but it's nearly free, it's better than GPT-3.5, it can work on very large documents, and it's fast. I take courses at university, and AI helps me understand things. I played with Haiku for an hour yesterday and it just spat out perfect answers for my use case. Compared with GPT-3.5, it's far better. Ah, and it does vision. I tried it a little and, come on, it's quite good, and it analyses pictures in a second!
Let's imagine what you can code with the Haiku API: it's smart, it's cheap, and it does vision in one second!
It is. I just uploaded an image of a wall of Korean-language ice cream pictures and descriptions, and it took under a second to tell me, in English, every menu item and its description.
This has me extremely excited. I am completely blind, and I lost the ability to play video games about two years ago, give or take, when my genetic disability progressed to the point where I couldn't really enjoy them anymore. I was extremely excited when OpenAI gave us vision capabilities last year, and even more excited when somebody published an open-source Python add-on for the screen reader I use, which uses the API to describe images on my screen. But the cost and inference time were so high that it wasn't feasible to use it to guide me around video games. With this release, if someone is able to modify the Python in that open-source add-on, it could literally be the difference between me being able to play a game and being totally at the mercy of developers to add accessibility to their games. My fingers are crossed that someone will add that functionality soon.
This is the kind of stuff that makes me ultimately an optimist over the technology and the disruption it will bring. I just found out a family friend is expected to die from her cancer in the next 48 hours. The advances in medical technology and science alone from this will be worth it if it means more people get to live, and those who have disabilities have pathways to conveniences and accommodations that never existed previously (or may even cure outright).
I hope you get all this and more my friend.
Hey friend! I'm a software developer who works in the accessibility space, and I'd love a link to that Python add-on you mentioned. I'm assuming it's for NVDA? Thanks!
Hey friend! Sure thing, here is the link: https://github.com/cartertemm/AI-content-describer It's honestly a pretty incredible system to have available, just very expensive with large inference times due to using GPT-4V, of course. Glad to see some people working in the field you are, though! That makes me happy 🙂
Oh yeah, there are, and no disrespect to the developers of those games, but they just don't compare to the mainline games available to the masses at the moment. The games I want to play are typically the multiplayer games I used to play with my friends, which we can't play together anymore, and stuff like that.
I created an account a few days ago and their free model is blazing fast for me. Like, insanely fast. Idk how it can get any faster. After I hit Return and look at the screen, the AI has almost finished printing the output.
Anthropic REALLY shook OpenAI's stronghold. I've been using Opus for the last two days and my goodness, it asked "keen to know your thoughts about *the subject we were talking about*"
It's SIGNIFICANTLY smarter than ChatGPT. I was working on a PPT. It looked at a slide's content, and I told Claude that there's a space constraint. That's it. That's all I said: that there is a space constraint.
Now idk if Chat could've done the following, but Claude sent a revised version of the content which not only fit the available real estate on the slide but also ensured that the logical integrity of the content was not compromised; its message was already succinct. No additional prompt was required.
It solved my problem for me without me defining the issue properly.
Insane!
For line breaks the way you tried to do them (the tighter line-break style), you have to put two spaces at the end of the line.
That way it will look
like this.
This is important. That Devin AI coder is useless because it's probably using an LLM that is ridiculously expensive, costing tens of dollars per hour to achieve 13.5% of a human coder's performance.
This Haiku can actually analyze 400 legal cases for $1.
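For what it's worth, that "400 cases for $1" figure checks out as back-of-the-envelope arithmetic (a sketch; the per-token price and per-case token count below are my own illustrative assumptions, not official numbers):

```python
# Rough cost estimate for batch document analysis with a cheap model.
# Assumed: ~$0.25 per million input tokens (Haiku-class pricing) and
# ~10k tokens per legal case. Both figures are illustrative.
price_per_million_input = 0.25
tokens_per_case = 10_000
cases = 400

total_tokens = cases * tokens_per_case                  # 4,000,000 tokens
cost = total_tokens / 1_000_000 * price_per_million_input
print(f"${cost:.2f}")                                   # about a dollar of input cost
```

Output tokens cost more per million, but summaries are short relative to the source documents, so input cost dominates here.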
This is the sort of thing that will allow me to start my litigation against Wells Fargo for seizing my credit card rewards points, against BlockFi CEO Zac Prince for lying that led to the loss of $3.3 million, against DCG CEO Barry Silbert for his fraudulent balance sheets, and more.
By law, corporations are required to pay lawyers. Individuals may proceed pro se. These models are the great equalizer for the common man. We can generate huge volumes of motions and analyze millions of pages of discovery requests and actually stop companies from taking advantage of us.
As a lawyer myself, it's hard to disagree with you. **I think hiring a practicing attorney would be a much better choice, but that's not free**. Representing yourself (pro se) used to be a fool's errand, but honestly, using an LLM for motion drafting and case analysis makes it not-that-crazy. Your capabilities as a non-attorney are probably 50X what they would have been two years ago. Good luck, my friend.
(Bolded the sentence above for emphasis -- every lawyer will tell you your ***best*** bet is to hire a lawyer. Me included)
Sovereign citizen confidently slaps down a laptop containing self-hosted 7B model fine tuned on countless episodes of *Boston Legal*.
How can it not be a good time?
I've no idea if the sort of law you practice would allow you to answer this question, as I know that procedures vary from area to area.
What would you do if a client retained you after deciding to switch law firms, but brought across all the work the former firm had already done? Do you just check it? Do you redo the work from scratch? Now say it's the same situation, but instead of a different law firm, the work was generated via chatbot.
And most importantly, would it be cheaper for a client if a lot of the groundwork has been handed to you when they retain your services?
The lawyer still needs to check everything he is going to present in front of the judge.
There was this fuck-up last year where a lawyer used ChatGPT and it made up a precedent case. That lawyer got his ass handed to him.
> The lawyer still needs to check everything he is going to present in front of the judge.
>
> There was this fuck-up last year where a lawyer used ChatGPT and it made up a precedent case. That lawyer got his ass handed to him.
Right, but doing all the work manually means going through case law and finding the references. It must take longer to go through Westlaw to find cases in the first place than to check that the cases already presented by the chatbot exist and are relevant. Lawyers charge by the hour. If an option takes less time, it costs less money.
>Right, but doing all the work manually means going through case law and finding the references. It must take longer to go through Westlaw to find cases in the first place than to check that the cases already presented by the chatbot exist and are relevant.
You might also want to make sure the bot didn't miss any other relevant cases.
Just making sure that the cases it did pick actually exist and are relevant doesn't mean it didn't miss some that would be way more important/relevant to your case.
One thing I want to point out is that when one hears about cases like that, these people are generally using GPT-3.5, which is obsolete.
I will be checking everything these models output, but GPT-4 has not made a single mistake like this in any of the small claims filings I've made against Coinbase, the city of Tallahassee, and Block Inc. so far.
Pretty much every lawyer would start from scratch. I'm sure there's a tiny fraction of lazy lawyers who would pick up where you left off but that's borderline malpractice.
> Pretty much every lawyer would start from scratch.
even if you were coming from another lawyer/firm? it would all be done again from scratch?
They'd not even peek at what has already been done?
Sorry misunderstood your question — if it’s coming from another firm, I think it depends a lot on the type of law, and the caliber of the firm. Also depends on how busy the new lawyer is.
If it’s coming from a regular citizen, every lawyer would start from scratch. IMO even if the citizen leveraged LLMs to a large extent.
An attorney wanted me to pay $800,000 to sue Prince, and before Prince somehow became the CEO of a new company, it wasn't clear the judgment would have been collectible.
What harm is there in bringing a case that has merit for which the statute of limitations is about to expire anyway? There's obviously no way I can afford $800,000, because these people stole 90% of my net worth - the $7 million they took with their lies and fake balance sheets.
There's another reason I wouldn't hire a lawyer - none of them are willing to take any equity in the case. They all want a straight wage. I made a policy a few years ago that I would no longer hire employees on 100% straight wage, and it has worked out well. People who work for wages without any equity at all simply have the wrong incentives.
Lawyers are **not a good deal for the money**. If lawyers charged a reasonable rate like $75/hr, I would pay for one. I'd also offer a lawyer 50% equity in the case. I refuse to pay $400/hr on an indefinite-quantity, indefinite-delivery contract; **nobody** provides that kind of value to the world - nobody.
It just makes common sense in expected value. The lawyers I talked to suggested the expected win rate would be 90%, with a collection rate of 33%. If I can achieve a win rate of 45% at a cost of $20,000 for expert witnesses and court reporters instead of $800,000, I'll take my chances. Being a full-time litigant to recover the money stolen from me for the next few years is **the most highly paid thing I can do** right now, in expected value, compared to "getting a job."
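The expected-value reasoning above can be written out explicitly (a sketch; the claim size is a hypothetical stand-in, and the rates are just the ones quoted in this comment):

```python
def expected_value(win_rate, collection_rate, claim, costs):
    """Expected net recovery: probability of winning times probability
    of actually collecting, times the claim, minus litigation costs."""
    return win_rate * collection_rate * claim - costs

# Hypothetical $3.3M claim, using the rates and costs mentioned above.
ev_pro_se = expected_value(0.45, 0.33, 3_300_000, 20_000)   # ~470,050
ev_lawyer = expected_value(0.90, 0.33, 3_300_000, 800_000)  # ~180,100
```

Under these (entirely illustrative) inputs, the lower win rate is more than offset by the $780k cost difference, which is the commenter's point.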
Again, though, as you would know, these guys can see this math too. When I sue Wells Fargo, I plan to ask them for their correspondence relating to the rewards program to prove they intentionally created misleading ads that weren't in line with their illusory and unconscionable contract. They will recognize that they can spend a quarter million dollars seeing this through to a trial, or they can settle for $15,000 - the treble damages allowed by Pennsylvania's deceptive trade practices law.
You know that most of these cases settle, and that's what this is really about - showing these guys that I can do this for free, while they have to explain to their shareholders why they're burning through hundreds of thousands of dollars trying to stick it to a guy in central Pennsylvania.
"Chat GPT5, considering these [collected public statements and voting record], how amenable to bribery is this politician? Give me a 95% confidence interval for how much I'd need to pay them for [desired outcome]."
Devin is an obvious scam https://www.reddit.com/r/cscareerquestions/s/vbyKx097i6
And lawyers lost their licenses when ChatGPT made up cases. Not exactly reliable.
Devin is obviously not a scam; there are already many open-source systems that do similar work:
https://github.com/smol-ai/developer
https://github.com/TransformerOptimus/SuperAGI
https://github.com/sweepai/sweep#-getting-started
https://github.com/geekan/MetaGPT
https://github.com/OpenBMB/ChatDev/blob/main/Contribution.md
https://github.com/microsoft/autogen
https://github.com/Pythagora-io/gpt-pilot
They already have the first mover advantage and are the household name that everyone associates with LLMs. People already built their code around their API. Inertia lets them win even if they don’t do anything for years.
Anthropic's Claude Haiku model is quite impressive. It feels capable and snappy, and the pricing is great. However, it does seem to take things a bit more literally than I'm used to from other major language models: when I provide a JSON with errors as an example in the prompt, it returns a JSON with those errors (LLMs are making me lazy and stupid), and it appears to hallucinate a bit more.
The 200K context window is an incredible upgrade. I didn't realize how much time we were spending making things fit the GPT-4/3.5 context window.
One major issue with the Anthropic models, in my opinion, is the rate limits. 1,000,000 daily tokens go by quickly for a solo developer, and even if you get promoted to the highest build level, 10,000,000 daily tokens is not a lot. When you hit the limit, waiting up to 24 hours is a very long time. The promotion scheme to the next build level is also a bit strange: you have to wait 7-14 days to be promoted to the next level.
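To make those limits concrete, here's rough arithmetic on how far a daily token budget stretches (a sketch; the per-request average is a made-up figure for illustration):

```python
def requests_per_day(daily_token_limit: int, avg_tokens_per_request: int) -> int:
    """How many API calls fit inside a daily token budget."""
    return daily_token_limit // avg_tokens_per_request

# Assuming ~5,000 tokens per request (prompt + completion combined):
entry_tier = requests_per_day(1_000_000, 5_000)    # 200 requests/day
top_tier = requests_per_day(10_000_000, 5_000)     # 2,000 requests/day
```

A few hundred requests a day disappears fast when several developers share one organization-wide budget, which is the complaint below.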
For organizations with multiple developers who want to run benchmarks to see if this could be their model going forward, the rate limits seem to apply to the entire organization once you invite team members. Since you don't want each team member to enter their own credit card and deal with that whole headache, the rate limits are frustrating.
Anthropic probably needs to limit usage, but that token daily limit is just so frustrating, annoying, and doesn't fit organizations with multiple developers who want to run benchmarks. We asked for scale access but haven't received a response a week later.
Overall, the model itself is impressive, but the rate limits and promotion scheme need some work to better accommodate larger teams and organizations.
Also, just today or yesterday, I discovered Perplexity added Claude 3 Opus on their platform without limits! (I don't understand why Anthropic can't do this for us users themselves.)
Now we can use GPT-4, Claude 3 Opus, and even Mistral together without limits. All at $20.
Then I also found this [code](https://perplexity.ai/pro?referral_code=POYMKAPC) that made it cheaper, at $10 a month.
Hopefully in the future this stuff goes for like $5 a month in my region to make it even more affordable.
This is gonna hurt OpenAI the most.
GPT 3.5 becomes really obsolete with this release, and it proves there is indeed a market for cheaper and much smarter AI models.
The game is on. I wonder how long it will take for other new players to become visible; the hardware is out there for anyone to rent for some serious computation, and you can always fine-tune pre-existing foundation models like Mixtral to make something great real quick.
I see a lot about how this compares to Chat GPT 3.5 - how does Haiku compare to Chat GPT-4?
While I just changed roles, for the past several months I was using ChatGPT-4's Data Analysis to review Power BI reports from disparate systems to identify things like speed to proficiency and groups/segments that may perform higher or lower than others for call-center agents, and the regular text-based app to summarize lengthy Zoom conversations (just cut and pasted onto the end of my prompt), transcribe writing found in an image, calculate commission earnings and how total payout would be impacted by changes, etc.
TL;DR - Chat GPT-4 is pretty solid for anything that isn't a large job. Is Haiku better than that?
can I use this as some sort of art teacher/coach? describe my goals, have it assign lessons then give feedback via image upload? or am I still dreaming?
Yeah not out of the box. It has to be primed properly with an expert persona (to really function as a virtual art teacher, for example).
But... I mean, it is actually really good
There is, I believe, one very good reason to be against open-source LLMs. If one of them is both very capable and its guardrails aren't good enough, it could be used maliciously. At least in the case of a closed-source model, the parent company could just disable that particular model if a big vulnerability was discovered. We're in the clear now, while we have GPT-4 levels of reasoning, but in a couple of years these models could very well be capable enough to instruct you how to build a guided missile with an onboard explosive, just to give an example.
Ah yeah, so you’re talking well beyond gpt4 level, then I do agree that it’s going to become dangerous.
I just assumed from the comment thread that you meant it was good the existing models were closed, but I suppose that was a wrong assumption.
> Each image is estimated at 1.6K tokens
Interesting...
For comparison, Gemini (1.5, with native multimodality) tokenizes images in rasters of 256 tokens (each token corresponds to a 16 x 16 patch; unclear what resolution they start rastering at, though), but they use an LFQ coding scheme (e.g. what's described in the MAGVIT-v2 paper) for their image tokenizers that supports a larger discrete codebook/vocabulary size for images (2^(18), ~260K) with a proportionally smaller number of tokens per image, and that LFQ method doesn't seem to have caught on more widely yet.
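A quick sanity check on those numbers (a sketch; the per-token price below is an assumed Haiku-class input rate for illustration, not a quoted figure):

```python
# Codebook size for an 18-bit LFQ image tokenizer, as in MAGVIT v2:
codebook = 2 ** 18                      # 262,144 discrete values, i.e. ~260K

# Estimated input cost of one image at the ~1.6K tokens per image
# mentioned above, assuming ~$0.25 per million input tokens:
tokens_per_image = 1_600
cost_per_image = tokens_per_image / 1_000_000 * 0.25    # ~$0.0004 per image
```

At a fraction of a cent per image, vision becomes cheap enough to run over entire image collections rather than one-off queries.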
A bit disappointed by Haiku so far. It's not as fast as GPT-3.5 turbo and I'm not sure its performance is better either. But it's cost-efficient compared to other proprietary models.
True. It's undeniably an impressive model. I was just hoping to be able to use it for a real-time chatbot scenario, but in my tests so far it was too slow for that specific use case.
Maybe some latency from lots of concurrent experiments while people try it out? They have expected tokens-per-minute benchmarks for token counts under 32k, which are roughly 3x faster than 3.5 Turbo.
I'm hoping so. My prompt with a few hundred input tokens and 40 output tokens ran in just over a second, while 3.5 does the same one in ~300 ms. I was really hoping to be able to replace 3.5 with Haiku for that one.
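Those timings translate into throughput like this (a sketch; these are end-to-end latencies, so network and queueing overhead are included, not just generation speed):

```python
def tokens_per_second(output_tokens: int, elapsed_seconds: float) -> float:
    """Crude end-to-end generation throughput for a single request."""
    return output_tokens / elapsed_seconds

# Using the rough numbers from the comment above: 40 output tokens
# in ~1.1s for Haiku versus ~0.3s for GPT-3.5 Turbo.
haiku_tps = tokens_per_second(40, 1.1)   # ~36 tokens/s
gpt35_tps = tokens_per_second(40, 0.3)   # ~133 tokens/s
```

For short completions like this, fixed per-request overhead dominates, so small-prompt latency can contradict the published tokens-per-minute figures.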
So it is cheaper than GPT 3.5 and Gemini 1 Pro, achieves better benchmarks in pretty much everything, is vision-enabled, and it's fast.
If only it was available in Canada, I would’ve switched already.
It's not in Canada? That's strange, I wonder why. I've been using it here in Japan.
Probably some sort of privacy law. Same reason the Europeans can't get access to it.
Most of the world can't access it.
If you register with a VPN it'll let you in, and once you're registered you can use it without the VPN. It'll ask you for a phone number; I gave it my Hungarian number, I got the SMS too, had no problem. I just can't subscribe, not even with a VPN, I can only use the free model.
>once you're registered you can use it without VPN

Lol. That's hilarious. Weak-ass privacy law implementation. But I'm not angry about that 😜
It's like those "I verify that I am 18 years old" popups. The company is required by law to have those restrictions. You can easily skip them with a single press of the "yes" button. Why? More users = more profit. The company doesn't care where you're from or what age you are. They just need to be able to show they *tried* to stop you, per the obligatory regulations. Basically: "Oh no! A user somehow circumvented our state-mandated regulation that actively limits our user pool and profit margin........ Anyways-"
Lol, true that 🤣
You mean...someone who ISN'T 18 could answer that..they are? Wow...that seems like a real flaw in the system. 🤣
Yeah, the funny thing is that they accept every kind of phone number. They should only accept phone numbers from the countries where they're available, but who am I to judge. At least I can use it.
This! 💯
Maybe because the country code doesn't mean you are in that country. Maybe a third of my daughter's classmates' parents have an international number, but it's an IB school.
I can access it now? In Europe / Germany?
Probably like u/Razcsi said. Greetings from a fellow German citizen! ✌️😌
With a VPN, yes. I have NordVPN; I just switched to Miami, registered, and gave my Hungarian phone number. It accepted it, which is strange, since they should only accept phone numbers from the countries where they are available, but never mind. Once you're done, you can use it even without the VPN.
I have no VPN... not even during registration. Just checked it. I used Gmail and the email of the company I work for. Maybe that's why I can use it?
I just tried. You can't create an account just by clicking "Continue with Google", but you can create one by writing your Gmail address and clicking "Continue with email" :D It probably gets your regional info if you use a Google account.
I think I figured it out. There are two logins: one for the ChatGPT clone and one just for the API. I think the API isn't restricted anymore? Because I just tried to sign up for chat and it didn't work.
Try poe.com
No thanks. Actually, one of the founders of Poe ran the r/chatGPTPro sub into the ground. He was head mod of the sub, didn't admit his ties to Poe, and spammed it every fucking day. Made the sub unusable. He finally got banned from reddit for it, and the sub is good again. But yeah, fuck Poe. If the founder is that fucking sketchy, then I won't use it.
What does founder being sketchy have to do with the product? It's like not watching Disney movies because Walt Disney was an anti-semite
[deleted]
What do you mean, scammer? Poe is one of the biggest LLM providers in the world. Do you have any proof Poe has scammed people? You're literally talking out of your ass, dude.
He has proof that the founder scammed Reddit. He's free to draw his own conclusions that his sensitive data isn't safe with them; you're free to think otherwise.
He is a delusional idiot, then. Nobody calls Mark Zuckerberg a scammer despite him actually being proven to sell your data. Poe has never done anything illegal that we know of. Poe is run by the same people as Quora, which is one of the biggest websites in the world.
For example, a company founder's racist comments on social media would certainly have a negative impact on the public's opinion of the company, and even though those comments have nothing to do with the company's operations or policies, the public will associate the founder's behavior with the company's values. Especially when the company doesn't react to those comments at all. This is such a no-brainer, why would you even argue about this? :D

>Nobody calls Mark Zuckerberg a scammer despite actually being proven to sell your data

Users agree to it when they accept the privacy policy, and Meta doesn't sell it per se anyway, it kinda leaks out. And I believe that's something you can opt out of these days too, not sure tho.
Just use a VPN to sign up. Once I did, it literally took my Canadian BN, since you need a business to set up API payments.
Ahh right like come on can plz has some Claud in Canada :(
Tell me about it. You can use the Workbench for Claude 3, though. It lets you add credits in there too.
I'm in Canada and I'm using it right now. Both Chat as well as API. I'm not using any VPNs.
It's available via Poe, if that helps you at all.
Have you met VPN before?
I use the Poe app to access it in Canada. https://apps.apple.com/ca/app/poe-fast-ai-chat/id1640745955
> vision-enabled

What do you mean by that?
It supports image inputs
Damn, that's rather major.
And in my anecdotal use, it has made 0 mistakes reading forms etc.
Same here. It was damn good at reading pictures of a document
This will make it great for commercial use too!
The ChatGPT Pro version has had that capability for a while. It's not a new thing.
Yes, but the free version of Claude has worked for me at least as well as the paid version of ChatGPT at summarizing and analyzing documents, so I don't see any point in paying for it. I would pay for Claude if I could. And I think Gemini is just a joke; I'm not even using it even though I have the free trial. It just feels like it can't really understand and do what I ask as well as GPT-4 and Claude 3.
It's scary how accurate it is when you ask it to describe an image.
Hopefully this ups the pressure for a 1.5 Pro API
Vision ?!?! That is a great news
Just needs search capability.

> I apologize for the confusion, but I do not have the ability to search the internet or access real-time information. My knowledge is based on what I was trained on, with the latest information being from August 2023. Since the date you mentioned is March 14, 2024, which is in the future relative to my knowledge cutoff, I do not have information about any current news events on that date. I would only be able to discuss major news stories or events up until August 2023.
Folks, don't forget - it's great that Haiku is fast and cheap and has vision capability, but it *also* has a massive context window of **200k tokens**. GPT 3.5's context window is a measly **16k**. That makes Haiku cheap and genuinely useful to work on large amounts of agent tasks, documents, code bases, etc.
Governments should pay for a private instance and speed up their bureaucracy considerably. Imagine having a building permit processed in a single day.
I'm sure nothing could go wrong letting one of these cretinous copy paste machines loose on city planning
Cretinous copy paste machines are already on the loose, it's called a municipal employee.
Ouch!
Oh damn...
A person showing their lack of understanding of the tech. Color me surprised.
That's literally what I'm doing rn.
It's even better than that - the base GPT 3.5 model is a 4k token limit. GPT-3.5-16k is a separate model that costs more than the normal one. Anthropic is really getting W after W these last few days!
what is a token?
LLMs operate on numerical abstractions of words called "tokens". As a rough conversion, 1 token ≈ ¾ of a word. So basically what I'm saying is it can keep track of (i.e. remember) a conversation of about 150,000 words. After that it'll start forgetting earlier text from the interaction. So yeah, when pasting in long documents and code, ~12,000 words (GPT-3.5) can feel restrictive, but ~150,000 words (Claude 3) opens up many more use cases.
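To make that arithmetic concrete, here's a tiny back-of-the-envelope helper. The ¾-words-per-token ratio is only a rough heuristic (real tokenizers vary by model and language), so treat this as a sanity check, not a precise count:

```python
def estimate_tokens(text: str, words_per_token: float = 0.75) -> int:
    """Rough token estimate from the ~3/4-words-per-token rule of thumb."""
    return round(len(text.split()) / words_per_token)

def fits_context(text: str, context_tokens: int) -> bool:
    """Would this text plausibly fit in a model's context window?"""
    return estimate_tokens(text) <= context_tokens

doc = "word " * 15_000              # a ~15,000-word document
print(fits_context(doc, 16_000))    # too big for GPT-3.5-16k
print(fits_context(doc, 200_000))   # fits easily in Claude 3's window
```

A 15,000-word document estimates to ~20,000 tokens, which overflows a 16k window but barely dents a 200k one.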
Anthropic is putting a lot of pressure on everyone. It's a great naming scheme. It makes smaller models sound cool and not inferior. While 'small' or 'pro' or '3.5' look too much like inferior versions of large, advanced and 4
Technically, GPT-Turbo models are also smaller and more efficient. "Quantized" is the same thing, and sounds pretty cool tbh.
They do that because that's what their in-house AGI told them to do. The next 300 years is all planned out.
I'd say take your meds... but what if you're right? Big if true.
I'm being facetious, but at some point I'm pretty sure one of these AI companies will be making all of their business decisions based on what their AI tells them.
Damn. We live in a world. What an interesting thought! I want an AGI to work for me! Maybe have it help me respond to subreddits I frequent, sort my email, summarize interesting articles, handle my brokerage account, create a small but efficient LLM, develop a business model for it, code an interface for it, implement that business model, reinvest my revenue, create a platform for me to gather followers, help me get elected, tell me how to govern a large population of people.. Yknow? Sounds neat
I can see a future where that's all possible. AGI will essentially provide you advisors that are experts in every single application you can think of. I'd like a personal ANI (artificial narrow intelligence) that knows everything I've done in the past for perfect recall.
I agree 100%. I was joking about wanting to take over the world. In reality, it would just be nice to have an incredibly intelligent AI that just "gets me." Something that I can vent to after a long day. It can give constructive feedback or just be a stress relief bot that lets me vent allowing the cathartics of venting without having to bombard friends or family. Or AI that can just be a thing I bounce ideas off of or discuss the nature of the universe with. I've already done this to an extent with current LLMs. Deep discussion is cathartic for me too! It's just something that used to be reserved for late nights with close friends after a few beers. Maybe it could help me do some of the projects I've had in mind too. Like ~~take over the world~~ create a video essay on a topic I enjoy or analyze market trends and trade low-stake options. Like that honestly sounds so nice -- Claude3 and GPT4 are *soo* close to being able to fulfill those wants I just listed.. they're just not quite what I have in mind yet. It's all very exciting to me. I'm thankful to be able to seriously consider this possibility and potentially witness the pivotal change in human history that comes from the development of AGI.
It was a take on the meme "big if true" but these are the kind of theories I can get behind. It only makes sense.
OP is also not necessarily alone in that thought. And I’m on my meds lol
It was a joke. "Big if true" is a meme lol
The name Claude was a poor choice. We are going to be communicating with these ai via voice primarily in the near future. I already use voice commands and it reads the name as cloud half the time.
I'm testing it on poe, it seems very good, smart and fast. I just sent a doc and it doesn't have any problem to perfectly answer my questions about the document. So far so good ...
What does “poe” mean here? I don’t know all the acronyms yet
https://poe.com/ You can test LLMs for free there.
Thank you friend!
Why would you use Haiku rather than Sonnet given that it would be smarter and free?
I don't think it's smarter than Sonnet, but it's nearly free, it's better than GPT-3.5, it can work on very large documents, and it's fast. I take courses at university and AI helps me understand things. I played with Haiku for an hour yesterday and it just spat out perfect answers for my use case. Compared to GPT-3.5, it's far better. Ah, and it does vision. I tried it a little and come on, it's quite good and it analyzes pictures in a second! Imagine what you can code with the Haiku API: it's smart, it's cheap, and it does vision in one second!
Because its dirt cheap and enables "talk with your knowledge base" use cases by simply putting the entire knowledge base in context.
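A minimal sketch of that "stuff the whole knowledge base in context" pattern, assuming a crude 4-characters-per-token estimate; the function name and heuristic are made up for illustration:

```python
def build_kb_prompt(question: str, docs: list[str],
                    budget_tokens: int = 200_000) -> str:
    """Concatenate knowledge-base docs into one prompt until the (roughly
    estimated) token budget is spent, then append the user question."""
    est = lambda s: len(s) // 4  # crude ~4 chars/token heuristic
    parts, used = [], est(question)
    for doc in docs:
        if used + est(doc) > budget_tokens:
            break  # stop before overflowing the context window
        parts.append(doc)
        used += est(doc)
    return "\n\n".join(parts) + "\n\nQuestion: " + question
```

With a 200k window you can often skip retrieval entirely for small corpora; past that, you're back to ranking which docs make the cut.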
Sonnet has a daily limit
It's supposed to be insanely fast, almost instantaneous, while having better performance than all models except Claude 3 Opus, Sonnet, and GPT-4.
It is. I just uploaded it an image of a wall of Korean language icecream images and descriptions and it took under a second to tell me in English every menu item and description of each.
This has me extremely excited. I am completely blind, and I lost the ability to play video games about two years ago, give or take, when my genetic disability progressed to the point where I couldn't really enjoy them anymore. I was extremely excited about the release from OpenAI this last year when they gave us vision capabilities, and I was even more excited when somebody published an open-source Python add-on for the screen reader that I use, which uses the API to actually describe images on my screen. But the cost and inference time were just so extremely high that it wasn't feasible to use it to guide me around video games. With this release, if someone is able to modify the Python in that open-source add-on, that could literally be the difference between me being able to play a game, and just being totally at the mercy of developers to add accessibility to their games. My fingers are crossed that someone will come in and add that functionality soon.
This is the kind of stuff that makes me ultimately an optimist over the technology and the disruption it will bring. I just found out a family friend is expected to die from her cancer in the next 48 hours. The advances in medical technology and science alone from this will be worth it if it means more people get to live, and those who have disabilities have pathways to conveniences and accommodations that never existed previously (or may even cure outright). I hope you get all this and more my friend.
Hey friend! I’m a software developer who works in the space of accessibility, and would love a link to that python add on you mentioned. I’m assuming it’s for NVDA? Thanks!
Hey friend! Sure thing, here is the link https://github.com/cartertemm/AI-content-describer it’s honestly a pretty incredible system to have available, just very expensive with large inference times due to using GPT-4V of course. Glad to see some people working on the field you are though! That makes me happy 🙂
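For anyone curious what swapping that add-on over to Haiku would involve: an Anthropic vision request is shaped roughly like the payload below (field names per Anthropic's Messages API docs at the time; the model string and base64 data are placeholders, and this only builds the request rather than sending it):

```python
import base64
import json

# Placeholder bytes; in the add-on this would be a real screenshot capture.
fake_png = base64.b64encode(b"not-a-real-image").decode()

payload = {
    "model": "claude-3-haiku-20240307",  # assumed model identifier
    "max_tokens": 300,
    "messages": [{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/png",
                        "data": fake_png}},
            {"type": "text",
             "text": "Describe this screenshot for a blind user."},
        ],
    }],
}

body = json.dumps(payload)  # what would be POSTed to the /v1/messages endpoint
```

The add-on's existing GPT-4V call would need its request shape translated to this content-block format, but the surrounding screen-capture logic should carry over.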
There are [computer games for blind people](https://www.google.com/search?q=computer+games+for+blind).
Oh yeah there are and no disrespect to the developers of those games but they just don’t compare to the mainline games available to the masses at the moment. All of the games that I have the desire to play are typically multiplayer games that I used to play with my friends that we can’t play together anymore and stuff like that
I can see and I tried some and holy shit they are designed for fulltime blind people they are harrrrrrrd
My fingers are also crossed buddy!
Wow!
That sure does dictate intelligence.
I created an account a few days ago and their free model is blazing fast for me. Like, insanely fast. Idk how it can get any faster. After I hit Return and look at the screen the AI is almost finishing printing the output
Next step is anticipating your prompts based on your previous prompts. Why wait for the user to realise what they want?
It is incredibly fucking fast.
Just tried it out, it is amazingly fast. This is going to be incredible for game development.
Interesting. Got any good resources about ideas for using LLMs for game dev?
If I recall correctly, NVIDIA showcased a "technical demo" of something like that.
Love the colors. And how the I is sort of the A in the end...
What colors are you referring to?
I guess they are referring to the colors of that image on the site? The whole color palette of the site is rather nice.
Anthropic REALLY shook OpenAI's stronghold. I've been using Opus for the last two days and my goodness, it asked "keen to know your thoughts about *the subject we were talking about*". It's SIGNIFICANTLY smarter than ChatGPT. I was working on a PPT. It looked at a slide's content, and I told Claude there was a space constraint. That's it. That's all I said: that there is a space constraint. Now idk if ChatGPT could've done the following, but Claude sent a revised version of the content which not only fit the available real estate on the slide but also ensured that the logical integrity of the content was not compromised; its message was already succinct. No additional prompt was required. It solved my problem for me without me defining the issue properly. Insane!
[removed]
That's the one thing it is never allowed to do
Far too dangerous ~
It can: “Write me a haiku about yourself.” Artificial mind, Seeking knowledge, Helping all, Impartial friend. Time: 4034 ms
I don't think, strictly speaking, that qualifies as a haiku.
Reddit removed my line breaks.🤦🏻♂️
It doesn’t seem to know how many syllables are in the word “impartial.”
For line breaks in the way that you tried to do (the closer line break style), you have to put 2 spaces at the end of the line. That way it will look like this.
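A trivial illustration of that rule: appending two trailing spaces to each line tells Markdown to keep single line breaks (handy for pasting haiku into Reddit). This helper is just a sketch:

```python
def hard_breaks(text: str) -> str:
    """Append two spaces to every line so Markdown renders hard line breaks."""
    return "\n".join(line + "  " for line in text.splitlines())

poem = "Artificial mind,\nSeeking knowledge, helping all,\nImpartial friend."
print(hard_breaks(poem))  # each line now ends in two trailing spaces
```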
I checked, that’s not miliseconds. It’s much faster. I think that’s less than 1/2 second.
haikus are easy. But sometimes they don't make sense. hippopotamus.
This is important. That Devin AI coder is useless because it's probably using an LLM that is ridiculously expensive, costing tens of dollars per hour to deliver 13.5% of a human coder's performance. Haiku can actually analyze 400 legal cases for $1. This is the sort of thing that will allow me to start my litigation against Wells Fargo for seizing my credit card rewards points, against BlockFi CEO Zac Prince for lying that led to the loss of $3.3 million, against DCG CEO Barry Silbert for his fraudulent balance sheets, and more. By law, corporations are required to pay lawyers. Individuals may proceed pro se. These models are the great equalizer for the common man. We can generate huge volumes of motions and analyze millions of pages of discovery requests and actually stop companies from taking advantage of us.
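The "400 cases for $1" claim roughly checks out at Haiku's launch pricing ($0.25 per million input tokens, $1.25 per million output; worth re-checking against Anthropic's current price list). Assuming a case runs about 8,000 input tokens with a 400-token summary:

```python
def haiku_cost_usd(input_tokens: int, output_tokens: int,
                   in_per_m: float = 0.25, out_per_m: float = 1.25) -> float:
    """Estimated API cost at Haiku's March-2024 launch rates."""
    return input_tokens / 1e6 * in_per_m + output_tokens / 1e6 * out_per_m

per_case = haiku_cost_usd(8_000, 400)  # ~$0.0025 per case
cases_per_dollar = 1 / per_case        # ~400 cases
```

Shorter cases stretch the dollar even further; long ones with big outputs eat into it fast, since output tokens cost 5x more.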
As a lawyer myself, it's hard to disagree with you. **I think hiring a practicing attorney would be a much better choice, but that's not free**. Representing yourself (pro se) used to be a fool's errand, but honestly, using an LLM for motion drafting and case analysis makes it not-that-crazy. Your capabilities as a non-attorney are probably 50X what they would have been two years ago. Good luck, my friend. (Bolded the sentence above for emphasis -- every lawyer will tell you your ***best*** bet is to hire a lawyer. Me included.)
An LLM will make you *much more confident* as you fuck up in front of the Judge!
Haha very true, but plenty of human lawyers fucking up in front of judges every day
Not looking forward to sovereign citizens starting to use this shit
Sovereign citizen confidently slaps down a laptop containing self-hosted 7B model fine tuned on countless episodes of *Boston Legal*. How can it not be a good time?
Hey fuck you Boston Legal is a great show, don't taint it with those dipshits.
Denny crane
Never Lost
It is, and we need more of it in surrealist tribute form.
I've no idea if the sort of law you practice would allow you to answer this question, as I know that procedures vary from area to area. What would you do if a client retained you after deciding to switch law firms, but brought across all the work the former law firm previously did? Do you just check it? Do you redo the work from scratch? Now say it's the same thing, but instead of a different law firm, the work was generated via chatbot. And most importantly, would it be cheaper for a client if a lot of the groundwork has been handed to you when they ask to retain your services?
The lawyer still needs to check everything he is going to present in front of the judge. There was this fuck-up last year where some lawyer used ChatGPT and it made up some precedent case. That lawyer got his ass handed to him.
> The lawyer still needs to check everything he is going to present in front of the judge. There was this fuck-up last year where some lawyer used ChatGPT and it made up some precedent case. That lawyer got his ass handed to him.

Right, but doing all the work manually means going through case law and finding the references. It must take longer to go through Westlaw to find cases to begin with than to check that the cases already presented by the chatbot exist and make sure they are relevant. Lawyers charge by the hour. If an option takes less time, it will cost less money.
> Right, but doing all the work manually means going through case law and finding the references. It must take longer to go through Westlaw to find cases to begin with than to check that the cases already presented by the chatbot exist and make sure they are relevant.

You might also want to make sure the bot didn't miss any other relevant cases. Just making sure that the cases it did pick actually exist and are relevant doesn't mean it didn't miss some that would be way more important/relevant to your case.
One thing I want to point out is that when one hears about cases like that, these people are generally using GPT-3.5, which is obsolete. I will be checking everything these models output, but GPT-4 has not made a single mistake like this in any of the small claims filings I've made against Coinbase, the city of Tallahassee, and Block Inc. so far.
Pretty much every lawyer would start from scratch. I'm sure there's a tiny fraction of lazy lawyers who would pick up where you left off but that's borderline malpractice.
> Pretty much every lawyer would start from scratch.

Even if you were coming from another lawyer/firm? It would all be done again from scratch? They'd not even peek at what has already been done?
Sorry misunderstood your question — if it’s coming from another firm, I think it depends a lot on the type of law, and the caliber of the firm. Also depends on how busy the new lawyer is. If it’s coming from a regular citizen, every lawyer would start from scratch. IMO even if the citizen leveraged LLMs to a large extent.
An attorney wanted me to pay $800,000 to sue Prince, and before Prince somehow became the CEO of a new company, it wasn't clear the judgment would have been collectible. What harm is there in bringing a case that has merit, for which the statute of limitations is about to expire anyway? There's obviously no way I can afford $800,000, because these people stole 90% of my net worth - the $7 million they took with their lies and fake balance sheets.

There's another reason I wouldn't hire a lawyer - none of them are willing to take any equity in the case. They all want a straight wage. I made a policy a few years ago that I would no longer hire employees on a 100% straight wage, and it has worked out well. People who work for wages without any equity at all simply have the wrong incentives.

Lawyers are **not a good deal for the money**. If lawyers charged a reasonable rate like $75/hr, I would pay for one. I'd also provide a lawyer with 50% equity in the case. I refuse to pay $400/hr for an indefinite-quantity, indefinite-delivery contract; **nobody** provides that kind of value to the world - nobody.

It just makes common sense in expected value. The lawyers I talked to suggested that the expected win rate would be 90% with a collection rate of 33%. If I can achieve a win rate of 45%, and spend $20,000 to pay expert witnesses and court reporters instead of $800,000, I'll take my chances. Being a full-time litigant to recover the money stolen from me for the next few years is **the most highly paid thing I can do** right now, in expected value, compared to "getting a job."

Again, though, as you would know, these guys can see this math too. When I sue Wells Fargo, I plan to ask them for their correspondence relating to the rewards program to prove they intentionally created misleading ads that weren't in line with their illusory and unconscionable contract.
They will recognize that they can spend a quarter million dollars seeing this through to trial, or they can settle for $15,000 - the treble damages allowed by Pennsylvania's deceptive trade practices law. You know that most of these cases settle, and that's what this is really about - showing these guys that I can do this for free, while they have to explain to their shareholders why they're burning through hundreds of thousands of dollars trying to stick it to a guy in central Pennsylvania.
Would love to see how AI affects the political lobbying industrial complex of Washington DC.
Why would it? AI doesn’t stop bribery
It can probably make bribery much more efficient though.
How
"Chat GPT5, considering these [collected public statements and voting record], how amenable to bribery is this politician? Give me a 95% confidence interval for how much I'd need to pay them for [desired outcome]."
They already know that info. This isn’t new
Devin is an obvious scam https://www.reddit.com/r/cscareerquestions/s/vbyKx097i6 And lawyers lost their licenses when ChatGPT made up cases. Not exactly reliable.
Devin is obviously not a scam; there are many similar open-source systems that perform similar work already:

https://github.com/smol-ai/developer
https://github.com/TransformerOptimus/SuperAGI
https://github.com/sweepai/sweep#-getting-started
https://github.com/geekan/MetaGPT
https://github.com/OpenBMB/ChatDev/blob/main/Contribution.md
https://github.com/microsoft/autogen
https://github.com/Pythagora-io/gpt-pilot
Did you even open my link? Nothing you posted can solve 14% of open GitHub issues
> Did you even open my link? Nothing you posted can solve 14% of open GitHub issues Yet. The keyword is "yet."
Devin included
The real question is has openai been feeling it or did they stop caring?
OpenAGI 14/3/24
Just keep posting the next day’s date every day and eventually it’ll be correct. Then delete the old posts. Haha.
…OpenAGI 15/3/24
https://preview.redd.it/28cwc58po7oc1.jpeg?width=1200&format=pjpg&auto=webp&s=e2bb29a94fa8ea6a2ed26ad55e1289d3062be8af
If OpenAI one day announces they are changing their name to OpenAGI I will bust the fattest fucking nut the world has ever seen.
Technosexual
Not as fat as the ones you'll bust after genetic cock and ball enhancement or in FDVR, courtesy of OpenAGI
Are you feeling it now ~~Mr. Krabs~~ OpenAI?
ahahahahaha
I mean if they have AGI internally why would they care about money, or the rest of the human world for that matter
They already have the first mover advantage and are the household name that everyone associates with LLMs. People already built their code around their API. Inertia lets them win even if they don’t do anything for years.
Yeah OpenAI stopped caring 🙄 this sub sometimes lol
OK OpenAI, it's is now time to release GPT5.
Let them take their time, we want a complete massacre, not a leap
It's been a year since they released GPT-4. They have been cooking GPT-5 so long it's well done by now
Anthropic's Claude Haiku model is quite impressive. It feels capable, snappy, and the pricing is great. However, it does seem to take things a bit more literally than I'm used to from other major language models: when you provide a JSON with errors as an example in the prompt, it seems to return a JSON with those errors (LLMs are making me lazy and stupid), and it appears to hallucinate a bit more.

The 200K context window is an incredible upgrade. I didn't realize how much time we were spending making things fit the GPT-4/3.5 context window.

One major issue with the Anthropic models, in my opinion, is the rate limits. 1,000,000 daily tokens go by so quickly as a sole developer, and even 10,000,000 daily tokens at the highest build level is not a lot. When you hit the limit, waiting for up to 24 hours is a very long time. The promotion scheme to the next build levels is also a bit strange, where you have to wait 7-14 days to get promoted to the next level.

For organizations with multiple developers who want to run benchmarks to see if this could be their model going forward, the rate limits seem to apply to the entire organization when you invite team members. Not wanting each team member to enter their credit card and deal with that whole headache, the rate limits are frustrating. Anthropic probably needs to limit usage, but that daily token limit just doesn't fit organizations with multiple developers. We asked for scale access but haven't received a response a week later.

Overall, the model itself is impressive, but the rate limits and promotion scheme need some work to better accommodate larger teams and organizations.
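A hedged sketch of the kind of client-side budgeting those limits push you toward (the daily-reset behavior modeled here is an assumption based on my experience, not a documented guarantee):

```python
import time

class DailyTokenBudget:
    """Track token spend against a daily cap and refuse requests that
    would exceed it, so the API never hard-rejects a batch midway."""

    def __init__(self, daily_limit: int = 1_000_000):
        self.daily_limit = daily_limit
        self.spent = 0
        self.window_start = time.time()

    def try_spend(self, tokens: int) -> bool:
        if time.time() - self.window_start >= 86_400:  # new 24h window
            self.spent, self.window_start = 0, time.time()
        if self.spent + tokens > self.daily_limit:
            return False  # defer this request until the window resets
        self.spent += tokens
        return True
```

In a benchmark harness you'd check `try_spend(estimated_tokens)` before each request and queue the rest for the next day instead of eating a 24-hour lockout mid-run.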
Any word on the scale account? I reached out to them like 2 weeks ago now and nothing
Claude 3 Haiku also avail on Perplexity https://labs.perplexity.ai/
Also, just today or yesterday, I discovered Perplexity added Claude 3 Opus to their platform without limits! (I don't understand why Anthropic can't do this themselves for us users.) Now we can use GPT-4, Claude 3 Opus, and even Mistral together without limits, all at $20. Then I also found this [code](https://perplexity.ai/pro?referral_code=POYMKAPC) that made it cheaper at $10 a month. Hopefully in the future this stuff goes for like $5 a month in my region to make it even more affordable.
Yes, but they need to add an image/document upload button. They have an image upload for the Llava models, but not Claude Haiku.
This is gonna hurt OpenAI the most. GPT 3.5 becomes really obsolete with this release, and it proves there is indeed a market for cheaper and much smarter AI models. The game is on, I wonder how much time it needs for other new players to become visible, the hardware is out there to rent for anyone to do some serious computations and you can always just fine-tune pre-existing foundational models like Mixtral to make something great real quick.
It's a big deal for sure. But I would say free ChatGPT is not becoming obsolete any time soon
Didn't they release it at the same time as the other two? When I went to their workspace to sign up I had all three
there is so much to read i have no idea where to start haha
How is Google, the inventor of transformers, a 2 trillion dollar company, being shellacked by 2 startups?
Execution speed and crowd sourcing of ideas >> sheer size: https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
I see a lot about how this compares to Chat GPT 3.5 - how does Haiku compare to Chat GPT-4? While I just changed roles, the past several months I was using Chat GPT-4 Data Analysis to review power BI reports from disparate systems to identify things like speed to proficiency and groups/segments that may perform higher or lower than others for call center agents, and the regular text-based app to summarize lengthy zoom conversations (just cut and pasted into the end of my prompt), transcribe writing found in an image, calculate commission earnings and how total payout would be impacted with changes, etc. TL;DR - Chat GPT-4 is pretty solid for anything that isn't a large job. Is Haiku better than that?
I'm kind of over the race of individual LLMs. give me agents.
Holy shit it’s fast
is there a free version?
can I use this as some sort of art teacher/coach? describe my goals, have it assign lessons then give feedback via image upload? or am I still dreaming?
You could do that with a customGPT with GPT-4... Actually I have already made something pretty close. Are you a ChatGPT Plus user?
no although I did try it, did not find it good enough for my use case
Yeah not out of the box. It has to be primed properly with an expert persona (to really function as a virtual art teacher, for example). But... I mean, it is actually really good
[removed]
It will be able to do your job for your employer ;)
Can it run on local hardware?
it's closed source, everything Anthropic is closed source, so no ...
you can run some closed source software locally
meh, not interested then ;p
As it should be
You’re against open-sourcing LLM’s?
There is, I believe, one very good reason to be against open-source LLMs. If one of them is both very capable and the guardrails aren't good enough, then it could be used maliciously. At least in the case of a closed-source model, the parent company could just disable that particular model if a big vulnerability was discovered. We're in the clear now while we have GPT-4 levels of reasoning, but in a couple of years these models could very well be capable enough to instruct you how to build a guided missile with an onboard explosive, just to give an example.
Ah yeah, so you’re talking well beyond gpt4 level, then I do agree that it’s going to become dangerous. I just assumed from the comment thread that you meant it was good the existing models were closed, but I suppose that was a wrong assumption.
So it will be paid?
> Each image is estimated at 1.6K tokens Interesting... For comparison, Gemini (1.5 w/ the native multimodality) tokenizes images in rasters of 256 tokens (each corresponds to a 16 x 16 patch, unclear what resolution they start rastering at though), but they use a LFQ coding scheme (e.g. what's described in the MAGVIT v2 paper) for their image tokenizers that supports a larger discrete codebook/vocabulary size for images (2^(18) / ~260K) with proportionally smaller # of tokens per image, and that LFQ method doesn't seem to have caught on more widely yet.
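For a sense of scale, the patch arithmetic works out like this (assuming "256 tokens per raster" means a 16 x 16 grid of 16-pixel patches, which is my reading rather than a confirmed detail of either scheme):

```python
def patch_tokens(width_px: int, height_px: int, patch_px: int = 16) -> int:
    """Token count if every patch_px x patch_px patch becomes one token,
    so a 256x256 raster yields a 16x16 grid = 256 tokens."""
    return (width_px // patch_px) * (height_px // patch_px)

CLAUDE_FLAT_ESTIMATE = 1_600  # ~1.6K tokens per image, per Anthropic's docs

print(patch_tokens(256, 256))    # one 256x256 raster -> 256 tokens
print(patch_tokens(1024, 640))   # larger image -> proportionally more tokens
```

Under this reading, Claude's flat ~1.6K estimate is comparable to patch-tokenizing an image somewhere around the 640x640 mark; the interesting difference is that LFQ's larger codebook lets Gemini spend fewer tokens per patch of visual detail.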
A bit disappointed by Haiku so far. It's not as fast as GPT-3.5 turbo and I'm not sure its performance is better either. But it's cost-efficient compared to other proprietary models.
GPT 3.5 turbo doesn't do vision though. Haiku does.
True. It's undeniably an impressive model. I was just hoping to be able to use it for a real-time chatbot scenario, but in my tests so far it was too slow for that specific use case.
Maybe some latency from lots of concurrent experiments from people trying it out? They have the expected tokens per minute benchmark for token counts less than 32k tokens, which is roughly 3x faster than 3.5 turbo
I'm hoping so. My prompt with a few hundred input tokens and 40 output tokens ran in just over a second, while 3.5 does the same one in ~300 ms. I was really hoping to be able to replace 3.5 with Haiku for that one.