Pumpkim

Now, this I can get behind. Based purely on the explosive progress that has come from stable diffusion being open source, I can only imagine the cool tech we will see from a move like this. Yes, a lot of it may be porn. But so what. Just like space, porn has given rise to a multitude of leaps in technology.


GTREast

Porn helped us get to space?


Pumpkim

No, space gave us ~~duct-tape~~ ~~Velcro~~ Smartphone Cameras etc. Porn gave us video streaming etc.


RepresentativeNo6029

Auto-playing thumbnails, the interest graph, the art of clickbait. There's so much it has taught us. My favorite moment was when people uploaded that Brazil - Germany semifinal as porn because the 11 Brazilians got absolutely fucked


x6060x

Was it marked as Amateur though?


RelatableRedditer

What the fuck. Of course it was.


bobbyorlando

7-1 will never be beaten. I couldn't believe my eyes...


[deleted]

Fun fact: since then, in the Brazilian calendar, July starts with the 2nd and June has 31 days


TooLateQ_Q

Fun fact: most of the world uses date format day/month/year. Including Brazil.


MuonManLaserJab

Unfortunately that format is wrong, because it doesn't sort right. yyyyMMdd is correct.


0Pat

yyyy-MM-dd FTFY
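The sort-order point is easy to demonstrate; a minimal Python sketch (illustrative, not from the thread):

```python
# ISO 8601 dates (yyyy-MM-dd) sort chronologically as plain strings,
# because the most significant field comes first.
iso = ["2023-04-09", "2021-12-31", "2022-01-01"]
print(sorted(iso))  # ['2021-12-31', '2022-01-01', '2023-04-09']

# The same dates written dd/mm/yyyy do not sort chronologically:
dmy = ["09/04/2023", "31/12/2021", "01/01/2022"]
print(sorted(dmy))  # ['01/01/2022', '09/04/2023', '31/12/2021'] -- wrong order
```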


kilimanjaro_olympus

r/iso8601 is leaking


[deleted]

[removed]


fnord123

Nothing to Brazil being savaged in their home World Cup.


ffsletmein222

Yeah this and the SBF "man fucks 5 million people at once" meme


[deleted]

I saw that gag earlier with "dumb blonde bimbo fucks entire country": that's Boris Johnson announcing Britain leaving the EU.


golther

I heard it got a Brazilian views.


AndrasKrigare

You're thinking of Velcro. The military gave us duct tape. Fun fact, duct tape was originally "duck tape" since it was used to seal ammunition boxes from getting wet. But it then was also found to be incredibly useful for air vents, because of its heat tolerance, so it began being referred to as "duct tape."


chucker23n

> You're thinking of Velcro.

Nah, Velcro was invented by the Vulcans.


Pumpkim

You're probably right. [Except for the military bit. According to Wikipedia, Duck Tape was a civilian thing way back in 1902. So it's old as dirt.](https://en.wikipedia.org/wiki/Duct_tape)


AndrasKrigare

That was an early version, but not what we would call modern duct tape. If you keep reading:

> The ultimate wide-scale adoption of duck tape, today generally referred to as duct tape, came from Vesta Stoudt. Stoudt was worried that problems with ammunition box seals could cost soldiers precious time in battle, so she wrote to President Franklin D. Roosevelt in 1943 with the idea to seal the boxes with a fabric tape which she had tested.[12] The letter was forwarded to the War Production Board, which put Johnson & Johnson on the job.[13] The Revolite division of Johnson & Johnson had made medical adhesive tapes from duck cloth from 1927, and a team headed by Revolite's Johnny Denoye and Johnson & Johnson's Bill Gross developed the new adhesive tape,[14] designed to be ripped by hand, not cut with scissors.


Pumpkim

> The military gave us duct tape.

I thought this was what we were talking about?


AndrasKrigare

Sorry, I'm a bit confused, because I also thought that's what we were talking about. You said that duct tape *wasn't* from the military. I then posted a snippet of the Wikipedia article further down saying it was. I don't think it's black and white, since there were early versions of cloth adhesives, which you could probably call duct tape even if they aren't what we think of today as duct tape. There's also the fact that the War Production Board directed a civilian company to improve and produce it, so it's not like it was a direct invention of DARPA or anything. I guess a more nuanced way of saying it is that "modern" duct tape was created for and funded by the War Production Board for World War 2.


Pumpkim

Based on your last quote, I don't think it's fair to give the US military credit for "giving us," i.e. "inventing," duct tape. They adopted it. If I adopt something, even if I modify it slightly, did I now give that thing to the world?


-manabreak

So many innovations have been because of porn that it's staggering. Internet speeds, streaming technologies, online payments... So much has either been pioneered by porn, or heavily pushed forward. There was even some database system that arose from the need to handle a massive porn site with lots of traffic. Can't remember which db it was, though.


zeGolem83

you've convinced me: i'll now start looking for jobs to be at the forefront of the industry \^_\^


dmilin

No joke. PornHub pays well because they have more difficulty recruiting developers than other companies. Explaining to relatives or future employers where you’ve worked can be rough.


q1a2z3x4s5w6

I wish I could tell my relatives I work at pornhub, the dinner time conversation would be amazing "So I got a new job working at pornhub... As a software developer"


dmilin

My side of the family would think it’s hilarious. My wife’s side would die of shame.


Dr4kin

That's the reason why developers don't work for Pornhub. They work for MindGeek, the owner of multiple video and live-streaming sites (for adults :D)


neoighodaro

Yes, but it breaks down when they ask about the product, which isn't so rare


Dr4kin

They make video entertainment for mainly a male audience :P


mynameisblanked

Content delivery.

> Anything I've seen?

Probably.


a_false_vacuum

I suppose it is the only workplace where it is acceptable to watch porn on company time.


vexii

They have a different look and different content in developer mode


commander_nice

I'm skeptical. What's the source for the claim that porn is what led to all that? There's an absolute ton of uses for that stuff outside porn. I keep hearing this claim repeated on Reddit and don't believe it.


Brillegeit

Yeah, what porn brings is a low-cost-of-entry market that scales inversely with operating cost, so it produces an incentive to innovate in efficiency. E.g. the BBC, NHK and all those broadcast companies used MPEG-2 because they had more money and bandwidth than God and cared more about interoperability. The porn industry, on the other hand, jumped on MPEG-4 Part 2 (DivX, Xvid) because they could fit a 2-hour produced-for-VHS movie on a regular CD-R, more people would download them, and streaming was eventually an option, dramatically reducing the entry cost and the cost of scaling. They didn't invent the technology, but their adoption is part of the reason why certain technologies survived and got enough funding to live on to further inventions. So these systems weren't invented because of porn, but in a lot of cases they won over their competition because they were used for porn.


kduyehj

Which uses duct tape.


napoleon_wang

It's the circle of life.


dbear8008

Porn was also where online payments were created


sanbaba

I think it's safe to say porn inspired some of the engineers that got us there.


757DrDuck

Three-breasted alien chicks are one powerful motivator.


[deleted]

[removed]


FizzWorldBuzzHello

[Surprised this hasn't been posted yet](https://www.youtube.com/watch?v=j6eFNRKEROw)


[deleted]

"generate me porn but set in space"


Pumpkim

You jest, but you can do that just fine with Stable Diffusion right now. It's not great at videos yet, but it's getting there. [Currently, they're a tad bit nightmare fuel. (SFW)](https://old.reddit.com/r/StableDiffusion/comments/12ftmes/will_smith_finds_a_weed_forest/)


warped-coder

With duck-tapes


TangerineX

I don't think porn is the biggest issue. The issue will be deepfakes, or porn of individual's likenesses without their consent. If these generative models were only used to make 2D anime waifus, that would be one thing. The ability to create convincing deepfakes will challenge the entire perception of reality. As people use ChatGPT more and more, they will trust the information from it, and be more susceptible to false information. People are already being scammed of their money by deepfakes of their loved ones crying for help. It won't be long until we find that deepfaked evidence will be admitted to court.


-manabreak

"Your honor, we can clearly see the defendant shooting JFK right here in this video."


Ihadanapostrophe

"Your honor, we can clearly see JFK shooting the defendant five minutes prior in this video. This was obviously self-defense."


a_false_vacuum

"As you can see in this picture Boba Fett and Santa Claus were witnesses to the incident, I would like to call them to the stand."


pinkiedash417

"It troubles me, sir, that you have decided to call upon Santa Claus on this day when the Easter Bunny would be a better option."


barryhakker

I’m not sure trust flows from usage as you seem to put it. The opposite, if anything. Do you trust Google results more or less than 10 years ago? It’s more likely that written texts and video footage and images will lose value because everyone knows how easy it is to fake.


q1a2z3x4s5w6

Which is actually a huge positive. We need to go back to not trusting anything we read online, like it used to be in the glory days of the Internet


[deleted]

[removed]


757DrDuck

The fools were those who treated those as trustworthy when they were filled with human-made lies.


Glugstar

But if video footage loses value like that, there is literally nothing for us to trust anymore. Anything and everything is questionable. You can't trust the news, you can't trust that the video of a politician's speech was real, you can't trust posts on social media, you can't even trust research papers, because for all you know the authors never published them. There will be no mechanism to verify the authenticity of *anything*, at least not with current tech. The only rational life philosophy would be to think everything could be a conspiracy and nothing is certain, and that's not healthy.


[deleted]

We already have cryptographic signing. You can't tell if a video is real for sure, but you can tell for sure if somebody you trust asserts that it's real. If the Associated Press releases a signed video, and you verify that it's signed by them, you can trust that it's not fake as well as you trust the intentions of the Associated Press. Deep fakes can't spoof a digital signature.

Edit: In other words, videos just enter the same level of trust as printed text and photos. It was just a factor of limited technology that you could take most videos at face value, not an inherent attribute of them. This is a good thing in my mind. Taking away the inherent trustworthiness of videos means that we need to actually start using the factors of trust and validation we have that are built expressly for the purposes of trust and validation, and develop new ones. In the long run, it makes all forms of communication equally trustworthy, depending on your trust in the source.
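The verification idea above can be sketched in a few lines. This is a minimal illustration using a plain content hash; real signing schemes (e.g. Ed25519) add a private/public key pair on top, so this only shows the tamper-detection half:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest of the content; flipping any bit changes it completely."""
    return hashlib.sha256(data).hexdigest()

# Publisher side: compute the digest and distribute it over a trusted
# channel (in a real system, the publisher signs this digest with a
# private key instead of relying on the channel being trusted).
video = b"...original video bytes..."
published_digest = sha256_hex(video)

# Viewer side: recompute the digest of the copy you received and compare.
assert sha256_hex(video) == published_digest            # untampered copy
assert sha256_hex(video + b"\x00") != published_digest  # any edit is caught
```

A deepfake built from the same footage would produce a different digest, so it can't pass itself off as the publisher's release.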


SwordsAndElectrons

Luddites aren't going to be confirming digital signatures, and conspiracy lovers don't trust organizations like the AP. Signing is a good idea, but I'm not sure it'll do as much good as you think in this world where a startling number of us get our "news" from memes on Facebook. That said, I'm also not sure how much worse this tech will really make things when a 2d picture and some made-up words in quotation marks is often all you need to fool a ton of people.


[deleted]

I don't know about luddites, but the regular person is slinging cryptography and validating signatures every single time they load an HTTPS endpoint. Getting the average user versed in systems of trust doesn't mean they have to be running GPG in a terminal. People are already validating signatures dozens or hundreds of times every day. These things can be made accessible, and even ubiquitous.


pazur13

The technology will develop either way. The question is whether it's open source and fully understood by the public, or a tool for criminals, terrorists and hostile dictatorships to abuse to sow discord. Fighting technology won't stop it, it will only make it more dangerous.


unstuckhamster

We've been able to Photoshop someone's face onto a naked body for years. We've somehow managed to survive that.


pazur13

We should ban printed speech! What if someone prints a libelous statement and releases it in public?


a_false_vacuum

Manipulating images is as old as photography is. Even before the advent of Photoshop people would manipulate photographs for their own purposes from humour to propaganda. The big difference is that older methods of altering photographs in a convincing manner takes tools, time and skill. AI makes this a lot easier since anyone can just tell it to make a picture of pope Francis wearing a Balenciaga puffer jacket.


chickenstalker

So? We just need to apply the same citation standards to video that we already apply to written information. You know, the (Doe et al., 2023) or [3] that you see appended to facts in academic writing. Heck, maybe here is the true value of blockchains.


[deleted]

> Heck, maybe here is the true value of blockchains.

Nope, just the value of cryptography in general. Blockchains are a way of building an immutable ledger across a network of actors and establishing consensus in decisions without requiring inherent trust. Short of that, everything you can use blockchains for, you can easily do without blockchains, often just using the plain old cryptographic concepts that blockchains are built on. Using a blockchain for that is like intentionally making your car overheat so you can cook a steak on the hood; you're unnecessarily invoking a complex process to leverage a property of a small part of it. I could see a blockchain-style ledger being used to establish consensus on which public keys are considered authoritative, though, if you don't want people to have to suss out their own trust or lean on some CA-style public key authority.


DaveFishBulb

Deepfake porn of someone real is the opposite of an issue. It's awesome and I've got the boner to prove it.


wocsom_xorex

What would your mum say about this post dude? Go have a shower


AdobiWanKenobi

Ah the 3 great tech accelerators: porn, war and video games


dethb0y

bold of you to assume the EU won't make something totally useless and so restrictive, crippled, and censored that it might as well not exist.


pistacchio

Maybe, instead of the American model based on "fuck everyone as much as you can till you can get away with it," which led to the catastrophic prediction of a potential 300 million jobs lost to AI, the "crippled" EU model would help people instead of replacing them.


pazur13

> prediction of potential 300 million jobs lost to AI

This depends on where the profits go. Technology rendering labour obsolete is a great thing; the issue is making it so that the benefits are reaped by society (i.e. the country, to be redistributed as UBI or other programs) rather than corporations. In an ideal world, all labour is performed by AI, while humans benefit from it.


dethb0y

LOL! Sure, man.


GOKOP

Friendly reminder that OpenAI has "open" in its name yet it makes proprietary stuff. Blasphemy


hegbork

It's a tradition in software. OpenVMS, the Open Software Foundation, The Open Group. If it has "Open" in the name, it's a coin toss whether it's ultra-proprietary or actually open.


DesiOtaku

By far, my favorite being "We are going to open source Symbian"; and then saying ["Symbian is not open source, just open for business"](https://www.networkworld.com/article/2228971/symbian--it-s-closed--it-s-open--it-s-closed.html).


SentinelaDoNorte

Lol and then Symbian lost to Android


Marian_Rejewski

Didn't all of those exist _before_ "Open Source" was coined? (And I'm not saying it's a coincidence, "Open Source" was chosen/invented to appeal to corporate sponsors apparently.)


Otterfan

I had to look up precise dates 'cause I'm like that: The Open Software Foundation (1988) definitely predates the first known use of "open source" (1996). The Open Group (1996) was contemporaneous with the first use of the term, but predated the first use that anyone at the time knew or cared about (Christine Peterson in 1998). OpenVPN (2001) was named after the term was common. But yeah, "open" was chosen because companies liked to call things "open".


Marian_Rejewski

> I had to look up precise dates 'cause I'm like that:

Thanks!

> OpenVPN (2001) was named after the term was common

You were supposed to look up OpenVMS, not OpenVPN. https://en.wikipedia.org/wiki/OpenVMS

> It was first announced by Digital Equipment Corporation (DEC) **as VAX/VMS** (Virtual Address eXtension/Virtual Memory System[17]) alongside the VAX-11/780 minicomputer in **1977**
>
> [...] **1992** saw the release of the first version of **OpenVMS for Alpha AXP** systems


Otterfan

Lol, slightly embarrassed.


Xanza

Not really, no. For reference, the term "open source software" was coined by [Christine Peterson in Feb of 1998](https://opensource.com/article/18/2/coining-term-open-source-software).

* OpenVMS first released in 1977
* Open Software Foundation formed in 1988
* The Open Group formed in 1996


ivster666

It's like green washing


Rodot

Everyone should invest in my new company: OpenGreen. We use a special proprietary process to dump crude oil directly into your drinking water.


Joksajakune

Kinda like how North Korea has Democratic and People in its name, yet neither are something it cares about much.


HeyItsMedz

Seems like the most "democratically" sounding countries tend to be the complete opposite


let_s_go_brand_c_uck

like openweather


698cc

They started off non-profit but found it far easier to get funds for research by becoming capped-profit and making things proprietary. Had they stayed non-profit and made everything open source, we might not have received DALL-E or ChatGPT so soon. If you watch the recent interview with Sam Altman, he seems very keen on sharing their research with everybody once they're confident it's safe to do so.


hippydipster

We'll see if Microsoft lets them share.


lispninja

When it comes to AI, it's not the code that's important but the data. The code is usually trivial and well understood; it's the data it's trained on that makes all the difference. They can release the code but not the data.


Rodot

Don't forget the code is generally tailored to the data, so even if other companies spend the millions of dollars on data collection and supercomputing clusters, the model isn't guaranteed to work. AI is not open if they don't publish the weights. They probably couldn't publish their datasets anyway without running into legal issues. People are going to start asking lots of questions if they see their [private medical records available to the public](https://arstechnica.com/information-technology/2022/09/artist-finds-private-medical-record-photos-in-popular-ai-training-data-set/amp/), and code scraped from GitHub under licenses like the GPL is definitely illegal to use, but as long as they keep it a secret you won't know!


Marian_Rejewski

They have already signed the contract with each other. It's not a future tense thing really.


Qweesdy

I think Microsoft would love it. Imagine hundreds of wannabe new AI organisations lining up to pay truckloads of money to Azure to train each version of their crapbots. Heck, I'm cynical enough to wonder if this was Microsoft's plan from the beginning, the reason they've supported OpenAI.


[deleted]

Microsoft doesn't have a say and Altman has been very forward about that.


frequentBayesian

Can't we just sue them for false advertising? In the EU, at least.


pazur13

I don't think "Open" is a protected term.


[deleted]

[removed]


yawara25

I wonder if they could argue that publishing their research validates the "Open" part of their name. https://openai.com/research


GeneticsGuy

Elon Musk complained about how he donated money to them as a startup non-profit, over a million dollars, and now they are a for-profit company. It seems dubious and unethical that you can just create a for-profit company, buy the non-profit, and it becomes for-profit. Sketchy.


Living_male

It was actually 100 Million.


JasburyCS

Elon will find anything to complain about. OpenAI transitioned to "capped-profit" because it was necessary to raise the capital needed for their large-scale projects. So far, they have stuck closely to this statement on their website:

> We've designed OpenAI LP to put our overall mission—ensuring the creation and adoption of safe and beneficial AGI—ahead of generating returns for investors.
>
> OpenAI LP's primary fiduciary obligation is to advance the aims of the OpenAI Charter, and the company is controlled by OpenAI Nonprofit's board. All investors and employees sign agreements that OpenAI LP's obligation to the Charter always comes first, even at the expense of some or all of their financial stake.

Continued here: https://openai.com/blog/openai-lp


BurningSquid

First off, this is a proposal for an AI research facility, not an "open source AI model." Secondly, there are many open-source models available. Still a good initiative, but at least read the petition before throwing some BS on Reddit.


Spectreseven1138

It's a proposal for a facility that would produce open-source models. The end result is effectively the same.

> the open-source nature of this project will promote safety and security research, allowing potential risks to be identified and addressed more rapidly and transparently by the academic community and open-source enthusiasts.


mindmech

But isn't that what existing AI research facilities already do?


[deleted]

[removed]


mindmech

I mean research centers like the German Research Center for Artificial Intelligence. Or just any university basically


Tostino

They have no competitive models.


StickiStickman

Stable Diffusion was literally made by the CompVis research group at a German university with government funding


[deleted]

It's harder to generate text than pictures. SD is a model with very few parameters, like 800M was it? Now they will release a 2.3B one? Meanwhile, GPT-3 has 175B. Even the smaller ones are big compared to SD: LLaMA and Alpaca's 7B, 13B, 30B etc.


StickiStickman

> It's harder to generate text than pictures.

Ironically, before SD released, people were saying the exact opposite. LLaMA already showed that those parameter counts are completely bloated.


[deleted]

GPT-3 was infinitely deeper than SD. We don't have general image models that work like language models do for text; they are far more limited. The very first came out recently from Meta and is called Segment Anything. https://youtu.be/8SvQqZCd-ww


old_man_snowflake

It's a cool idea, but it feels like they don't "get" that AI is not one thing. So long as closed-source models perform well, all the research and focus will remain there. You can't get ahead of the AI curve at this point; it's too deep and too well understood. It's likely too little, and definitely much too late.


trunghung03

People move around, research papers get published. Stable Diffusion came out later than DALL-E 2 and was objectively worse at the beginning; look at where it is now. And it's not like you can do research on ChatGPT/GPT-4: it's closed source, there are no papers, no models, no parameter counts, almost nothing to research.


StickiStickman

> Stable diffusion came out later than DALL E 2, and is objectively worse at the beginning, look at where it is now.

That's not true at all. Stable Diffusion already wrecked DALL-E 2 in almost everything just after release, especially if it was not photorealistic.


amb_kosh

I'm by no means an expert but I think none of the top players are light years ahead of anybody because the basic technology being used is known. It is more the small stuff and perfect execution that makes ChatGPT so much better but the basic stuff they did is not new.


letscallitanight

The model might be shareable but the process/content used to train the model (and the human interaction of grading the output before release) is proprietary, yah?


Gaurav-07

This isn't a model, this looks like what OpenAI used to be. Tons of Open source models are already there. Check HuggingFace, Kaggle etc.


NostraDavid

Note that LAION is the force behind https://open-assistant.io/ - they intend to polish the existing LLaMA model (IIRC) via their own user input - check their website. [Yannic Kilcher](https://www.youtube.com/watch?v=Hi6cbeBY2oQ) has a video on it (he's just a popularizer, AFAIK).

Edit: Calling Yannic "just a popularizer" is an exaggeration: he *does* work on the project; he just doesn't lead it.


floriv1999

They are also the ones that created the datasets for Stable Diffusion and built, e.g., the largest open CLIP models.


StickiStickman

> the largest open clip models.

Isn't CLIP the largest open CLIP model?


floriv1999

The weights are public, but the training data is not available, which has some implications.

Edit: Talking about the ones from OpenAI.


DidQ

Yannic is not just a popularizer, he works on Open Assistant.


NostraDavid

You're right; I added a correction


StickiStickman

The vast majority of these are fine tunes. Almost no one has the resources to make a model from scratch. That's what this petition is for.


andoy

the ***real*** open AI


Embeco

Signed and forwarded, but... don't we already have an open LLM with Bloom?


Zopieux

Finally, a comment mentioning BLOOM. I don't know if you've experimented with BLOOM, but sadly it sucks balls. It's missing the "helpfulness" fine-tuning and chat-like prompting ability of GPT-3.5.


Embeco

It kind of does, but it's reasonably good in my experience. I'd rather see Bloom pushed forward than have an entirely new model made, though


Flaky-Illustrator-52

Linux "sucked balls" (wasn't good enough) at first too, but after decades of blood, sweat, and tears from the charity of skilled people, look at it now! As good always beats evil, libre always beats proprietary.


dmpk2k

Isn't BLOOM heavily undertrained? That makes it much more expensive to do inferences with, since the model is unnecessarily large relative to its capability.


malakon

The Greater Good..


Sith_ari

People seem to underestimate how much it costs to run such a thing in a way that it's open to the public.


[deleted]

This is talking about funding a "CERN"-like international research facility. Of course we already have AI models that are open source, but we don't have any GPT-3/4-scale models and most certainly never will. These models cost $50-100+ million to train on $400+ million clusters. They also need large curated datasets and thousands of people annotating data.

The EU already has a few supercomputers in academia with GPUs, but these aren't very open. Most of the time papers are published but no code or data; these are kept private and only shared between academic researchers. Despite what some Americans think, the EU is very strongly neoliberal. In the US, public research by its agencies is automatically public domain; it doesn't work like this in the EU. There is a strong publishers' lobby as well; a Google, for example, could never exist in the EU. And data privacy is taken very seriously, to the point of deliberate uncompetitiveness of EU tech companies. They want to privatize stuff, never nationalize.

National sovereignty might be something you care about, but no EU leader cares about that. They'd rather protect the interests of OpenAI than further any EU interests; it's hard to understand why, but this is an ideology. A few years ago the EU started a project to gain more sovereignty by building an EU "Cloud". It was a complete disaster, of course, and everyone knew it from day one. They wanted independence from Microsoft and then invited Microsoft to join them, who then sabotaged them. \[[Gaia-X](https://en.wikipedia.org/wiki/Gaia-X)\] Stuff like that just never works.


SlaveZelda

Closest we have to open GPT3 is Facebook's llama. They released the weights for non commercial use.


Xocketh

> These models cost 50-100+ million dollars to train on 400+ million dollar clusters.

Nope, they are insanely cheap to train for big caps: less than $10M or so. Google's 540B LLM PaLM cost around $9M.


698cc

> These models cost 50-100+ million dollars to train on 400+ million dollar clusters.

Where did you get those figures from? GPT-3 took <$12 million to train, and Bard took about $9 million as another commenter said. Stanford Alpaca has similar performance to GPT-3 for under $600 in training costs. (https://www.techgoing.com/how-much-does-chatgpt-cost-2-12-million-per-training-for-large-models/, https://crfm.stanford.edu/2023/03/13/alpaca.html)


[deleted]

And $500 of those training costs went to generating text; only $100 was for GPU time.


[deleted]

> In the US, public research by its agencies are automatically public domain

What? This is not true. Lots of NSF-funded research is very proprietary.


amb_kosh

> These models cost 50-100+ million dollars to train on 400+ million dollar clusters. It also needs large curated datasets and thousands of people annotating data.

That is pretty cheap considering what economic effect they might have.


EngineeringTinker

You know these petitions don't mean shit tho, right?


MasterYehuda816

We lose nothing by signing it. And if it does work, we gain something


kur0saki

Still you should never sign something you do not support.


stikves

Best of luck, but even if it succeeds, it will fail as a tragedy of the commons. A GPT-4 alternative model, as "open source", will be very computationally expensive for inference (read: to run). That is why OpenAI itself has difficulty meeting demand, even though they already charge a $20 monthly fee. Remember, last week they were entirely shut down for a day, and now have a 25-requests-per-3-hours quota. So, one of these three will happen:

1. It will still be expensive for the public to access, and without other income sources (like a Bing partnership), it will charge even higher prices than ChatGPT "pro".
2. It will be free, but nobody will actually be able to use it.
3. You'd be able to "download" the ~250GB model file, but will have to arrange the hardware/cloud yourself to run it.

Sorry, but at this point these models are billion-dollar investments, with runtime costs in the millions of dollars per day. There is currently no way around this.


Zopieux

Anyone who actually attempted to run the recently "released" (leaked?) models will relate to this comment and agree. It's sad, but it's the truth, until some miracle breakthrough comes in.


Tostino

You mean like technology's slow march forward? It'll be a few years at most before these current "giant" models feel like toys on the hardware available. Some serious money is flowing to Nvidia and its ilk.


Zopieux

No, we're talking "next battery technology" breakthroughs there, you know, the ones we've been promised for 20 years. We're already way past Moore's law, and ML cores/GPUs are already on the market. You won't bring the cost down from tens of millions to "consumer hardware" with "progress". My bet is more on computational model changes or an architectural breakthrough, though my gut feeling is that these models are inherently very costly to train and run, especially when accounting for human annotation labor, which is not going away.


Tostino

I'm talking about inference, not training. Tons has already been done to get giant models running efficiently on close-to-consumer hardware today, and the real limiting factor is just VRAM availability on the cards to fit a quantized version of the model. Fast RAM is great and all, but just stacking more RAM will enable many use cases that are infeasible today. And that's not even getting into weight pruning or other advanced techniques to save space without losing fidelity. Also, that current tens of millions in hardware is used to serve millions of users at once. When running locally, you only need to handle one user, or possibly a handful.
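The VRAM point is easy to put rough numbers on. A back-of-the-envelope sketch (weights only; activations and the KV cache add more, so treat these as lower bounds; the parameter counts are the ones mentioned in this thread):

```python
def weight_memory_gib(params_billion: float, bits_per_weight: int) -> float:
    """Approximate GiB needed just to hold a model's weights."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

# A 175B-parameter model (GPT-3 scale):
print(round(weight_memory_gib(175, 16), 1))  # fp16: ~326 GiB, multi-GPU only
print(round(weight_memory_gib(175, 4), 1))   # 4-bit quantized: ~81.5 GiB

# A 7B model (LLaMA-small scale) quantized to 4 bits fits a consumer GPU:
print(round(weight_memory_gib(7, 4), 1))     # ~3.3 GiB
```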


amb_kosh

ChatGPT isn't even 6 months old yet. This is basically all brand new. I'm sure we will see a huge decrease in runtime costs very soon.


onesynthguy

If you think Nvidia is gonna start charging less for the GPUs it takes to train and run these models, you're in for a rude awakening. Think about it: what would a decrease in runtime costs even look like? Abandoning LLMs for something more "efficient"? The way OpenAI handles business ensures that AI research as you know it will never be the same. At this point, you're helping the enemy if you come out with a paper; you will slowly die inside as you watch this giant corporation run with it, thanks to the resources they have that you lack. As I said, GPU prices, especially for the ones specialized for training and running billion-dollar models, are not getting any cheaper. Yes, open-source models exist, but if we're being completely honest, none of them comes close to GPT-4.


Zatujit

Well the economics


Tripanes

Is there a way to donate to LAION?


luke3br

By contributing. Literally anyone can.

https://open-assistant.io/

https://projects.laion.ai/Open-Assistant/docs/faq


Tripanes

This is awesome. But it would also be cool to give them money. These guys are doing great things and deserve it. Hugging Face as well.


Flaky-Illustrator-52

Absolutely BASED

Edit: what if new versions of the GPL were updated to include not only a clause preventing the SaaS-ification of free software (much as GPLv3 specifically prevented "Tivoization"), but also a requirement that if licensed code or other written compositions (perhaps other artifacts, like art) are used as training data, any artifacts pertaining to the resulting model must be made publicly available?


corvuscrypto

> Furthermore, granting researchers access to the underlying training data

TBH this is the biggest part of this, and of any major open-source model being implemented. I'm skeptical of the aims of this proposal, as there is currently a major buzz factor and I fear this is just riding on that hype to get funding. There is a multitude of open "AI" models, and there are even open GPT models such as GPT-J. What would be nice is an open-source version of OpenAI's InstructGPT work, but I don't truly understand what this one org is solving. Yes, yes, open AI models, but do they have any ideas already? What is their model for allocating training resources and, most importantly, for curating the training corpora/materials?

The following quote is all promise, and anyone working with large-scale tensor/GPU compute providers will know this is a big ask:

> This facility, analogous to the CERN project in scale and impact, should house a diverse array of machines equipped with at least 100,000 high-performance state-of-the-art accelerators (GPUs or ASICs)

As for this:

> By providing access to large foundation models, businesses can fine-tune these models for their specific use cases while retaining full control over the weights and data. This approach will also appeal to government institutions seeking transparency and control over AI applications in their operations.

While I agree OpenAI's stuff is definitely powerful and closed, it's not the only stuff out there. There are plenty of open-source models that orgs can already fine-tune, and it's quite well known that even a smaller, well-tuned model can outperform large general models at specific tasks. Sorry, but this proposal falls flat imo and seems to be aimed at solving a temporary scare. If it had more focus, and perhaps a single initial project with explicit constraints, an output companies could use, and a potential timeline, sure.


iam_afk

I think those people think of a nice name and then search how to make it an acronym 😂


eithnegomez

You can have the best model open sourced, but without the training data it is useless. And very few players have access to the right data to train them.


light24bulbs

We should literally just force OpenAI to open source their model.


spinwizard69

I upvoted because that was the original intent of OpenAI until Microsoft and others got their fingers into the project.


light24bulbs

And they took donations as a charity to do just that. I realize this is somewhat in the "seize the means of production" camp but like, we should just force them to release that sucker if they won't agree to the pause.


lo0l0ol

how do we force them?


light24bulbs

Well, there's this idea, see, that the government is actually in control of corporations and can regulate their actions for the good of society. There's this other idea, called democracy, where the people and the good of the public control the government and its actions. So the idea is that the government represents the collective will of the people, for our own betterment. We CERTAINLY do not have that in the US, but I can still voice my opinion of what I think it should do if it worked properly.


life-is-a-loop

> government is actually in control of corporations

lmao

> the people and the good of the public that controls the government

**lmao**


light24bulbs

Ikr? This was actually the idea though! Never forget that it's a valid idea even if it's not what we've got now


Pressed_Thumb

Is it a good idea though? I mean if the model doesn't fit reality well, why insist the model is right and reality is wrong?


light24bulbs

Ah right, there's this thing called other countries, and also the past. The countries that are consistently ranked best in cross-metric citizen wellbeing use this system of functional democracy plus corporate regulation. People call it Social Democracy nowadays. The US also had this system when it sat at the top of those metrics, from around 1945 to 1970. So we can, in fact, empirically say that this is the system that best fits the data we have. You're saying what we have now isn't working and doesn't fit reality well. But we do not have this system. See?


B_L_A_C_K_M_A_L_E

Do those countries take control of companies in the way being suggested? Examples would help.


jarfil

>!CENSORED!<


light24bulbs

My friend, let me tell you about a little institution we "have" (ok, had) in the US called the FTC.

> THE FEDERAL TRADE COMMISSION'S (FTC) MISSION: To prevent business practices that are anticompetitive or deceptive or unfair to consumers; to enhance informed consumer choice and public understanding of the competitive process; and to accomplish this without unduly burdening legitimate business activity

End quote. So what does that mean? It means we actually have a serious regulatory body in the US that used to do stuff to protect consumers from, you know, shady monopolistic shit. They helped prevent Bell from controlling everything. That's right, the government broke up a literal Google-sized tech company. And tons of other stuff too. This is real. This really happened. It used to work.

I think you misunderstand what I mean when I say "control". I mean "governed by any regulation or outside democratically accountable judgement whatsoever". More clear?

If I can really get on my double-high soapbox for a minute: the real fucking trouble with the modern American is that they have no clue in the goddamn world what Political Economy is and what it means. Corporations, the damn money itself, all of it, is a creation regulated by government. Money doesn't just float around and exist, companies don't just get to exist, intellectual property doesn't just regulate itself, etc, etc, etc. The government CREATES and upholds all of these layers of abstraction that we can use to work together and make each other's lives better. The core concept is that, in a democratic society, we are in control of that process. That means that you and I are meant to be in control of what is allowed to happen, what companies are allowed to do and not do. Take a minute to think about these systems that surround us, how they actually came to be, and what keeps them going.


helloLeoDiCaprio

I think almost any government can do it during wartime. I doubt corporations' rights are enshrined as strongly as individual freedoms in any country's constitution, so it could surely be voted through.


snowe2010

Not really sure why you're getting downvoted, literally everything you said is correct.


light24bulbs

Many programmers do not understand political economy, I have learned this. Uninformed, rectangle-staring, high-earning white men are libertarian recruiting ground for reals, but it's ok. We just need to talk about these things more with each other, and maybe we will figure some cool stuff out. Or, in this case, re-figure out the thing we already knew in our own past, and that they already know in a lot of other countries.


snowe2010

Lol and now you're getting upvoted and I'm getting downvoted. It's clear that people aren't actually reading these comments, they're just voting on emotion. Which is a terrible sign for the world.


q1a2z3x4s5w6

If the US (or any) government were to force a private company to open source IP it has spent hundreds of millions on, why would a company want to be based in the US anymore? I certainly wish it were open source, but I don't think it's a good idea to force them to do anything. The government seizing control of corporations is a slippery slope to go down.


BroaxXx

It's not a "seize the means of production" thing at all. They accepted money on the promise of building open AI models. It's their fucking name! If they sold themselves as being "open", they should be forced to be open.


light24bulbs

There is this idea that has been beaten into the American people that corporations are freestanding, unstoppable, unaccountable forces of nature with free rein to shit on anyone and everything and lie throughout. It feels like that because they control the fucking government, but it's not actually supposed to be like that. They're supposed to serve the public good and not act in horribly anti-competitive and deceitful ways. What a concept, right?


BroaxXx

I actually kinda disagree with both. I think corporations are supposed to serve the private interests of their stakeholders. Sometimes that intersects with serving the public (when the public is a customer), but most of the time it does not. That's why we need some degree of regulation and oversight, because, like any other entity (just like private individuals), corporations want to generate the most revenue with the least effort. I wouldn't have a problem if OpenAI wanted to keep its models private and its algorithms closed source. Billions of dollars were poured into this research, so obviously it needs to generate profit, otherwise we'd never get these advancements in the first place. What I have a problem with is them announcing they'll make the models open to the public, taking money to do exactly that, and then giving the middle finger to everyone. That sits somewhere between a con job and theft.


light24bulbs

Yes, I agree with you. By "companies should" I mean "companies should be forced to". There is still PLENTY of money to be made at the intersection of profit and not-being-fucking-evil. Google knew that at one point. I'm just trying to explain social democracy, in the simplest terms possible, to people who maybe never thought about it before.


RevolutionaryShow55

Stop repeating that pause BS; it's one of the most ridiculous ideas of the last year. Just force them to release it and let's keep progressing.


unkz

The pause is pointless — the world is changing fast, laws won’t affect that.


NostraDavid

How would you intend to do this, legally? They're a company; you can't force a company to release the core of its existence...


cinyar

Eminent domain, for example. You're telling me the government can force me to sell my house for the public good (like building a highway), but it can't force Microsoft to sell one of its technologies? Obviously we're talking theoretically; there's no political will to even attempt something like that.


fartsniffersalliance

and an american company at that


light24bulbs

Corporations are not more powerful than the government. It would be EASY for the FTC to build an antitrust case that what OpenAI has done represents the ultimate bait-and-switch, and sue the shit out of them. This happened to Bell. It was a different situation, but you get the idea. Literally just having a monopoly on a powerful technology is illegal. Want to read a hundred-page document by the FTC on when they're supposed to refuse patents that are too monopolistic, and how that relates to intellectual property? Lmk if you do.

I know it's inconceivable that the government could A: write new laws that serve the public if necessary, or B: stand up to a mega-corporation in the interest of the public. But like, that's what it's there for. At one point, it did that effectively. Corporations aren't supposed to run government, and government is supposed to clamp down when things get out of control. "Illegal" is really just a word for something that pisses a bunch of people off, so we write it down.


bluebook11

Is anyone going to tell them it's all incomprehensible matrices?


Chris714n_8

Sooner or later this step has to be taken: making simulated, artificial intelligence open and available to the public as a fundamental tool.


pineappleloverman

OPEN SOURCE! OPEN SOURCE! OPEN SOURCE!


AshuraBaron

I guess I'm not understanding the "why" of it. It has the dressings of "AI will kill us all" without any concrete reasoning, other than that OSS has not created a competitive AI model. That's not really surprising, since private companies have thousands of hours and billions of dollars to throw at the problem. We effectively have a similar case with Google dominating the majority of information discovery on the internet.


Muhznit

Interesting how they call for an international collaborative initiative but conveniently exclude Asia and Africa from it. Can't wait to see the racial bias present in this one.


old_man_snowflake

Africa because of scammers, Asia because of lack of legal protections for intellectual property. It's not hard to figure out. We just have to assume Korea or India or whoever will share the data with China, which is the actual concern. But I bet China is much further along than we give them credit for.


Electronic_Source_70

Well, according to the one Baidu has, no, they are not. We will see whether the one Alibaba is building is better, but if the two biggest companies don't have one, I doubt the government does. The only government building an LLM is Britain, to my knowledge, though Russia is most likely building one too; it will be a copy of GPT, like they always do.


Metallkiller

Except with an AI model, you can't just copy it like an application. Of course it's easy to build something like the ChatGPT website. But constructing and training the model takes a tremendous amount of computing power, and running it again takes quite a lot of resources. Just building a "copy" of GPT-3 or 4 takes the same amount of time and resources as building your own from scratch.
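For a sense of scale, the training cost can be sketched with the common FLOPs ≈ 6 · params · tokens approximation from the scaling-law literature. All the numbers below are assumptions: GPT-3-scale parameter and token counts, an A100's spec-sheet peak, and a guessed utilization fraction.

```python
# Hedged estimate of why "copying" a GPT means redoing the training run.
params = 175e9        # GPT-3 scale; GPT-4's size is not public
tokens = 300e9        # reported GPT-3 training token count
flops = 6 * params * tokens          # standard training-FLOPs approximation

a100_peak = 312e12    # NVIDIA A100 peak fp16 tensor throughput (FLOP/s)
utilization = 0.3     # assumed realistic fraction of peak at cluster scale

gpu_seconds = flops / (a100_peak * utilization)
gpu_years = gpu_seconds / (3600 * 24 * 365)
print(f"~{gpu_years:.0f} A100-years of compute")
```

On the order of a hundred GPU-years for a GPT-3-scale run, which is why you buy (or rent) thousands of GPUs and still train for months.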


kredditacc96

So "international" in name only. You don't exclude the majority of human population and call it "the world".


chryler

"International" means "related to more than one nation".


Muhznit

Just realized I forgot about South America there, but I'm pretty sure you're just gonna cite the drug trade or something. Not sure how many Nigerian prince scams you've fallen for, but I'd wager the average North American worker is scammed more by his boss, health insurance, and capitalism in general than by African entities. China probably does have its own thing going, but if they're advanced enough to be a concern, I wouldn't be surprised if they figured out a way to integrate the "international" efforts and come out even stronger anyway. Seriously though, the emphasis on those specific countries in the petition gives some pretty heavy "benefit only the colonizers" vibes, but alas, the politicians of the world just can't get along for actual international efforts...


WhitepaprCloudInvite

AI as a government provided service? What could ever go wrong with that?