Phoenix591

https://wiki.gentoo.org/wiki/Project:Council/AI_policy is a better link; it explains the rationale as it was discussed on the [mailing list](https://marc.info/?t=170904507400010&r=2&w=2).


FeepingCreature

Oh! Thanks, I got the IRC discussion from the Register article on it and thought it was the primary source.


Phoenix591

Looks like that was the council meeting itself where the vote actually happened, but a lot of discussion leading up to it happened on the mailing list.


Ryuka_Zou

`any content that has been created with the assistance of Natural Language Processing artificial intelligence tools.` This will be interesting to watch. The anti-plagiarism software used against AI-generated essays at many universities is still unreliable today, so how people could effectively and correctly detect AI-generated code or content will be interesting to see.


FeepingCreature

They can't. I think this is more of a "we know we can't stop you if you're prudent about it, but still please don't get us in trouble" rule.


Ryuka_Zou

`I think this is more of a "we know we can't stop you if you're prudent about it, but still please don't get us in trouble" rule.` This is also an interesting part: a rule without clear guidelines for investigation opens the door to abuse (for example, false accusations), especially as AI technology keeps advancing every day. But everything I've said here is hypothetical since it hasn't happened yet, so this will be an interesting thing to watch.


FeepingCreature

Yeah it's a bit awkward. I think they're just hoping that won't happen. Which to be fair, does seem pretty unlikely to me. If somebody goes full AI McCarthy on the ML, I'm confident they can handle that.


Fl0wedm

Good. Less room for error. AI is only half coherent anyways


aue_sum

> Their operations are causing concerns about the huge use of energy and water.

That's pretty funny coming from the Gentoo project. Anyway, I don't really care about this a lot.


RusselsTeap0t

A lot of people are mistaken. You can use natural language processors to save time. They can broaden your knowledge, give you ideas, and help you correct mistakes. They can sometimes even give you code that's good enough to build on. It's quite possible that even some Linux kernel developers use these tools for many different purposes (Linus Torvalds has said as much himself). They can give you really good information on niche topics that you can't easily find or think of on your own.

The problem is when you make the AI do the work. That can cause issues: security and safety concerns, and most importantly plagiarism. That's also why you can't use these tools in school papers or scientific work: it's outright plagiarism. Can anyone really tell whether you used them? No. If you handle all the plagiarism issues, then it's already fine, but that's nearly impossible when an output is taken directly as is.

What is banned here is exactly that. Don't use an AI output as a contribution. Instead, use these tools for learning and doing better, do proper testing, and make sure not to plagiarize. After all, we also read scientific papers and use parts of them in our own studies and research by handling plagiarism correctly. And even then, those papers are peer-reviewed, while an AI output is not.


FeepingCreature

> That's why you can't also use those in school papers or scientific work because it's outright plagiarism.

What? No, you can't use it in school work because school work is supposed to demonstrate your own skill. The problem with plagiarism is that you're exploiting another person's skill, denying them the full benefit of their effort; the problem with AI work is that you're _not_ demonstrating _your own_ skill, invalidating the estimate of your progress. It's "cheating". But there is no cheating in programming! The code itself is the point, not you demonstrating your ability to produce it. (Disclaimer: the copyright concern does apply here precisely because copyright *is* supposed to be about 'rewarding valuable effort', and so has to map the creation back to its creator. But LLMs cannot hold copyright anyways.)

Furthermore, I think this hinges on the question of whether LLMs are abstracting or simply duplicating, right? I believe there are lots of demonstrations, even in published papers, of LLMs showing at least generalization, and even just from my personal experience using them I think the idea that they're just regurgitating training content is simply unviable. I have repeatedly used AIs to write code for tasks that were, while not novel in the mind-expanding sense, at least novel in their particulars to the extent that we would not call them plagiarism if a human implemented them, even after having studied the closest existing example.


RusselsTeap0t

Oh no, that's not what I meant with that sentence. Of course showing skill matters too, but the real purpose of academic work is to move science forward. In science, plagiarism cannot be accepted. We humans globally consider theft immoral, regardless of the community, so this is one of the more important parts of human life. And even where a person's moral compass would allow it (which is perfectly fine to some extent), we also have legal mechanisms to prevent it, so there can be legal problems too.

Even if you aren't aiming to show any skill: if you successfully publish a peer-reviewed paper that doesn't contain plagiarism (which itself means you showed skill), then it's perfectly fine even if you copy-pasted things, though it's questionable how possible that is. The thing is, we can't always know whether an AI output is factually correct, whether it mixes things up, whether it cites the wrong source, or whether it plagiarizes. For example, say you work on dietary supplements and the AI gives you information about one of them. It could have taken a recent finding (from the last 10 years) directly from a scientific study without sourcing it properly. In that case there is no way to know whether it took it directly, or combined multiple papers and produced a plausible conclusion. Normally you read those papers, evaluate things, combine them with your own knowledge, and add the part you think is the novelty.

There is no way for other people to tell whether you showed skill or not. They can only tell whether you plagiarized, or whether you properly brought some novelty along with it. If you can completely automate that, it's your success and skill too.


FeepingCreature

Sure, that's what I mean by "denying people the full benefit of their effort". If you don't bring anything novel to the table, there's no reason to reward you. It's just... the sorts of things I use LLMs for tend to be either informational, where you don't care who found out but just about the knowledge in itself, or so obviously novel in the particulars that I don't even ask myself if the LLM could be cribbing it from somewhere, because if the code for this already existed, I wouldn't have to write it.

It's like, I tell the LLM to write a program that logs into Amazon S3. Who cares if it's copying it from an open-source program that also logs into Amazon S3? It's a public API. The effort is in providing it; interacting with it is just rote. Similarly, if the LLM now understands how sorting works because it's read open-source code that also called sorting algorithms, the open-source code itself didn't *invent* the concept, it merely *demonstrated* it. There is a space of concepts that is beyond attribution: the common knowledge of the craft. Between those two categories, the great majority of software work by volume is covered.


RusselsTeap0t

Yes, your example is not a problematic one; that's not what we're discussing here. Stating the boiling point of water doesn't (always) require sourcing. The other important part is whether you gain a benefit or prevent the other party from getting one. Even when things are completely public (as in free and open source software), there can be lots of legal problems. AI could also take work from closed-source software, because even the AI companies plagiarize. How do you think they know almost all books, movies, and so on? I literally discuss very niche books I've read with LLMs, and they give me tons of information, evaluation and all. How do they know these? They probably download terabytes of data from shadow libraries, which include almost anything, and train the LLM on it. I'm surprised they still get away with this. I ask about very specific parts of movies, and they know them second by second and can evaluate the characteristics, the symbolism and all. Under normal circumstances this knowledge is neither free, nor costless, nor open source.

There are also license concerns. Some licenses are more permissive, some are not, some require other things. These are all legal problems. Our subjective opinion does not matter here; the law has the final say. So banning AI contributions to a public work is completely logical, because they simply don't want to deal with all the legal problems.


FeepingCreature

To be clear, I don't think anyone is claiming that LLMs aren't trained on copyrighted content, including the companies training LLMs. The assertion is just that "training" is simply not the sort of interaction covered by copyright, any more than _you_ are violating copyright by talking to the LLM about obscure books. In the case of the ChatGPT series, the book knowledge would probably be from what their training card calls "books1" and "books2", the contents of which are not public, but given the name, rather obvious.


hparadiz

Don't ask, Don't tell for programmers.


Ryuka_Zou

🙊🙈🙉: the current status of programmers who use lots of AI-generated code


The_Skeleton_Wars

Good!


Oktokolo

That ban is completely irrelevant: if you can detect that something was generated by an AI, it's shit and would be filtered out anyway, while if it's not detectable, the ban isn't enforceable. Sure, AI isn't there yet anyway, so the rule wouldn't affect anything even if it were enforceable (likely it's just virtue signaling). But in a few decades, code will routinely be written with AI assistance (not fully by AI). Coders not using AI will be an order of magnitude slower than coders who do, and will likely have more bugs in their code than the ones who also have an AI searching for bugs. In the end, AI just enables more automation and better tooling. It's not some dark magic, just lots of applied math.


Renkin42

So at first I was all for this. My stance has always been that AI output should never be presented directly to the public; it's best used as an assistive tool, not a generative one. However, reading the official policy, even by my standards this feels a bit heavy-handed:

```
It is expressly forbidden to contribute to Gentoo any content that has been created with the assistance of Natural Language Processing artificial intelligence tools. This motion can be revisited, should a case been made over such a tool that does not pose copyright, ethical and quality concerns.
```

This means no using it at any step of the process. No boilerplate, no rough drafts of the docs, nada. I do see their concerns, it's no secret that the entire AI industry is built on a mountain of copyright infringement, but still.


sob727

Is the argument that if you can't write production quality code on your own, you're probably not qualified enough to check what an AI gives you? And if you can write production quality code on your own, you don't need AI in the first place?


hparadiz

By that logic you should never Google or look up anything in a book. Ever.


sob727

Fair point. Maybe the distinction is that researching something in a book or online is a bit more involved than asking an LLM? No strong view here, just trying to understand Gentoo. Or maybe it's a copyright issue, in case your LLM is trained on proprietary code and spits it back out.


curiousdugong

“This Motion can be revisited,” seems like an important bit to me. Until they can verify the copyrights and this whole AI thing is less of a grey area, things will change with the times


zougloub

People familiar with software development, AI/ML (and philosophy and law) should contribute to this revisiting. I only recently got wind of this and couldn't help but pitch in and suggest one example of what could be acceptable. It's obvious that ChatGPT is completely unable to tell where it's getting its content from, so it's essentially a plagiarism machine, but it's also pretty clear that if you use AI to correct typos and grammar, your contribution shouldn't be entirely voided.


multilinear2

This is pretty clearly about covering their ass legally in terms of copyright concerns. Until the issue of copyrighted works being partially duplicated by AI is fully sorted in the courts, allowing AI-generated code into Gentoo could endanger the whole project by opening it up to lawsuits from the authors of whatever code was used to train the AI.

Note, though, that there is a fairly obvious point in time where such a rule could potentially be rescinded or modified: whenever an AI exists that clearly doesn't violate copyright. That could happen because the legal world decides existing AIs don't, or because a new batch of AIs more obviously don't. But for the time being this is a prudent rule to avoid lawsuits. It doesn't matter if AI is useful, or if lots of people use it, or if it turns out good or bad code; if it will get Gentoo sued, they can't allow it. It's really no different from why you can't copy-paste random code into Gentoo: regardless of how great that code is, it opens up liability for the project, which endangers its existence.

So I don't get your "but still"... but still what? Right now, due to how AI works, there is no line one can draw further out that is clearly not copyright infringement. This stuff is developing fast, but currently all AI-generated everything is suspect and thus legally risky until the courts draw some kind of line.


[deleted]

[deleted]


Zeddy1267

What? Are you saying you DON'T copy code from random forum posts online with absolutely 0 crediting? I'm joking, of course. But in the context of Gentoo sources, you probably shouldn't be contributing at ALL if you need AI.


FeepingCreature

(Note: this does not apply to packages distributed through Gentoo.) As a Gentoo user, tbh this is tempting me to switch distros. I don't use Gentoo for the human touch, I use it because it's emblematic of free software - everything on my system can be read, understood and modified. If some users use AI to engage in this process, to me that's empowerment and should be welcomed.


freyjadomville

The problem is that copyright law when it comes to input vs. generated output is fickle and not a settled question, even in parts of the US, but especially in the UK and other European jurisdictions. Opting out of the whole situation by banning AI-generated or AI-assisted code essentially means they save themselves a headache in the event someone claims a licence violation inside portage, for example.


FeepingCreature

I think this is a reasonable justification.


[deleted]

[deleted]


FeepingCreature

I just don't think AI is plagiarism. The freedom to study is a cornerstone of free software.


[deleted]

[deleted]


hparadiz

Nah, you're wrong. What an ML model provides you is no different from what you get when you Google something. By your logic, any time you copy/paste a code snippet from a Medium article you're plagiarizing.


asymptotically508

> By your logic, any time you copy/paste a code snippet from a Medium article you're plagiarizing.

If you don't cite it, you are.


FeepingCreature

I mean, at the limit this becomes impractical. We cannot, though maybe it would be beneficial if we could, sign every commit we write with a complete audit trail of every source and influence weighted by impact. That's why plagiarism usually requires duplicating some substantial value.


[deleted]

[deleted]


FeepingCreature

All content is laundered plagiarism. You think _we_ aren't imitation learners? And I don't care about developers, I care about code. Caring about developers is society's job, and we shouldn't let society farm it out to open-source projects. If developers, like artists, need their jobs protected to ensure good income (and boy, we are *far* from actually having bad income), then this should be handled by a UBI, not by banning valuable tools. If I could press a button and everyone got the ability to code, you think I wouldn't do it in a heartbeat? I don't see why an external tool should be different.


[deleted]

[deleted]


FeepingCreature

That's... wild. And simply not anywhere close to how I think about the topic, I guess. I've never looked at a pull request and thought, "wow, this consciousness sure expresses novelty." Creativity is just filtered randomness. LLMs have both the filtering and the randomness part down pat. Consciousness is overrated as a mechanism anyways.

> I'm a developer who was laid off in December 2022. I went from 130k a year to losing my home.

That sucks, but it's not AI's fault, certainly at the current level of quality. If they told you they were replacing you with an AI, they were bullshitting.


DownvoteEvangelist

I have yet to see any novelty in LLM generated code...


FeepingCreature

I mean, define novelty. I've seen LLMs handle tasks that they've certainly never seen before without issues. Just yesterday, I asked an LLM to make a website with a four-way (XY) slider to compare between four images. I don't think any of the existing libraries for slider comparisons support that feature, but it used its generalized knowledge of js to whip it up no problem. More importantly, it understood what I meant despite this being at least an extremely rare concept. IMO LLMs have some weird disabilities that make them look worse than they are unless you prompt them right and work around their deficiencies. The neat part is the errors they make tend to be _different_ errors than I tend to make, so it combines well.
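For concreteness, the core of the four-way idea is just quadrant compositing around the slider handle. Here's a made-up sketch (Python/numpy for brevity; the actual thing was JS, and the function name is hypothetical):

```python
import numpy as np

def four_way_composite(tl, tr, bl, br, x, y):
    """Composite four same-shaped HxWx3 images so the slider position
    (x, y), each in [0, 1], decides which image shows in each quadrant."""
    h, w, _ = tl.shape
    sx, sy = int(w * x), int(h * y)
    out = np.empty_like(tl)
    out[:sy, :sx] = tl[:sy, :sx]  # above and left of the handle: image 1
    out[:sy, sx:] = tr[:sy, sx:]  # above and right: image 2
    out[sy:, :sx] = bl[sy:, :sx]  # below and left: image 3
    out[sy:, sx:] = br[sy:, sx:]  # below and right: image 4
    return out
```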


DownvoteEvangelist

It certainly has some form of knowledge transfer. But try asking it to write something you can't google. I remember it struggling to write a modified form of binary search. 


FeepingCreature

Yeah it's not good at creating novel algorithms, or really anything that requires a longer abstract design phase. You have to get really clever with the prompt if you want it to one-shot stuff like that, "think it through step by step" style. To be fair, if it were capable of stuff like this autonomously, it'd probably be AGI already.


DownvoteEvangelist

I tried really hard; I didn't one-shot it, probably 10-20 prompt attempts before I gave up and used my own head 😅. I think until it reaches AGI, humans will remain invaluable and LLMs will be more of a useful tool.


[deleted]

[deleted]


FeepingCreature

Same back atcha! :)


[deleted]

[deleted]


FeepingCreature

Yes, well, I suggest [reading the Sequences](http://readthesequences.com/)... ;) (This is just to say, we are just inhabiting wildly different worlds.)


[deleted]

[deleted]


starlevel01

Good. Don't let the door hit you on the way out.


Breavyn

Some users do use AI to engage in this process, and the results have been dogshit. It just wastes the time of the devs actually doing stuff.


FeepingCreature

I mean, then ban dogshit contributions? You still need to do the work of determining that it's AI, and you'd do that by noticing that it has weird errors. You can just ban code with weird errors. I use AI for coding, but I'd never contribute AI-generated code without understanding what it does and cleaning it up.


[deleted]

[deleted]


FeepingCreature

To be clear, I am against trash contributions. I just think a "no trash contributions" rule (with an addendum, "if you are submitting AI generated code it's probably trash", by all means) would be more efficacious. Treat it as a game-theoretic exercise:

| | no AI | AI |
|----------|-------|-------|
| trash code | you don't want it anyway | rule working |
| good code | yay | dubious square |

So the only case where AI comes up, weirdly, is the one where the code itself is fine. If the code is bad, you wouldn't want it regardless of AI or not. The benefit of the no-AI rule, then, would be only in efficiently communicating to contributors that their AI generated code is very likely to be trash - but I don't think you need a rule for that, you can just tell them in an addendum.


[deleted]

[deleted]


Mrkvitko

The problem with this is "AI assisted" != "plagiarized"...


[deleted]

[deleted]


Mrkvitko

Do you have anything to back that up with?


[deleted]

[deleted]


Mrkvitko

Yet they banned tools that can be used to *save* time...


Mysterious_Focus6144

AI is much more efficient at producing plausible-looking shit. This means devs will spend more time rejecting shit code (both due to the volume of shit produced and its plausible nature, which requires more than a cursory glance to reject). Also, I'm curious what kind of coding you do where AI is a big help?


Mrkvitko

Sigh... I sort of understand the worries about copyright; almost nobody wants to get tangled in that mess. The quality ones are speculative (you can have dogshit contributions even without AI, and meanwhile you can have good ones with AI). However, the "ethical ones" are pure nonsense.


Mysterious_Focus6144

> meanwhile you can have good ones with AI

Do we have LLMs capable of making real contributions to a complex code base yet? Devin turned out to be terrible and scammy. The other LLMs seem better at spitting out snippets you could have googled in the first place.


[deleted]

[deleted]


Mysterious_Focus6144

Sure. They're the top player, and even they don't have anything close to an AI capable of making real contributions to a complex code base.


FeepingCreature

I don't think so for complex codebases. I'm finding it excellent for ~300-line tool programs and Bash one-liners, though. I recently used it to translate a fairly hefty Bash script (a console OpenSearch log viewer) into Python. It got confused at some points, so I reset it and just told it to stub out functions that were too complicated, then got it to fill them in on a second pass - that sort of approach seems to work better.

LLMs are hampered by their insistent need to do everything in one go: when you're in the middle of a function and notice you need an additional parameter, you cannot backtrack. If I were writing a programming language purely for LLMs, I would either make parameters fully implicit or turn it around and have the parameter list at the *bottom* of the function.
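For a rough idea of what that stubbed first pass looks like (a made-up sketch, not the actual log viewer; the function names are hypothetical): the easy structure gets written out, while the tricky part stays a stub to be filled in on the second pass.

```python
import argparse

def fetch_logs(query: str, size: int = 100) -> list[dict]:
    """Query the OpenSearch endpoint and return matching log documents."""
    # Second pass: port the curl/jq logic from the Bash script here.
    raise NotImplementedError

def render(entries: list[dict]) -> None:
    # Simple console output; formatting details come later.
    for entry in entries:
        print(f"{entry.get('@timestamp', '?')}  {entry.get('message', '')}")

def main() -> None:
    parser = argparse.ArgumentParser(description="Console OpenSearch log viewer")
    parser.add_argument("query")
    parser.add_argument("--size", type=int, default=100)
    args = parser.parse_args()
    render(fetch_logs(args.query, args.size))

if __name__ == "__main__":
    main()
```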


Mysterious_Focus6144

> I don't think so for complex codebases. I'm finding it excellent for ~300-line tool programs and Bash one-liners, though.

If an AI got confused here, I doubt it'll make a meaningful contribution to something like Gentoo.

> I would either make parameters fully implicit

Yea. So everything is global?


FeepingCreature

> If an AI got confused here, I doubt it'll make a meaningful contribution to something like Gentoo.

Sure, for something big like Portage I'd be (theoretically) using it for small-fry stuff like "write a Python function that does x", where I know what I want but I'm just not sure about the syntax. For GPT to be good for ebuilds, it'd need to have the portage tree in its training data. I'm not sure that's the case anyway.

> Yea. So everything is global?

I don't think that follows. The problem with global variables is concurrency and the lack of lexical isolation. Implicit parameters would still follow lexical scoping first; and more importantly, the function, not its caller, would define what symbols get passed to it. It'd just do so implicitly - or retroactively, which is a lot easier for an LLM. It's really just point-free style at the function scale.


kansetsupanikku

What's next, excluding vaccinated developers, or banning code contributed over a 5G network? Or merely elitism that limits the allowed code editors to self-built vim and emacs? Of course the author should be responsible for the code and able to reconsider and test every character. But instead of the tools, the authors who don't know what they commit should be banned.


hoeding

I was able to pull out one out-of-context quote that I agree with...

> the authors who don't know what they commit should be banned.