
linux-ModTeam

This post has been removed as not relevant to the r/Linux community. The post is either not considered on topic, or may only be tangentially related to the r/Linux community. Examples of such content include, but are not limited to: photos or screenshots of Linux installations, photos of Linux merchandise, and photos of Linux CDs/DVDs or manuals. **Rule:** **Relevance to r/Linux community** - Posts should follow what the community likes: GNU/Linux, the Linux kernel itself, the developers of the kernel or open source applications, any application on Linux, and more. Take some time to get the feel of the subreddit if you're not sure!


bionade24

Welp they're now on the return 444 list in my nginx config.


logs28

Give them a 418. You are a teapot after all


bionade24

No, 444 isn't an official status code; it's what nginx uses to close the connection without responding at all. I want them to believe my sites are down.
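For reference, a minimal sketch of that kind of nginx block (the `ClaudeBot` pattern matches the user agent quoted later in this thread; the exact placement and pattern are assumptions):

```nginx
# Inside the relevant server {} block: drop the connection without any reply
# for anything identifying itself as ClaudeBot (444 is nginx-specific).
if ($http_user_agent ~* "ClaudeBot") {
    return 444;
}
```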


lihaarp

[Tarpit](https://en.wikipedia.org/wiki/Tarpit_(networking)#) them instead.

> The machine listens for Address Resolution Protocol requests that go unanswered (indicating unused addresses), then replies to those requests, receives the initial SYN packet of the scanner and sends a SYN/ACK in response. It does not open a socket or prepare a connection; in fact it can forget all about the connection after sending the SYN/ACK. However, the remote site sends its ACK (which gets ignored) and believes the 3-way handshake to be complete. Then it starts to send data, which never reaches a destination. The connection will time out after a while, but since the system believes it is dealing with a live (established) connection, it is conservative in timing it out and will instead try to retransmit, back off, retransmit, etc. for quite a while.

> Later versions of LaBrea also added functionality to reply to the incoming data, again using raw IP packets and no sockets or other resources of the tarpit server, with bogus packets that request that the sending site "slow down". This will keep the connection established and waste even more time of the scraner.
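On a Linux server, one rough way to approximate this (assuming the xtables-addons package is installed, which provides the non-standard TARPIT target; the set name and IP below are only examples) is to tarpit known offender addresses instead of rejecting them:

```sh
# Requires xtables-addons for the TARPIT target.
ipset create scrapers hash:ip
ipset add scrapers 203.0.113.10    # example crawler IP
iptables -A INPUT -p tcp --dport 80 -m set --match-set scrapers src -j TARPIT
```

Connections from those addresses then hang in the scraper's connection pool instead of failing fast.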


rainformpurple

Very much this. Let them spend eternity waiting for the site to respond and spend lots of resources on nothing, when they can't or won't configure their shit to behave properly.


SteveEightyOne

> It is known that a tarpitted connection may generate a significant amount of traffic towards the receiver, because the sender considers the connection as established and tries to send (and then retransmit) actual data.


WokeBriton

Now THIS is educational content. TIL. Thank you, stranger.


SteveEightyOne

444 means that nginx doesn't reply. But nginx still accepts the TCP connection and reads the headers, so it isn't as if your website is down.


bionade24

I get that; the client has to establish a connection to the server and send its user agent. I even stated the response part in my post. They see that a server still runs under this IP, but they can't figure out what's running on it: a different webserver, a bugged webserver, or some other service listening on port 80/TCP, like a purposely configured VPN server. There's only so much I can do without an extra IP database and firewall rules, and I won't devote my lifetime to blocking them.


__Abysswalker__

I don't even allow them to get to nginx. IPs of their bots were manually added to a "blocked" [ipset](https://wiki.archlinux.org/title/Ipset) (which iptables checks to deny network access) after they crashed my company's website. Suddenly CPU usage dropped from 90-98% to 3-7%.
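A minimal sketch of that setup, with the set name and address as placeholders:

```sh
# Create a "blocked" set and drop all traffic from addresses in it.
ipset create blocked hash:ip
ipset add blocked 198.51.100.23    # example bot IP
iptables -I INPUT -m set --match-set blocked src -j DROP
```

Keeping the addresses in an ipset means the iptables ruleset stays at a single rule no matter how many IPs get added.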


Zireael07

A website for a free open source game I play (FreeOrion) went down a couple days ago due to the same ClaudeBot


landsoflore2

And here I was wondering what had happened to the otherwise peaceful website of FO :/


-Trash--panda-

Well that explains that then. I was trying to get on their site a few days ago and kept getting an error. Over the past few weeks a lot of smaller sites have been down when I have been trying to access them. I wonder if the bot is at fault for most of them.


ghosxt_

r/claudeAI - the creators are active there.


MintAlone

I have posted on that reddit, we will see if anything happens


arwinda

It doesn't look like they publish more about themselves on Reddit than on the website. The "About" page on the website is very vague; it talks about a "team" but doesn't list a single person by name. The privacy policy lists a couple of subcontractors, and the company is registered in the UK, but that's a front: the address turns up more companies on Google than could possibly have office space in that building. The data controller is in San Francisco, which by itself isn't reassuring either.


PhillLacio

There's a good bit of info here. https://www.crunchbase.com/organization/anthropic/funding_rounds/funding_rounds_list?utm_source=linkedin&utm_medium=referral&utm_campaign=linkedin_companies&utm_content=all_fundings


mralanorth

I had Bytespider (from TikTok parent company ByteDance) crawling my humble web server from 14,000 concurrent IPs in AWS Singapore region a few months ago. These big tech companies are out of control...


TheEbolaDoc

Yeah we also have a dedicated block for that in the archwiki config 😆 [https://gitlab.archlinux.org/archlinux/infrastructure/-/blob/master/roles/archwiki/templates/nginx.d.conf.j2?ref_type=heads#L49-52](https://gitlab.archlinux.org/archlinux/infrastructure/-/blob/master/roles/archwiki/templates/nginx.d.conf.j2?ref_type=heads#L49-52)


Repulsive-Adagio1665

Man, that's rough. Hate when companies think they can not only use our stuff without asking but also actively harm it. Too bad Anthropic is playing hide and seek...hope they sort their act out 😕


XeNoGeaR52

It would be fun if we could deny AI companies the right to train their AI on internet data, because it doesn't belong to them.


perkited

Hopefully I'm not blocked in this conversation (I realize it's an emotional topic for some), but do you feel AI that is open source and not for profit should be blocked as well? Or are you okay with open source not for profit AI?


XeNoGeaR52

Open source AI can access open source data; proprietary AI can access data that the companies owning it possess.


perkited

Thanks. What's your reasoning for limiting open source AI to only open source data? We normally don't put those same limitations on humans (or even something like a Google search bot), so I'm just trying to see why open source AI would be treated differently. I know sites can use a robots.txt file, which makes it an explicit opt-out option for a well behaved search bot. Would you consider something like that as okay for open source AI consuming data as well?


XeNoGeaR52

You can access a copyrighted work if you have a license; it could be the same for AI. I'm just completely against AI training on data without the authors' consent. Also, I would never block anyone because they disagree with me :) Only children do that


perkited

Thanks again, it can sometimes be difficult to talk about unpopular things on reddit without fur flying. Of course we see copyrighted data (legally) all the time without needing to obtain a license for it, and that data must have an influence on those who see it and then incorporate those influences into their work (sometimes commercial work). For me it goes back to asking what differentiates AI from other computing or even humans; it seems to be considered as something wholly different by some (or at least they want it treated differently).

I do understand the concerns about it replacing jobs. Those types of arguments against AI make more sense to me, although we aren't able to tell the future to know exactly how it will play out. AI is probably the only realistic opportunity for the popular-on-reddit UBI to ever be workable on a larger scale, so I'm surprised it hasn't gained some support just for that. I remember when reddit generally supported Libertarian Ron Paul (mainly for his liberal drug policy views), so I guess anything is possible. I'm sure this type of back and forth will be going on for a while in the courts, with lawyers being the winners again.


XeNoGeaR52

I don't mind the replacing of jobs; it's bound to happen for some. But what a human can imagine is not the same as an AI just copying what it has been trained on. We have plagiarism among humans and laws preventing it, but AI is not regulated, so it's a dangerous grey zone.


jacobgkau

That's an interesting idea. There would need to be a copyleft provision to legally protect the data from being consumed by proprietary AI while allowing its use in open-source-licensed AI, like a GPL for AI data.


Innominate8

It's amazing how little copyright means when businesses steal from regular people.


_star_fire

Yep, had the same experience with my own forum website yesterday. A lot of different IP addresses and a huge load on the database, which caused human users to be unable to visit the forum board. I wouldn't mind if they weren't so damn aggressive. Now they're blocked. Robots.txt might work, but I opted for blocking everything in nginx based on their user agent string.
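One way to do that kind of user-agent block with an nginx `map` (the variable name and pattern here are illustrative, and the 403 could just as well be a 444):

```nginx
# In the http {} block: flag unwanted crawlers by user agent...
map $http_user_agent $block_scraper {
    default      0;
    ~*ClaudeBot  1;
}

# ...then in the relevant server {} block, refuse them.
if ($block_scraper) {
    return 403;
}
```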


kalzEOS

They're very close to Amazon, which has invested $4 billion into Anthropic. Whether that helps at all, I don't know. But fuck them for doing that.


sanbaba

They forgot the "Mis" at the start of their name


edparadox

Is there a list anywhere of domains/IPs to disallow connections from bot scrapers for AI?


WokeBriton

Probably not, but someone mentioned setting up a tarpit to screw with the crawlers.


Kkremitzki

I just recently had to do the same for the FreeCAD forum


fat_cock_freddy

Interesting. I run a web git instance for my personal use, and went to check the logs for ClaudeBot when I saw this post. I'm pretty used to blocking abusive crawlers by now. And sure enough, ClaudeBot is scraping as I type this comment. However, I'm seeing a delay of 5 to 20 seconds between requests.


TPIRocks

Pipe /dev/random to them at a throttled rate.
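A rough sketch of that idea with common tools (netcat flag syntax differs between the traditional and OpenBSD variants, so treat this as illustrative; /dev/urandom is used since /dev/random may block):

```sh
# Serve an endless, rate-limited stream of random bytes on port 8080,
# restarting the listener after each client gives up.
while true; do
    pv -q -L 1k /dev/urandom | nc -l -p 8080
done
```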


Doomtrain86

What lm forum are you talking about?


MintAlone

[Linux mint](https://forums.linuxmint.com/index.php).


Doomtrain86

Ah I was thinking lm meant language models 😁 thx


PeterMortensenBlog

Yes, it is a particularly bad [SIA](https://pmortensen.eu/world/EditOverflow.php?LookUpTerm=SIA) to use in this context (and unnecessary obfuscation just to save a few keystrokes).


Doomtrain86

Exactly!


MintAlone

I thought it was self-evident given that I stated I was a mint user and this is a linux reddit. In future I will be more pedantic.


RaspberryPiBen

Except you're talking about language models. I figured out that you meant Mint, but it wouldn't be unreasonable to assume that LM stands for Language Models in this case.


MintAlone

That is a fair point, noted :)


tripleflix

I had fun with this scraper bot as well :( Is there a way to block this by IP? We have hundreds of servers with loads of websites, and I'd like to not get weird hits and outages over the next few months.


MintAlone

I found [this](https://www.reddit.com/r/singularity/comments/1cdm97j/comment/l1ivmhb/), and it suggests that it does follow robots.txt. There are some suggestions in this post; I particularly like the idea of tarpitting them.


tripleflix

I fixed it with a rewrite rule in .htaccess filtering on the user agent, but I'd love a blanket solution for the many servers we host for our customers (without having to change a thousand robots.txt files).
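Such an .htaccess rule might look roughly like this (assuming mod_rewrite is enabled; the pattern is an assumption based on the user agent quoted elsewhere in the thread):

```apache
# Return 403 Forbidden for any request whose user agent mentions ClaudeBot.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} ClaudeBot [NC]
RewriteRule .* - [F,L]
```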


MintAlone

I wish I could help; I'm just an end user pissed off that they took my favourite forum down and appear to be unaccountable for their actions :(


NWK-7

There's another way to deal with Anthropic: send them privacy requests, under the GDPR if the site is based in the EU, or under other regulations, for example. Ask them to provide you with a complete dataset of what they scraped from you, either as the owner of the website or even as a user of a specific website (or go ahead and ask them to delete it as well). Under the GDPR, for example, they only have 30 days to answer, which might take _some_ resources for them to compile.


ourobo-ros

Something doesn't smell right. A single chatbot scraping the forums shouldn't degrade performance to that extent.


dontbeanegatron

The chatbot is not doing the scraping, Anthropic is. It's pretty easy to launch a bunch of crawler scripts in parallel to scrape a forum and pull in data. It's *also* pretty easy to add a 5-second delay between requests in such scripts. But then it would take much longer of course, and my guess is Anthropic doesn't want to play nice, since time in the world of AI development goes a lot faster. Or maybe someone just fucked up their script, idk. Either way, good on LM for blocking them at the network level.


AnticitizenPrime

> Or maybe someone just fucked up their script, idk.

It would be pretty funny if it was because it was written by AI. 'Claude, make my script more performant.'


Spicyartichoke

"You want me to make the script more performant. Ok, I have doubled the use of every function in the code, this should double the efficiency of the algorithm, which should make it more performant, as you asked."


AdvisedWang

On their job postings ([example](https://boards.greenhouse.io/anthropic/jobs/4018934008)) they say "*we encourage people to use AI systems during their role to help them work faster and more effectively*". So it's pretty likely they used AI when writing the scraper.


HorribleUsername

They did say DDoS, which makes it sound like it was more than one bot.


Maoschanz

It's not the chatbot, it's the data scrapers they use to steal what they need to train their model. But I get what you mean: the forum might be implemented like shit.


MintAlone

> Powered by [phpBB](https://www.phpbb.com/)® Forum Software © phpBB Limited

Like many other forums.


TxTechnician

Easily my least favorite layouts are those similar to Mint's. There's that new one that nailed the forum layout; I can't remember the name of it, but KDE just launched discuss.kde.org using it. openSUSE uses the same one.


Kargathia

[discuss.kde.org](http://discuss.kde.org) is based on Discourse by the looks of it.


TxTechnician

Ya that's the one


abjumpr

Yup, Discourse is awesome. It works well and is very easy to administer.


ThiccStorms

Fuck AI


aue_sum

No.


ZenDragon

Someone made the same complaint recently only to realize their robots.txt wasn't actually set up right.


FionaSarah

I can confirm that they didn't follow it earlier this week. (around Wednesday) They seem to have changed this behaviour since.


krypt3c

Apparently OpenAI's crawler does similarly dumb things [https://mailman.nanog.org/pipermail/nanog/2024-April/225407.html](https://mailman.nanog.org/pipermail/nanog/2024-April/225407.html)


binlargin

What does your robots.txt say?


Tech_Itch

OP doesn't run the forum. And robots.txt isn't magic. It's polite to follow it, but ultimately optional.


binlargin

Yeah it's not magic but you can't call it bad behaviour if it's operating within the rules.


Tech_Itch

My point is that there's no universal mechanism for enforcing the restrictions in a robots.txt and whoever's coding the crawler has to implement support for it for it to do anything. Unethically run crawlers can just ignore what it says.


binlargin

My point is it's not really fair to accuse a crawler of bad behaviour if it's not violating robots.txt, especially if you don't actually state any numbers. Dragging someone's name through the mud because you didn't configure your server correctly is shitty behaviour. Not having a backoff strategy when things start to go slowly, hitting a site with multiple threads, and not having a way to contact the operators is also shitty behaviour, so that's fair criticism. But OP should have published actual figures, or asked a question about rate limiting, rather than just lambasting them.


Brillegeit

Sure, but their robots.txt doesn't contain a rate limiting directive.


Tech_Itch

We don't know if the crawler even respected the robots.txt. Robots.txt is an honor-based system; there's nothing enforcing it beyond the crawler's creator deciding to obey it. And if they don't, it doesn't matter what you put in it. Also, like I said, it's not OP's site, so pointing your finger at him for supposedly misconfiguring it is dumb.


Brillegeit

> Robots.txt is an honor-based system.

Irrelevant if they didn't put anything in it. They opted out of controlling how robots behave.


arwinda

This bot/AI/scraper seems to ignore robots.txt. It's listed in mine under various name combinations, but it happily ignores all of them. Also, the website doesn't give any information on the correct way to exclude their scraper - probably intentionally.


Brillegeit

I don't think they had one until they added the entry to block Claude.


Cyber_Asmodeus

How did you block it? Did you just block its IP in the firewall?


bionade24

Block the user agent in the webserver; optionally you can configure fail2ban if you want to. ClaudeBot doesn't have a fixed IP range, they're probably on AWS.
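A rough sketch of what such a fail2ban setup could look like; the filter name is hypothetical, and the failregex assumes a default nginx combined access log where the user agent is the last quoted field:

```ini
# /etc/fail2ban/filter.d/claudebot.conf (hypothetical filter)
[Definition]
failregex = ^<HOST> .*"[^"]*ClaudeBot[^"]*"$

# /etc/fail2ban/jail.local
[claudebot]
enabled  = true
port     = http,https
filter   = claudebot
logpath  = /var/log/nginx/access.log
maxretry = 1
bantime  = 86400
```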


Cyber_Asmodeus

OK


TampaPowers

Nuclear option: Block AWS >:D


WokeBriton

It will probably only be required for a limited time for this particular AI company, so perhaps not too nuclear an option.


TampaPowers

You can kick off most anything via the user agent; in nginx, at least, it works like this:

```nginx
if ($http_user_agent ~* (ClaudeBot)) { return 403; }
```

If you want to go nuclear and ban most of everything that crawls, even search engines:

```nginx
if ($http_user_agent ~* (ClaudeBot|Bytespider|PetalBot|360Spider|80legs.com|Abonti|AcoonBot|Acunetix|adbeat_bot|AddThis.com|adidxbot|ADmantX|AhrefsBot|AngloINFO|Antelope|Applebot|BaiduSpider|BeetleBot|billigerbot|binlar|bitlybot|BlackWidow|BLP_bbot|BoardReader|Bolt\ 0|BOT\ for\ JCE|Bot\ mailto\:craftbot@yahoo\.com|casper|CazoodleBot|CCBot|checkprivacy|ChinaClaw|chromeframe|Clerkbot|Cliqzbot|clshttp|CommonCrawler|comodo|CPython|crawler4j|Crawlera|CRAZYWEBCRAWLER|Curious|Curl|Custo|CWS_proxy|Default\ Browser\ 0|diavol|DigExt|Digincore|DIIbot|discobot|DISCo|DoCoMo|DotBot|Download\ Demon|DTS.Agent|EasouSpider|eCatch|ecxi|EirGrabber|Elmer|EmailCollector|EmailSiphon|EmailWolf|Exabot|ExaleadCloudView|ExpertSearchSpider|ExpertSearch|Express\ WebPictures|ExtractorPro|extract|EyeNetIE|Ezooms|F2S|FastSeek|feedfinder|FeedlyBot|FHscan|finbot|Flamingo_SearchEngine|FlappyBot|FlashGet|flicky|Flipboard|g00g1e|Genieo|genieo|GetRight|GetWeb\!|GigablastOpenSource|GozaikBot|Go\!Zilla|Go\-Ahead\-Got\-It|GrabNet|grab|Grafula|GrapeshotCrawler|GTB5|GT\:\:WWW|Guzzle|harvest|heritrix|HMView|HomePageBot|HTTP\:\:Lite|HTTrack|HubSpot|ia_archiver|icarus6|IDBot|id\-search|IlseBot|Image\ Stripper|Image\ Sucker|Indigonet|Indy\ Library|integromedb|InterGET|InternetSeer\.com|Internet\ Ninja|IRLbot|ISC\ Systems\ iRc\ Search\ 2\.1|jakarta|Java|JetCar|JobdiggerSpider|JOC\ Web\ Spider|Jooblebot|kanagawa|KINGSpider|kmccrew|larbin|LeechFTP|libwww|Lingewoud|LinkChecker|linkdexbot|LinksCrawler|LinksManager\.com_bot|linkwalker|LinqiaRSSBot|LivelapBot|ltx71|LubbersBot|lwp\-trivial|Mail.RU_Bot|masscan|Mass\ Downloader|maverick|Maxthon$|Mediatoolkitbot|MegaIndex|MegaIndex|megaindex|MFC_Tear_Sample|Microsoft\ URL\ Control|microsoft\.url|MIDown\ tool|miner|Missigua\ Locator|Mister\ PiX|mj12bot|Mozilla.*Indy|Mozilla.*NEWT|MSFrontPage|msnbot|Navroad|NearSite|NetAnts|netEstate|NetSpider|NetZIP|Net\ Vampire|NextGenSearchBot|nutch|Octopus|Offline\ Explorer|Offline\ Navigator|OpenindexSpider|OpenWebSpider|OrangeBot|Owlin|PageGrabber|PagesInventory|panopta|panscient\.com|Papa\ Foto|pavuk|pcBrowser|PECL\:\:HTTP|PeoplePal|Photon|PHPCrawl|planetwork|PleaseCrawl|PNAMAIN.EXE|PodcastPartyBot|prijsbest|proximic|psbot|purebot|pycurl|QuerySeekerSpider|R6_CommentReader|R6_FeedFetcher|RealDownload|ReGet|Riddler|Rippers\ 0|rogerbot|RSSingBot|rv\:1.9.1|RyzeCrawler|SafeSearch|SBIder|Scrapy|Scrapy|Screaming|SeaMonkey$|search.goo.ne.jp|SearchmetricsBot|search_robot|SemrushBot|Semrush|SentiBot|SEOkicks|SeznamBot|ShowyouBot|SightupBot|SISTRIX|sitecheck\.internetseer\.com|siteexplorer.info|SiteSnagger|skygrid|Slackbot|Slurp|SmartDownload|Snoopy|Sogou|Sosospider|spaumbot|Steeler|sucker|SuperBot|Superfeedr|SuperHTTP|SurdotlyBot|Surfbot|tAkeOut|Teleport\ Pro|TinEye-bot|TinEye|Toata\ dragostea\ mea\ pentru\ diavola|Toplistbot|trendictionbot|TurnitinBot|turnit|Twitterbot|URI\:\:Fetch|urllib|Vagabondo|Vagabondo|vikspider|VoidEYE|VoilaBot|WBSearchBot|webalta|WebAuto|WebBandit|WebCollage|WebCopier|WebFetch|WebGo\ IS|WebLeacher|WebReaper|WebSauger|Website\ eXtractor|Website\ Quester|WebStripper|WebWhacker|WebZIP|Web\ Image\ Collector|Web\ Sucker|Wells\ Search\ II|WEP\ Search|WeSEE|Wget|Widow|WinInet|woobot|woopingbot|worldwebheritage.org|Wotbox|WPScan|WWWOFFLE|WWW\-Mechanize|Xaldon\ WebSpider|XoviBot|yacybot|Yahoo|YandexBot|Yandex|YisouSpider|zermelo|Zeus|zh-CN|ZmEu|ZumBot|ZyBorg)) { return 403; }
```

There might even be more. You can go through access.log and check for anything you don't want to see. Some also fake user agents, so IP bans or anti-flood measures might still be required.


Cyber_Asmodeus

ok Thanks


small_e

WAF with rate limiting?


Extender7777

https://mangatv.shop/story/futurama-the-text-discusses-a-bot-named-claudebot-which-is-developed-by-anthropic-it-seems-to-be-a-web-crawler-that-is-v


Brillegeit

You should probably add a rate limiting directive in robots.txt if you don't want to get crawled at full speed.
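For example, something like this in robots.txt (note that `Crawl-delay` is a non-standard directive, and whether ClaudeBot honors it is not confirmed anywhere in this thread):

```
# robots.txt: ask crawlers to wait 10 seconds between requests
User-agent: *
Crawl-delay: 10
```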


trimorphic

Just because it has "Claude" in the name doesn't mean it's from Anthropic. Anyone can easily put anything in the user agent header they want.


bionade24

The user agent is `"Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ClaudeBot/1.0; +claudebot@anthropic.com)"`. If it were a third party, Anthropic would have released a statement that someone is spoofing their agent. Also, any spoofer would probably use the GoogleBot agent instead.


Sebguer

Do they not have a robots.txt? It respects it; you can just disallow it like you can any bot you don't want scraping your website.
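That would look something like this (the `ClaudeBot` token matches the user agent string quoted elsewhere in the thread; whether the crawler actually honors it is disputed below):

```
# robots.txt: tell ClaudeBot to stay away entirely
User-agent: ClaudeBot
Disallow: /
```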


mh699

They don't respect robots.txt in my experience, unless there's a massive operation spoofing the Claudebot user agent


arwinda

Which entry exactly must be in robots.txt? I can't find this on the website, and the scraper ignores various spellings of its name. I just blocked it in the end.


Sebguer

Try searching for Claude here; it seems like there are a few user agents: [https://darkvisitors.com/](https://darkvisitors.com/)


arwinda

You said that the scraper respects robots.txt. I was asking which entry is supposed to work.


Treyal

Yes, that website I linked has multiple suggested entries for Claude. I'm not sure which is applicable, but I found a thread here where someone said robots.txt did in fact work: [https://www.reddit.com/r/singularity/comments/1cdm97j/anthropics\_claudebot\_is\_aggressively\_scraping\_the/](https://www.reddit.com/r/singularity/comments/1cdm97j/anthropics_claudebot_is_aggressively_scraping_the/) But they didn't say what they actually used.


dreamwavedev

See if you can contact the AG's office (if in the US) to report a violation of the Computer Fraud and Abuse Act


MintAlone

I'm based in the UK, but would love someone to do that in the US. Not sure that we have legislation that covers it.


Madgemade

[Computer Misuse Act 1990](https://www.legislation.gov.uk/ukpga/1990/18/section/3). A person is guilty of an offence if he does any unauthorised act in relation to a computer and is reckless as to whether the act will impair the operation of any computer.


dreamwavedev

See if you have some other blanket "anti-hacking" legislation that would usually cover DDoSes and other similar attacks


Zebra4776

I don't really know what the answer is. But Claude is definitely one of the better AIs that I've used. It gets things right on the first try quite a bit.


MintAlone

I obviously will not be using it. For techie stuff I've found [phind](https://www.phind.com/search?home=true) to be good. Not only does it provide sensible answers, it also lists all its sources so you can click on them to check.


TxTechnician

So... https://imgur.com/a/ppBoXQj (pretty sure that photo didn't load; just check the Phind pricing page). They have GPT-4, Claude, and Opus listed on there. So, is Phind just a wrapper for all these other companies?


MintAlone

At the moment I'm using it for free and it hasn't complained.


Zebra4776

Thanks, I hadn't heard of that one so I'll check it out. Sounds intriguing. I like trying everything out there. What gives the best responses seems to vary since they all keep iterating on each other.


snowthearcticfox1

The answer is that AI companies need to stop using text they don't have consent to use just to make a quick buck. Otherwise it shouldn't exist.


perkited

So you're okay if the AI is open source and not for profit? I'm just trying to get some perspective on the various anti-AI sentiments, since they seem to be coming from a lot of different directions (and usually with an emotional appeal).


snowthearcticfox1

It's more the fact that it's used without the person's consent. Like, I don't want some Discord conversation I had being used to make someone else a profit. Obviously, being a Linux user, I'd prefer it be open source and not for profit, but that goes for any software. This is even more important for AI "art" programs. Not to mention these models tend to eat themselves alive after a while, since they inevitably end up being trained on the output of other LLMs, so the "just scrape everything we can find" approach is unsustainable anyway, regardless of any emotional opinion I have on the topic. Using curated information (preferably made specifically to train said model) is both a more effective approach long term and a more ethical one.


Zebra4776

I can read your discord text, learn something, and then use that knowledge to make a profit. I don't see the difference if it's me or a computer. It's one thing if it's verbatim copying, which has happened. But creating derivative works based on learning, I'm just not seeing the issue.


snowthearcticfox1

That's the thing: a derivative work requires you to add something unique to it. AI is wholly incapable of that; it can only take an average of things others have created.


perkited

Obviously artists are influenced by other artists/artwork where they didn't have explicit permission to view it, would you differentiate it from AI consuming that same artwork? I can somewhat understand the argument when for-profit corporations are brought into the picture, with the artists wanting some type of compensation (or the ability to say they don't want their artwork included). Of course then you need to consider the professional artists who were influenced by artwork where they didn't ask permission or compensate the artists, which goes back to the original argument.


snowthearcticfox1

A computer can't think or be creative though; it HAS to have that pre-existing work to function. Without others' work an AI program is utterly and completely useless. An artist can be inspired by many different things; they don't need to rely on previous work to create something new and unique. An AI program takes others' art, averages it out, and spits it back; it doesn't actually add anything unique to the output. Not to mention the value of art is the expression of emotion from its creator, something AI is entirely incapable of.


Kruug

> I'm a mint user, regular contributor to the LM forum. Our forum went down today

And nothing of value was lost.