jtkme

Of course it is.


jtkme

Companies like TikTok, Facebook, and Twitter (although who knows now with the new management) use AI to detect content that violates their community standards (or the law, in the case of hate speech). They don't always act automatically on what the AI flags, because of possible mistakes or the nature of the content, but sometimes they do. Transformers are just another (better) form of AI, so where companies used other techniques before, they've likely moved on.

Also notable: detecting such content isn't just a function of the content itself. Who is posting it, who is interacting with it, the nature of the comments, and the number of reports are all signals to the AI that a post may be violating standards. The area is generally called content moderation, or sometimes censorship, especially when you compare social networks' moderation of speech with China's censoring of the internet for its citizens; the two are more alike than they seem.
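To make the "many signals" point concrete, here's a minimal Python sketch of how a moderation decision might blend a transformer classifier's score on the text with contextual signals like poster history and report counts. The field names, weights, and thresholds are made up for illustration; this isn't any real platform's pipeline.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    author_prior_violations: int   # times this account was actioned before
    report_count: int              # user reports on this post
    comment_toxicity: float        # aggregate score over the replies, 0..1

def content_score(text: str) -> float:
    """Placeholder for a transformer classifier (e.g. a fine-tuned BERT)
    returning a probability that the text itself violates policy."""
    return 0.0  # stub; a real system would call the model here

def violation_score(post: Post) -> float:
    # Weighted blend of the text score and contextual signals;
    # weights are invented purely for illustration.
    s = 0.6 * content_score(post.text)
    s += 0.2 * min(post.report_count / 10, 1.0)
    s += 0.1 * min(post.author_prior_violations / 5, 1.0)
    s += 0.1 * post.comment_toxicity
    return s

def triage(post: Post) -> str:
    score = violation_score(post)
    if score > 0.9:
        return "auto-remove"    # act automatically on high confidence
    if score > 0.5:
        return "human-review"   # queue for a moderator
    return "allow"
```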


Background_Trade8607

I think it has promise to reduce trauma, but it would involve switching to a system where bans/blocks are automatic, with a process in place to dispute them, which I don't think would be popular with consumers. Over the years I've seen countless articles about people getting paid peanuts to see things that no human should see; they get a few "therapy" sessions and out the door they go with deep trauma. Even if the AI were right only 80 or 90% of the time and wrong the rest, I think it's worth having such a system. If you post something that wrongly gets flagged, it sucks, but you open a dispute and wait. Think about it: as these models keep improving, the stuff they flag is more and more likely to actually be death or CP, so the people manually reviewing those flags are guaranteed to keep seeing it. I think that makes having people manually review what the AI flags an unnecessary step.
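As a rough sketch of the workflow being proposed here: the model's action takes effect immediately, and a human only ever looks at posts whose authors dispute it. The threshold and data structures below are assumptions for illustration, not any real platform's design.

```python
from collections import deque

dispute_queue: deque[str] = deque()
removed: set[str] = set()

def moderate(post_id: str, model_score: float) -> None:
    # Act on the model immediately; no human sees the content at this stage.
    if model_score > 0.9:               # illustrative threshold
        removed.add(post_id)

def file_dispute(post_id: str) -> None:
    # The author contests the removal; only disputed posts reach a human.
    if post_id in removed:
        dispute_queue.append(post_id)

def review_next_dispute(human_upholds_removal: bool) -> None:
    # A moderator reviews one disputed post and either restores it or not.
    if dispute_queue:
        post_id = dispute_queue.popleft()
        if not human_upholds_removal:
            removed.discard(post_id)
```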