This whole #Tumblr
debacle just goes to show that AI-led community moderation is not only a failure but a dangerous one at that.
It would seem that AI, as sold to the general public through the mass media, is still far off in the future and certainly not ready to be making decisions on behalf of humans.
The problem goes further than Tumblr: various governments are beginning to clamp down on social media, with the UK recently demanding that automation be put in place to take down inappropriate content, citing child safety as the moral panic of choice to get the public on side.
Moderators should be thought of as akin to doctors in the real world. In the UK, for example, there are on average 2.8 doctors for every 1,000 people; how many moderators do you think organisations such as Tumblr, Twitter or Facebook have for every 1,000 accounts? This is why automation is "key" to their moderation, at a cost that enables their service to be "free."
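To make the comparison concrete, here is a minimal sketch of the ratio argument. The doctor figure comes from the text above; the platform numbers are purely hypothetical, chosen only to illustrate the scale of the gap:

```python
def per_thousand(staff: int, population: int) -> float:
    """Return staff per 1,000 people or accounts."""
    return staff / population * 1000

# UK doctors (figure from the text): roughly 2.8 per 1,000 people.
uk_doctors = per_thousand(2_800, 1_000_000)

# Hypothetical large platform: say 10,000 human moderators
# for 500 million accounts (illustrative numbers only).
platform_mods = per_thousand(10_000, 500_000_000)

print(f"Doctors per 1,000 people: {uk_doctors:.2f}")
print(f"Moderators per 1,000 accounts (hypothetical): {platform_mods:.3f}")
```

Even with generous assumptions, the moderator ratio comes out orders of magnitude below the doctor ratio, which is exactly the shortfall automation is being asked to paper over.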
Nobody wants to pay for their social media, do they?
Arguably the Fediverse is a solution to this: with many instances having 10K or fewer active users, they can often keep the number of moderators per 1,000 users within an acceptable range.
However, moderation isn't free, and in the case of the Fediverse it is entirely donated time, kindly given up by volunteers. As instances grow, there may come a point when we have to take a hard look at ourselves and begin crowdfunding the Fediverse to keep it a pleasant place to be social.