This post is to clarify Pillowfort’s position on sexually explicit
visual art, particularly art depicting minors. We apologize for not
clarifying our position on this earlier; we have been working on
revising the ToS to address these issues, but an incident arose today
that required us to take action on this matter, so we are putting
things in plain terms now.
As of this statement, Pillowfort will not allow explicit
visual art of characters that appear to be underage. Now, due to the
inherently subjective nature of trying to determine the age of a
fictional character in a stylized medium, the “gray area” cases will be
up to the discretion of the moderator reviewing the content. Broadly, if
your characters are close to 18 and could plausibly or arguably be of
age based on appearance, that will be allowed. We are not going to comb
through the wiki of a particular fandom to try and find the canonical
age of a fictional character and then try to figure out if the character
in the art piece is supposed to be represented at their canonical age
or not. Similarly, many works of art treat sexual activity as a theme
of adolescence and coming-of-age; you can surely think of many movies,
shows, etc. you’ve seen that featured underage characters in a sexual
relationship, some even explicitly. And even if you wanted to argue
that art exploring adolescent sexuality is fine and it’s ‘just
pornography’ that should be censored, we aren’t going to be the
arbiters of what is art vs. ‘merely’ pornography. Therefore, sexual
depictions of characters in later stages of adolescence may broadly be
considered ‘safe.’
However, any explicit or sexual art
of a character that appears to be physically pre-pubescent or barely
pubescent (i.e. not plausibly or arguably of age, or even close to it)
will be prohibited. We realize this decision may anger some people, but
we have devoted a considerable amount of thought to this decision and we
feel this is the best approach for our community.
Importantly,
we don’t want people to worry that this heralds a broader ban on
‘problematic’ content. When it comes to fictional depictions or
descriptions of divisive subjects such as incest, rape, abuse, etc., we
do not intend to restrict that content, for largely the same reasons as
discussed above with regards to ‘gray-area’ underage content. We
understand these subjects can be distressing for many, and we do request
that people posting content that is potentially upsetting tag such
posts with the relevant terms so that it can be avoided by those who
don’t wish to view such content. The only situation we foresee
penalizing the posting of this content would be a case already covered
in our ToS, where a bad-faith user appears to be intentionally spamming
completely unrelated tags or communities with violent or sexual content
with the intent or result of disrupting the usual functioning of that
tag or community.
We will update the ToS to reflect
these new guidelines soon. As always, if these new guidelines are
unsatisfactory to you and you want a refund for your PayPal payment to
us, you can email us at info@pillowfort.io and we will process your refund as soon as we can.
Why is everyone in the notes complaining this doesn’t make sense? This makes perfect sense.
@the people who are asking why l*li and other art like it has to be banned: it’s illegal in Canada and many other countries. I am sure PF doesn’t want to face legal battles over this kind of thing.
@the people who are complaining that they are going to/should ban more: the other fictional content described is not illegal the way l*li is and will not result in lawsuits for PF.
This policy will not affect anyone short of those drawing toddlers because that is the only thing that could EVER hold weight in a COURT CASE. Your Harry Potter drawings are safe, your DxD art is safe. No one will be coming for you. This is not a “slippery slope.”
Only two kinds of people won’t be happy with PF, genuine creeps and antis. And I am 100% fine with that.
Now, the alarming aspect of this story is that the very same technology is probably what tumblr is using to identify porn. If it can’t tell that an empty field is not, in fact, full of sheep, what hope do we have that it can tell an empty room isn’t full of writhing human forms engaged in passionate coitus?
this really does sound like an episode of black mirror
This is gonna produce some absolutely baffling pornography.
…. oh my fucking god they actually are using open source software. They’re using a fucking one-layer unidirectional bicategory tag-trained neural network. This will never work. Literally, it will never work. There’s just not enough algorithmic complexity to do what they’re asking of it. I bet you I could prove on a mathematical level that this joke of a neural net fundamentally lacks the abstraction necessary to do its job.
This will never get better. Their algorithm will never stop fucking up, it will never actually flag porn reliably, and it will always require a massive quantity of human hours to deal with the deluge of mistagged pictures. This isn’t just a case of an insufficiently trained algorithm, it’s just … this is the most basic neural network you can make. It probably has a lot of neurons and loads of training data, but you can’t just brute force this kind of stuff. One layer of neurons is just Not Enough.
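To make the capacity point concrete, here’s a toy sketch of my own (nothing from Tumblr’s actual code, and the task is made up): a network with no hidden layer is just a linear classifier, and even the four-point XOR pattern defeats it, no matter how long it trains, because XOR isn’t linearly separable.

```python
import numpy as np

# Toy illustration of the one-layer capacity limit: logistic regression
# (a single layer of weights, no hidden layer) trained on XOR.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])  # XOR labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0
for _ in range(10_000):                     # plain gradient descent
    p = sigmoid(X @ w + b)
    w -= X.T @ (p - y) / len(y)
    b -= np.mean(p - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(acc)  # stuck at or below 0.75: a linear boundary gets at most 3 of 4 right
```

Add even one hidden layer and the same training loop can fit XOR; without it, no amount of neurons in the single layer or extra training data helps, which is the “not enough algorithmic complexity” point above.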
Also, just to make this clear, Tumblr lied. I mean, we already know this, but I mean they liiiieeeeed. All that stuff they promised about what would or would not be censored? That cannot be delivered on with a system this simple. Nude classical sculptures, political protests, male-presenting nipples (really Tumblr?), nude art outside the context of sex, all that? You cannot train a bicategory one-layer neural network to exclude those things. It cannot be done. Tumblr never intended for those things to actually be permitted, they were just lying. Because the system they have cannot actually do what they said it would and never will be able to.
Also, this kind of system is super vulnerable to counter-neural strategies. I bet you before the end of the month someone hooks up their own open-source one-layer bicategory neural network which puts an imperceptible (to humans) layer of patterned static over arbitrary images, and trains it by having it bot-post static-ed images to Tumblr and reinforcing based on whether the images are labeled nsfw or sfw. Seriously, within a month someone will have an input-output machine which can turn any image ‘sfw’ in Tumblr’s eyes.
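Here’s a purely hypothetical sketch of that counter-neural idea: treat the filter as a black box and search for faint “static” that flips its verdict. The “filter” below is a stand-in linear model I invented, not anything Tumblr runs, and for simplicity the search queries a confidence score rather than the bare nsfw/sfw label (label-only attacks exist but need more machinery).

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)          # made-up weights of a stand-in one-layer filter

def filter_score(img):
    """Black-box oracle: a positive score means the toy filter flags the image."""
    return float(img @ w)

img = 0.05 * w                   # a toy "image" the filter flags (score > 0)
assert filter_score(img) > 0

x = img.copy()
queries = 0
while filter_score(x) > 0 and queries < 100_000:
    step = 0.005 * rng.normal(size=64)         # a faint layer of random static
    if filter_score(x + step) < filter_score(x):
        x = x + step                           # keep static that lowers the score
    queries += 1
```

After the loop, `filter_score(x)` has dropped below zero: the same “image” plus accumulated low-amplitude static is now ‘sfw’ in the toy filter’s eyes, usually after only a few hundred queries. Against a simple one-layer model this kind of dumb random search is enough, which is exactly why the post calls the system vulnerable.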
This is genuinely pathetic. Like, I have real pity for whoever implemented this, because it’s clear Tumblr doesn’t have any engineers with machine-learning expertise left at all, and they foisted the job off on some poor bastard who has no idea what they’re doing and is going to get all kinds of flak from management for their (perfectly reasonable and predetermined) failure.
As has been pointed out before, there are no humans behind this at all. The review process just reruns either the same algorithm or another one, but people have posted screenshots showing obviously SFW pictures that were still deemed NSFW on review, even though any human, no matter how overworked or tired, would have seen that these pictures were not porn.