I'm fairly new to Reddit, so please forgive me if there's a subreddit this thread would be better suited to!
Context: I'm currently working on my research proposal for a PhD in Fine Arts. I'm primarily a painter, so this is a practice-led research project on post-photography/image theory, post-digital visual culture, and traumatic representation. I'm by no means a data scientist and have only a very basic understanding of ML and image recognition, but because I'm exploring traumatic representation in images on the internet and in relation to screen culture, my work intersects somewhat with the field of computer vision – which is, of course, what brings me to Reddit.
I'm interested in how image recognition is used for the automated moderation/censorship/removal of "sensitive" content on social media platforms, and I'm trying to locate any known dataset that's been used to train this kind of image recognition model. I know there are plenty of datasets specifically for training ML to identify porn, but since my research revolves around trauma, I'd ideally like to find one that covers a broader range of NSFW categories (violence, gore, etc.). I'm not too hopeful that any image-based dataset of this kind would be publicly accessible (I suppose you'd hope it wasn't), but alas, I'm putting this out here in case anyone has any leads.
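To make the question a bit more concrete (with the caveat that I'm not a programmer), here's my rough mental model of what such a moderation pipeline looks like, sketched in Python: each upload is scored against a set of sensitive-content categories and flagged if any category crosses a threshold. The model name, labels, and threshold below are all placeholders – the placeholder model stands in for exactly the kind of trained classifier (and underlying dataset) I'm trying to find.

```python
# Rough sketch of an automated moderation step, as I understand it:
# run each uploaded image through a classifier and flag anything
# that scores highly in a "sensitive" category.
from transformers import pipeline

# NOTE: hypothetical checkpoint name, used purely for illustration.
classifier = pipeline(
    "image-classification",
    model="some-org/sensitive-content-classifier",
)

FLAG_THRESHOLD = 0.85  # arbitrary cut-off, chosen for illustration


def moderate(image_path: str) -> bool:
    """Return True if the image should be flagged for review/removal."""
    # Returns e.g. [{"label": "gore", "score": 0.91}, ...]
    scores = classifier(image_path)
    return any(
        s["score"] >= FLAG_THRESHOLD and s["label"] != "neutral"
        for s in scores
    )


if __name__ == "__main__":
    print(moderate("upload.jpg"))
```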
Even if you can't answer my question, any thoughts/feedback/comments on this are more than welcome. I don't particularly speak the language of computer science, but I'm always open to having conversations about the project 🙂
submitted by /u/sentient-glue