{"id":18615,"date":"2023-06-01T23:29:16","date_gmt":"2023-06-01T21:29:16","guid":{"rendered":"https:\/\/www.graviton.at\/letterswaplibrary\/requesting-an-images-dataset-with-annotated-human-actions-to-train-visual-description-model-for-accessibility-app\/"},"modified":"2023-06-24T10:35:56","modified_gmt":"2023-06-24T08:35:56","slug":"requesting-an-images-dataset-with-annotated-human-actions-to-train-visual-description-model-for-accessibility-app","status":"publish","type":"post","link":"https:\/\/www.graviton.at\/letterswaplibrary\/requesting-an-images-dataset-with-annotated-human-actions-to-train-visual-description-model-for-accessibility-app\/","title":{"rendered":"Requesting An Images Dataset With Annotated Human Actions To Train Visual Description Model For Accessibility App"},"content":{"rendered":"<div class=\"md\">\n<p>Hi everyone, I need help finding a <strong>dataset of images annotated with human actions<\/strong> [such as <strong>sitting+in-chair, working+on-laptop<\/strong>, etc.]. I found a model capable of generating such tags on Huggingface <a href=\"https:\/\/huggingface.co\/franco1102\/human-actions-vit-model-francomedin\">here<\/a>; however, I was unable to locate its source dataset.<\/p>\n<p>Just for context, I am trying to create a fine-tuned <a href=\"https:\/\/arxiv.org\/abs\/2010.11929\">ViT model<\/a> that incorporates as broad a set of visual tags as possible. My plan is to optimize this model for edge devices [using <a href=\"https:\/\/www.tensorflow.org\/model_optimization\/guide\/quantization\/training_example\">quantization-aware training + TFLite model<\/a> conversion] and <strong>open-source the weights<\/strong>. Eventually, I am hoping this can be used for a broad range of visual search\/tagging\/QnA tasks. 
Currently, I am training the model on the top <a href=\"https:\/\/huggingface.co\/datasets\/animelover\/danbooru2022\">2,500 Danbooru tags<\/a> + <a href=\"https:\/\/groups.csail.mit.edu\/vision\/SUN\/hierarchy.html\">MIT SUN<\/a> indoor-location tags.<\/p>\n<p>An <a href=\"https:\/\/modelpubsub.com\/SafeUnsafeAndImageTagsDemo\">online demo<\/a> of the model is available here. If anyone has suggestions about other datasets\/tags to add, or would like to help with the training effort, please drop a line. I would really appreciate it.<\/p>\n<p>[Disclosure: I am not affiliated in any way with any of the HuggingFace\/arXiv\/MIT links posted here. The online demo is maintained by me, but it carries no ads or anything else that generates financial gain for me.]<\/p>\n<\/div>\n<p>submitted by   <a href=\"https:\/\/www.reddit.com\/user\/DisintegratingBo\"> \/u\/DisintegratingBo <\/a> <br \/> <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/13xswp1\/requesting_an_images_dataset_with_annotated_human\/\">[link]<\/a><\/span>   <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/13xswp1\/requesting_an_images_dataset_with_annotated_human\/\">[comments]<\/a><\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>Hi everyone, I need help finding a dataset of images annotated with human actions [such as 
sitting+in-chair,&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[85],"tags":[27],"class_list":["post-18615","post","type-post","status-publish","format-standard","hentry","category-datatards","tag-english","wpcat-85-id"],"_links":{"self":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/18615","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/comments?post=18615"}],"version-history":[{"count":1,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/18615\/revisions"}],"predecessor-version":[{"id":19470,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/18615\/revisions\/19470"}],"wp:attachment":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/media?parent=18615"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/categories?post=18615"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/tags?post=18615"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}