{"id":40108,"date":"2026-04-02T14:40:52","date_gmt":"2026-04-02T12:40:52","guid":{"rendered":"https:\/\/www.graviton.at\/letterswaplibrary\/are-there-efforts-to-create-gold-silver-subsets-for-open-ml-datasets\/"},"modified":"2026-04-02T14:40:52","modified_gmt":"2026-04-02T12:40:52","slug":"are-there-efforts-to-create-gold-silver-subsets-for-open-ml-datasets","status":"publish","type":"post","link":"https:\/\/www.graviton.at\/letterswaplibrary\/are-there-efforts-to-create-gold-silver-subsets-for-open-ml-datasets\/","title":{"rendered":"Are There Efforts To Create Gold\/silver Subsets For Open ML Datasets?"},"content":{"rendered":"<div class=\"md\">\n<p>We experimented with MNIST and BDD100K and noticed two recurring issues: about 2\u20134% of samples were noisy or confusing, and there was significant redundancy in the datasets.<\/p>\n<p>We achieved ~87% accuracy on MNIST with only 10 samples (1 per class), and on BDD, we matched baseline performance with less than ~40% of the dataset after removing obvious redundancies and very low-quality samples.<\/p>\n<p>This made us wonder why we don\u2019t see more \u201cdataset goldifying\u201d approaches, where datasets are split into something like:<\/p>\n<ul>\n<li>Gold subset (very clean, ~1%)<\/li>\n<li>Silver subset (medium, ~5%)<\/li>\n<li>Full dataset<\/li>\n<\/ul>\n<p>Are there any canonical methods or open-source efforts for creating curated gold\/silver subsets of datasets?<\/p>\n<\/div>\n<p>submitted by <a href=\"https:\/\/www.reddit.com\/user\/taranpula39\"> \/u\/taranpula39 <\/a> <br \/> <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1sag07b\/are_there_efforts_to_create_goldsilver_subsets\/\">[link]<\/a><\/span> <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1sag07b\/are_there_efforts_to_create_goldsilver_subsets\/\">[comments]<\/a><\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>We experimented with MNIST and BDD100K and noticed two recurring issues: about 2\u20134% of samples were noisy&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[85],"tags":[],"class_list":["post-40108","post","type-post","status-publish","format-standard","hentry","category-datatards","wpcat-85-id"],"_links":{"self":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/40108","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/comments?post=40108"}],"version-history":[{"count":0,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/40108\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/media?parent=40108"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/categories?post=40108"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp
\/v2\/tags?post=40108"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}