{"id":40463,"date":"2026-04-17T19:46:00","date_gmt":"2026-04-17T17:46:00","guid":{"rendered":"https:\/\/www.graviton.at\/letterswaplibrary\/discussion-a-7-dimension-quality-scoring-system-for-reasoning-datasets-methodology-feedback-wanted\/"},"modified":"2026-04-17T19:46:00","modified_gmt":"2026-04-17T17:46:00","slug":"discussion-a-7-dimension-quality-scoring-system-for-reasoning-datasets-methodology-feedback-wanted","status":"publish","type":"post","link":"https:\/\/www.graviton.at\/letterswaplibrary\/discussion-a-7-dimension-quality-scoring-system-for-reasoning-datasets-methodology-feedback-wanted\/","title":{"rendered":"[Discussion] A 7-dimension Quality Scoring System For Reasoning Datasets \u2014 Methodology + Feedback Wanted"},"content":{"rendered":"<p><!-- SC_OFF --><\/p>\n<div class=\"md\">\n<p>Most dataset quality labels I&#8217;ve seen are a single score (accuracy, or &#8220;is_valid: true&#8221;). After building three reasoning datasets for LLM fine-tuning (legal, clinical, financial) I kept hitting cases where a single score hid the actual problem \u2014 e.g., an answer that was factually correct but cited a nonexistent case, or one with perfect citations but a broken reasoning chain.<\/p>\n<p><strong>So I broke quality into 7 dimensions, scored per-example:<\/strong><\/p>\n<ol>\n<li>\n<p>Correctness \u2014 does the conclusion match ground truth?<\/p>\n<\/li>\n<li>\n<p>Reasoning coherence \u2014 does each step follow from the previous?<\/p>\n<\/li>\n<li>\n<p>Citation accuracy \u2014 every reference verified against source?<\/p>\n<\/li>\n<li>\n<p>Completeness \u2014 are all required fields populated?<\/p>\n<\/li>\n<li>\n<p>Factual grounding \u2014 any hallucinated facts?<\/p>\n<\/li>\n<li>\n<p>Consistency \u2014 are labels applied the same way across the corpus?<\/p>\n<\/li>\n<li>\n<p>Reproducibility \u2014 can the conclusion be re-derived from the rule\/inputs alone?<\/p>\n<\/li>\n<\/ol>\n<p>Each dimension gets 0.0\u20131.0. Final score is the geometric mean (one bad dimension should tank the example, not average out). 
Low-scoring examples are kept in the corpus but flagged in metadata (as in the sketch above) so downstream users can filter them.

**What surprised me during scoring:**

- ~18% of GPT-4-generated legal analyses had fabricated citations that looked real (wrong year, wrong court, right-ish case name)
- Reasoning coherence and citation accuracy were almost uncorrelated — you can have one without the other
- Consistency (dimension 6) was the hardest to measure and the most valuable once I did: it surfaced a whole class of "label drift" where annotation standards had shifted mid-corpus (sketch of the check at the end of the post)

**Applied to:**

- 445 US appellate legal reasoning examples (median score 0.92)
- 493 clinical reasoning traces (median 0.88)
- 1,000 financial routing/classification examples (median 0.94)

Full methodology writeup: https://labelsets.ai/lqs-methodology

**Genuinely curious:**

- Has anyone tried something similar with more/fewer dimensions?
- Is the geometric mean the right aggregation, or does anyone use a weighted model?
- For reasoning datasets specifically, which dimensions are you most suspicious of when evaluating external data before buying/using it?

**_Happy to go deeper on any dimension in the comments._**
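For the drift check mentioned above, here's the general shape of one way to do it; a rough sketch, not my exact pipeline. It compares label distributions between consecutive windows of the corpus in annotation order and flags boundaries where the total-variation distance jumps (window size and threshold are arbitrary placeholders):

```python
from collections import Counter

def label_drift(labels: list[str], window: int = 100,
                threshold: float = 0.15) -> list[tuple[int, float]]:
    """Flag window boundaries where the label distribution shifts.

    `labels` must be in annotation order; returns (boundary_index,
    total_variation_distance) for each jump above the threshold."""
    def dist(chunk: list[str]) -> dict[str, float]:
        counts = Counter(chunk)
        return {k: v / len(chunk) for k, v in counts.items()}

    flagged = []
    for start in range(window, len(labels), window):
        prev = dist(labels[start - window:start])
        curr = dist(labels[start:start + window])
        keys = set(prev) | set(curr)
        tv = 0.5 * sum(abs(prev.get(k, 0.0) - curr.get(k, 0.0)) for k in keys)
        if tv > threshold:
            flagged.append((start, round(tv, 3)))
    return flagged
```

A window comparison like this catches abrupt standard shifts; it misses drift that is gradual relative to the window size, which is part of why this dimension was the hardest to measure.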