{"id":40370,"date":"2026-04-13T09:27:12","date_gmt":"2026-04-13T07:27:12","guid":{"rendered":"https:\/\/www.graviton.at\/letterswaplibrary\/back-again-with-another-training-problem-i-keep-running-into-while-building-dataset-slices-for-smaller-llms\/"},"modified":"2026-04-13T09:27:12","modified_gmt":"2026-04-13T07:27:12","slug":"back-again-with-another-training-problem-i-keep-running-into-while-building-dataset-slices-for-smaller-llms","status":"publish","type":"post","link":"https:\/\/www.graviton.at\/letterswaplibrary\/back-again-with-another-training-problem-i-keep-running-into-while-building-dataset-slices-for-smaller-llms\/","title":{"rendered":"Back Again With Another Training Problem I Keep Running Into While Building Dataset Slices For Smaller LLMs"},"content":{"rendered":"<p><!-- SC_OFF --><\/p>\n<div class=\"md\">\n<p>Hey, I\u2019m back with another one from the pile of model behaviors I\u2019ve been trying to isolate and turn into trainable dataset slices.<\/p>\n<p>This time the problem is <strong>reliable JSON extraction from financial-style documents<\/strong>.<\/p>\n<p>I keep seeing the same pattern:<\/p>\n<p>You can prompt a smaller\/open model hard enough that it looks good in a demo.<br \/> It gives you JSON.<br \/> It extracts the right fields.<br \/> You think you\u2019re close.<\/p>\n<p>Then the input gets a little messier and the output drifts: field names change, values go missing, the JSON stops parsing.<\/p>\n<p>That\u2019s the part that keeps making me think this is not just a prompt problem.<\/p>\n<p>It feels more like a <strong>training problem<\/strong>.<\/p>\n<p>A lot of what I\u2019m building right now is around this idea that model quality should be broken into very narrow behaviors and trained directly, instead of hoping a big prompt can hold everything together.<\/p>\n<p>For this one, the behavior is basically:<\/p>\n<p><strong>Can the model stay schema-first, even when the input gets messy?<\/strong><\/p>\n<p>Not just:<br \/> \u201ccan it produce JSON once?\u201d<\/p>\n<p>But:<\/p>\n<ul>\n<li>can it keep the same structure every time<\/li>\n<li>can it make
success and failure outputs equally predictable<\/li>\n<\/ul>\n<p>One of the row patterns I\u2019ve been looking at has this kind of training signal built into it:<\/p>\n<pre><code>{\n  \"sample_id\": \"lane_16_code_json_spec_mode_en_00000001\",\n  \"assistant_response\": \"Design notes: - Storage: a local JSON file with explicit load and save steps. - Bad: vague return values. Good: consistent shapes for success and failure.\"\n}<\/code><\/pre>\n<p>What I like about this kind of row is that it does not just show the model a format.<\/p>\n<p>It teaches the rule:<\/p>\n<ul>\n<li>vague output is bad<\/li>\n<li>stable structured output is good<\/li>\n<\/ul>\n<p>That feels especially relevant for stuff like:<\/p>\n<ul>\n<li>financial statement extraction<\/li>\n<li>invoice parsing<\/li>\n<\/ul>\n<p>So this is one of the slices I\u2019m working on right now while building out behavior-specific training data.<\/p>\n<p>Curious how other people here think about this.<\/p>\n<\/div>\n<p><!-- SC_ON -->   submitted by   <a href=\"https:\/\/www.reddit.com\/user\/JayPatel24_\"> \/u\/JayPatel24_ <\/a> <br \/> <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1sk3pjy\/back_again_with_another_training_problem_i_keep\/\">[link]<\/a><\/span>   <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1sk3pjy\/back_again_with_another_training_problem_i_keep\/\">[comments]<\/a><\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>Hey, I\u2019m back with
another one from the pile of model behaviors I\u2019ve been trying to isolate&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[85],"tags":[],"class_list":["post-40370","post","type-post","status-publish","format-standard","hentry","category-datatards","wpcat-85-id"],"_links":{"self":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/40370","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/comments?post=40370"}],"version-history":[{"count":0,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/40370\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/media?parent=40370"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/categories?post=40370"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/tags?post=40370"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
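The design note in the sample row above ("consistent shapes for success and failure") can be sketched as a tiny validation layer around model output. This is an illustrative sketch only: the schema fields, function name, and return shape are hypothetical, not from the post.

```python
import json

# Hypothetical schema for a financial-extraction slice: field name -> required type.
# These field names are illustrative, not taken from the original post.
SCHEMA = {"invoice_number": str, "total": float, "currency": str}

def normalize(raw_model_output: str) -> dict:
    """Coerce raw model output into one stable shape.

    Success and failure return the SAME top-level keys, so downstream
    code never has to guess what came back:
      {"ok": bool, "data": dict | None, "errors": list[str]}
    """
    try:
        parsed = json.loads(raw_model_output)
    except json.JSONDecodeError as exc:
        return {"ok": False, "data": None, "errors": [f"invalid JSON: {exc}"]}

    errors = []
    for field, expected_type in SCHEMA.items():
        if field not in parsed:
            errors.append(f"missing field: {field}")
        elif not isinstance(parsed[field], expected_type):
            errors.append(f"wrong type for {field}: {type(parsed[field]).__name__}")
    if errors:
        return {"ok": False, "data": None, "errors": errors}
    return {"ok": True, "data": {k: parsed[k] for k in SCHEMA}, "errors": []}

# Clean and messy inputs produce the same top-level shape.
good = normalize('{"invoice_number": "INV-7", "total": 12.5, "currency": "EUR"}')
bad = normalize('Sure! Here is the JSON you asked for: {...}')
```

The point of the fixed `{ok, data, errors}` envelope is exactly the training signal the row describes: the failure path is as predictable as the success path, which makes both easy to score and easy to train against.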