{"id":36307,"date":"2025-11-01T21:27:12","date_gmt":"2025-11-01T20:27:12","guid":{"rendered":"https:\/\/www.graviton.at\/letterswaplibrary\/building-a-synthetic-dataset-from-a-200mb-documented-c-yaml-codebase-for-lora-fine-tuning\/"},"modified":"2025-11-01T21:27:12","modified_gmt":"2025-11-01T20:27:12","slug":"building-a-synthetic-dataset-from-a-200mb-documented-c-yaml-codebase-for-lora-fine-tuning","status":"publish","type":"post","link":"https:\/\/www.graviton.at\/letterswaplibrary\/building-a-synthetic-dataset-from-a-200mb-documented-c-yaml-codebase-for-lora-fine-tuning\/","title":{"rendered":"Building A Synthetic Dataset From A 200MB Documented C#\/YAML Codebase For LoRA Fine-Tuning"},"content":{"rendered":"<p><!-- SC_OFF --><\/p>\n<div class=\"md\">\n<p>hello everyone.<\/p>\n<p>I&#8217;m building a synthetic dataset from our ~200MB private codebase to fine-tune a <strong>120B parameter GPT-OSS LLM<\/strong> using <strong>QLoRA<\/strong>. The model will be used for <strong>bug fixing, new code\/config generation<\/strong>.<\/p>\n<p><strong>Codebase specifics:<\/strong><\/p>\n<ul>\n<li>Primarily <strong>C#<\/strong> with extensive <strong>JSON\/YAML<\/strong> configs (with common patterns)<\/li>\n<li><strong>Good documentation &amp; comments<\/strong> exist throughout<\/li>\n<li>Total size: ~200MB of code\/config files<\/li>\n<\/ul>\n<p><strong>My plan:<\/strong><\/p>\n<ol>\n<li>Use <code>tree-sitter<\/code> to parse C# and extract methods\/functions with their docstrings<\/li>\n<li>Parse JSON\/YAML files to identify configuration patterns<\/li>\n<li>Generate synthetic prompts using existing docstrings + maybe light LLM augmentation<\/li>\n<li>Format as JSONL with prompt-completion pairs<\/li>\n<li>Train using QLoRA for efficiency<\/li>\n<\/ol>\n<p><strong>Specific questions:<\/strong><\/p>\n<ol>\n<li><strong>Parsing with existing docs:<\/strong> Since I have good comments\/docstrings, should I primarily use those as prompts rather than generating synthetic ones? Or combine both?<\/li>\n<li><strong>Bug-fixing specific data:<\/strong> How would you structure training examples for bug fixing? Should I create &#8220;broken code -&gt; fixed code&#8221; pairs, or &#8220;bug report -&gt; fix&#8221; pairs?<\/li>\n<li><strong>Configuration generation:<\/strong> For JSON\/YAML, what&#8217;s the best way to create training examples? Show partial configs and train to complete them?<\/li>\n<li><strong>Scale considerations:<\/strong> For a 200MB codebase targeting a 120B model with LoRA &#8211; what&#8217;s a realistic expected dataset size? Thousands or tens of thousands of examples?<\/li>\n<li>Tooling recommendations: Are there any code-specific dataset tools that work particularly well with documented codebases?<\/li>\n<\/ol>\n<p>Any experiences with similar code-to-dataset pipelines would be incredibly valuable! 
Any experiences with similar code-to-dataset pipelines would be incredibly valuable, especially from those who've worked with C# codebases or configuration generation!

submitted by [/u/gagarinten](https://www.reddit.com/user/gagarinten) | [link](https://www.reddit.com/r/datasets/comments/1oly14c/building_a_synthetic_dataset_from_a_200mb/) | [comments](https://www.reddit.com/r/datasets/comments/1oly14c/building_a_synthetic_dataset_from_a_200mb/)