{"id":40821,"date":"2026-05-05T15:27:03","date_gmt":"2026-05-05T13:27:03","guid":{"rendered":"https:\/\/www.graviton.at\/letterswaplibrary\/open-source-tool-for-generating-and-cleaning-synthetic-instruction-tuning-datasets\/"},"modified":"2026-05-05T15:27:03","modified_gmt":"2026-05-05T13:27:03","slug":"open-source-tool-for-generating-and-cleaning-synthetic-instruction-tuning-datasets","status":"publish","type":"post","link":"https:\/\/www.graviton.at\/letterswaplibrary\/open-source-tool-for-generating-and-cleaning-synthetic-instruction-tuning-datasets\/","title":{"rendered":"Open Source Tool For Generating And Cleaning Synthetic Instruction-tuning Datasets"},"content":{"rendered":"<p><!-- SC_OFF --><\/p>\n<div class=\"md\">\n<p>Built this because I wanted a reproducible way to build fine-tuning datasets without doing it all by hand.<\/p>\n<p>You give it seed prompts or an existing dataset, it generates instruction-output pairs via any OpenRouter model, scores them with a local or remote LLM judge, and exports a clean JSONL you can use directly for training. <\/p>\n<p>You can also ingest datasets straight from HuggingFace and filter or relabel them through the same pipeline.<\/p>\n<p>The export step lets you set a score threshold and a train\/val split ratio so what comes out is ready to use.<\/p>\n<p>MIT licensed, everything is stored locally, no data leaves your machine unless you choose a cloud judge backend.<\/p>\n<p>Github project link is in comments below \ud83d\udc47<\/p>\n<\/div>\n<p><!-- SC_ON -->   submitted by   <a href=\"https:\/\/www.reddit.com\/user\/gvij\"> \/u\/gvij <\/a> <br \/> <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1t4e93n\/open_source_tool_for_generating_and_cleaning\/\">[link]<\/a><\/span>   <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1t4e93n\/open_source_tool_for_generating_and_cleaning\/\">[comments]<\/a><\/span><\/p><div class='watch-action'><div class='watch-position align-right'><div class='action-like'><a class='lbg-style1 like-40821 jlk' href='javascript:void(0)' data-task='like' data-post_id='40821' data-nonce='65e0e39b87' rel='nofollow'><img class='wti-pixel' src='https:\/\/www.graviton.at\/letterswaplibrary\/wp-content\/plugins\/wti-like-post\/images\/pixel.gif' title='Like' \/><span class='lc-40821 lc'>0<\/span><\/a><\/div><\/div> <div class='status-40821 status align-right'><\/div><\/div><div class='wti-clear'><\/div>","protected":false},"excerpt":{"rendered":"<p>Built this because I wanted a reproducible way to build fine-tuning datasets without doing it all 
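The export step (score threshold plus train/val split ratio) maps onto a simple filter-shuffle-split over scored records. Here is a minimal sketch under that assumption; the field names ("instruction", "output", "score"), output file names, and defaults are placeholders, not the tool's actual interface.

```python
# Hypothetical sketch of the export step (not the project's code): keep records
# at or above a score threshold, then write shuffled train/val JSONL splits.
import json
import random

def export_jsonl(records, score_threshold=7.0, val_ratio=0.1, seed=42):
    """Filter scored records and split them into train/val JSONL files."""
    kept = [r for r in records if r.get("score", 0) >= score_threshold]
    random.Random(seed).shuffle(kept)  # deterministic shuffle before splitting
    n_val = int(len(kept) * val_ratio)
    splits = {"val.jsonl": kept[:n_val], "train.jsonl": kept[n_val:]}
    for path, rows in splits.items():
        with open(path, "w", encoding="utf-8") as f:
            for row in rows:
                f.write(json.dumps(row, ensure_ascii=False) + "\n")
    return {path: len(rows) for path, rows in splits.items()}

if __name__ == "__main__":
    demo = [
        {"instruction": "Explain the JSONL format.", "output": "One JSON object per line.", "score": 9.1},
        {"instruction": "Say hi.", "output": "hi", "score": 3.0},
    ]
    print(export_jsonl(demo, score_threshold=7.0, val_ratio=0.5))
```

Records ingested from HuggingFace (for example via `datasets.load_dataset`) could presumably be scored by the same judge and pushed through this same export path, which matches the filter-or-relabel flow the post describes.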