{"id":39253,"date":"2026-02-26T22:27:09","date_gmt":"2026-02-26T21:27:09","guid":{"rendered":"https:\/\/www.graviton.at\/letterswaplibrary\/building-a-synthetic-dataset-can-you-help\/"},"modified":"2026-02-26T22:27:09","modified_gmt":"2026-02-26T21:27:09","slug":"building-a-synthetic-dataset-can-you-help","status":"publish","type":"post","link":"https:\/\/www.graviton.at\/letterswaplibrary\/building-a-synthetic-dataset-can-you-help\/","title":{"rendered":"Building A Synthetic Dataset, Can You Help?"},"content":{"rendered":"<p><!-- SC_OFF --><\/p>\n<div class=\"md\">\n<p>I built a pipeline to detect a bunch of \u201csignals\u201d inside generated conversations, and my first real extraction eval was brutal: macro F1 was 29.7% because I\u2019d set the bar at 85% and everything collapsed. My first instinct was \u201cmy detector is trash,\u201d but the real problem was that I\u2019d mashed three different failure modes into one score.<\/p>\n<ol>\n<li>The spec was wrong. One label wasn\u2019t expected in any call type, so true positives were literally impossible. That guarantees an F1 of 0.<\/li>\n<li>The regex layer was confused. Some patterns were way too broad, others were too narrow, so some mentions were being phrased in ways the patterns never caught<\/li>\n<li>My contrast eval was too rigid. It was flagging pairs as \u201cinconsistent\u201d when the overall outcome stayed the same but small events drifted a bit\u2026 which is often totally fine.<\/li>\n<\/ol>\n<p>So instead of touching the model immediately, I fixed the evals first. For contrast sets, I moved from an all-or-nothing rule to something closer to constraint satisfaction. 
That alone took contrast from 65% \u2192 93.3%: role swaps stopped getting punished for small event drift, and signal flips started checking the <em>direction<\/em> of change instead of demanding a perfect structural match.<\/p>\n<p>Then I accepted the obvious truth: regex-only was never going to clear an 85% gate on implicit, varied, LLM-style wording. There\u2019s a real recall ceiling. I switched to a two-gate setup: a cheap regex gate for CI, and a semantic gate for actual quality.<\/p>\n<p>The semantic gate is basically weak supervision + embeddings + a simple classifier per label. I wrote 30+ labeling functions across 7 signals (explicit keywords, indirect cues, metadata hints, speaker-role heuristics, plus \u201cabsent\u201d functions to keep noise in check), combined them Snorkel-style with an EM label model, embedded with all-MiniLM-L6-v2, and trained LogisticRegression per label.<\/p>\n<p>Two changes made everything finally click:<\/p>\n<ul>\n<li>I stopped doing naive CV and switched to GroupKFold by conversation_id. Before that, I was leaking near-identical windows from the same convo into train and test, which inflated scores and gave me thresholds that didn\u2019t transfer.<\/li>\n<li>I fixed the embedding\/truncation issue with a multi-instance setup. Instead of embedding the whole conversation and silently chopping everything past ~256 tokens, I embedded 17k sliding windows of 3 turns and max-pooled them into a conversation-level prediction. That brought back signals that tend to show up late (stalls, objections).<\/li>\n<\/ul>\n<p>I also dropped the idea of a global 0.5 threshold and optimized one threshold per signal from the PR curve. After that, the semantic gate macro F1 jumped from 56.08% \u2192 78.86% (+22.78). 
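<\/p>
<p>The per-signal threshold step can be sketched like this (a brute-force stand-in for reading the cutoff off the PR curve; illustrative only, not my exact code):<\/p>

```python
# Illustrative: pick one decision threshold per signal by maximizing
# F1 over the observed scores, instead of using a global 0.5. This is
# a brute-force equivalent of scanning the points on a PR curve.

def best_threshold(y_true, y_prob):
    """Return the cutoff (predict 1 when score >= cutoff) with max F1."""
    best_t, best_f1 = 0.5, -1.0
    for t in sorted(set(y_prob)):
        tp = sum(y == 1 and p >= t for y, p in zip(y_true, y_prob))
        fp = sum(y == 0 and p >= t for y, p in zip(y_true, y_prob))
        fn = sum(y == 1 and p < t for y, p in zip(y_true, y_prob))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

# Each signal gets its own cutoff, so labels with very different base
# rates stop fighting over a single global 0.5:
print(best_threshold([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.35
```

<p>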
Per-signal improvements were also substantial.<\/p>\n<p>Next up is active learning on the uncertain cases (uncertainty sampling &amp; clustering for diversity are already wired), and then either a small finetune on corrected labels or sticking with LR if it keeps scaling.<\/p>\n<p>If anyone here has done multi-label signal detection on transcripts: would you keep max-pooling for \u201cpresence\u201d detection, or move to learned pooling\/attention? And how do you handle thresholding\/calibration cleanly when each label has totally different base rates and error costs?<\/p>\n<\/div>\n<p><!-- SC_ON -->   submitted by   <a href=\"https:\/\/www.reddit.com\/user\/Euphoric_Network_887\"> \/u\/Euphoric_Network_887 <\/a> <br \/> <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1rfmyw4\/building_a_synthetic_dataset_can_you_help\/\">[link]<\/a><\/span>   <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1rfmyw4\/building_a_synthetic_dataset_can_you_help\/\">[comments]<\/a><\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>I built a pipeline to detect a bunch of \u201csignals\u201d inside generated conversations, and my first 
real&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[85],"tags":[],"class_list":["post-39253","post","type-post","status-publish","format-standard","hentry","category-datatards","wpcat-85-id"],"_links":{"self":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/39253","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/comments?post=39253"}],"version-history":[{"count":0,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/39253\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/media?parent=39253"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/categories?post=39253"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/tags?post=39253"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}