{"id":38186,"date":"2026-01-20T14:27:45","date_gmt":"2026-01-20T13:27:45","guid":{"rendered":"https:\/\/www.graviton.at\/letterswaplibrary\/self-release-65-hours-of-kenyan-filipino-english-dialogue-split-track-webrtc-vad-segmented\/"},"modified":"2026-01-20T14:27:45","modified_gmt":"2026-01-20T13:27:45","slug":"self-release-65-hours-of-kenyan-filipino-english-dialogue-split-track-webrtc-vad-segmented","status":"publish","type":"post","link":"https:\/\/www.graviton.at\/letterswaplibrary\/self-release-65-hours-of-kenyan-filipino-english-dialogue-split-track-webrtc-vad-segmented\/","title":{"rendered":"[Self-Release] 65 Hours Of Kenyan\/Filipino English Dialogue | Split-Track WebRTC | VAD-Segmented"},"content":{"rendered":"<p><!-- SC_OFF --><\/p>\n<div class=\"md\">\n<p>Hi all,<\/p>\n<p>I\u2019m the Co-founder of Datai. We are releasing a <strong>65-hour dataset<\/strong> of spontaneous, two-speaker dialogues focused on Kenyan (KE) and Filipino (PH) English accents.<\/p>\n<p>We built this to solve a specific internal problem: standard datasets (like LibriSpeech) are too clean. We needed data that reflects <strong>WebRTC\/VoIP acoustics<\/strong> and <strong>non-Western prosody<\/strong>.<\/p>\n<p>We are releasing this batch on Hugging Face for the community to use for ASR benchmarking, accent robustness testing, or diarization experiments.<\/p>\n<p><strong>The Specs:<\/strong><\/p>\n<ul>\n<li><strong>Total Duration:<\/strong> ~65 hours (Full dataset is 800+ hours)<\/li>\n<li><strong>Speakers:<\/strong> &gt;150 (Majority Kenyan interviewees, ~15 Filipino interviewers)<\/li>\n<li><strong>Topic:<\/strong> Natural, unscripted day-to-day life conversations.<\/li>\n<li><strong>Audio Quality:<\/strong> Recorded via WebRTC in Opus 48kHz, transcoded to <code>pcm_s16le<\/code>.<\/li>\n<li><strong>Structure:<\/strong> <strong>Split-track<\/strong> (Stereo). 
Each speaker is on a separate track.<\/li>\n<\/ul>\n<p><strong>Processing &amp; Segmentation:<\/strong> We processed the raw streams using <code>silero-vad<\/code> to chunk the audio into 1- to 30-second segments.<\/p>\n<p><strong>File\/Metadata Structure:<\/strong> We\u2019ve structured the filenames to help with parsing: <code>ROOM-ID_TRACK-ID_START-MS_END-MS<\/code><\/p>\n<ul>\n<li><code>ROOM-ID<\/code>: Unique identifier for the conversation session.<\/li>\n<li><code>TRACK-ID<\/code>: The specific audio track (usually one speaker per track).<\/li>\n<\/ul>\n<p><strong>Technical Caveat (the edge case):<\/strong> Since this is real-world WebRTC data, we are transparent about the dirt in the data: if a speaker drops their connection and rejoins, they may appear on a new <code>TRACK-ID<\/code> within the same <code>ROOM-ID<\/code>. We are clustering these in v2, but for now, treat track IDs as session-specific rather than as global speaker identities.<\/p>\n<p><strong>Access:<\/strong> The dataset is hosted on Hugging Face (gated to prevent bots\/abuse, but I approve manual requests quickly).<\/p>\n<p>Link is in the comments.<\/p>\n<\/div>\n<p><!-- SC_ON -->   submitted by   <a href=\"https:\/\/www.reddit.com\/user\/Downtown_Valuable_44\"> \/u\/Downtown_Valuable_44 <\/a> <br \/> <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1qf5uhz\/selfrelease_65_hours_of_kenyanfilipino_english\/\">[link]<\/a><\/span>   <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1qf5uhz\/selfrelease_65_hours_of_kenyanfilipino_english\/\">[comments]<\/a><\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>Hi all, I\u2019m the co-founder of Datai. We are releasing a 65-hour dataset of spontaneous, two-speaker dialogues&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[85],"tags":[],"class_list":["post-38186","post","type-post","status-publish","format-standard","hentry","category-datatards","wpcat-85-id"],"_links":{"self":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/38186","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/comments?post=38186"}],"version-history":[{"count":0,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/38186\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/media?parent=38186"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/categories?post=38186"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/tags?post=38186"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}