[OC] Usenet Corpus 1980–2013 — 103B Tokens, 408M Posts, 9 Hierarchies, Fully Processed

Shared this on [r/MachineLearning](https://www.reddit.com/r/MachineLearning) a few days ago and got good discussion (30K views, 100+ upvotes); figured this community would want to know about it too, since it's more directly relevant here.

I've spent the last several years building and processing a complete Usenet corpus, and I finally have it documented well enough to share properly.

**What it is:** A deduplicated, sanitized collection of Usenet posts from 1980 through 2013, covering the full arc of Usenet from its academic origins through peak adoption to decline. Pre-web, pre-social media, pre-AI. Entirely human-generated.

**Stats:**

- 103.1 billion tokens (cl100k_base)
- 408,236,288 posts
- 18,347 newsgroups
- 9 top-level hierarchies: alt, rec, comp, soc, sci, misc, news, talk, humanities

**Processing applied:**

- alt.binaries.* excluded entirely at the hierarchy level (UUencoded/base64 binary content)
- Adult-content newsgroups excluded at the hierarchy level
- Record-level: deduplication by Message-ID, binary detection and removal, PII redaction (email addresses replaced with an [email] token, Message-IDs SHA-256 hashed), sensitive-content removal
- Language detection on every record (fastText LID-176): 96.6% English, 100+ languages total
- Format: gzip-compressed JSONL, ~141 GB compressed

**Schema:**

```
{"text": "post body", "group": "comp.lang.python", "date": "1995-03-14", "subject": "Re: thread subject", "author": "Display Name", "id": "msg-<sha256hex>"}
```

**Samples:** 11 sample files (5K posts per hierarchy, plus combined sets) are freely available, no approval needed. The full corpus is available for licensing.

The dataset has also been added to the AI datasets directory at lifearchitect.ai/datasets-table.

Link in comments.

submitted by [/u/OwnerByDane](https://www.reddit.com/user/OwnerByDane) — [link](https://www.reddit.com/r/datasets/comments/1t3rfq1/oc_usenet_corpus_19802013_103b_tokens_408m_posts/) / [comments](https://www.reddit.com/r/datasets/comments/1t3rfq1/oc_usenet_corpus_19802013_103b_tokens_408m_posts/)