{"id":34684,"date":"2025-07-13T07:27:13","date_gmt":"2025-07-13T05:27:13","guid":{"rendered":"https:\/\/www.graviton.at\/letterswaplibrary\/tldarc-common-crawl-domain-names-200-million-domain-names\/"},"modified":"2025-07-13T07:27:13","modified_gmt":"2025-07-13T05:27:13","slug":"tldarc-common-crawl-domain-names-200-million-domain-names","status":"publish","type":"post","link":"https:\/\/www.graviton.at\/letterswaplibrary\/tldarc-common-crawl-domain-names-200-million-domain-names\/","title":{"rendered":"Tldarc: Common Crawl Domain Names &#8211; 200 Million Domain Names"},"content":{"rendered":"<p><!-- SC_OFF --><\/p>\n<div class=\"md\">\n<p>I wanted the zone files to create a namechecker MCP service, but they aren&#8217;t freely available. So, I spent the last 2 weeks downloading Common Crawl&#8217;s 10TB of indexes, streaming the org-level domains and deduped them. After ~50TB of processing, and my laptop melting my legs, I&#8217;ve published them to Zenodo.<\/p>\n<p><strong>all_domains.tsv.gz<\/strong> contains the main list in dns,first_seen,last_seen format, from 2008 to 2025. Dates are in YYYYMMDD format. 
The intermediate tar.gz files (duplicate domains for each URL, with dates) are in <strong>CC-MAIN.tar.gz.tar<\/strong>.<\/p>\n<p>Source code can be found in the GitHub repo: <a href=\"https:\/\/github.com\/bitplane\/tldarc\">https:\/\/github.com\/bitplane\/tldarc<\/a><\/p>\n<\/div>\n<p><!-- SC_ON -->   submitted by   <a href=\"https:\/\/www.reddit.com\/user\/david-song\"> \/u\/david-song <\/a> <br \/> <span><a href=\"https:\/\/zenodo.org\/records\/15872040\">[link]<\/a><\/span>   <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1lyke2s\/tldarc_common_crawl_domain_names_200_million\/\">[comments]<\/a><\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>I wanted the zone files to create a namechecker MCP service, but they aren&#8217;t freely available. 
So,&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[85],"tags":[],"class_list":["post-34684","post","type-post","status-publish","format-standard","hentry","category-datatards","wpcat-85-id"],"_links":{"self":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/34684","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/comments?post=34684"}],"version-history":[{"count":0,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/34684\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/media?parent=34684"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/categories?post=34684"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/tags?post=34684"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}