# Stream Huge Hugging Face and Kaggle Datasets

Greetings. I am trying to train an OCR system on huge datasets, namely:

- [OCR Cyrillic Printed 1](https://huggingface.co/datasets/DonkeySmall/OCR-Cyrillic-Printed-1)
- [Synthetic Cyrillic Large](https://huggingface.co/datasets/pumb-ai/synthetic-cyrillic-large)
- [Cyrillic Handwriting Dataset](https://www.kaggle.com/datasets/constantinwerner/cyrillic-handwriting-dataset)

They contain millions of images and come in different formats: `WebDataset`, `zip` with folders, etc. I will be experimenting with different hyperparameters locally on my M2 Mac, and then training on a [Vast.ai](http://vast.ai/) server.

The thing is, I don't have enough space to fit even one of these datasets at a time on my personal laptop, and I don't want to use persistent storage on the server, because I want to rent the server for as short a time as possible. If I have to spin up server instances multiple times (e.g. when starting all over), I will waste several hours each time re-downloading the datasets.
Therefore, I think that streaming the datasets is a flexible option that would solve my problem both locally on my laptop and on the server. However, only two of the datasets are available on Hugging Face; the third is only on Kaggle, which I can't stream from. Furthermore, I expect to hit rate limits when streaming the datasets from Hugging Face.

Having said all of this, I am considering uploading the data to Google Cloud Storage buckets and using the `Google Cloud Connector for PyTorch` to stream the datasets efficiently. This way I get a dataset-agnostic way of streaming the data. The interface directly inherits from the PyTorch `Dataset`:

```python
# Both modules are needed: the iterable dataset class lives in
# dataflux_iterable_dataset, while Config lives in dataflux_mapstyle_dataset.
from dataflux_pytorch import dataflux_iterable_dataset, dataflux_mapstyle_dataset

PREFIX = "simple-demo-dataset"

iterable_dataset = dataflux_iterable_dataset.DataFluxIterableDataset(
    project_name=PROJECT_ID,
    bucket_name=BUCKET_NAME,
    config=dataflux_mapstyle_dataset.Config(prefix=PREFIX),
)
```

> The `iterable_dataset` now represents an iterable over data samples.

I have two questions:

1. Are my assumptions correct, and is it worth uploading everything to Google Cloud Storage buckets (assuming I pick locations close to my working location and my server location, enable hierarchical storage, use prefixes, etc.)? Or should I just stream the Hugging Face datasets, download the Kaggle dataset, and call it a day?
2. If uploading everything to Google Cloud Storage buckets is worth it, how do I store the datasets in the buckets in the first place?
   [This](https://github.com/GoogleCloudPlatform/gcs-connector-for-pytorch/blob/main/demo/simple-walkthrough/Getting%20Started%20with%20Dataflux%20Dataset%20for%20PyTorch%20with%20Google%20Cloud%20Storage.ipynb) tutorial and [this one](https://github.com/GoogleCloudPlatform/gcs-connector-for-pytorch/blob/main/demo/image_segmentation/README.md) only work with images, not with image-string pairs.

*submitted by [/u/Suspicious-Pick-7961](https://www.reddit.com/user/Suspicious-Pick-7961)*
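On the image-string-pairs point: one common convention (used by `WebDataset` and easy to replicate in a bucket) is to store the image and its transcription as two objects sharing the same key stem, so a streaming reader can match them back up by name. A minimal, library-free sketch of that naming scheme; the helper name and the `simple-demo-dataset` prefix are just illustrative assumptions:

```python
def paired_object_names(prefix: str, sample_id: int, ext: str = "png") -> tuple[str, str]:
    """Return (image, label) object names that share a zero-padded key stem.

    Storing pairs this way keeps them adjacent in a prefix listing, so a
    streaming loader can zip consecutive objects back into samples.
    """
    stem = f"{prefix}/{sample_id:08d}"
    return f"{stem}.{ext}", f"{stem}.txt"


img_key, txt_key = paired_object_names("simple-demo-dataset", 123)
# img_key -> "simple-demo-dataset/00000123.png"
# txt_key -> "simple-demo-dataset/00000123.txt"
```

With this layout, uploading a dataset is just writing both objects per sample (e.g. via `gsutil -m cp` or the `google-cloud-storage` client), and any reader that lists the prefix sees each image immediately followed by its label file.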