Looking For Dataset For LLM Tokenization: Need Around 1GB Multi-lingual + Code

I’ve been working on a tokenizer that finds the set of tokens that represents a test dataset in the fewest tokens, for various vocabulary sizes.
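For reference, this is roughly the kind of measurement I mean (a simplified sketch using an off-the-shelf byte-level BPE from the Hugging Face `tokenizers` library, not my actual tokenizer; the file names are placeholders):

```python
# Sketch: train a byte-level BPE at a given vocab size, then count how many
# tokens it needs to encode a held-out test file. Lower tokens-per-byte is
# better at the same vocab size.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

def tokens_per_byte(train_file, test_file, vocab_size):
    tokenizer = Tokenizer(models.BPE())
    tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
    trainer = trainers.BpeTrainer(vocab_size=vocab_size)
    tokenizer.train([train_file], trainer)

    text = open(test_file, encoding="utf-8").read()
    n_tokens = len(tokenizer.encode(text).ids)
    return n_tokens / len(text.encode("utf-8"))

for v in (16_000, 32_000, 64_000):
    print(v, tokens_per_byte("train.txt", "test.txt", v))
```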

It works well, but I’ve been testing it on The Pile’s test data, which is mostly English, so it’s not a good representation of multi-lingual text. It’s also light on code and markup tags.

I need around 1-2GB of raw, uncleaned, uncensored text that covers a few different languages and includes a fair amount of code from different programming languages. Ideally it would be raw scraped data, some of it with the HTML tags left in (as it appears when scraped) and some with the tags removed, since the tokenizer would weight the HTML tags too heavily if they were always present.

So just a good representation of general text.
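For the HTML part, what I have in mind is roughly this (a sketch assuming BeautifulSoup; the 50/50 split is arbitrary):

```python
# Sketch: keep the raw HTML for roughly half of the scraped pages and strip
# the tags from the other half, so tag tokens aren't present in every document.
import random
from bs4 import BeautifulSoup

def maybe_strip_html(html, keep_tags_probability=0.5):
    if random.random() < keep_tags_probability:
        return html  # raw, as scraped
    return BeautifulSoup(html, "html.parser").get_text()  # tags removed
```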

I know I could build my own dataset from several existing ones, but it seems to me that a dataset like this should already exist. Any leads would be helpful. Thank you.
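If I do end up building it myself, it would be roughly something like this (a sketch using Hugging Face `datasets` streaming; the dataset names, configs, and field names are only examples, and some of them need authentication or extra arguments):

```python
# Sketch: stream a few public corpora and write a ~1GB mixed raw-text file.
from datasets import load_dataset

TARGET_BYTES = 1_000_000_000
sources = [
    ("mc4", "de", "text"),                      # multilingual web text, German
    ("mc4", "ja", "text"),                      # multilingual web text, Japanese
    ("codeparrot/github-code", None, "code"),   # source code
]

written = 0
with open("mixed_corpus.txt", "w", encoding="utf-8") as out:
    per_source = TARGET_BYTES // len(sources)
    for name, config, field in sources:
        ds = load_dataset(name, config, split="train", streaming=True)
        taken = 0
        for example in ds:
            text = example[field]
            out.write(text + "\n")
            taken += len(text.encode("utf-8"))
            if taken >= per_source:
                break
        written += taken

print(written, "bytes written")
```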

submitted by /u/Pan000
