Category: Datatards

Here you can observe the biggest nerds in the world in their natural habitat, longing for data sets. Not that it isn’t interesting; I’m interested. Maybe they know where the chix are. But what do they need it all for? World domination?

DESPERATELY Seeking Help Finding A Dataset That Fits Specific Requirements

Hello everyone, I am losing my mind and on the verge of tears trying to find a dataset (it can be on ANY topic) that fits the following criteria:

  • not synthetic
  • minimum of 700 rows and 14 columns
  • 8 quantitative variables, 2 ordinal variables, 4 nominal, 1 temporal

By ordinal I mean things like ratings (in integers), education level, letter grades, etc.
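As a quick screen for candidate files (the variable-type breakdown still needs a human eye), the row and column minimums can be checked mechanically; a minimal Python sketch:

```python
import csv
import io

def check_dataset(csv_text, min_rows=700, min_cols=14):
    """Quick screen: does a CSV meet the row/column minimums?"""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)            # first line is the header
    n_rows = sum(1 for _ in reader)  # remaining lines are data rows
    return len(header) >= min_cols and n_rows >= min_rows
```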

Thank you in advance. I’ve had 5 mental breakdowns over this.

submitted by /u/anxiousandtroubled
[link] [comments]

Best Way To Create Grammar Labels For Large Raw Language Datasets?

I’m in need of a way to label a large raw language dataset: I need labels that identify what form each word takes and, preferably, which grammar rules dominate each sentence. I was looking at UD parsers like the one from Stanza, but it struggled with a lot of words. I don’t have time to start creating labels myself. Has anyone solved a similar problem before?
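For the aggregation side, one workable pattern is to keep whatever per-word tags a UD parser emits (UPOS plus morphological features) and roll them up into sentence-level labels; a minimal sketch, where the tuples are hypothetical stand-ins for real parser output:

```python
from collections import Counter

def label_sentence(parsed):
    """parsed: list of (word, upos, feats) tuples, shaped like UD parser
    output; feats is a 'Key=Val|Key=Val' string or None. Returns per-word
    labels plus the sentence's most frequent morphological features."""
    word_labels, feat_counts = [], Counter()
    for word, upos, feats in parsed:
        word_labels.append((word, upos))
        if feats:
            feat_counts.update(feats.split("|"))
    dominant = [f for f, _ in feat_counts.most_common(3)]
    return word_labels, dominant

# hypothetical parser output for "she walked home"
sent = [("she", "PRON", "Case=Nom|Number=Sing"),
        ("walked", "VERB", "Tense=Past|VerbForm=Fin"),
        ("home", "ADV", None)]
labels, dominant = label_sentence(sent)
```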

submitted by /u/osamaistmeinefreund
[link] [comments]

What’s The Best Way To Analyze Logs As A Beginner?

I just started studying cybersecurity in college, and for one of my courses I have to practice log analysis.

For this exercise I have to analyze a large log and work out who the attacker was, which attack method they used, at what time the attack happened, the attacker’s IP address, and the event code.

(All this can be found in the file our teacher gave us.)

This is a short example of what is in the document:

Timestamp; Country; IP address; Event Code

29/09/2024 12:00 AM;Galadore;3ffe:0007:0000:0000:0000:0000:0000:0685;EVT1039

29/09/2024 12:00 AM;Ithoria;3ffe:0009:0000:0000:0000:0000:0000:0940;EVT1008

29/09/2024 12:00 AM;Eldoria;3ffe:0005:0000:0000:0000:0000:0000:0090;EVT1037

So my question is: how do I get started on this? And what is the best way to analyze this, or to learn how to analyze it?

(Note: this data is not real and comes from a made-up scenario.)
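One way to get started is to treat the file as semicolon-delimited CSV and look at the frequency distributions, since outliers by IP or event code are the usual first lead; a minimal Python sketch using the sample rows above:

```python
import csv
import io
from collections import Counter

SAMPLE = """Timestamp;Country;IP address;Event Code
29/09/2024 12:00 AM;Galadore;3ffe:0007:0000:0000:0000:0000:0000:0685;EVT1039
29/09/2024 12:00 AM;Ithoria;3ffe:0009:0000:0000:0000:0000:0000:0940;EVT1008
29/09/2024 12:00 AM;Eldoria;3ffe:0005:0000:0000:0000:0000:0000:0090;EVT1037"""

def summarize(log_text):
    """Count events per source IP and per event code; spikes in either
    distribution are the usual first lead on the attacker."""
    rows = list(csv.DictReader(io.StringIO(log_text), delimiter=";"))
    by_ip = Counter(r["IP address"] for r in rows)
    by_code = Counter(r["Event Code"] for r in rows)
    return by_ip, by_code

by_ip, by_code = summarize(SAMPLE)
```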

submitted by /u/AdOpen4997
[link] [comments]

New Dataset For Code Now Available On Hugging Face! CodeReality

Hi,

I’ve just released my latest work: CodeReality.
For now, you can access a 19GB evaluation subset, designed to give a concrete idea of the structure and value of the full dataset, which exceeds 3TB.

👉 Dataset link: CodeReality on Hugging Face

Inside you’ll find:

  • the complete analysis (also performed on the full 3TB dataset),
  • benchmark results for code completion, bug detection, license detection, and retrieval,
  • documentation and notebooks to help experimentation.

I’m currently working on making the full dataset available directly on Hugging Face.
In the meantime, if you’re interested in an early release/preview, feel free to contact me.

[vincenzo.gallo77@hotmail.com](mailto:vincenzo.gallo77@hotmail.com)

submitted by /u/CodeStackDev
[link] [comments]


Need Datasets (~3) On Companies/entities That Offer Subscription-based Products.

Hello! I am enrolled in a Data Viz/management class for my Master’s, and for our course project, we need to use a SUBSCRIPTION-BASED company’s data to weave a narrative/derive insights etc.

I need help identifying companies that would have reliable, relatively clean (not mandatory) multivariate datasets, so that we can explore them and select what works best for our project.

Free datasets would be ideal, but a small fee of ~10 EUR or so would also work, since this is for academic, not commercial, purposes.

Any help would be appreciated! Thanks!

submitted by /u/ChaosAndEntropy
[link] [comments]

Fetch Thousands Of YouTube Videos With Structured Transcripts & Metadata In Python

I made a Python package called YTFetcher that lets you grab thousands of videos from a YouTube channel along with structured transcripts and metadata (titles, descriptions, thumbnails, publish dates).

You can also export data as CSV, TXT or JSON.

Install with:

pip install ytfetcher

Here’s a quick CLI usage for getting started:

ytfetcher from_channel -c TheOffice -m 50 -f json

This will fetch structured transcripts and metadata for up to 50 videos from the TheOffice channel.

If you’ve ever needed bulk YouTube transcripts or structured video data, this should save you a ton of time.

Check it out on GitHub: https://github.com/kaya70875/ytfetcher
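Once exported, the JSON is easy to post-process; a sketch, assuming (hypothetically — check the repo for the real schema) that the export is a list of objects with title and transcript fields:

```python
import json

# hypothetical export shape -- check the repo's README for the real schema
export = json.loads("""[
  {"title": "Pilot", "publish_date": "2005-03-24",
   "transcript": [{"text": "Hi", "start": 0.0}]}
]""")

# flatten each video's transcript segments into one searchable string
corpus = {video["title"]: " ".join(seg["text"] for seg in video["transcript"])
          for video in export}
```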

submitted by /u/nagmee
[link] [comments]

Looking For Unique, Raw Datasets That Track The Customer Lifecycle / Journey

I’m working on a group project for my Data Management & Visualisation class, and we want to analyze end-to-end customer journeys, ideally from first touch (ads, web analytics, etc.) through purchase and post-purchase retention/churn.

We’d love suggestions for something less common or a bit messy (multi-table, event logs, JSON, clickstreams) so we can showcase data cleaning and modeling skills. If you’ve stumbled on interesting clickstream/e-commerce/retention/open web analytics data or know obscure public APIs or research corpora, please point me their way!

Thanks in advance 🙏 we’ll happily credit any cool finds and redditors in our final project.

submitted by /u/jimmynotchoo1
[link] [comments]

Looking For Advice On Scaling SEC Data App (10 Rps Limit)

I’ve built a financial app that pulls company financials from the SEC—nearly verbatim (a few tags can be missing)—covering the XBRL era (2009/2010 to present). I’m launching a site to show detailed quarterly and annual statements.

Constraint: The SEC allows ~10 requests/second per IP, so I’m worried I can only support a few hundred concurrent users if I fetch on demand.

Goal: Scale beyond that without blasting the SEC and without storing/downloading the entire corpus.

What’s the best approach to: • stay under ~10 rps to the SEC, • keep storage minimal, and • still serve fast, detailed statements to lots of users?

Any proven patterns (caching, precomputed aggregates, CDN, etc.) you’d recommend?
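A common pattern is a shared cache in front of the SEC so each filing is fetched at most once, with a process-wide token bucket guarding the cache misses; a minimal in-memory sketch (production versions would use Redis or a CDN, and the function below only mimics a fetch):

```python
import time
from functools import lru_cache

class RateLimiter:
    """Token bucket: allow at most `rate` upstream requests per second
    overall, no matter how many users triggered them."""
    def __init__(self, rate=10):
        self.rate, self.tokens, self.last = rate, float(rate), time.monotonic()

    def acquire(self):
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at the bucket size
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should queue the miss or serve a stale copy

limiter = RateLimiter(rate=10)

@lru_cache(maxsize=4096)
def fetch_filing(accession_no):
    # in the real app this would call the SEC only on a cache miss,
    # gated by limiter.acquire(); here it just builds a placeholder
    return {"accession": accession_no}
```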

submitted by /u/Ok-Access5317
[link] [comments]

UFC Data Lab – The Most Complete Dataset On UFC

Hi folks! I was looking for a complete UFC fights dataset with fight-based and fighter-based data in one place, but couldn’t find one that has fight scorecards information, so I decided to collect it myself. Maybe this ends up useful for someone else!

Features of the dataset:

  • Fight-based data from names and surnames to the accuracy of significant strikes landed to the head/body/legs, sig. str. from ground/clinch/distance position, number of reversals, etc.
  • Fighter-based data from anthropometric features like height and reach to career-based features like significant strikes landed per minute throughout career, average takedowns landed per minute, takedown accuracy, etc.
  • Fight scorecards from 3 judges throughout all rounds.
  • The data is available in both cleaned and raw formats!

Stats and scorecards were scraped; the scorecards were images, so they were OCR-parsed into text, and the data was then cleaned, merged, and cleaned again.

The stats data was scraped from this official source, and scorecards from this official source.
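After OCR, the merge step is essentially a join on a shared fight identifier; a toy sketch with hypothetical field names:

```python
# hypothetical column subsets -- the real dataset has many more fields
stats = [{"fight_id": "f1", "fighter": "A. Silva", "sig_str_head": 32}]
cards = [{"fight_id": "f1", "judge": "J. Doe", "rd1": "10-9"}]

# join per-fight stats with each judge's scorecard on fight_id
merged = [{**s, **c} for s in stats for c in cards
          if s["fight_id"] == c["fight_id"]]
```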

submitted by /u/Financial-Grass4819
[link] [comments]

Looking For A Video Game Dataset For My Bachelor’s Thesis

Hi everyone,

I’m working on my Bachelor’s thesis, and I’m looking for a real-world dataset about video games for analysis and visualization purposes. Ideally, the dataset should include as many of the following attributes as possible:

Basic information
• Game title
• Platform (e.g., PC, PlayStation, Xbox)
• Release year and release region
• Genre
• Publisher
• Developer
• Price at release

Sales and market data
• Global sales and/or sales by region (NA, EU, JP, others)
• Digital vs. physical sales
• Number of copies sold in the first week
• Total revenue vs. number of units sold
• Pricing strategy (standard, deluxe edition, DLC bundles)

Game features and technical details
• Game mode (single-player, multiplayer, co-op)
• Game engine (Unreal, Unity, custom engine)
• Open world vs. linear gameplay (yes/no)
• Average gameplay length (hours to finish)
• Number of missions/levels

• Indie vs. non-indie (yes/no)

Ratings and popularity
• Critic rating and user rating (e.g., Metacritic, Steam reviews)
• Number of reviews

• Number of active players
• Popularity on social media (mentions, Twitch/YouTube views)
• Marketing budget (if available)

Audience and regulations
• Age rating (PEGI, ESRB)
• Regional restrictions (e.g., censorship in certain countries)

Lifecycle data
• Announcement date
• Release date(s) (if different per region)
• Number of patches/DLCs released after launch

I’m open to either a single comprehensive dataset or multiple datasets that can be merged. Open-source or publicly available datasets would be ideal. I already found something on Kaggle with sales by region, but I would love to get some bigger and different datasets ;))

Any tips or links would be greatly appreciated!

Thank you very much in advance!!!!

submitted by /u/Extra_Box4242
[link] [comments]

Recipe Database That Uses Metric Measurements

Hello all, I’m currently working on a side project to improve my data science skills/portfolio by creating an application that tracks, in metric measurements, what ingredients a person has in their fridge, paired with a recommender system. This system will suggest recipes the user can cook based on what food the user likes, whether they have enough of each ingredient in their fridge, etc.

I have found an ingredient database on this subreddit here, which works well for the fridge storage database, but I can’t seem to find a recipe database that uses metric measurements. If anyone knows of a database that would suit this project, I’d really appreciate a recommendation. Thanks a lot!
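If no metric-native recipe database turns up, one fallback is converting US volume units on ingest; a small sketch using standard conversion factors:

```python
# approximate millilitres per US volume unit
TO_ML = {"cup": 236.59, "tbsp": 14.79, "tsp": 4.93, "fl_oz": 29.57}

def to_metric(amount, unit):
    """Convert a US volume measure to millilitres, rounded to whole ml."""
    return round(amount * TO_ML[unit])
```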

submitted by /u/GlobalBuffalo2904
[link] [comments]

Help With My Final Year Project On Fine-tuning LLMs

Hey all,

I’m building my final year project: a tool that generates quizzes and flashcards from educational materials (like PDFs, docs, and videos). Right now, I’m using an AI-powered system that processes uploaded files and creates question/answer sets, but I’m considering taking it a step further by fine-tuning my own language model on domain-specific data.

I’m seeking advice on a few fronts:

  • Which small language model would you recommend for a project like this (quiz and flashcard generation)? I’ve heard about VibeVoice-1.5B, GPT-4o-mini, Haiku, and Gemini Pro—curious about what works well in the community.
  • What’s your preferred workflow to train or fine-tune a model for this task? Please share any resources or step-by-step guides that worked for you!
  • Should I use parameter-efficient fine-tuning (like LoRA/QLoRA), or go with full model fine-tuning given limited resources?
  • Do you think this approach (custom fine-tuning for educational QA/flashcard tasks) will actually produce better results than prompt-based solutions, based on your experience?
  • If you’ve tried building similar tools or have strong opinions about data quality, dataset size, or open-source models, I’d love to hear your thoughts.
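On the LoRA question: the appeal is the trainable-parameter count. A rank-r adapter on a d_out × d_in weight trains r·(d_in + d_out) parameters instead of d_in·d_out; a quick back-of-the-envelope check:

```python
def lora_params(d_in, d_out, r):
    """Trainable parameters for one LoRA adapter (A: r x d_in, B: d_out x r)
    versus full fine-tuning of the d_out x d_in weight matrix."""
    return r * (d_in + d_out), d_in * d_out

adapter, full = lora_params(4096, 4096, r=8)
# for a 4096x4096 projection at rank 8, the adapter trains well under
# 1% of the parameters of the full matrix
```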

I’m eager to hear what models, tools, and strategies people found effective. Any suggestions for open datasets or data generation strategies would also be super helpful.

Thanks in advance for your guidance and ideas! Would love to know if you think this is a realistic approach—or if there’s a better route I should consider.

submitted by /u/Ghostgame4
[link] [comments]

I Need A Dataset For My Project; In Research I Found This, Please Take A Look

Hey, so I am looking for datasets for my ML project, and during research I found something called

the HTTP Archive with BigQuery

link: https://har.fyi/guides/getting-started/

It forwards me to Google Cloud.

I want a real dataset of traffic patterns for any website, for my predictive autoscaling project.

I am looking for server metrics and website request counts along with dates. I will modify the dataset a bit, but I need this at a minimum.

I am new to ML and to finding datasets; I am more into DevOps and cloud, but my final year project needs ML, so here I am.

submitted by /u/Successful_Tea4490
[link] [comments]

Daily Practice Under The Pressure Of Interviews

I’m in my last year of CS, and most of my nights lately are spent between data exploration and interview prep. Instead of just browsing problem sets, I started treating datasets like they were scripts written for an invisible interviewer.

For example, I’ll pull an SQL challenge from an interview question bank, set a timer, and pretend I’m being grilled on it. I’d read the prompt, talk through the schema, explain joins and indexes, then move on. But real interviews aren’t this gentle. They push back. They throw “What if?” at you when you least expect it. So I used the beyz interview assistant to pressure me with those dreaded follow-ups: What happens if the dataset grows tenfold? How do you scale beyond memory limits? Could your approach handle concurrent writes?

This doesn’t take a lot of time; you can complete a whole set of exercises in a few spare moments. The routine has started to feel less like “prep” and more like a habit. Some nights I still blank out, other nights everything clicks, but either way I close my laptop with the sense that I’m slowly getting better at thinking on my feet.

submitted by /u/Various_Candidate325
[link] [comments]

[self-promotion] Daily Updated Sephora Australia Skincare Sales (by Category, Brand, And Promotion %)

I’ve been tracking Sephora Australia’s skincare promotions and put together a dataset that might be useful for anyone studying beauty retail, pricing, or promotions.

  • Covers all skincare products currently on sale
  • Organized by category and subcategory
  • Further grouped by brand and promotion %
  • Updated daily
  • Free to view and explore

Here’s the link: https://www.kungfutemplate.com/What-s-on-Sale-Today-Australia-Sephora-2763de239fe3801f82fefe478cd72c53?source=copy_link

Hope it helps anyone interested in retail analytics, consumer behavior, or just curious about beauty sales trends

submitted by /u/IntelligentHome2342
[link] [comments]

[Tool] I Built A Free Web Tool To Automatically Join And Enrich Different Datasets Using AI.

Hey r/datasets,

I’ve often found amazing related datasets on this sub and elsewhere, but combining them for a project was always a manual chore. If the column names or key formats didn’t line up, it meant breaking out Python scripts.

To make this easier, I built a free tool called Datum Fuse AI.

The main goal is to help you take two separate datasets and quickly harmonize and join them. For example, if you have a CSV with country names and another with country codes, it can help you merge them.

Key features:

  • AI suggests how to map columns between two files.
  • It can join the files based on your mapped keys.
  • It can also augment a dataset with things like Geolocation (City/State/County from a Zip Code column) or add a column for US Holidays if your data is time-based.
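What a tool like this automates is roughly: map the key columns, normalize the values, then join; a hand-rolled sketch (with made-up rows) for comparison:

```python
# toy version of the harmonize-then-join step: normalize keys, then merge
countries = [{"country": "United States", "pop_m": 335}]
codes = [{"name": "UNITED STATES ", "iso": "US"}]

def norm(s):
    # strip whitespace and case so near-identical keys line up
    return s.strip().lower()

joined = [{**a, **b} for a in countries for b in codes
          if norm(a["country"]) == norm(b["name"])]
```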

It’s in free public beta right now. I’m hoping it can be a useful utility for this community when you’re working on your data projects. I’d appreciate any feedback on what other features or augmentations would be helpful.

Check it out at: https://www.datumfuse.ai

Thanks!

submitted by /u/Bootes-sphere
[link] [comments]

[Request] IEEE DataPort Datasets: PV Arrays: Suffled Frog Leaping Algorithm And Other MPPTs Under Partial Shading – PSIM Model

We have a college project coming up. Please help us get access to this dataset. Thanks in advance.

Fábio José Rodrigues, Fernando Marcos de Oliveira, Oswaldo Hideo Ando Junior, “PV arrays: Suffled Frog Leaping Algorithm and other MPPTs under partial shading – PSIM model”, IEEE Dataport, July 23, 2024, doi:10.21227/a1m0-gs94

https://ieee-dataport.org//documents/pv-arrays-suffled-frog-leaping-algorithm-and-other-mppts-under-partial-shading-psim-model

submitted by /u/Vivid-Turnover-620
[link] [comments]

Need Real Dataset Like Mimic-iv For ML Model

Can you share a real dataset containing bed types like ICU, telemetry, medical, and surgery, plus departments like oncology, cardiology, etc., with realistic LOS values and at least around 1,000 rows? I am working on an AI model to reduce LOS, but the dataset I was using is synthetic, with entries like a patient admitted to the ICU for only 2 minutes, which is not logical. Can you help me out?
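Until a real dataset turns up, the synthetic one can at least be sanity-filtered so 2-minute ICU stays don’t poison training; an illustrative sketch where the thresholds are made up, not clinical guidance:

```python
# minimum plausible stay length in hours per bed type
# (thresholds are illustrative, not clinical guidance)
MIN_HOURS = {"icu": 2, "telemetry": 4, "surgery": 1, "medical": 1}

def plausible(rows):
    """Keep only stays whose LOS clears the per-department floor."""
    return [r for r in rows
            if r["los_hours"] >= MIN_HOURS.get(r["bed_type"], 1)]

stays = [{"bed_type": "icu", "los_hours": 0.03},   # the 2-minute ICU stay
         {"bed_type": "icu", "los_hours": 36.0}]
clean = plausible(stays)
```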

submitted by /u/Time_Photograph6748
[link] [comments]