Category: Datatards

Here you can observe the biggest nerds in the world in their natural habitat, longing for data sets. Not that it isn't interesting; I'm interested. Maybe they know where the chix are. But what do they need it for? World domination?

Training AI Models With High Dimensionality?

I’m working on a project predicting the outcome of 1v1 fights in League of Legends using data from the Riot API (MatchV5 timeline events). I scrape game state information around specific 1v1 kill events, including champion stats, damage dealt, and especially, the items each player has in his inventory at that moment.

Items give each player significant stat boosts (AD, AP, health, resistances, etc.) and unique passive/active effects, making them highly influential in fight outcomes. However, I'm having trouble representing this item data effectively in my dataset.

My Current Implementations:

  1. Initial Approach: Slot-Based Features
    • I first created features like player1_item_slot_1, player1_item_slot_2, …, player1_item_slot_7, storing the item_id found in each inventory slot of the player.
    • Problem: This approach is fundamentally flawed because item slots in LoL are purely organizational; they have no impact on an item's effectiveness. An item provides the same benefits whether it's in slot 1 or slot 6. I'm concerned the model would learn spurious correlations based on slot position (e.g., erroneously learning that an item is "stronger" only when it appears in a specific slot) instead of learning that an item ID confers the same strength regardless of slot.
  2. Alternative Considered: One-Feature-Per-Item (Multi-Hot Encoding)
    • My next idea was to create a binary feature for every single item in the game (e.g., has_Rabadons=1, has_BlackCleaver=1, has_Zhonyas=0, etc.) for each player.
    • Benefit: This accurately reflects which specific items a player has in his inventory, regardless of slot, allowing the model to potentially learn the value of individual items and their unique effects.
    • Drawback: League has hundreds of items. This leads to:
      • Very High Dimensionality: Hundreds of new features per player instance.
      • Extreme Sparsity: Most of these item features will be 0 for any given fight (players hold max 6-7 items).
      • Potential Issues: This could significantly increase training time, require more data, and heighten the risk of overfitting (the curse of dimensionality).

So now I wonder: is there anything else I could try, or do you think either my initial approach or the alternative would be better?

I’m using XGBoost and training on a dataset with roughly 8 million rows (300k games).
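For what it's worth, XGBoost accepts scipy sparse matrices, which softens the sparsity concern in option 2. A minimal sketch of the multi-hot encoding, assuming a hypothetical `item_ids` column holding each player's inventory as a list of Riot item IDs:

```python
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical input: one row per player per fight, items as a list of IDs.
df = pd.DataFrame({
    "fight_id": [1, 1],
    "player":   ["p1", "p2"],
    "item_ids": [[3031, 3153, 6672], [3157, 3089]],
})

# One binary column per distinct item ID, independent of inventory slot.
mlb = MultiLabelBinarizer()
item_matrix = mlb.fit_transform(df["item_ids"])
item_df = pd.DataFrame(item_matrix,
                       columns=[f"has_item_{i}" for i in mlb.classes_],
                       index=df.index)

df = pd.concat([df.drop(columns="item_ids"), item_df], axis=1)
```

Passing the item columns to XGBoost as a `scipy.sparse.csr_matrix` keeps memory manageable even with hundreds of items.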

submitted by /u/Revolutionary_Mine29

Bachelor Thesis – How Do I Find Data?

Dear fellow redditors,

For my thesis, I currently plan to conduct a data analysis of global energy price development over the course of 30 years. However, my own research has led me to conclude that finding data sets on this without paying thousands of dollars to research companies is not as easy as I had hoped. Can any of you help me with my problem, e.g. by pointing to data sets I might have missed?

If this is not the best subreddit to ask, please tell me your recommendation.

submitted by /u/TheGameTraveller

Synthetic Autoimmune Dataset For AI/ML Research (9 Diseases, Labs, Meds, Demographics)

Hey everyone,

After three years of work and reading 580+ research papers, I built a synthetic patient dataset that models 9 autoimmune diseases, including labs, medications, diagnoses, and demographic features with realistic clinical interactions. About 190 features in all!

It’s designed for AI research, ML model development, or educational use.

I’m offering free sample sets (about 1,000 patients per disease, currently over 10,000 available) for anyone interested in healthcare machine learning, diagnostics, or synthetic data.

Would love any feedback too!

https://www.leukotech.com/data

submitted by /u/_loading-comment_

Help Me Find A Good Dataset For My First Project

Hi!

I’m thrilled to announce I’m about to start my first data analysis project, after almost a year studying the basic tools (SQL, Python, Power BI, and Excel). I feel confident and am eager to make my first end-to-end project come true.

Can you guys lend me a hand finding The Proper Dataset for it? You can help with websites, ideas, or anything else you think might come in handy.

I’d like to build a project about house renting prices, event organization (like festivals), videogames or boardgames.

I found one on Kaggle that is interesting (‘Rent price in Barcelona 2014-2022’, if you want to check it), but since it is my first project, I don’t know whether I could find a better dataset.

Thanks so much in advance.

submitted by /u/Donnie_McGee

Looking For A Raw Dataset With Gen Z Political Leanings

Hi, I’m trying to find a raw dataset that has at least something to do with changes in the political views of Gen Z in the United States. I’ve found several studies but couldn’t find any actual datasets, so I figured I’d ask over here. I don’t really know where to start looking lol.

submitted by /u/-Firefish-

Hybrid Model Ideas For Multiple Datasets?

So I’m working on a project that has 3 datasets: a connectome dataset extracted from MRIs, a continuous-valued dataset of patient scores, and a qualitative patient survey dataset.

The task is multi-output: one output is ADHD diagnosis and the other is patient sex (male or female).

I’m trying to use a GCN (or maybe other types of GNN) for the connectome data, which is basically a graph. I’m thinking about training a GNN on the connectome data with only one of the two outputs, then extracting embeddings to merge with the other two datasets using something like an MLP.

Any other ways I could explore?

Also, do you know what other models I could use on this type of data? If you’re interested, the dataset is from a Kaggle competition called the WiDS Datathon. I’m also using Optuna for hyperparameter optimization.
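For illustration, a rough sketch of that fusion idea, assuming PyTorch Geometric; all dimensions and names are placeholders rather than anything tied to the WiDS data:

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class ConnectomeGCN(nn.Module):
    """Two GCN layers pooled into one embedding per patient graph."""
    def __init__(self, in_dim, hidden_dim, embed_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, embed_dim)

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index)
        return global_mean_pool(h, batch)

class FusionModel(nn.Module):
    """Concatenate the graph embedding with tabular features; two output heads."""
    def __init__(self, in_dim, tab_dim, hidden_dim=64, embed_dim=32):
        super().__init__()
        self.gnn = ConnectomeGCN(in_dim, hidden_dim, embed_dim)
        self.mlp = nn.Sequential(nn.Linear(embed_dim + tab_dim, hidden_dim),
                                 nn.ReLU())
        self.adhd_head = nn.Linear(hidden_dim, 1)  # ADHD diagnosis logit
        self.sex_head = nn.Linear(hidden_dim, 1)   # patient sex logit

    def forward(self, x, edge_index, batch, tab):
        z = torch.cat([self.gnn(x, edge_index, batch), tab], dim=1)
        h = self.mlp(z)
        return self.adhd_head(h), self.sex_head(h)
```

Training both heads jointly (summed binary cross-entropy losses) would let the shared layers benefit from both labels, as an alternative to the two-stage embedding-extraction plan.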

submitted by /u/Luccy_33

Help On Interest Rate Data-inflation

Hi everyone,

I’m working on a project about inflation in Turkey. I plan to analyze how exchange rates, interest rates, and import indexes affect inflation.

I need monthly data from 2000 to 2025 because I will be running a time series analysis.

However, I’m struggling to find the correct data on interest rates.

I’m specifically looking for data from the Central Bank of the Republic of Turkey (CBRT), but I’m not sure under which name or section the interest rate data is listed.

If anyone could guide me on where or how to find it (or what it’s exactly called in their database), I would really appreciate it!

Thank you so much in advance!

submitted by /u/Elegant610

We Need A Dataset For Aquaponics/Hydroponics Detailing The Water And Plant Parameters

We are college students who have worked on aquaponics before. We require water parameters such as dissolved oxygen, pH, ammonia, and nitrate, as well as plant parameters such as root height, shoot height, biomass, gas exchange rate, photosynthesis rate, humidity, etc.

We also require a parameter that details how acclimatised the plant is after a specific amount of time.

submitted by /u/sacredspectralsword

How To Assess The Quality Of Written Feedback/Comments Given By Managers

I have the feedback/comments given by managers from the past two years (all levels).

My organization already has an LLM. They want me to analyze this feedback and come up with a framework containing dimensions such as clarity, specificity, and areas for improvement. The problem is how to turn these subjective qualities into logic for training the LLM (the idea is to create a labeled dataset of feedback). How should I approach this?

I have tried LIWC (Linguistic Inquiry and Word Count), which has various word libraries for each dimension and simply checks those words in the comments to give a rating. But this is not working.

Currently, word count seems to be the only quantitative parameter linked to feedback quality (longer comments = better quality).

Any reading material on this would also be beneficial.
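One possible way to bootstrap such a dataset is to have the LLM itself rate each comment against an explicit rubric, then validate a sample against human ratings. A minimal sketch, where `call_llm` is a hypothetical stand-in for the in-house model:

```python
import json

RUBRIC_PROMPT = """Rate the following manager feedback on a 1-5 scale for each
dimension, and return JSON only.
- clarity: is the message easy to understand?
- specificity: does it reference concrete behaviors or outcomes?
- actionability: does it name areas or steps for improvement?

Feedback: {comment}
JSON:"""

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for the organization's LLM API."""
    raise NotImplementedError("wire this to the in-house model")

def score_comment(comment: str) -> dict:
    # e.g. {"clarity": 4, "specificity": 2, "actionability": 3}
    return json.loads(call_llm(RUBRIC_PROMPT.format(comment=comment)))
```

Checking inter-rater agreement between the LLM's scores and a hand-labeled sample would make the framework more defensible than word count alone.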

submitted by /u/Sandwichboy2002

Complete JFK Files Archive Extracted Text (73,468 Files)

I just finished creating GitHub and Hugging Face repositories containing extracted text from all available JFK files on archives.gov.

Every other archive I’ve found contains only the 2025 release, and often not even the complete 2025 release. The 2025 release contained 2,566 files released between March 18 and April 3, 2025. This is only 3.5% of the total available files on archives.gov.

The same goes for search tools (AI or otherwise): they all focus only on the 2025 release, and often on an incomplete subset of its documents.

The only files that are excluded are a few discrepancies described in the README and 17 .wav audio files that are very low quality and contain lots of blank space. Two .mp3 files are included.

The data is messy: the files do not follow a standard naming convention across releases. Many files appear repeatedly across releases, often with less information redacted. Files are often referred to by record number, or even named according to their record number, but in some releases a single record number ties to multiple files, and multiple record numbers can tie to a single file.

I have documented all the discrepancies I could find as well as the methodology used to download and extract the text. Everything is open source and available to researchers and builders alike.

The next step is building an AI chat bot to search, analyze and summarize these documents (currently in progress). Much like the archives of the raw data, all AI tools I’ve found so far focus only on the 2025 release and often not the complete set.

| Release | Files |
| --- | --- |
| 2017-2018 | 53,526 |
| 2021 | 1,484 |
| 2022 | 13,199 |
| 2023 | 2,693 |
| 2025 | 2,566 |

This extracted data amounts to a little over 1 GB of raw text, which is over 350,000 pages (single-spaced typed pages). Although the 2025 release alone supposedly contains 80,000 pages, many files are handwritten notes, low-quality scans, and other undecipherable data. In the future, more advanced AI models will certainly be able to extract more data.

The archives.gov files supposedly contain over 6 million pages in total. The discrepancy is likely blank or nearly blank pages, unrecognizable handwriting, poor-quality scans, poor-quality source data, or data that was unextractable for some other reason. If anyone has another explanation or has successfully extracted more data, I’d like to hear about it.

Hope you find this useful.

GitHub: [https://github.com/noops888/jfk-files-text/](https://github.com/noops888/jfk-files-text/)

Hugging Face (in .parquet format): https://huggingface.co/datasets/mysocratesnote/jfk-files-text
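For anyone who wants to work with the text directly, a minimal sketch using the standard Hugging Face `datasets` workflow (assuming the parquet repo loads with default settings; inspect the splits and columns first):

```python
from datasets import load_dataset

# Repo id taken from the link above; the split name may differ.
ds = load_dataset("mysocratesnote/jfk-files-text")
df = ds["train"].to_pandas()
print(df.columns)
```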

submitted by /u/brass_monkey888

Aggregated Historical Flight Price Dataset

I am working on a personal project that requires aggregated flight prices based on origin-destination pairs. I am specifically interested in data that includes both the price fetch date (booking date) and the travel date. The price fetch date is particularly important for my analysis.

For reference, I’ve found an example dataset on Kaggle https://www.kaggle.com/datasets/yashdharme36/airfare-ml-predicting-flight-fares/data, but it only covers a three-month period. To effectively capture seasonality, I need at least two years’ worth of data.

The ideal features for the dataset would include:

  1. Origin airport
  2. Destination airport
  3. Travel date
  4. Booking date or price fetch date (or the number of days left until the travel date)
  5. Time slot (optional), such as morning, evening, or night
  6. Price

I am looking specifically for a dataset of Indian domestic flights, but I am finding it challenging to locate one. I plan to combine this flight data with holiday datasets and other relevant information to create a flight price prediction app.
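Deriving the “days left until travel” feature (item 4 above) is straightforward once a dataset with both dates is found; a minimal pandas sketch with hypothetical column names:

```python
import pandas as pd

# Hypothetical rows; replace with the real dataset once sourced.
df = pd.DataFrame({
    "booking_date": ["2024-01-01", "2024-01-15"],
    "travel_date":  ["2024-02-10", "2024-02-10"],
})
df["booking_date"] = pd.to_datetime(df["booking_date"])
df["travel_date"] = pd.to_datetime(df["travel_date"])
df["days_left"] = (df["travel_date"] - df["booking_date"]).dt.days
```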

I would appreciate any suggestions you may have, including potential global datasets. Additionally, I would like to know the typical costs associated with acquiring such datasets from data providers. Thank you!

submitted by /u/athuljyothis

Spotify 100,000 Podcasts Dataset Availability

https://podcastsdataset.byspotify.com/ https://aclanthology.org/2020.coling-main.519.pdf

Does anybody have access to this dataset, which contains 60,000 hours of English audio?

The dataset was removed by Spotify. However, it was originally released under a Creative Commons Attribution 4.0 International License (CC BY 4.0) as stated in the paper. Afaik the license allows for sharing and redistribution – and it’s irrevocable! So if anyone grabbed a copy while it was up, it should still be fair game to share!

If you happen to have it, I’d really appreciate if you could send it my way. Thanks! 🙏🏽

submitted by /u/OogaBoogha

Rf-stego-dataset: Python-Based Tool That Generates Synthetic RF IQ Recordings + Optional Steganographic Payloads Embedded Via LSB (Repo Includes Sample Dataset)

rf-stego-dataset [tegridydev]

Python-based tool that generates synthetic RF IQ recordings (.sigmf-data + .sigmf-meta) with optional steganographic payloads embedded via LSB.

It also produces spectrogram PNGs and a manifest (metadata.csv + metadata.jsonl.gz).

Key Features

  • Modulations: BPSK, QPSK, GFSK, 16-QAM (Gray), 8-PSK
  • Channel Impairments: AWGN, phase noise, IQ imbalance, Rician / Nakagami fading, frequency & phase offsets
  • Steganography: LSB embedding into the I‑component (toy sketch after this list)
  • Outputs: SigMF files, spectrogram images, CSV & gzipped JSONL manifests
  • Configurable: via config.yaml or interactive menu
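For illustration, a toy version of the LSB idea (not necessarily the repo’s exact implementation): quantize the I component, overwrite each sample’s least significant bit with a payload bit, and rebuild the complex signal.

```python
import numpy as np

def embed_lsb(iq: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the LSBs of the quantized I component."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    i = np.round(iq.real * 32767).astype(np.int16)
    q = np.round(iq.imag * 32767).astype(np.int16)
    assert bits.size <= i.size, "payload too large for this clip"
    i[:bits.size] = (i[:bits.size] & ~1) | bits  # overwrite LSBs
    return (i.astype(np.float32) + 1j * q.astype(np.float32)) / 32767.0

# Toy carrier: a complex tone.
iq = np.exp(2j * np.pi * 0.01 * np.arange(4096)).astype(np.complex64)
stego = embed_lsb(iq, b"hello")
```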

Dataset Contents

Each clip folder contains:

  1. clip_<idx>_<uuid>.sigmf-data
  2. clip_<idx>_<uuid>.sigmf-meta
  3. clip_<idx>_<uuid>.png (spectrogram)

The manifest lists:

  • Dataset name, sample rate
  • Modulation, impairment parameters, SNR, frequency offset
  • Stego method used
  • File name, generation time, clip duration

Use Cases

  • Machine Learning: train modulation classification or stego detection models
  • Signal Processing: benchmark algorithms under controlled impairments
  • Security Research: study steganography in RF domains

Quick Start

  1. Clone repo: git clone https://github.com/tegridydev/rf-stego-dataset.git
  2. Install dependencies: pip install -r requirements.txt
  3. Edit config.yaml or run: python rf-gen.py and choose Show config / Change param
  4. Generate data: select Generate all clips

Enjoy <3

submitted by /u/tegridyblues

Seeking ESG Controversy Scores (2021–2024) For S&P 500 Financial Sector Companies

Hi,
I’m doing an academic research project and urgently need ESG controversy scores (not general ESG ratings) for financial sector companies in the S&P 500 from 2021 to 2024, from any reliable source (MSCI, Refinitiv, Sustainalytics, etc.).

Ideally, I need scores that reflect the timing and severity of ESG controversies so I can conduct an event study on their stock price impact. My university (Tunis Business School) doesn’t provide access to these databases, and I’m a student working on a tight (read: nonexistent) budget.
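For when the scores are in hand, a minimal event-study sketch (market-model abnormal returns via statsmodels); the estimation- and event-window return series are placeholders:

```python
import statsmodels.api as sm

def car(stock_est, mkt_est, stock_evt, mkt_evt):
    """Cumulative abnormal return: market model fit on the estimation window."""
    X = sm.add_constant(mkt_est)
    fit = sm.OLS(stock_est, X).fit()
    alpha, beta = fit.params.iloc[0], fit.params.iloc[1]
    abnormal = stock_evt - (alpha + beta * mkt_evt)  # realized minus expected
    return float(abnormal.sum())
```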

Would appreciate any help, pointers, or sample datasets. Thank you!

submitted by /u/B3ss1

Seeking Ninja-Level Scraper For Massive Data Collection Project

I’m looking for someone with serious scraping experience for a large-scale data collection project. This isn’t your average “let me grab some product info from a website” gig – we’re talking industrial-strength, performance-optimized scraping that can handle millions of data points.

What I need:

  • Someone who’s battle-tested with high-volume scraping challenges
  • Experience with parallel processing and distributed systems
  • Creative problem-solver who can think outside the box when standard approaches hit limitations
  • Knowledge of handling rate limits, proxies, and optimization techniques
  • Someone who enjoys technical challenges and finding elegant solutions

I have the infrastructure to handle the actual scraping once the solution is built – I’m looking for someone to develop the approach and architecture. I’ll be running the actual operation, but need expertise on the technical solution design.
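For a sense of the building blocks involved, a minimal sketch of the concurrency and rate-limit layer (aiohttp plus a semaphore); URLs, limits, and retry policy are placeholders to be tuned per target:

```python
import asyncio
import aiohttp

MAX_CONCURRENCY = 20   # parallel connections
DELAY_BETWEEN = 0.05   # seconds; crude global rate limit

async def fetch(session, sem, url):
    async with sem:
        await asyncio.sleep(DELAY_BETWEEN)
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=30)) as r:
            return url, r.status, await r.text()

async def crawl(urls):
    sem = asyncio.Semaphore(MAX_CONCURRENCY)
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, sem, u) for u in urls),
                                    return_exceptions=True)

if __name__ == "__main__":
    pages = [f"https://example.com/page/{i}" for i in range(100)]
    results = asyncio.run(crawl(pages))
```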

Compensation: Fair and competitive – depends on experience and the final scope we agree on. I value expertise and am willing to pay for it.

If you’re the type who gets excited about solving tough scraping problems at scale, DM me with some background on your experience with high-volume scraping projects and we can discuss details.

Thanks!

submitted by /u/polawiaczperel

Tired Of Robotic Chatbots? Train Them To Sound Human – Try My Dataset

Hi!

I’ve just uploaded a new dataset designed for NLP and chatbot applications:

Tone Adjustment Dataset

This dataset contains English sentences rewritten in three different tones:

  • Polite
  • Professional
  • Casual

Use Cases:

  • Training tone-aware LLMs and chatbot models
  • Fine-tuning transformers for style transfer tasks (sketch after this list)
  • Improving user experience by making bots sound more natural
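For illustration, a sketch of turning the three-tone pairs into seq2seq training examples for a T5-style model; the column layout is an assumption about the dataset, not its documented schema:

```python
import pandas as pd

# Stand-in rows; replace with the real dataset file.
df = pd.DataFrame({
    "sentence":     ["send me the report"],
    "polite":       ["Could you please send me the report?"],
    "professional": ["Please forward the report at your earliest convenience."],
    "casual":       ["shoot me that report when you get a sec"],
})

# Flatten into (input, target) pairs for instruction-style fine-tuning.
pairs = [
    {"input": f"rewrite in a {tone} tone: {row['sentence']}", "target": row[tone]}
    for _, row in df.iterrows()
    for tone in ("polite", "professional", "casual")
]
train = pd.DataFrame(pairs)
```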

I’d love to hear your thoughts: feedback, ideas, or collaborations are welcome!

Cheers,
Gopi Krishnan

submitted by /u/ZenQuery