Category: Datatards

Here you can observe the biggest nerds in the world in their natural habitat, longing for data sets. Not that it isn’t interesting; I’m interested. Maybe they know where the chix are. But what do they need it for? World domination?

You, Too, Can Now Leverage “Artificial Indian”

There was a joke going around for a while that “AI” actually stood for “Artificial Indian”, after multiple companies’ touted “AI” turned out to be outsourced workers in low cost-of-living countries, working remotely behind the scenes.

I just found out that AWS’s assorted SageMaker AI offerings now offer a direct, non-hidden Artificial Indian for anyone to hire, through a convenient interface they are calling “Mechanical Turk”.

https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management-public.html

I’m posting here because its primary purpose is to give people a standardized AI to pay for HUMAN INPUT on labelling datasets, so I figured the more people on the research side who knew about this, the better.
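
For anyone curious what hiring the Artificial Indian looks like in code, here is a rough boto3 sketch of a Ground Truth labeling job routed to the public (Mechanical Turk) workforce. Every ARN and S3 path below is a placeholder; the real public-workforce workteam ARN and the built-in pre-annotation/consolidation Lambda ARNs for your region are listed in the linked docs.

    import boto3

    sm = boto3.client("sagemaker", region_name="us-east-1")

    sm.create_labeling_job(
        LabelingJobName="caption-my-dataset",
        LabelAttributeName="caption",
        RoleArn="arn:aws:iam::<account-id>:role/<sagemaker-execution-role>",
        InputConfig={"DataSource": {"S3DataSource": {
            "ManifestS3Uri": "s3://<bucket>/input.manifest"}}},
        OutputConfig={"S3OutputPath": "s3://<bucket>/labels/"},
        HumanTaskConfig={
            # Public (Mechanical Turk) workforce; ARN format per the docs above.
            "WorkteamArn": "arn:aws:sagemaker:<region>:<aws-account>:workteam/public-crowd/default",
            "UiConfig": {"UiTemplateS3Uri": "s3://<bucket>/caption-template.liquid"},
            "PreHumanTaskLambdaArn": "<built-in pre-annotation Lambda ARN for your region>",
            "AnnotationConsolidationConfig": {
                "AnnotationConsolidationLambdaArn": "<built-in consolidation Lambda ARN>"},
            "TaskTitle": "Write a one-sentence caption for this image",
            "TaskDescription": "Describe the main subject and action in the image.",
            "NumberOfHumanWorkersPerDataObject": 3,  # independent annotators per item
            "TaskTimeLimitInSeconds": 300,
            # Per-task price paid to the public workforce ($0.036 here).
            "PublicWorkforceTaskPrice": {"AmountInUsd": {
                "Dollars": 0, "Cents": 3, "TenthFractionsOfACent": 6}},
        },
    )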

Get your dataset captioned by the latest in AI technology! 🙂

(disclaimer: I’m not being paid by AWS for posting this, etc., etc.)

submitted by /u/lostinspaz

Will Using Synthetic Data Affect My ML Model Accuracy Or My Resume?

Hey everyone 👋 I’m currently working on my final year engineering project based on disease prediction using Machine Learning.

Since real medical datasets are hard to find, I decided to generate synthetic data for training and testing my model. Some people told me it’s not a good idea — that it might affect my model accuracy or even look bad on my resume.

But my main goal is to learn the entire ML workflow — from preprocessing to model building and evaluation.
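
For context, the end-to-end workflow I mean looks roughly like this (make_classification just stands in for my synthetic generator, and the features are not real clinical variables):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report

    # Synthetic "patients": 2,000 rows, 15 features, 20% positive (diseased) class.
    X, y = make_classification(n_samples=2000, n_features=15, n_informative=8,
                               weights=[0.8, 0.2], random_state=42)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, test_size=0.2, random_state=42)

    # Preprocessing + model pipeline, evaluated exactly as it would be on real data.
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))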

So I wanted to ask:
👉 Will using synthetic data affect my model’s performance or generalization?
👉 Does it look bad on a resume or during interviews if I mention that I used synthetic data?
👉 Any suggestions to make my project more authentic or practical despite using synthetic data?

Would really appreciate honest opinions or experiences from others who’ve been in the same situation 🙌

submitted by /u/shrinivas-2003

Finance-Instruct-500k-Japanese Dataset

Introducing the Finance-Instruct-500k-Japanese dataset 🎉

This is a Japanese dataset that includes complex questions and answers related to finance and economics.

This dataset is useful for training, evaluating, and instruction-tuning LLMs on Japanese financial and economic reasoning tasks.
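
A minimal loading sketch (the repository id below is a placeholder; swap in the dataset’s actual Hugging Face id):

    from datasets import load_dataset

    # Placeholder repo id: replace with the dataset's actual Hugging Face id.
    ds = load_dataset("<org>/Finance-Instruct-500k-Japanese", split="train")
    print(ds[0])  # inspect one instruction/response pair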

submitted by /u/Ok_Employee_6418

[Self-Promotion] VC And Funded Startups Databases

After 5 years of curating VC contacts and funded startup data, I’m moving on to a new project. Instead of letting all this data disappear, I’m offering one last chance to grab it at 60% off.

What’s included:

VC Contact Lists (13 databases):

  • Complete VC contact database (1,300+ firms)
  • Specialized lists: AI, Biotech, Fintech, HealthTech, SaaS VCs
  • Stage-focused: Pre-Seed VCs, Seed VCs
  • Geography-focused: Silicon Valley, New York, Europe, USA
  • Bonus: AI Investors list

Funded Startup Databases (10 databases):

  • Full database: 6,000+ verified funded startups
  • By sector: AI/ML, SaaS, Fintech, Biotech/Pharma, Digital Health, Climate Tech
  • By region: USA, Europe, Silicon Valley

Everything is in Excel format, ready to download and use immediately.

Link: https://projectstartups.com

Happy to answer questions!

submitted by /u/project_startups

We Have A 60M Influencer Database And We’re Ready To Share It With You

Hey everyone! We’re the Crossnetics team, and we specialize in large-scale web data extraction. We handle any type of request and build custom databases with 30, 50, 100+ million records in just a few days (yes, we really have that kind of power).

We’ve already collected a ready-to-use database of 60M influencers worldwide, and we’re happy to share it with you. We can export it in any format and with any parameters you need.

If you’re interested, drop a comment or DM us — we’ll send details and what we can build for you.

submitted by /u/unicornsz03

Looking For Reliable Live Ocean Data Sources – Australia

Hey everyone! I’m a Master’s student based in Melbourne working on a project called FLOAT WITH IT, an interactive installation that raises awareness about rip currents and beach safety to reduce drowning among locals and tourists who often visit Australian beaches without knowing the risks. The installation uses real-time ocean data to project dynamic visuals of waves and rip currents onto the ground. Participants can literally step into the projection, interact with motion-tracked currents, and learn how rip currents behave and more importantly, how to respond safely.

For this project, I’m looking for access to a live ocean data API that provides:

  • Wave height / direction / period
  • Tidal data
  • Current speed and direction
  • Coverage of Australian coastal areas (especially Jan Juc Beach, Victoria)

I’ve already looked into sources like Surfline and some open marine data APIs, but most are limited or don’t offer live updates for Australian waters. Does anyone know of a public, educational, or low-cost API I could use for this? Even tips on where to find reliable live ocean datasets would be super helpful! This is a non-commercial university research project, and I’ll be crediting any data sources used in the final installation and exhibition. Thanks so much for your help! I’d love to hear from anyone working with ocean data, marine monitoring, or interactive visualisation!

TL;DR: I’m a Master’s student creating an interactive installation about rip currents and beach safety in Australia. Looking for live ocean data APIs (wave, tide, and current info, especially for Jan Juc Beach, VIC). Needs to be public, affordable, or educational-access friendly. Any leads appreciated!
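
For reference, here’s roughly how I plan to consume whatever feed I end up with. The endpoint and parameter names are placeholders, but it shows the fields I need (wave height/direction/period, tide, and current speed/direction) for the Jan Juc coordinates:

    import requests

    JAN_JUC = {"latitude": -38.34, "longitude": 144.30}  # approx. Jan Juc Beach, VIC

    # Placeholder endpoint and parameters: to be swapped for whichever provider I end up using.
    resp = requests.get(
        "https://<ocean-data-provider>/v1/marine",
        params={
            **JAN_JUC,
            "hourly": "wave_height,wave_direction,wave_period,"
                      "tide_height,current_speed,current_direction",
        },
        timeout=10,
    )
    resp.raise_for_status()
    latest = resp.json()
    # The latest values drive the projected wave / rip-current visuals.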

submitted by /u/pranavron

Looking For Official E-ZPass / Toll Transaction APIs Or Vendor Contacts (building Driver Platform)

Hi all — I’m building a platform for drivers that consolidates toll activity and alerts drivers to unpaid or missed E-ZPass transactions (cases where the transponder didn’t register at a toll booth, or missed/failed toll posts). This can save drivers and fleet owners thousands in fines and plate suspensions — but I’m hitting a roadblock: finding a lawful, reliable data source / API that provides toll transaction records (or near-real-time missed/toll event feeds).

What I’m looking for:

  • Official APIs or data feeds (state toll agencies, E-ZPass Group members, DOTs) that provide: account/plate/toll-event, timestamp, toll location, amount, status (paid/unpaid), and reconciliation IDs.
  • Vendor/portal contacts at toll system vendors or third-party integrators who expose APIs.
  • Advice on legal/contractual path: who to contact to get read-only access for fleets, or how others built partnerships with toll agencies.
  • Pointers to public datasets or FOIA requests that returned usable toll transaction data.

If you’ve done something similar, worked at a toll authority, or can introduce me to the right dev/ops/partnership contact, please DM or reply here. Happy to share high-level architecture and the compliance steps we’ll follow. Thanks!
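
For concreteness, this is roughly the normalized record shape I have in mind once a feed is secured (field names are my own placeholders, not any agency’s schema):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class TollEvent:
        # Placeholder schema for one normalized toll transaction.
        reconciliation_id: str   # agency-side ID used to reconcile disputes
        account_id: str          # E-ZPass account or fleet account reference
        plate: str               # license plate captured at the toll point
        toll_location: str       # plaza / gantry identifier
        timestamp: datetime      # time of the toll event (UTC)
        amount_usd: float        # posted toll amount
        status: str              # "paid", "unpaid", or "missed"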

submitted by /u/CustomerAway5611

Open Maritime Dataset: Ship-tracking + Registry + Ownership Data (Equasis + GESIS + Transponder Signals) — Seeking Ideas For Impactful Analysis

I’m developing an open dataset that links ship-tracking signals (automatic transponder data) with registry and ownership information from Equasis and GESIS. Each record ties an IMO number to:

  • broadcast identity data (position, heading, speed, draught, timestamps)
  • registry metadata (flag, owner, operator, class society, insurance)
  • derived events such as port calls, anchorage dwell times, and rendezvous proximity

The purpose is to make publicly available data more usable for policy analysis, compliance, and shipping-risk research — not to commercialize it.

I’m looking for input from data professionals on what analytical directions would yield the most meaningful insights. Examples under consideration:

  • detecting anomalous ownership or flag changes relative to voyage history
  • clustering vessels by movement similarity or recurring rendezvous
  • correlating inspection frequency (Equasis PSC data) with movement patterns
  • temporal analysis of flag-change “bursts” following new sanctions or insurance shifts

If you’ve worked on large-scale movement or registry datasets, I’d love suggestions on:

  1. variables worth normalizing early (timestamps, coordinates, ownership chains, etc.)

  2. methods or models that have worked well for multi-source identity correlation

  3. what kinds of aggregate outputs (tables, visualizations, or APIs) make such datasets most useful to researchers

Happy to share schema details or sample subsets if that helps focus feedback.
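
To make the linkage concrete, here is a minimal sketch of the core join (column names are illustrative, not the final schema):

    import pandas as pd

    # Illustrative inputs: transponder broadcasts and dated registry snapshots.
    positions = pd.read_parquet("ais_positions.parquet")  # imo, ts, lat, lon, sog, draught
    registry = pd.read_csv("equasis_snapshots.csv")       # imo, snapshot_date, flag, owner, operator

    # Normalize timestamps early (question 1 above).
    positions["ts"] = pd.to_datetime(positions["ts"], utc=True)
    registry["snapshot_date"] = pd.to_datetime(registry["snapshot_date"], utc=True)

    # Attach the registry snapshot in force at the time of each broadcast.
    linked = pd.merge_asof(
        positions.sort_values("ts"),
        registry.sort_values("snapshot_date"),
        by="imo", left_on="ts", right_on="snapshot_date", direction="backward")

    # Example derived signal: how often each vessel's flag changed across snapshots.
    flag_changes = registry.sort_values("snapshot_date").groupby("imo")["flag"].nunique() - 1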

submitted by /u/captain_boh

Dataset Streaming For Distributed SOTA Model Training

“Streaming datasets: 100x More Efficient” is a new blog post sharing improvements to dataset streaming for training AI models.

link: https://huggingface.co/blog/streaming-datasets

Summary of the blog post:

We boosted load_dataset('dataset', streaming=True), which streams datasets without downloading them, with one line of code! Start training on multi-TB datasets immediately, without complex setups, downloads, “disk out of space” errors, or 429 “stop requesting!” errors.
It’s super fast, outrunning our local SSDs when training on 64xH100 with 256 workers downloading data. We’ve improved streaming to make 100x fewer requests, resolve data files 10x faster, stream 2x more samples/sec, and hit 0 worker crashes at 256 concurrent workers.
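
For reference, the basic pattern looks like this (the dataset name is just an example; any hosted dataset works the same way, and the resulting iterable plugs straight into a PyTorch DataLoader):

    from datasets import load_dataset

    # Stream samples on the fly instead of downloading the whole dataset first.
    ds = load_dataset("HuggingFaceFW/fineweb-edu", split="train", streaming=True)

    for i, example in enumerate(ds):
        print(example["text"][:100])
        if i == 2:
            break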

there is also a 1min video explaining the impact of this: https://x.com/andimarafioti/status/1982829207471419879

submitted by /u/qlhoest

How To Get The Latest Earthquake Data From The Japan Meteorological Agency

HELLO!

I’m working on a project that has to do with earthquakes, and the agency’s archives (provided in txt) only go up to 2023. Although their site shows up-to-date earthquake information, they haven’t updated the archives, so I can’t get the latest records in the same txt format. Is there anything I can do to aggregate the latest data without having to use other sites like USGS? Thank you so much.
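
In case it helps, here’s what I’m planning to try next: the earthquake page on JMA’s bosai site appears to be backed by a JSON feed. This is an unverified sketch; the URL and whatever fields it returns are assumptions I still need to confirm against the page itself.

    import requests

    # Unverified assumption: the JMA "bosai" earthquake page appears to load this JSON list.
    url = "https://www.jma.go.jp/bosai/quake/data/list.json"
    events = requests.get(url, timeout=10).json()
    print(len(events), "events in the feed")
    print(events[0].keys())  # inspect the fields before mapping them onto the txt archive layout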

submitted by /u/takoyaki_elle

Complete NBA Dataset, Box Scores From 1949 To Today

Hi everyone. Last year I created a dataset containing comprehensive player and team box scores for the NBA. It contains all the NBA box scores at team and player level since 1949, kept up to date daily. It was pretty popular, so I decided to keep it going for the 25-26 season. You can find it here: https://www.kaggle.com/datasets/eoinamoore/historical-nba-data-and-player-box-scores

Specifically, here’s what it offers:

  • Player Box Scores: Statistics for every player in every game since 1949.
  • Team Box Scores: Complete team performance stats for every game.
  • Game Details: Information like home/away teams, winners, and even attendance and arena data (where available).
  • Player Biographies: Heights, weights, and positions for all players in NBA history.
  • Team Histories: Franchise movements, name changes, and more.
  • Current Schedule: Up-to-date game times and locations for the 2025-2026 season.

I was inspired by Wyatt Walsh’s basketball dataset, which focuses on play-by-play data, but I wanted to create something focused on player-level box scores. This makes it perfect for:

  • Fantasy Basketball Enthusiasts: Analyze player trends and performance for better drafting and team-building strategies.
  • Sports Analysts: Gain insights into long-term player or team trends.
  • Data Scientists & ML Enthusiasts: Use it for machine learning models, predictions, and visualizations.
  • Casual NBA Fans: Dive deep into the stats of your favorite players and teams.

The dataset is packaged as .csv files for ease of access. It’s updated daily with the latest game results to keep everything current.
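
As a quick start, something along these lines gets you going once the files are downloaded (treat the file and column names here as a sketch and check them against the actual CSV headers on Kaggle):

    import pandas as pd

    # File/column names are illustrative; verify them against the Kaggle file listing.
    box = pd.read_csv("PlayerStatistics.csv", parse_dates=["gameDate"])

    # Example: top ten single-game scoring performances in the dataset.
    top_games = (box.sort_values("points", ascending=False)
                    .loc[:, ["gameDate", "firstName", "lastName", "points"]]
                    .head(10))
    print(top_games)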

If you’re interested, check it out. Again, you can find it here: https://www.kaggle.com/datasets/eoinamoore/historical-nba-data-and-player-box-scores/

I’d love to hear your feedback, suggestions, or see any cool insights you derive from it! Let me know what you think, and feel free to share this with anyone who might find it useful.

Cheers.

submitted by /u/Low-Assistance-325

Looking For A Greenhouse Dataset For A University Project 🌱

Hi everyone! 👋

I’m currently working on a university project related to greenhouse crop production and I’m in need of a dataset. Specifically, I’m looking for data that includes:

  • Crop yield (kg/ha) — for crops like tomato, cucumber, capsicum, or similar
  • Environmental and input parameters such as temperature, humidity, light, CO₂, fertilizer usage, electricity consumption, and water usage

If anyone already has access to such a dataset or knows a reliable source where I could find one, I’d be incredibly grateful for your help. 🙏

Thank you in advance for any leads or suggestions! 🌿

submitted by /u/BobcatNo8108

ITI Student Dropout Dataset For ML & Education Analytics

Hey everyone! 👋

– Ever wondered which factors push students to drop out? 🤔

I built a synthetic dataset that lets you explore exactly that – combining academic, social, and personal variables to model dropout risk.

🔗 Check it out on Kaggle:

ITI Student Dropout Synthetic Dataset

📊 About the Dataset

The dataset contains 22 features covering:

  • 🎯 Demographics: age, gender, location, income, etc.
  • 📘 Academics: marks, attendance, backlogs, program type.
  • 💬 Personal & Social: motivation, family support, ragging, stress.
  • 🌐 Digital & Environmental: internet issues, distance from institute.

Target variable: dropout (Yes/No)

🧠 What You Can Do With It

  • Build and compare classification models (Logistic Regression, XGBoost, Random Forest, etc.)
  • Perform EDA and correlation analysis on academic + social factors.
  • Explore feature importance for understanding dropout causes.
  • Use it for education, ML portfolio, or student analytics dashboards.

📚 Dataset Provenance:
Inspired by research like MDPI Data Journal’s dropout prediction study and India’s ITI Tracer Study (CENPAP), this dataset was programmatically generated in Python using probabilistic, rule-based logic to mimic real dropout patterns – fully synthetic and privacy-safe.

– ITI (Industrial Training Institute) offers vocational and technical education programs in India, helping students gain hands-on skills for industrial and technical careers.
These institutes mainly train students after 10th grade in trades like electrical, mechanical, civil, and computer IT.

If you like the dataset, please upvote, drop a comment, or try building models/code using it – so more learners and researchers can discover it and build something impactful!
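
For anyone who wants a starting point, here’s a minimal baseline sketch (the file name and exact column handling are assumptions; only the dropout target comes from the description above):

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    # File name is an assumption; check the actual name on the Kaggle page.
    df = pd.read_csv("iti_student_dropout.csv")

    y = df["dropout"].map({"Yes": 1, "No": 0})
    X = pd.get_dummies(df.drop(columns=["dropout"]))  # one-hot encode categorical features

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, test_size=0.2, random_state=0)

    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
    print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))

    # Feature importances hint at which academic/social factors drive predicted dropout.
    print(pd.Series(clf.feature_importances_, index=X.columns).nlargest(10))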

submitted by /u/Grouchy-Peak-605

Made A 200+ Dataset Collection To Save 50+ Hours Of Data Cleaning

I spent months cleaning and organizing 200+ datasets (CSV, Excel, JSON) for my own machine-learning and analytics projects.

They cover finance, retail, text, IoT, weather, and more — all structured, ready to use, and properly labeled.

It started as a side project but turned into something I use daily for modeling and dashboards.

If anyone’s interested in using them too, the link is in the comments 👇

submitted by /u/Smurgen6000

Welcome To R/learndataa. Let’s Make Learning Data Actually Practical.

Hey everyone!

This subreddit is for anyone learning data science, analytics, and AI. From beginners trying to understand Python to pros sharpening their machine learning skills.

The goal is simple: learn data by doing data.

Here’s what you can expect:

  • Weekly practice challenges
  • Honest discussions about learning paths and projects
  • Tips, tools, and code snippets that actually help
  • Community-led learning projects

I’d love to hear from you. What’s your biggest struggle right now with learning data? Let’s build this space around your needs.

u/Responsible-Gas-1474
Let’s learndataa, together.

submitted by /u/Responsible-Gas-1474

Sharing My Free Tool For Easy Handwritten Fine-tuning Datasets!

Hello everyone! I wanted to share a tool that I created for making handwritten fine-tuning datasets. I originally built this for myself when I was fine-tuning for the first time and couldn’t find conversational datasets formatted the way I needed; hand-typing JSON files seemed like some sort of torture, so I built a simple little UI to auto-format everything for me.

I originally built this back when I was a beginner, so it is very easy to use with no prior dataset creation/formatting experience, but also has a bunch of added features I believe more experienced devs would appreciate!

I have expanded it to support:
– many formats: chatml/chatgpt, alpaca, and sharegpt/vicuna
– multi-turn dataset creation, not just pair-based
– token counting from various models
– custom fields (instructions, system messages, custom IDs)
– auto-saves, with every format type written at once
– for formats like alpaca, no additional data is needed besides input and output, as default instructions are auto-applied (customizable)
– goal tracking bar

I know it seems a bit crazy to be manually typing out datasets, but handwritten data is great for customizing your LLMs and keeping them high-quality. I wrote a 1k interaction conversational dataset within a month during my free time, and this made it much more mindless and easy.
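
For anyone unfamiliar with these layouts, here’s roughly what a single record looks like in the Alpaca and ShareGPT styles (a generic sketch of the common conventions, not my tool’s exact output):

    import json

    # Alpaca-style: single-turn instruction/input/output.
    alpaca_record = {
        "instruction": "Answer the user's question conversationally.",
        "input": "What's a good warm-up before running?",
        "output": "A few minutes of brisk walking plus some leg swings works well.",
    }

    # ShareGPT-style: multi-turn conversation as a list of role-tagged messages.
    sharegpt_record = {
        "conversations": [
            {"from": "human", "value": "What's a good warm-up before running?"},
            {"from": "gpt", "value": "A few minutes of brisk walking plus some leg swings works well."},
            {"from": "human", "value": "How long should it take?"},
            {"from": "gpt", "value": "Five to ten minutes is usually enough."},
        ]
    }

    # One JSON object per line (JSONL), which is what most trainers expect.
    with open("train.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(alpaca_record, ensure_ascii=False) + "\n")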

I hope you enjoy! I will be adding new formats over time, depending on what becomes popular or is asked for.

Get it here

submitted by /u/ella0333

[WIP] ChatGPT Forecasting Dataset — Tracking LLM Predictions Vs Reality

Hey everyone,

I know LLMs aren’t typical predictors, but I’m curious about their forecasting ability. Since I can’t access the state of, say, yesterday’s ChatGPT to compare it with today’s values, I built a tool to track LLM predictions against actual stock prices.

Each record stores the prompt, model prediction, actual value, and optional context like related news. Example schema:

    class ForecastCheckpoint:
        date: str
        predicted_value: str
        prompt: str
        actual_value: str = ""
        state: str = "Upcoming"

Users can choose what to track, and once real data is available, the system updates results automatically. The dataset will be open via API for LLM evaluation etc.

MVP is live: https://glassballai.com

Looking for feedback — would you use or contribute to something like this?

submitted by /u/aufgeblobt

Should My Business Focus On Creating Training Datasets Instead?

I run a YouTube business built on high-quality, screen-recorded software tutorials. We’ve produced 75k videos (2–5 min each) in a couple of months using a trained team of 20 operators. The business is profitable, and the production pipeline is consistent, cheap and scalable.

However, I’m considering whether what we’ve built is more valuable as AI agent training/evaluation data. Beyond videos, we can reliably produce:
– Human demonstrations of web tasks
– Event logs (click/type/URL/timing, JSONL) and replay scripts such as Playwright; a sample event line is sketched below
– Evaluation runs (pass/fail, action scoring, error taxonomy)
– Preference labels with rationales (RLAIF/RLHF)
– PII-safe/redacted outputs with QA metrics
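
To make that concrete, a single event-log line might look something like this (field names are illustrative, not our production schema):

    import json

    # One human action in a web task; the log is JSONL, one object per line.
    event = {
        "task_id": "task_00421",
        "step": 7,
        "action": "click",                    # click | type | navigate | scroll ...
        "url": "https://app.example.com/settings",
        "selector": "button[data-testid='save']",
        "typed_text": None,
        "t_offset_ms": 48210,                 # time since task start
        "screenshot": "task_00421/step_007.png",
    }
    print(json.dumps(event))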

I’m looking for some validation from anyone in the industry:
1. Is large-scale human web-task data (video + structured logs) actually useful for training or benchmarking browser/agent systems?
2. What formats/metadata are most useful (schemas, DOM cues, screenshots, replays, rationales)?
3. Do teams prefer custom task generation on demand or curated non-exclusive corpora?
4. Is there any demand for this? If so any recommendations of where to start? (I think i have a decent idea about this)

I’m trying to decide whether to formalise this into a structured data/eval offering. Technical, candid feedback is much appreciated! Apologies if this isn’t the right place to ask!

submitted by /u/cardDecline

I Analyzed 300+ Beauty Ads From 6 Major Brands. Here’s What Actually Worked.

1. Glossier & Rare Beauty: Emotion-led authenticity wins. Ads featuring real voices, personal moments, and self-expression hooks outperformed studio visuals by 42% in watch-through.

“This is how I wear it every day” outperformed polished tagline intros 3:1.
Lo-fi camera, warmth, and vulnerability = higher trust + saves.

2. Fenty Beauty & Dior Beauty: Identity & luxury storytelling rule. These brands drove results with bold openings + inclusivity or opulence.

Fenty’s shade range flex and Dior’s cinematic luxury scenes both delivered 38% higher brand recall and stronger engagement when paired with clear product hero shots.

Emotional tone + clear visual brand world = scroll-stopping authority.

3. The Ordinary & Estée Lauder: Ingredient authority converts. Proof-first ads highlighting hero actives (“Niacinamide 10% + Zinc”) or clinical claims delivered 52% higher CTR than emotion-only ads.

Estée Lauder’s “derm-tested” visuals with scientific overlays maintained completion rates above 70%, impressive for long-form content.

Ingredient + measurable benefit = high-intent traffic.

Actionable Checklist

– Lead with a problem/solution moment, not a logo.

– Name one hero ingredient or one emotional hook—not both.

– Match tone to brand: authentic (Glossier), confident (Fenty), expert (The Ordinary).

– Show proof before the CTA: testimonials, texture close-ups, or visible transformation.

– Keep the benefit visual (glow, smoothness, tone) front and center.

Want me to analyze your beauty niche next? Drop a comment.

This analysis was compiled as part of a project I’m working on. If you’re interested in this type of creative and strategic analysis, they’re still looking for alpha testers to help build and improve the product.

submitted by /u/RedBunnyJumping

[Release] I Built A Dataset Of Truth Social Posts/comments

I’m releasing a limited open dataset of Truth Social activity focused on Donald Trump’s account.
This dataset includes:

  • 31.8 million comments
  • 18,000 posts (Trump’s Truths and Retruths)
  • 1.5 million unique users

Media and URLs were removed during collection, but all text data and metadata (IDs, authors, reply links, etc.) are preserved.

The dataset is licensed under CC BY 4.0, meaning anyone can use, analyze, or build upon it with attribution.
A future version will include full media and expanded user coverage.

Here’s the link 🙂 https://huggingface.co/datasets/notmooodoo9/TrumpsTruthSocialPosts
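
A minimal loading sketch (the split and column names depend on the repo layout, so inspect the dataset page first):

    from datasets import load_dataset

    # Split/config names depend on how the repo is organized; check the dataset card.
    ds = load_dataset("notmooodoo9/TrumpsTruthSocialPosts", split="train")
    print(ds)      # columns and row count
    print(ds[0])   # one post/comment record with its metadata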

submitted by /u/Ok-Analysis-6589