Looking for a B2C US list with a tilt toward finance, business and investing. Which websites delivered decent quality for you, and how was support and replacements? Real experiences wanted.
submitted by /u/Real_Jay_Dee
Here you can observe the biggest nerds in the world in their natural habitat, longing for datasets. Not that it isn't interesting; I'm interested. Maybe they know where the chix are. But what do they need it for? World domination?
Hi, I'm building a model to generate step-by-step pencil-portrait tutorials from a face photo. I need a small, high-quality dataset of face photo → 8 progressive sketch frames (or vector stroke sequences for faces). Ideally: 50–500 identities, neutral pose, consistent pose across steps, and cumulative stroke frames or stroke-ordered vector drawings.
If you have existing photo↔sketch data (CUFS, person-face-sketch datasets, etc.) and are open to (a) sharing vector/stroke info, or (b) helping infer stroke order for progressive frames, please reply or DM me. I'll provide credit and/or co-authorship for contributors, and I'm happy to pay for high-quality artist contributions (10–100 high-quality tutorials).
submitted by /u/Dizzy_Level455
I compiled and structured a global automotive specifications dataset covering more than 12,000 vehicle variants from over 100 brands, model years 1990–2025.
Each record includes:
– Brand, model, year, trim
– Engine specifications (fuel type, cylinders, power, torque, displacement)
– Dimensions (length, width, height, wheelbase, weight)
– Performance data (0–100 km/h, top speed, CO₂ emissions, fuel consumption)
– Price, warranty, maintenance, total cost per km
– Feature list (safety, comfort, convenience)
Available in CSV, JSON, and SQL formats. Useful for developers, researchers, and AI or data analysis projects.
GitHub (sample, details and structure): https://github.com/vbalagovic/cars-dataset
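Once downloaded, the JSON export can be queried with nothing but the standard library. A minimal sketch, assuming field names like those below (hypothetical keys; the actual schema is documented in the repo):

```python
import json

# Hypothetical records mirroring the documented fields; the real key
# names are defined by the schema in the GitHub repo.
raw = """[
  {"brand": "Toyota", "model": "Corolla", "year": 2020, "fuel_type": "petrol", "power_hp": 132},
  {"brand": "Tesla", "model": "Model 3", "year": 2021, "fuel_type": "electric", "power_hp": 283}
]"""

records = json.loads(raw)

# Example query: all electric variants from 2020 onwards.
electric = [r for r in records if r["fuel_type"] == "electric" and r["year"] >= 2020]
for r in electric:
    print(f'{r["brand"]} {r["model"]} ({r["year"]}): {r["power_hp"]} hp')
```

For larger slices, loading the CSV export into a dataframe library is the more comfortable route.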
submitted by /u/Ok_Cucumber_131
I'm collecting data for analyzing prices and rankings. Do you run scrapes at fixed intervals (daily/hourly), or trigger them on changes (like detected updates)? I'm exploring event-driven scraping but not sure if it's overengineering for most datasets. How do you handle temporal accuracy?
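One middle ground between fixed intervals and a full event-driven pipeline is scheduled polling with hash-based change detection: you still scrape on an interval, but only persist a timestamped snapshot when the content actually changed. A minimal sketch (the in-memory store stands in for a real database):

```python
import hashlib
from datetime import datetime, timezone

def snapshot_if_changed(key: str, payload: str, store: dict) -> bool:
    """Persist a timestamped snapshot only when the content hash changes."""
    digest = hashlib.sha256(payload.encode()).hexdigest()
    last_hash, history = store.setdefault(key, (None, []))
    if digest == last_hash:
        return False  # unchanged: skip the write
    history.append((datetime.now(timezone.utc).isoformat(), payload))
    store[key] = (digest, history)
    return True

store = {}  # key -> (last_hash, [(timestamp, payload), ...])
snapshot_if_changed("widget-price", "19.99", store)
snapshot_if_changed("widget-price", "19.99", store)  # duplicate poll, not recorded
snapshot_if_changed("widget-price", "18.49", store)  # change, recorded
```

Recording the observation timestamp at write time keeps the temporal story honest: each history entry says when the value was first seen, with accuracy bounded by your polling interval.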
submitted by /u/Vivid_Stock5288
Introducing JFLEG-JA, a new Japanese language error correction benchmark with 1,335 sentences, each paired with 4 high-quality human corrections.
Inspired by the English JFLEG dataset, this dataset covers diverse error types, including particle mistakes, kanji mix-ups, incorrect contextual verb, adjective, and literary technique usage.
You can use this for evaluating LLMs, few-shot learning, error analysis, or fine-tuning correction systems.
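For a quick baseline before reaching for GLEU-style metrics, multi-reference evaluation can start with exact match against any of the four corrections. A sketch with a hypothetical example sentence (not drawn from the dataset):

```python
def exact_match_any(prediction: str, references: list[str]) -> bool:
    """True if a model's correction exactly matches any human reference."""
    return prediction.strip() in {r.strip() for r in references}

# Hypothetical entry in the dataset's spirit: one sentence, four references.
references = ["私は学校に行きます。", "私は学校へ行きます。", "学校に行きます。", "私は学校に行く。"]
print(exact_match_any("私は学校へ行きます。", references))  # matches one reference
print(exact_match_any("私は学校を行きます。", references))  # particle error remains
```

Exact match is strict for a language with many valid paraphrases, which is precisely why the four references per sentence matter.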
submitted by /u/Ok_Employee_6418
I'm looking for a free source of cannabis genomic data from recent years.
submitted by /u/zynbobguey
Hello,
I’ve been building a platform that reconstructs and displays SEC-filed financial statements (www.freefinancials.com). The backend is working well, but I’m now working through a data-standardization challenge.
Some companies report the same financial concept using different XBRL tags across periods. For example, one year they might use us-gaap:SalesRevenueNet, and the next year they switch to us-gaap:Revenues. This results in duplicated rows for what should be the same line item (e.g., “Revenue”).
Does anyone have experience normalizing or mapping XBRL tags across filings so that concept names remain consistent across periods and across companies? Any guidance, best practices, or resources would be greatly appreciated.
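One common first pass is a hand-maintained concept map that collapses known synonym tags onto a canonical line item, with a preference order for filings that report several of them. A sketch covering only the revenue example from above (a production map would draw on the full us-gaap taxonomy and its deprecation history):

```python
# Synonym XBRL tags mapped to one canonical concept, ordered by preference.
CONCEPT_MAP = {
    "Revenue": [
        "us-gaap:RevenueFromContractWithCustomerExcludingAssessedTax",
        "us-gaap:Revenues",
        "us-gaap:SalesRevenueNet",
    ],
}

def normalize(facts: dict[str, float]) -> dict[str, float]:
    """Collapse raw tag->value facts onto canonical concept names.

    When several synonym tags appear in one period, the first match in
    the preference list wins.
    """
    out = {}
    for concept, tags in CONCEPT_MAP.items():
        for tag in tags:
            if tag in facts:
                out[concept] = facts[tag]
                break
    return out

# Two periods reporting revenue under different tags both land on "Revenue".
print(normalize({"us-gaap:SalesRevenueNet": 120.0}))
print(normalize({"us-gaap:Revenues": 135.0}))
```

The preference list matters because tags are not always interchangeable (e.g., gross vs. net variants), so each synonym group deserves a sanity check against actual filings before merging rows.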
Thanks!
submitted by /u/Ok-Access5317
Hi, I previously built a project for a hackathon and needed some open jobs data, so I built some aggregators. You can find them in the README.
submitted by /u/Own_Relationship9794
Hi guys, I need good dataset sources for my data analyst capstone project.
submitted by /u/ConcentrateMain1862
Sharing my processed archive of 100+ real estate and census metrics, broken down by zip code and date. I don't want to promote, but I built it for a fun (and free) data visualization tool that's linked in my profile. I've had a few people ask me for this data, since real estate data at the zip code level is really large and hard to process.
It took many hours to clean and process the data, but it has:
– Home values going back to 2005 (broken down by home size)
– Rents per home size, dating 5 years back
– Many relevant census data points since 2009 I believe
– Home listing counts (+ listing prices, price cuts, price increases, etc.)
– Section 8 profitability per home size + various Section 8 metrics
– All in all about 120 metrics IIRC
It's a tad abridged at <1 GB; the raw data is about 80 GB, but it's gone through heavy processing (rounding, removing irrelevant columns, etc.). I have a larger dataset that's about 5 GB with more data points, and can share that later if anybody is interested.
Link to data: https://www.prop-metrics.com/about#download-data
submitted by /u/maps_can_be_fun
Hey r/datasets, if you're into training AI that actually works in the messy real world, buckle up. An 18-year-old founder just dropped Egocentric-10K, a massive open-source dataset that's basically a goldmine for embodied AI. What's in it?
Why does this matter? Current robots suck at dynamic tasks because datasets are tiny or too “perfect.” This one’s raw, scalable, and licensed Apache 2.0—free for researchers to train imitation learning models. Could mean safer factories, smarter home bots, or even AI surgeons that mimic pros. Eddy Xu (Build AI) announced it on X yesterday: Link to X post:
Grab it here: https://huggingface.co/datasets/builddotai/Egocentric-10K
submitted by /u/NotSuper-man
Hello, I was wondering if anyone might have any good ideas about how to go about getting data like this. I have already tried the Bureau of Transportation Statistics DB1B and T-100 data, but they don’t have anything on the intermediate stops of the itineraries.
So is there some other way to get data distinguishing which passengers at an airport are simply connecting on an itinerary that includes a connection (self-connections obviously excluded), and which passengers are originating or terminating at the airport?
Any help and ideas would be greatly appreciated. Thanks!
submitted by /u/Vyksendiyes
High-Quality USA Data Available — Fresh & Verified ✅
Hey everyone, I have access to fresh, high-quality USA data available in bulk. Packages start from 10,000 numbers and up. The data is clean, updated, and perfect for anyone who needs verified contact datasets.
🔹 Flexible quantities
🔹 Fast delivery
🔹 Reliable source
If you’re interested or need more details, feel free to DM me anytime.
Thanks!
submitted by /u/Alphaboi123
I scraped the top 100 products in a few categories daily for 30 days and got this chunky dataset with rank histories, prices, and reviews. What do I go after first? Maybe trend analysis, price elasticity, or review-manipulation patterns. If you had this data, how would you start working on it?
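For the price-elasticity angle, a crude first cut is the OLS slope of log(rank) on log(price), treating rank as a demand proxy (an assumption worth challenging). A sketch with made-up numbers in place of the real 30-day archive:

```python
import math

# Hypothetical daily (price, rank) observations for one product; the real
# values would come from the scraped rank/price histories.
prices = [19.99, 18.99, 17.99, 19.99, 21.99]
ranks = [40, 31, 22, 38, 55]

# OLS slope of log(rank) on log(price): a positive slope suggests that
# higher prices coincide with worse (larger) ranks.
xs = [math.log(p) for p in prices]
ys = [math.log(r) for r in ranks]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"log-log slope (rank vs price): {slope:.2f}")
```

Rank confounds a lot (promotions, review velocity, category seasonality), so this is a starting probe, not an elasticity estimate you would act on.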
submitted by /u/Vivid_Stock5288
Hey everyone,
I've got two big lists of songs that I need to compare:
• List 1: 3,509 songs
• List 2: 3,402 songs
Most of the songs appear in both lists, but I need to find which songs are in List 1 but not in List 2.
I've tried running it through ChatGPT, but I don't have Pro, so I'm limited.
If someone can do this for me I’d be willing to pay
CSV files: https://drive.google.com/drive/folders/1VxLHnw9lfGhB-yOoZv_mcwNTGcrTF0dS
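For what it's worth, this is a few lines of standard-library Python: load both CSVs and take a set difference on normalized titles. A sketch with stand-in data (the real files' column name may differ):

```python
import csv
import io

# Stand-in CSVs with a "title" column; the real files are in the linked
# Drive folder and their header may be named differently.
list1_csv = "title\nSong A\nSong B\nSong C\n"
list2_csv = "title\nSong B\nSong C\n"

def titles(text: str) -> set[str]:
    """Read a CSV and return its titles, normalized for comparison."""
    return {row["title"].strip().casefold() for row in csv.DictReader(io.StringIO(text))}

only_in_list1 = titles(list1_csv) - titles(list2_csv)
print(sorted(only_in_list1))  # → ['song a']
```

Exact matching will miss titles that differ in punctuation or featured-artist credits; fuzzy matching (e.g., difflib) is the next step if the counts look off.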
submitted by /u/Vidwiz_
I recently made a dataset of 10,000 cars simply to train my AI project, and I wanted to know if I could take this further.
submitted by /u/SouthernPermit6190
Can anyone point me towards actual recipe database(s), not API services, that permit commercial use?
I'm looking to do a project with a view to eventual commercial implementation based around ingredient/recipe matching. I am aware that online recipe matching is quite a crowded field, with many web services offering simple recipe matching already out there. I have a couple of specific angles that make my idea different, which I don't want to go into here, but which I have not seen anyone else doing.
There are also many recipe API services, with, of course, tiered pricing, rate limiting and so on. The fundamental problem with using third-party recipe APIs is that, cost aside, it's essentially impossible to query outside of the search parameters they already provide. I am not interested in trying to put together my own clone of what's fundamentally a widely and freely available turnkey service: if my thing is no different, then I see no point.
In order for my project to work I need to be able to directly access a recipe database, not just run queries that someone else already thought of through their API. I would be happy to self host this but I have to get the data from somewhere. Is anyone able to suggest sources for actual database access, either to query against directly or to clone for self hosting? So far everything I found seems to be either non-commercial only with no other licensing option presented or things like datasets that people have scraped on Kaggle or things that aren’t actually recipe databases e.g. Nutritionix.
Thanks
submitted by /u/SquiffSquiff
Looking for a reliable and frequently updated football data API that covers: Premier League, Serie A, La Liga, Bundesliga, Ligue 1, and EFL Championship.
What I need
• Competitions: EPL, Serie A, La Liga, Bundesliga, Ligue 1, EFL Championship
• Data types:
  • Live: match scores, ongoing results, live match events (goals, cards, substitutions, etc.)
  • Recent: updated league tables and standings (within minutes of change)
  • Player stats: appearances, minutes, goals, assists, xG/xA if available
  • Club stats: team form, possession, shots, xG/xGA, PPDA, etc.
  • Historical: access to past seasons (preferably 2010/11 → present)
• Update frequency: real-time or near real-time (<1-min delay preferred)
• Format: JSON REST API or GraphQL, with good documentation
• Licensing: open or paid, just needs clear usage rights and stable uptime
Bonus
• Webhooks or push updates for live events
• Consistent player/club IDs across seasons
• Advanced metrics (xG models, passing maps, pressure events)
If you know any trusted APIs or data providers, please share:
• Link
• Coverage (competitions + seasons)
• Update frequency
• Known limitations
• Pricing/licence details
Thanks in advance, I’ll compile and share the best options for others looking for up-to-date football data
submitted by /u/isolba9
Two websites are tracking deletions, changes, or reduced accessibility to Federal datasets.
America’s Essential Data
America’s Essential Data is a collaborative effort dedicated to documenting the value that data produced by the federal government provides for American lives and livelihoods. This effort supports federal agency implementation of the bipartisan Evidence Act of 2018, which requires that agencies prioritize data that deeply impact the public.
https://fas.org/publication/deleted-federal-datasets/
They identified three types of data decedents. Examples are below, but visit the Dearly Departed Dataset Graveyard at EssentialData.US for a more complete tally and relevant links.
submitted by /u/Slight-Fix9564
Hi everyone,
I’ve been working on a skin condition detection project using CNNs, with 5 classes — Wrinkles, Hyperpigmentation, Blackheads, Acne, and Open Pores.
I’ve collected around 3,000 images per class from various open sources and uploaded them to Google Drive for model training.
Now that I’ve trained and saved my model weights, I’m planning to delete the dataset from Drive to save space. But since I worked really hard to collect and clean it, I don’t want it to go to waste.
Can I upload the dataset to Kaggle Datasets for free and reference it in my GitHub project for future users?
Or is there a better alternative for sharing it publicly with proper licensing and access?
Any advice or experience sharing datasets like this would be super helpful.
submitted by /u/Plane_Race_840
Hey everyone,
I’m working on a small project related to website characterization and categorization — basically classifying domains into types like E-commerce, News, Social Media, Adult, etc.
I’ve heard that OpenDNS (now Cisco Umbrella) has a large Domain Tagging dataset where domains are categorized by the community. I’d love to use it (or even a subset) as part of my training or benchmarking data.
However, I can’t find any public dataset download or API endpoint that provides the full tagged domain list — only individual lookups or some small sample lists.
Does anyone know if:
I’ve already checked the official OpenDNS community site and Cisco forums, but I didn’t see a bulk export option.
Any pointers, mirrors, or even partial exports would be amazing.
Thanks in advance!
OpenDNS Link: https://community.opendns.com/domaintagging/
submitted by /u/mrjohndoe42069
I am looking to make a dream interpreter, so I need a dream dataset. If anyone knows of one, please point me to it. I look forward to replies from the ambitious people in our community.
submitted by /u/ClassroomLumpy3014
I’m working on a computer vision project for solar panel defect detection and localization. Specifically, I need datasets where defects are annotated with bounding boxes so the model can learn to detect where the problem is, not just classify the image as faulty or normal. I want to download the data and work locally, and I don’t want to use any online platforms for training.
submitted by /u/Successful-Life8510
Hello everyone,
I'm developing a mobile app (Expo / React Native + Flask backend) that displays fuel station prices.
I already consume the official dataset [Prix des carburants en temps réel]() available on data.gouv.fr, which provides station IDs, addresses, GPS coordinates, and prices.
Problem: this feed does not consistently include the stations' commercial name, or enseigne (e.g., TotalEnergies, Leclerc, Intermarché, Carrefour Market…).
I'm looking for a legal and sustainable solution, without scraping, to associate each station with its brand.
The goal is to display in the app:
up-to-date fuel prices.
Is there an official dataset (CSV / JSON / API) that links station identifiers (id, address, postcode, city) to their brand / commercial name? → If so, can you share the exact link or dataset name?
If this dataset is not public:
Do you know of a legal alternative source (for example regional open data, INSEE, or professional databases) for obtaining the corresponding brands?
On the technical side: would you recommend preloading these mappings server-side (e.g., a SQLite table or an imported CSV) to avoid excessive calls or client-side scraping?
Finally, if someone has already merged these data (via ID, address, or geolocation), I would be very interested in:
I would like to obtain a data structure of this type:
{
  "id_station": "12345678",
  "enseigne": "TotalEnergies",
  "adresse": "4 Rue Étienne Kernours",
  "ville": "Douarnenez",
  "prix_gazole": 1.622,
  "prix_sp98": 1.739
}
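A minimal sketch of the server-side preload idea: join the feed's station records against an id → enseigne lookup table (the lookup data here is hypothetical; a real source for it is exactly what this post is asking for):

```python
# Hypothetical brand lookup preloaded server-side (e.g., from a CSV or a
# SQLite table); finding a real source for this mapping is the open question.
ENSEIGNES = {"12345678": "TotalEnergies", "87654321": "E.Leclerc"}

# Records shaped like the data.gouv.fr feed (field names simplified).
stations = [
    {"id_station": "12345678", "ville": "Douarnenez", "prix_gazole": 1.622},
    {"id_station": "99999999", "ville": "Quimper", "prix_gazole": 1.655},
]

def enrich(station: dict) -> dict:
    """Attach the brand when known, a placeholder otherwise."""
    return {**station, "enseigne": ENSEIGNES.get(station["id_station"], "Inconnue")}

enriched = [enrich(s) for s in stations]
print(enriched)
```

Preloading the mapping server-side also means the client only ever calls your own backend, which sidesteps both rate-limit and scraping concerns.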
Thanks in advance for any help, leads, or contacts!
Best regards,
Tom
submitted by /u/OpenApartment1246
All-Party Parliamentary Groups (APPGs) are informal cross-party groups within the UK Parliament. APPGs exist to examine particular topics or causes, for example, small modular reactors, blood cancer, and Saudi Arabia.
While APPGs can provide useful forums for bringing together stakeholders and advancing policy discussions, there have been instances of impropriety, and the groups have faced criticism for potential conflicts of interest and undue influence from external bodies.
I have pulled data from Parliament’s register of APPGs (individual webpages / single PDF) into a JSON object for easy interrogation. Each APPG entry lists a chair, a secretariat, sources of funding, and so on.
For example: how many APPGs are there on cancer? Which political party chairs the most APPGs? How many donations do they receive?
Click HERE to view the dataset on Kaggle.
submitted by /u/2AEP