Category: Datatards

Here you can observe the biggest nerds in the world in their natural habitat, longing for data sets. Not that it isn’t interesting; I’m interested. Maybe they know where the chix are. But what do they need it all for? World domination?

Datasets For Cognitive Biases Impact

Bit of an odd request: I’m looking for a dataset I can use in Power BI to illustrate the impact of cognitive biases on behavior and build a dashboard around it.

Any idea where I can find one? I’m open to any industry, but D2C would be preferable, I guess.

submitted by /u/skap24
[link] [comments]

Alternatives To The X API For A Student Project?

Hi community,

I’m a student working on my undergraduate thesis, which involves mapping the narrative discourses on the environmental crisis on X. To do this, I need to scrape public tweets containing keywords like “climate change” and “deforestation” for subsequent content analysis.

My biggest challenge is the new API limitations, which have made access very expensive and restrictive for academic projects without funding.

So, I’m asking for your help: does anyone know of a viable way to collect this data nowadays? I’m looking for:

  1. Python code or libraries that can still effectively extract public tweets (see the sketch just after this list).
  2. Web scraping tools or third-party platforms (preferably free) that can work around the API limitations.
  3. Any strategy or workaround that would allow access to this data for research purposes.
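
On point 1, here is a minimal, heavily hedged sketch using snscrape, a community library that has historically scraped public tweets without API keys. X’s anti-scraping changes have repeatedly broken it, and attribute names (e.g. rawContent) vary between versions, so treat this as a starting point, not a guarantee:

```python
# Minimal sketch using snscrape (no API keys required). X's countermeasures
# may break this at any time; attribute names vary across snscrape versions.
import snscrape.modules.twitter as sntwitter
import pandas as pd

query = '("climate change" OR deforestation) lang:en since:2024-01-01 until:2024-06-30'
rows = []
for i, tweet in enumerate(sntwitter.TwitterSearchScraper(query).get_items()):
    if i >= 1000:  # cap the crawl for a quick test run
        break
    rows.append({"date": tweet.date, "user": tweet.user.username, "text": tweet.rawContent})

pd.DataFrame(rows).to_csv("tweets.csv", index=False)
```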

Any tip, tutorial link, or tool name would be a huge help. Thank you so much!

TL;DR: Student with zero budget needs to scrape X for a thesis. Since the API is off-limits, what are the current best methods or tools to get public tweet data?

submitted by /u/letucas
[link] [comments]

Looking For A Reliable Source Of Player Tackles Odds — Any Leads?

Hey folks! We’re working on a prop-focused betting analytics tool, and we’ve run into a wall trying to consistently source player-tackles odds across major leagues (especially the Premier League, La Liga, MLS, etc.).

We’re NOT looking for final match stats (we already have those), and we’re not scraping bookies directly due to all the anti-bot measures.

What we’re looking for:

  • A data provider/API that reliably includes pre-match odds for player tackles
  • Ideally with some sort of subscription or monthly fee (we want stability, not hacks)
  • Doesn’t have to be Opta-tier, just accurate and consistent
We’re happy to pay if it saves us the headache and keeps things running clean on the backend. If anyone’s using or knows of a source (public or private), I’d love to hear from you.

Thanks in advance for any help — and if anyone’s building something similar, always open to connect!

submitted by /u/hildegrim17
[link] [comments]

Request: Reddit Posts And Comments From R/endometriosis (April–May 2025) For Academic Research

Hello! I am conducting academic research on discussions in r/endometriosis from April through May 2025 and from January 2023. I’m looking for datasets containing posts and comments from that subreddit during those periods. I’ve tried the Reddit API and Pushshift but haven’t been able to access the full historical data. If anyone has such a dataset or can point me to where I can find it, I’d really appreciate your help! Thanks so much!

submitted by /u/LordofRinger
[link] [comments]

Best Pharmacy, Grocery Store, Retail Store, Etc Databases

Hi everyone,

I’m new to this kind of stuff. I’ve been struggling to find databases that will give me point data on pharmacies, grocery stores, retail stores, etc., for a project of mine. I’ve tried OSM (OpenStreetMap), but I’m looking for Vermont data and OSM has very poor coverage of rural areas; Google Maps results are far more plentiful. Anyone have recommendations?
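
For what it’s worth, OSM point data for Vermont can at least be pulled in one query via the public Overpass API; a minimal sketch (the rural-coverage caveat above still applies):

```python
# Sketch: pull pharmacy/grocery/retail points for Vermont from OpenStreetMap
# via the public Overpass API. Coverage caveats from the post still apply.
import requests

query = """
[out:json][timeout:90];
area["name"="Vermont"]["admin_level"="4"]->.vt;
(
  nwr["amenity"="pharmacy"](area.vt);
  nwr["shop"~"^(supermarket|convenience|department_store)$"](area.vt);
);
out center;
"""
resp = requests.post("https://overpass-api.de/api/interpreter", data={"data": query})
resp.raise_for_status()
for el in resp.json()["elements"]:
    tags = el.get("tags", {})
    lat = el.get("lat") or el.get("center", {}).get("lat")
    lon = el.get("lon") or el.get("center", {}).get("lon")
    print(tags.get("name", "unnamed"), tags.get("amenity") or tags.get("shop"), lat, lon)
```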

Thanks

submitted by /u/BattalionX
[link] [comments]

OpenDataHive Wants To F### Scale AI And Kaggle

OpenDataHive looks like a web-based, open-source platform designed as an infinite honeycomb grid where each “hexo” cell links to an open dataset (APIs, CSVs, repositories, public DBs, etc.).

The twist? It’s made for AI agents and bots to explore autonomously, though human users can navigate it too. The interface is fast, lightweight, and structured for machine-friendly data access.

Here’s the launch tweet if you’re curious: https://x.com/opendatahive/status/1936417009647923207

submitted by /u/Ok-Cut-3256
[link] [comments]

Formats For Datasets With Accompanying Code Deserializers

Hi: I work in academic publishing and as such have spent a fair bit of time examining open-access datasets, as well as various standardizations and conventions for packaging data into “bundles”. On some occasions I’ve used datasets for my own research. I’ve consistently found “reusability” to be a sticking point, even though it’s one of the FAIR principles. In particular, it very often proves necessary to write custom code in order to make any productive use of published data.

Scientists and researchers seem to be under the impression that because formats like CSV and JSON are generic and widely supported, data encoded in these formats is automatically reusable. However, that’s rarely true. CSV files often lack a one-to-one correspondence between columns and parameters/fields, so it’s sometimes necessary to group multiple columns, or to further parse individual columns (e.g., mapping strings governed by a controlled vocabulary to enumeration values). Similarly, JSON and XML require traversers that actually walk through objects/arrays and DOM elements, respectively.

In principle, those who publish data should likewise publish code to perform these kinds of operations, but I’ve observed that this rarely happens. Moreover, this issue does not seem particularly well addressed by popular standards like Research Objects or Linked Open Data. I believe there should be a sort of addendum to RO or FAIR saying something like this:

For a typical dataset, (1) it should be possible to deserialize all of the contents, or a portion thereof (according to users’ interests) into a collection of values/objects in some programming language, and (2) data publishers should make deserialization code directly available as part of the contents, or at least direct users to open-source code libraries with such capabilities.
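
To make (1) and (2) concrete, here is a minimal sketch, with an entirely hypothetical CSV layout and vocabulary, of what publisher-supplied deserialization code might look like when columns don’t map 1:1 to fields:

```python
# Hypothetical sketch of publisher-supplied deserialization code: maps a CSV
# whose columns do not correspond 1:1 to fields (lat/lon grouped into one
# object, a controlled-vocabulary string parsed into an enum).
import csv
from dataclasses import dataclass
from enum import Enum

class LandUse(Enum):  # controlled vocabulary -> enumeration values
    FOREST = "forest"
    URBAN = "urban"
    CROPLAND = "cropland"

@dataclass
class Coordinates:
    lat: float
    lon: float

@dataclass
class SiteRecord:
    site_id: str
    location: Coordinates  # grouped from two CSV columns
    land_use: LandUse      # parsed from a vocabulary string

def load_sites(path: str) -> list[SiteRecord]:
    with open(path, newline="") as f:
        return [
            SiteRecord(
                site_id=row["site_id"],
                location=Coordinates(float(row["lat"]), float(row["lon"])),
                land_use=LandUse(row["land_use"].strip().lower()),
            )
            for row in csv.DictReader(f)
        ]
```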

The question I have, against that background, is — are there existing standards addressing things like deserialization which have some widespread recognition (at least comparable to FAIR or to Research Object Bundles)? Also, is there a conventional terminology for relevant operations/requirements in this context? For example, is there any equivalent to “Object-Relational Mapping” (to mean roughly “Object-Dataset Mapping”)? Or a framework to think through the interoperation between code libraries and RDF ontologies? In particular, is there any conventional adjective to describe data sets that have deserialization capabilities relevant to my (1) and (2)?

Once, I published a paper about “procedural ontologies”, which had to do with translating RDF elements to code “objects” that have functionality and properties described by their public class interface. We then have the issue of connecting such attributes with those modeled by RDF itself. I thought the expression “procedural ontology” was a useful term, but I did not find (then or later) a common expression with a similar meaning. Ditto for something like “procedural dataset”. So either there are blind spots in my domain knowledge (which often happens) or these issues actually are under-explored in the realm of data publishing.

Apart from merely providing deserialization code, datasets adhering to this concept rigorously might adopt policies such as annotating types and methods to establish correlations with data files (e.g., a particular CSV column or XML attribute is marked as mapping to a particular getter/setter pair in some class of a code library) and describing the relevant code in metadata (programming language, external dependencies, compiler/language versions, etc.). Again, I’m not aware of conventions in e.g. Research Objects for describing these properties of accompanying code libraries.
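
As one illustration of that annotation policy (a hypothetical convention, not an existing standard), field-level metadata could carry the column mapping in machine-readable form:

```python
# Sketch of the annotation idea above: mark dataclass fields with the CSV
# column they deserialize from, so the mapping itself is machine-readable.
from dataclasses import dataclass, field, fields

@dataclass
class Measurement:
    temperature_c: float = field(metadata={"csv_column": "temp", "unit": "degC"})
    station: str = field(metadata={"csv_column": "station_id"})

# Tooling (or dataset metadata generators) can introspect the mapping:
for f in fields(Measurement):
    print(f.name, "<-", f.metadata["csv_column"])
```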

submitted by /u/osrworkshops
[link] [comments]

Looking For Roadworks/construction APIs Or Open Data Sources For Cycling Route Planning App

Hey everyone!

I’m building an open-source web app that analyzes cycling routes from GPX files and identifies roadworks/construction zones along the path. The goal is to help cyclists avoid unexpected road closures and get suggested detours for a smoother ride.

Currently, I have integrated APIs for:

  • Belgium: GIPOD (Flanders region)
  • Netherlands: NDW (national road network)
  • France: Bison Futé + Paris OpenData
  • UK: StreetManager

I’m looking for similar APIs or open data sources for other countries/regions, particularly:

  • Germany, Austria, Switzerland (popular cycling destinations)
  • Spain, Portugal, Italy
  • Denmark, Sweden, Norway
  • Any other countries with cycling-friendly open data

What I need:

  • APIs that provide roadworks/construction data with geographic coordinates
  • Preferably with date ranges (start/end dates for construction)
  • Polygon/boundary data is ideal, but point data works too
  • Free/open access (this is a non-commercial project)

Secondary option: I’m also considering OpenStreetMap (OSM) as a supplementary data source using the Overpass API to query highway=construction and temporary:access tags, but OSM has limitations for real-time roadworks (updates can be slow, community-dependent, and OSM recommends only tagging construction lasting 6+ months). So while OSM could help fill gaps, government/official APIs are still preferred for accurate, up-to-date roadworks data.
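
If I do go the OSM route, a minimal Overpass sketch like this (illustrative coordinates; radius search around a point sampled from the GPX track) would pull construction-tagged ways:

```python
# Illustrative sketch only: query OSM construction tagging near one route
# point via the public Overpass API (supplementary source, per the caveats
# above). Coordinates and radius are placeholders.
import requests

lat, lon, radius_m = 50.85, 4.35, 2000  # e.g. a point sampled from the GPX track
query = f"""
[out:json][timeout:60];
(
  way(around:{radius_m},{lat},{lon})["highway"="construction"];
  way(around:{radius_m},{lat},{lon})["construction"];
);
out geom;
"""
resp = requests.post("https://overpass-api.de/api/interpreter", data={"data": query})
resp.raise_for_status()
for way in resp.json()["elements"]:
    tags = way.get("tags", {})
    print(way["id"], tags.get("highway"), tags.get("construction", ""))
```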

Any leads on government open data portals, transportation department APIs, or even unofficial data sources would be hugely appreciated! 🚴‍♂️

Thanks in advance!


Edit: Also interested in any APIs for bike lane closures, temporary cycling restrictions, or cycling-specific infrastructure updates if anyone knows of such sources!

submitted by /u/JayQueue77
[link] [comments]

I Made An Open-Source Minecraft Food Image Dataset And Want Your Help!

Yo everyone!
I’m currently learning image classification and was experimenting with training a model on Minecraft item images. But I noticed there’s no official or public dataset available for this, especially one that’s clean and labeled.

So I built a small open-source dataset myself, starting with just food items.

I manually collected images by taking in-game screenshots and supplementing them with a few clean images from the web. The current version includes 4 items:

  • Apple
  • Golden Apple
  • Carrot
  • Golden Carrot

Each category has around 50 images, all in .jpg format, centered and organized in folders for easy use in ML pipelines.
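
Since the layout is folder-per-class, it should drop straight into standard pipelines. A quick sketch with torchvision (the exact repo-relative path is an assumption):

```python
# Minimal sketch: load the folder-per-class dataset with torchvision.
# "DeepCraft-Food/food" is an assumed path; adjust to the repo layout.
import torch
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
ds = datasets.ImageFolder("DeepCraft-Food/food", transform=tfm)
loader = torch.utils.data.DataLoader(ds, batch_size=16, shuffle=True)
print(ds.classes)  # class names are inferred from the folder names
```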

🔗 GitHub Repo: DeepCraft-Food

It’s very much a work-in-progress, but I’m planning to split future item types (tools, blocks, mobs, etc.) into separate repositories to keep things clean and scalable. If anyone finds this useful or wants to contribute, I’d love the help!

I’d really appreciate help from the community in growing this dataset, whether it’s contributing images, suggesting improvements, or just giving feedback.

Thanks!

submitted by /u/xtrupal
[link] [comments]

Is There Any Painting Art API Out There?

Is there any painting/art API out there? I know Artsy, but it will be retired on 28 July, and I am not able to create an app in the Artsy system because they removed that feature. I know Wikidata, but it doesn’t contain descriptions of artworks. I need an API that gives me the artwork name, artwork description, creation date, and creator name. How can I do that?
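
One option worth checking (a suggestion, not something from the original question): the Art Institute of Chicago’s public API exposes title, date, creator, and, for many works, a description. A minimal sketch; note that the description field is null for some artworks:

```python
# Sketch against the Art Institute of Chicago public API. Field availability
# varies per artwork; "description" in particular is often null.
import requests

fields = "id,title,date_display,artist_display,description"
resp = requests.get(
    "https://api.artic.edu/api/v1/artworks",
    params={"fields": fields, "limit": 5},
)
resp.raise_for_status()
for art in resp.json()["data"]:
    print(art["title"], "|", art["artist_display"], "|", art["date_display"])
    print((art.get("description") or "no description")[:200])
```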

submitted by /u/eksitus0
[link] [comments]

Searching For Longitudinal Mental Health Dataset

I’m searching for a longitudinal dataset with mental health data. It needs to include something that can be linguistically analyzed, such as daily diary entries, writing prompts, or even patient-therapist transcripts. I’m not too picky on timeframe or disorder; I just want to see if something is out there and available for public use. If anyone is aware of datasets like this, or forums that might be helpful, I would appreciate the help. I’ve done some searching and so far haven’t found much.

Thank you in advance!

submitted by /u/BelSwaff
[link] [comments]

How Can I Extract Data From A Subreddit Over Multiple Years (e.g. 2018–2024)?

Hi everyone,
I’m trying to extract data from a specific subreddit over a period of several years (for example, from 2018 to 2024).
I came across Pushshift, but from what I understand it’s no longer fully functional or available to the public like it used to be. Is that correct?

Are there any alternative methods, tools, or APIs that allow this kind of historical data extraction from Reddit?
If Pushshift is still usable somehow, how can I access it? I’ve checked but I couldn’t find a working method or way to make requests.
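
One common workaround, assuming you can obtain the archived Pushshift-style monthly dump files that circulate via Academic Torrents: they are zstd-compressed NDJSON, so an entire month can be filtered locally without any API. A sketch (filename and subreddit are placeholders):

```python
# Sketch: stream a Pushshift-style .zst submissions dump and keep only one
# subreddit. "RS_2018-01.zst" and "askscience" are placeholders.
import json
import zstandard

def iter_submissions(path, subreddit):
    with open(path, "rb") as fh:
        reader = zstandard.ZstdDecompressor(max_window_size=2**31).stream_reader(fh)
        buf = b""
        while chunk := reader.read(2**20):
            buf += chunk
            *lines, buf = buf.split(b"\n")  # keep any partial trailing line
            for line in lines:
                if not line:
                    continue
                post = json.loads(line)
                if post.get("subreddit", "").lower() == subreddit:
                    yield post

for post in iter_submissions("RS_2018-01.zst", "askscience"):
    print(post["created_utc"], post["title"][:80])
```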

Thanks in advance for any help!

submitted by /u/eremitic_
[link] [comments]

WikipeQA: An Evaluation Dataset For Both Web-browsing Agents And Vector DB RAG Systems

Hey fellow datasets enjoyer,

I’ve created WikipeQA, an evaluation dataset inspired by BrowseComp but designed to test a broader range of retrieval systems.

What makes WikipeQA different? Unlike BrowseComp (which requires live web browsing), WikipeQA can evaluate BOTH:

  • Web-browsing agents: Can your agent find the answer by searching online? (The info exists on Wikipedia and its sources)
  • Traditional RAG systems: How well does your vector DB perform when given the full Wikipedia corpus?

This lets you directly compare different architectural approaches on the same questions.

The Dataset:

  • 3,000 complex, narrative-style questions (encrypted to prevent training contamination)
  • 200 public examples to get started
  • Includes the full Wikipedia pages used as sources
  • Shows the exact chunks that generated each question
  • Short answers (1-4 words) for clear evaluation

Example question: “Which national Antarctic research program, known for its 2021 Midterm Assessment on a 2015 Strategic Vision, places the Changing Antarctic Ice Sheets Initiative at the top of its priorities to better understand why ice sheets are changing now and how they will change in the future?”

Answer: “United States Antarctic Program”

Built with Kushim: the entire dataset was automatically generated using Kushim, my open-source framework. This means you can create your own evaluation datasets from your own documents, which is perfect for domain-specific benchmarks.

Current Status:

I’m particularly interested in seeing:

  1. How traditional vector search compares to web browsing on these questions
  2. Whether hybrid approaches (vector DB + web search) perform better
  3. Performance differences between different chunking/embedding strategies

If you run any evals with WikipeQA, please share your results! Happy to collaborate on making this more useful for the community.

submitted by /u/Fit_Strawberry8480
[link] [comments]