Category: Datatards

Here you can observe the biggest nerds in the world in their natural habitat, longing for data sets. Not that it isn’t interesting; I’m interested. Maybe they know where the chix are. But what do they need it for? World domination?

Trying To Work With NOAA Coastal Data. How Are People Navigating This?

I’ve been trying to get more familiar with NOAA coastal datasets for a research project, and honestly the hardest part hasn’t been modeling — it’s just figuring out what data exists and how to navigate it.

I was looking at stations near Long Beach because I wanted wave + wind data in the same area. That turned into a lot of bouncing between IOOS and NDBC pages, checking variable lists, figuring out which station measures what, etc. It felt surprisingly manual.

I eventually started exploring here:
https://aquaview.org/explore?c=IOOS_SENSORS%2CNDBC&lon=-118.2227&lat=33.7152&z=12.39

Seeing IOOS and NDBC stations together on a map made it much easier to understand what was available. Once I had the dataset IDs, I pulled the data programmatically through the STAC endpoint:
https://aquaview-sfeos-1025757962819.us-east1.run.app/api.html#/

From there I merged:

  • IOOS/CDIP wave data (significant wave height + periods)
  • Nearby NDBC wind observations

Resampled to hourly (2016–2025), added a couple lag features, and created a simple extreme-wave label (95th percentile threshold). The actual modeling was straightforward.
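For reference, the merge/resample/label step was roughly like the sketch below (simplified; the CSV filenames and column names are placeholders for whatever you export from the STAC items, not the real schema):

```python
import pandas as pd

# Hypothetical exports: CDIP wave obs and NDBC wind obs, each with a UTC timestamp column.
waves = pd.read_csv("cdip_waves.csv", parse_dates=["time"]).set_index("time")
wind = pd.read_csv("ndbc_wind.csv", parse_dates=["time"]).set_index("time")

# Resample both to hourly means and join on the shared hourly index.
hourly = waves.resample("1h").mean(numeric_only=True).join(
    wind.resample("1h").mean(numeric_only=True),
    how="inner", lsuffix="_wave", rsuffix="_wind",
)

# A couple of lag features on significant wave height (column name assumed).
for lag in (1, 3, 6):
    hourly[f"hs_lag{lag}"] = hourly["sig_wave_height"].shift(lag)

# Simple extreme-wave label: above the 95th percentile of the period.
threshold = hourly["sig_wave_height"].quantile(0.95)
hourly["extreme_wave"] = (hourly["sig_wave_height"] > threshold).astype(int)

hourly = hourly.dropna()
print(hourly.head())
```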

What I’m still trying to understand is: what’s the “normal” workflow people use for NOAA data? Are most people manually navigating portals? Are STAC-based approaches common outside satellite imagery?

Just trying to learn how others approach this. Would appreciate any insight.

submitted by /u/Signal_Sea9103

Epstein File Explorer Or How I Personally Released The Epstein Files

[OC] I built an automated pipeline to extract, visualize, and cross-reference 1 million+ pages from the Epstein document corpus

Over the past ~2 weeks I’ve been building an open-source tool to systematically analyze the Epstein Files — the massive trove of court documents, flight logs, emails, depositions, and financial records released across 12 volumes. The corpus contains 1,050,842 documents spanning 2.08 million pages.

Rather than manually reading through them, I built an 18-stage NLP/computer-vision pipeline that automatically:

  • Extracts and OCRs every PDF, detecting redacted regions on each page
  • Identifies 163,000+ named entities (people, organizations, places, dates, financial figures) totaling over 15 million mentions, then resolves aliases so “Jeffrey Epstein”, “JEFFREY EPSTEN”, and “Jeffrey Epstein*” all map to one canonical entry (a rough sketch of this step follows the list)
  • Extracts events (meetings, travel, communications, financial transactions) with participants, dates, locations, and confidence scores
  • Detects 20,779 faces across document images and videos, clusters them into 8,559 identity groups, and matches 2,369 clusters against Wikipedia profile photos — automatically identifying Epstein, Maxwell, Prince Andrew, Clinton, and others
  • Finds redaction inconsistencies by comparing near-duplicate documents: out of 22 million near-duplicate pairs and 5.6 million redacted text snippets, it flagged 100 cases where text was redacted in one copy but left visible in another
  • Builds a searchable semantic index so you can search by meaning, not just keywords
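To give a feel for the alias-resolution step, here is a stripped-down sketch of the idea (illustrative only, not the pipeline’s actual code; the normalization rules and similarity cutoff are simplified):

```python
from difflib import SequenceMatcher
import re

def normalize(name: str) -> str:
    # Strip OCR artifacts like trailing '*', collapse whitespace, lowercase.
    name = re.sub(r"[^A-Za-z\s]", "", name)
    return re.sub(r"\s+", " ", name).strip().lower()

def resolve_aliases(mentions, threshold=0.85):
    """Map each raw mention to a canonical display name via fuzzy matching."""
    canon = {}    # normalized form -> canonical display name
    mapping = {}  # raw mention -> canonical display name
    for raw in mentions:
        norm = normalize(raw)
        best, best_score = None, 0.0
        for key in canon:
            score = SequenceMatcher(None, norm, key).ratio()
            if score > best_score:
                best, best_score = key, score
        if best is not None and best_score >= threshold:
            mapping[raw] = canon[best]
        else:
            canon[norm] = raw
            mapping[raw] = raw
    return mapping

print(resolve_aliases(["Jeffrey Epstein", "JEFFREY EPSTEN", "Jeffrey Epstein*"]))
```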

The whole thing feeds into a web interface I built with Next.js. Here’s what each screenshot shows:

Documents — The main corpus browser. 1,050,842 documents searchable by Bates number and filterable by volume.

  1. Search Results — Full-text semantic search. Searching “Ghislaine Maxwell” returns 8,253 documents with highlighted matches and entity tags.

  2. Document Viewer — Integrated PDF viewer with toggleable redaction and entity overlays. This is a forwarded email about the Maxwell Reddit account (r/maxwellhill) that went silent after her arrest.

  3. Entities — 163,289 extracted entities ranked by mention frequency. Jeffrey Epstein tops the list with over 1 million mentions across 400K+ documents.

  4. Relationship Network — Force-directed graph of entity co-occurrence across documents, color-coded by type (people, organizations, places, dates, groups).

  5. Document Timeline — Every document plotted by date, color-coded by volume. You can clearly see document activity clustered in the early 2000s.

  6. Face Clusters — Automated face detection and Wikipedia matching. The system found 2,770 face instances of Epstein, 457 of Maxwell, 61 of Prince Andrew, and 59 of Clinton, all matched automatically from document images.

  7. Redaction Inconsistencies — The pipeline compared 22 million near-duplicate document pairs and found 100 cases where redacted text in one document was left visible in another. Each inconsistency shows the revealed text, the redacted source, and the unredacted source side by side.

Tools: Python (spaCy, InsightFace, PyMuPDF, sentence-transformers, OpenAI API), Next.js, TypeScript, Tailwind CSS, S3

Source: github.com/doInfinitely/epsteinalysis

Data source: Publicly released Epstein court documents (EFTA volumes 1-12)

submitted by /u/lymn

Where Are You Buying High-quality/unique Datasets For Model Training? (Tired Of DIY Scraping & AI Sludge)

Hey everyone, I’m currently looking for high-quality, unique datasets for some model training, and I’ve hit a bit of a wall. Off-the-shelf datasets on Kaggle or HuggingFace are great for getting started, but they are too saturated for what I’m trying to build.

Historically, my go-to has been building a scraper to pull the data myself. But honestly, the “DIY tax” is getting exhausting.

Here are the main issues I’m running into with scraping my own training data right now:

  • The “Splinternet” Defenses: The open web feels closed. It seems like every target site now has enterprise CDNs checking for TLS fingerprinting and behavioral biometrics. If my headless browser mouse moves too robotically, I get blocked.
  • Maintenance Nightmares: I spend more time patching my scripts than training my models.
  • The “Dead Internet” Sludge: This is the biggest risk for model training. So much of the web is now just AI-generated garbage. If I just blanket-scrape, I’m feeding my models hallucinations and bot-farm reviews.

I was recently reading an article about the shift from using web scraping tools (like Puppeteer or Scrapy) to using automated web scraping companies (like Forage AI), and it resonated with me.

These managed providers supposedly use self-healing AI agents that automatically adapt to layout changes, spoof fingerprints at an industrial scale, and even run “hallucination detection” to filter out AI sludge before it hits your database. Basically, you just ask for the data, and they hand you a clean schema-validated JSON file or a direct feed into BigQuery.

So, my question for the community is: Where do you draw the line between “Build” and “Buy” for your training data?

  1. Do you have specific vendors or marketplaces you trust for buying high-quality, ready-made datasets?
  2. Has anyone moved away from DIY scraping and switched to these fully managed, AI-driven data extraction companies? Does the “self-healing” and anti-bot magic actually hold up in production?

Would love to hear how you are all handling data sourcing right now!

submitted by /u/3iraven22

Newly Published Big Kink Dataset + Explorer

https://www.austinwallace.ca/survey

I’ve built a fully interactive explorer on top of Aella’s newly released Big Kink Survey dataset: https://aella.substack.com/p/heres-my-big-kink-survey-dataset

Explore connections between kinks, build and compare demographic profiles, and ask your AI agent about the data using the MCP server.

All of the data lives locally in your browser using DuckDB-WASM: a ~15k representative sample of the ~1M-row full dataset.
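If you’d rather poke at the sample outside the browser, the same idea works with plain DuckDB in Python. This is just a sketch: the filename and column names below are hypothetical, not the site’s actual schema.

```python
import duckdb

# Hypothetical local export of the ~15k-row sample (e.g. saved as Parquet).
con = duckdb.connect()
con.execute("CREATE VIEW survey AS SELECT * FROM 'kink_survey_sample.parquet'")

# Example: average interest in one kink, grouped by an assumed demographic column.
result = con.execute("""
    SELECT gender, AVG(bondage_interest) AS avg_interest, COUNT(*) AS n
    FROM survey
    GROUP BY gender
    ORDER BY avg_interest DESC
""").df()
print(result)
```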

No monetization at all, just think this is cool data and want to give people tools to be able to explore it themselves. I’ve even built an MCP server if you want to get your LLM to answer a specific question about the data!

I have taken a graduate class in information visualization, but that was over a decade ago, and I would love any ideas people have to improve my site! My color palette is fairly colorblind safe (black/red/beige), so I do clear the lowest of bars 🙂

https://github.com/austeane/aella-survey-site

submitted by /u/austeane

Thinking Of Open Sourcing A 250k Tables Dataset, Would This Be Valuable?

I’ve been working on a company for about 3 years with my co-founder. Our original goal was to build an intelligent document processing tool, because we had tried building a research co-pilot and found the available document processing services were bad. We got kind of carried away and built a data-engine pipeline that reads in any LaTeX, cleans it, and brings it to an intermediate representation where we can apply any augmentation (color, alignment, spacing). However, this has been a massive undertaking (~200k lines of Python code), and to this point we have focused mostly on tables (the full-document pipeline is written, but it’s not refined or ready for production).

Due to burnout and the need to hit the real world, we decided to train an image → Word, Excel, and LaTeX converter using an architecture similar to Nougat. It out-performed basically every table extraction model we’ve seen (and we’ve studied them all), except on robustness, but launching something that only extracts tables is not really a commercial product (it lacks focus). So hardly anyone used it.

We were looking into different use cases for the technology, but kept finding that it required the full document and meaningfully higher robustness to be commercially viable. Furthermore, we are good at focusing on one thing and doing it perfectly, and training a model + launching a website + marketing are a lot of things that split our focus. Not to mention that there is a lot of (well funded) competition in the space and we’re just a team of two.

Then we got to thinking: what if we sold our data? We have a pipeline that lets us create virtually any table (eventually any document) from any kind of source data, which can be augmented via an LLM. And because we bring everything into a form we control, we can apply programmatic augmentations of any kind to those tables and then emit any output ground-truth format (Word, JSON, LaTeX, HTML, …). That is to say, we have complete control and can generate any kind of data someone would need to improve their model.
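As a toy illustration of the intermediate-representation idea (nothing like the real 200k-line pipeline, just the concept): a tiny table IR that carries augmentation hooks and renders to more than one ground-truth format.

```python
from dataclasses import dataclass, field

@dataclass
class TableIR:
    headers: list[str]
    rows: list[list[str]]
    col_align: list[str] = field(default_factory=list)  # augmentation hook: "l", "c", "r"

    def to_latex(self) -> str:
        align = "".join(self.col_align) or "l" * len(self.headers)
        lines = [f"\\begin{{tabular}}{{{align}}}",
                 " & ".join(self.headers) + r" \\ \hline"]
        lines += [" & ".join(r) + r" \\" for r in self.rows]
        lines.append(r"\end{tabular}")
        return "\n".join(lines)

    def to_html(self) -> str:
        head = "".join(f"<th>{h}</th>" for h in self.headers)
        body = "".join("<tr>" + "".join(f"<td>{c}</td>" for c in r) + "</tr>" for r in self.rows)
        return f"<table><tr>{head}</tr>{body}</table>"

t = TableIR(["Metric", "Value"], [["Accuracy", "0.97"], ["F1", "0.94"]], ["l", "r"])
print(t.to_latex())
print(t.to_html())
```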

So, we were thinking of dropping 250k tables + a benchmark based on our synthetic data (and real world validation) to demonstrate our capability and hopefully get companies that have custom requirements that can pay us to generate the data their model lacks. We can also probe the weaknesses of existing models similar to a security researcher and then offer our data as a solution.

What do you think? Is dropping 250k highly diverse and perfectly annotated tables (with multiple ground truth formats) a good idea? Would that be something that’s valuable to people and could gain traction?

We’re trying to be quick about it (next month or two) so publishing a paper or going to a conference probably isn’t the best move.

submitted by /u/Says_Watt

[self-promotion] Dataset Search For Kaggle & Huggingface

We made a tool that searches datasets and calculates their influence on model capabilities. It uses second-order loss information, making the approach tractable across model architectures. It can be applied irrespective of domain and has already helped improve several models trained near convergence, as well as more basic use cases.
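For readers unfamiliar with influence-style scoring, here is a minimal first-order sketch of the underlying idea (gradient alignment between a candidate training example and a target batch). The actual tool relies on second-order information, which this toy version omits.

```python
import torch

def influence_score(model, loss_fn, train_example, val_batch):
    """First-order influence proxy: dot product of per-example and target-batch gradients."""
    params = [p for p in model.parameters() if p.requires_grad]

    x_tr, y_tr = train_example
    g_tr = torch.autograd.grad(loss_fn(model(x_tr), y_tr), params)

    x_va, y_va = val_batch
    g_va = torch.autograd.grad(loss_fn(model(x_va), y_va), params)

    # Positive score: a gradient step on this example also reduces the target-batch loss.
    return sum((a * b).sum() for a, b in zip(g_tr, g_va)).item()

# Tiny demo on a linear model with hypothetical tensors.
model = torch.nn.Linear(4, 2)
loss_fn = torch.nn.CrossEntropyLoss()
x_tr, y_tr = torch.randn(1, 4), torch.tensor([1])
x_va, y_va = torch.randn(8, 4), torch.randint(0, 2, (8,))
print(influence_score(model, loss_fn, (x_tr, y_tr), (x_va, y_va)))
```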

The influence scores act as a prioritization signal for training. You can benchmark the search results in the app.
The research is based on peer-reviewed work.
We started with Huggingface and this weekend added Kaggle support.

Am looking for feedback and potential improvements.

https://durinn-concept-explorer.azurewebsites.net/

Currently supported models are causal LMs, but we have research demonstrating good results for multimodal support.

submitted by /u/New-Mathematician645

I Built An Open Hebrew Wikipedia Sentences Corpus: 11M Sentences From 366K Articles, Cleaned And Deduplicated

Hey all,

I just released a dataset I’ve been working on: a sentence-level corpus extracted from the entire Hebrew Wikipedia. It’s up on HuggingFace now:

https://huggingface.co/datasets/tomron87/hebrew-wikipedia-sentences-corpus

Why this exists: Hebrew is seriously underrepresented in open NLP resources. If you’ve ever tried to find a clean, large-scale Hebrew sentence corpus for downstream tasks, you know the options are… limited. I wanted something usable for language modeling, sentence similarity, NER, text classification, and benchmarking embedding models, so I built it.

What’s in it:

  • ~11 million sentences from ~366,000 Hebrew Wikipedia articles
  • Crawled via the MediaWiki API (full article text, not dumps)
  • Cleaned and deduplicated (exact + near-duplicate removal)
  • Licensed under CC BY-SA 3.0 (same as Wikipedia)

Pipeline overview: Articles were fetched through the MediaWiki API, then run through a rule-based sentence splitter that handles Hebrew-specific abbreviations and edge cases. Deduplication was done at both the exact level (SHA-256 hashing) and near-duplicate level (MinHash).
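For anyone building something similar, the dedup stage looks roughly like the sketch below (simplified; the word-level shingling and threshold here are illustrative choices, not necessarily what this corpus used):

```python
import hashlib
from datasketch import MinHash, MinHashLSH

def exact_key(sentence: str) -> str:
    return hashlib.sha256(sentence.strip().encode("utf-8")).hexdigest()

def minhash(sentence: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in sentence.split():  # word shingles; character n-grams also work
        m.update(token.encode("utf-8"))
    return m

def deduplicate(sentences, threshold: float = 0.9):
    seen_exact = set()
    lsh = MinHashLSH(threshold=threshold, num_perm=128)
    kept = []
    for i, s in enumerate(sentences):
        key = exact_key(s)
        if key in seen_exact:
            continue              # exact duplicate
        m = minhash(s)
        if lsh.query(m):
            continue              # near-duplicate of something already kept
        seen_exact.add(key)
        lsh.insert(str(i), m)
        kept.append(s)
    return kept

print(deduplicate(["שלום עולם", "שלום עולם", "שלום עולם של נתונים"]))
```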

I think this could be useful for anyone working on Hebrew NLP or multilingual models where Hebrew is one of the target languages. It’s also a decent foundation for building evaluation benchmarks.

I’d love feedback. If you see issues with the data quality, have ideas for additional metadata (POS tags, named entities, topic labels), or think of other use cases, I’m all ears. This is v1 and I want to make it better.

submitted by /u/tomron87

Videos From The DFDC Dataset (https://ai.meta.com/datasets/dfdc/)

The official page no longer has an S3 link and just goes blank. The alternatives are already-extracted images, not the videos. I want the videos for a recent competition. Any help is highly appreciated. I already tried:

  1. kaggle datasets download -d ashifurrahman34/dfdc-dataset (not videos)
  2. kaggle datasets download -d fakecatcherai/dfdc-dataset (not videos)
  3. kaggle competitions download -c deepfake-detection-challenge (throws a 401 error since the competition has ended)
  4. kaggle competitions download -c deepfake-detection-challenge -f dfdc_train_part_0.zip
  5. aws s3 sync s3://dmdf-v2 . --request-payer requester --region us-east-1

submitted by /u/Illustrious_Coast_68

Looking For Real Transport & Logistics Document Datasets To Validate My Platform

Hi everyone,

I’ve been building a platform focused on automated processing of transport and logistics documents, and I’m now at the stage where I need real-world data to properly test and validate it.

The system already handles structured and unstructured data for common logistics documents, including (but not limited to):

  • CMR (Consignment Note)
  • Commercial Invoices
  • Delivery Notes / POD
  • Bills of Lading
  • Air Waybills
  • Packing Lists
  • Customs documents
  • Certificates of Origin
  • Dangerous Goods Declarations
  • Freight Bills / Freight Invoices
  • And other related transport / logistics paperwork

Right now I’ve only used synthetic and manually designed document samples following publicly available templates, which isn’t representative of the complexity and messiness of real operations. I’m specifically looking for:

  • Anonymized / redacted real document sets, or
  • Companies, freight forwarders, carriers, 3PLs, etc. who are open to a collaboration where I can run their existing documents through the platform in exchange for insights, automation prototypes, or custom integrations.

I’m happy to sign NDAs, follow strict data handling rules, and either work with fully anonymized PDFs/images or set up a secure environment depending on what’s feasible.

Questions:

  • Do you know of any public datasets with realistic logistics documents (PDFs, scans, etc.)?
  • Are there any companies or projects that share sample packs for research or validation purposes?
  • Would anyone here be interested in collaborating or running a small pilot using their historical docs?

Any pointers, contacts, or links to datasets would be hugely appreciated.

Thanks in advance!

submitted by /u/AcanthisittaNo6887

Our AI Was Making Up Data For Months And Nobody Caught It, Here’s What I’ve Learned

Came across a post here recently about someone who trusted an AI tool to handle their analytics, only to find out it had been hallucinating metrics and calculations the whole time. No one on their team had the background to spot it, so it went unnoticed until real damage was done.

Honestly, I’ve watched this happen with people I’ve worked with too. The tool gets treated as a source of truth rather than a starting point, and without someone who understands the basics of how the data is being processed, the errors just pile up quietly.

The fix isn’t complicated, and you don’t need a dedicated data scientist. You just need someone who can sanity-check the outputs, understand roughly how the model is arriving at its numbers, and flag when something looks off.

Has anyone here dealt with something like this? Curious how your teams handle AI oversight for anything data-sensitive.

submitted by /u/ansh17091999

SIDD Dataset Question, Trying To Find Validation Subset

Hello everyone!

I am a Master’s student currently working on my dissertation project. As of right now, I am trying to develop a denoising model.

I need to compare the results of my model with other SOTA methods, but I have run into an issue. Lots of papers seem to test on the SIDD dataset; however, I noticed that this dataset is split into a validation subset and a benchmark subset.

I was able to make a submission on Kaggle for the benchmark subset, but I also want to test on the validation dataset. Does anyone know where I can find it? I was not able to find any information about it on their website, but maybe I am missing something.

Thank you so much in advance.

submitted by /u/veganmkup

Causal Ability Injectors – Deterministic Behavioural Override (During Runtime)

I have been spending a lot of time lately trying to fix agents that drift or get lost in long loops. While most everyone just feeds them more text, I wanted to build the rules that actually command how they think. Today, I am open sourcing the Causal Ability Injectors: a way to switch the AI’s mindset in real time based on what’s happening in the flow.

[Example: during a critical question, the input goes through a lightweight RAG node that matches the query style and picks up the most confident way of thinking to enforce on the model, keeping it on track and preventing drift.]

[Integration: add it as a retrieval step before the agent, OR upsert it into your existing doc DB for opportunistic retrieval, OR, best case, add it to an isolated namespace and use it for behavioral-constraint retrieval.]

[The data is already graph-augmented and ready for upsert.]

You can find the registry here: https://huggingface.co/datasets/frankbrsrk/causal-ability-injectors And the source is here: https://github.com/frankbrsrkagentarium/causal-ability-injectors-csv

How it works:

The registry contains specific mindsets, like reasoning about root causes or checking for logic errors. When the agent hits a bottleneck, it pulls the exact injector it needs. I added columns for things like graph instructions, so each row is a command the machine can actually execute. It’s like programming a nervous system instead of just chatting with a bot.
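A rough sketch of wiring the registry in as a pre-agent retrieval step (the column names like trigger_condition and injection_text are placeholders; check the CSV for the real schema, and swap the keyword match for embedding similarity in a real setup):

```python
import csv

def load_registry(path="causal_ability_injectors.csv"):
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def select_injector(registry, situation: str):
    """Naive keyword match on trigger_condition; embedding similarity is the real-world option."""
    situation = situation.lower()
    for row in registry:
        trigger = row.get("trigger_condition", "").lower()
        if trigger and any(word in situation for word in trigger.split()):
            return row
    return None

def build_prompt(base_system_prompt: str, situation: str, registry) -> str:
    row = select_injector(registry, situation)
    if row is None:
        return base_system_prompt
    # Prepend the retrieved mindset as a hard constraint before the agent runs.
    return (f"{base_system_prompt}\n\n"
            f"[ABILITY INJECTOR {row.get('id', '?')}]\n{row.get('injection_text', '')}")

registry = load_registry()
print(build_prompt("You are a planning agent.", "the agent is stuck evaluating a plan", registry))
```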

This is the next link in the Architecture of Why. Build it and you will feel how the information moves once you start using it. Please check it out; I am sure it’s going to help if you are building complex RAG systems.

Agentarium | Causal Ability Injectors Walkthrough

1. What this is

Think of this as a blueprint for instructions. It’s structured in rows, so each row contains the embedding text you want to match against specific situations. I added columns for logic commands that tell the system exactly how to modify the context.

2. Logic clusters

I grouped these into four domains. Some are for checking errors, some are for analyzing big systems, and others are for ethics or safety. For example, CA001 is for challenging causal claims and CA005 is for red-teaming a plan.

3. How to trigger it

You use the trigger_condition field. If the agent is stuck or evaluating a plan, it knows exactly which ability to inject. This keeps the transformer’s attention focused on the right constraint at the right time.

4. Standalone design

I encoded each row to have everything it needs. Each one has a full JSON payload, so you don’t have to look up other files. It’s meant to be portable and easy to drop into a vector DB namespace like causal-abilities.

5. Why it’s valuable

It’s not just the knowledge; it’s the procedures. Instead of a massive 4k-token prompt, you just pull exactly what the AI needs for that one step. It stops the agent from drifting and keeps the reasoning sharp.

It turns AI vibes into adaptive thought, through a retrieved, hard-coded instruction set.

State A always pulls Rule B.
Fixed hierarchy resolves every conflict.
Commands the system instead of just adding text.

Repeatable, traceable reasoning that works every single time.

Take the dataset and use it: just download it and give it to your LLM for analysis.

I designed it for power users. If you like it, send me some feedback.

This is my work’s broader vision: applying cognition when needed, through my personal focus on data-driven ability.

frank_brsrk

submitted by /u/frank_brsrk

Knowledge Graph Datasets Extracted From FTX Collapse Articles And Giuffre V. Maxwell Depositions

I used sift-kg (an open-source CLI I built) to extract structured knowledge graphs from raw documents. The output includes entities (people, organizations, locations, events), relationships between them, and evidence text linking back to source passages — all extracted automatically via LLM.

Two datasets available:

– FTX Collapse — 9 news articles → 431 entities, 1,201 relations. https://juanceresa.github.io/sift-kg/ftx/graph.html

– Giuffre v. Maxwell — 900-page deposition → 190 entities, 387 relations. https://juanceresa.github.io/sift-kg/epstein/graph.html

Both are available as JSON in the repo. The tool that generated them is free and open source — point it at any document collection and it builds the graph for you: https://github.com/juanceresa/sift-kg
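If you want to work with the JSON programmatically, something like this loads it into a graph (a sketch; the exact field names in the JSON output are assumptions, so adjust to the real schema):

```python
import json
import networkx as nx

# Hypothetical local copy of one of the JSON graphs from the repo.
with open("ftx_graph.json", encoding="utf-8") as f:
    data = json.load(f)

G = nx.DiGraph()
for ent in data.get("entities", []):
    G.add_node(ent["name"], type=ent.get("type"))
for rel in data.get("relations", []):
    G.add_edge(rel["source"], rel["target"],
               label=rel.get("relation"), evidence=rel.get("evidence"))

print(G.number_of_nodes(), "entities,", G.number_of_edges(), "relations")
# Example: most-connected entities.
print(sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:5])
```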

Disclosure: sift-kg is my project — free and open source.

submitted by /u/garagebandj

Dataset: January 2026 Beauty Prices In Singapore — SKU-Level Data By Category, Brand & Product (Sephora + Takashimaya)

I’ve been tracking non-promotional beauty prices across major retailers in Singapore and compiled a January 2026 dataset that might be useful for analysis or projects.

Coverage includes:

  • SKU-level prices (old vs new)
  • Category and subcategory classification
  • Brand and product names
  • Variant / size information
  • Price movement (%) month-to-month
  • Coverage across Sephora and Takashimaya Singapore

The data captures real shelf prices (excluding temporary promotions), so it reflects structural pricing changes rather than sale events.
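A quick sketch of the kind of analysis this enables (pandas; the filename and column names are guesses based on the field list above, not the dataset’s actual headers):

```python
import pandas as pd

df = pd.read_csv("sg_beauty_prices_jan2026.csv")  # hypothetical filename

# Month-to-month price movement by category, using assumed old/new price columns.
df["pct_change"] = (df["new_price"] - df["old_price"]) / df["old_price"] * 100

summary = (
    df.groupby(["category", "retailer"])["pct_change"]
      .agg(["mean", "median", "count"])
      .sort_values("mean", ascending=False)
)
print(summary.head(10))
```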

Some interesting observations from January:

  • Skincare saw the largest increases (around +12% on average)
  • Luxury brands drove most of the inflation
  • Fragrance gift sets declined after the holiday period
  • Pricing changes were highly concentrated by category

I built this mainly for retail and pricing analysis, but it could also be useful for:

  • consumer price studies
  • retail strategy research
  • brand positioning analysis
  • demand / elasticity modelling
  • data visualization projects

Link in the comments.

submitted by /u/IntelligentHome2342

[self-promotion] Built A Startup Funding Tracker For Founders, Analysts & Investors

Keeping up with startup funding, venture capital rounds, and investor activity across news + databases was taking too much time.

So I built a simple Funding Tracker API that aggregates startup funding data in one place and makes it programmatic.

Useful if you’re:

  • tracking competitors
  • doing market/VC research
  • building fintech or startup tools
  • sourcing deals or leads
  • monitoring funding trends

Features:

  • latest funding rounds
  • company + investor search
  • funding history
  • structured startup/VC data via API

Would love feedback or feature ideas.

https://rapidapi.com/shake-chillies-shake-chillies-default/api/funding-tracker

submitted by /u/Capable_Atmosphere_7