Category: Datatards

Here you can observe the biggest nerds in the world in their natural habitat, longing for data sets. Not that it isn't interesting; I'm interested. Maybe they know where the chix are. But what do they need it all for? World domination?

Where To Find More Recent Energy Markets Financial Data Of EU Countries?

In the past there were these documents of the European Union:

Energy markets in the European Union in 2011 & 2024.

However, it seems they are no longer published. I did find the "EU energy in figures: Statistical pocketbook 2024", but it does not contain the same data.

I am specifically looking for the electricity and gas market value for the Netherlands. Does anybody know where I can find it?

submitted by /u/superpauwer2
[link] [comments]

Looking For More Testers For Our Data Analytics Tool

Disclaimer: We're building a data science tool that lets you upload CSV datasets and interact with your data using conversational AI. You can prompt the AI to clean and preprocess data, generate visualizations, run analysis models, and create PDF reports, all while seeing the Python scripts running under the hood.

Try out our beta here: actuarialai.io

We're shipping updates daily and are looking for more testers, so your feedback is greatly appreciated! (Note: the site isn't optimized for mobile yet.)

submitted by /u/coke_and_coldbrew
[link] [comments]

PyVisionAI: Instantly Extract & Describe Content From Documents With Vision LLMs (Now With Claude And Homebrew)

If you deal with documents and images and want to save time on parsing, analyzing, or describing them, PyVisionAI is for you. It unifies multiple Vision LLMs (GPT-4 Vision, Claude Vision, or local Llama2-based models) under one workflow, so you can extract text and images from PDF, DOCX, PPTX, and HTML—even capturing fully rendered web pages—and generate human-like explanations for images or diagrams.

Why It’s Useful

- All-in-One: Handle text extraction and image description across various file types; no juggling separate scripts or libraries.
- Flexible: Go with cloud-based GPT-4/Claude for speed, or local Llama models for privacy.
- CLI & Python Library: Use simple terminal commands or integrate PyVisionAI right into your Python projects.
- Multiple OS Support: Works on macOS (via Homebrew), Windows, and Linux (via pip).
- No More Dependency Hassles: On macOS, just run one Homebrew command (plus a couple of optional installs if you need advanced features).

Quick macOS Setup (Homebrew)

```bash
brew tap mdgrey33/pyvisionai
brew install pyvisionai

# Optional: Needed for dynamic HTML extraction
playwright install chromium

# Optional: For Office documents (DOCX, PPTX)
brew install --cask libreoffice
```

This leverages Python 3.11+ automatically (as required by the Homebrew formula). If you're on Windows or Linux, you can install via `pip install pyvisionai` (Python 3.8+).

Core Features (Confirmed by the READMEs)

- Document Extraction: PDFs, DOCX, PPTX, HTML (with JS), and images are all fair game. Extract text, tables, and even generate screenshots of HTML.
- Image Description: Analyze diagrams, charts, photos, or scanned pages using GPT-4, Claude, or a local Llama model via Ollama. Customize your prompts to control the level of detail.
- CLI & Python API: `file-extract` for documents and `describe-image` for images on the command line; `create_extractor(...)` to handle large sets of files and `describe_image_*` functions for quick references in Python code (a quick CLI sketch follows this list).
- Performance & Reliability: Parallel processing, thorough logging, and automatic retries for rate-limited APIs. Test coverage sits above 80%, so it's stable enough for production scenarios.
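For a concrete feel of the CLI, here's a quick sketch: the `describe-image` invocation matches the example later in this post, while the `file-extract` flag names are assumptions to verify against `file-extract --help`:

```bash
# Describe an image with a local Llama model (same invocation as shown below)
describe-image -i diagram.jpg -u llama

# Extract text and images from a PDF (flag names are assumptions; check --help)
file-extract -t pdf -s quarterly_report.pdf -o output_dir/
```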

Sample Code

```python
from pyvisionai import create_extractor, describe_image_claude

# 1. Extract content from PDFs
extractor = create_extractor("pdf", model="gpt4")  # or "claude", "llama"
extractor.extract("quarterly_reports/", "analysis_out/")

# 2. Describe an image or diagram
desc = describe_image_claude(
    "circuit.jpg",
    prompt="Explain what this circuit does, focusing on the components",
)
print(desc)
```

Choose Your Model

Cloud:

```bash
export OPENAI_API_KEY="your-openai-key"        # GPT-4 Vision
export ANTHROPIC_API_KEY="your-anthropic-key"  # Claude Vision
```

Local:

```bash
brew install ollama
ollama pull llama2-vision
# Then run:
describe-image -i diagram.jpg -u llama
```

System Requirements

- macOS (Homebrew install): Python 3.11+
- Windows/Linux: Python 3.8+ via `pip install pyvisionai`
- 1 GB+ free disk space (local models may require more)

Want More?

- Official Site: pyvisionai.com
- GitHub: MDGrey33/pyvisionai (open issues or PRs if you spot bugs!)
- Docs: Full README & Usage
- Homebrew Formula: mdgrey33/homebrew-pyvisionai

Help Shape the Future of PyVisionAI

If there’s a feature you need—maybe specialized document parsing, new prompt templates, or deeper local model integration—please ask or open a feature request on GitHub. I want PyVisionAI to fit right into your workflow, whether you’re doing academic research, business analysis, or general-purpose data wrangling.

Give it a try and share your ideas! I’d love to know how PyVisionAI can make your work easier.

submitted by /u/Electrical-Two9833
[link] [comments]

Random Object Detection Dataset For Machine Learning

So I am trying to train an AI to detect all the small miscellaneous stuff within an image, for example keys, bottle caps, bottles, wrapping paper, broken glass, and paper, while excluding larger items like chairs, tables, fans, sofas, etc. The AI will first need to detect these items before picking them up via some mechanical system.
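Not from the post, but as one possible starting point: a pretrained COCO detector can be filtered down to a whitelist of small-object classes, with furniture-sized classes simply excluded. A minimal sketch assuming the Ultralytics YOLO package (the whitelist is a placeholder; COCO has no "keys" or "bottle cap" class, so those would need custom training data):

```python
from ultralytics import YOLO  # assumes `pip install ultralytics`

# Pretrained COCO model; classes COCO lacks (keys, bottle caps,
# broken glass) would require a custom-trained dataset.
model = YOLO("yolov8n.pt")

# Keep only small-item classes; large furniture (chair, couch, ...) is excluded.
SMALL_ITEMS = {"bottle", "cup", "scissors", "book", "cell phone"}

results = model("room.jpg")
for box in results[0].boxes:
    name = model.names[int(box.cls)]
    if name in SMALL_ITEMS:
        print(name, box.xyxy.tolist())  # class + bounding box for the picker
```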

submitted by /u/GateCodeMark
[link] [comments]

Generate My Own Data For Fine-tuning. Thoughts/tips/feedback?

So much focus on better models, not nearly enough on better post-training data. I recently came across Curator, an open-source tool for dataset generation and refinement. It seems promising for automating parts of the process; has anyone here tried it? Would love to hear thoughts!

Also curious: how do you all handle data generation? If any tools have worked well for you, please feel free to share.

submitted by /u/Ambitious_Anybody855
[link] [comments]

Need Help Finding Data For A Research Project

I am in dire need of help finding a viable dataset for my research project. I am in my final semester of undergrad and have been tasked with a major research project which will eventually need to be transferred into STATA, but for now I need to run basic descriptive statistics and come up with my hypothesis, research question, and equation. No matter what topic I bounce around, I can't seem to find data to back it up; for example, the effect of concealed carry laws on crime rates. My professor wants the data to be at the county level with thousands of observations over years and years, but that is just adding an extra layer of difficulty. Any ideas? I could use any direction toward an interesting research question or usable, understandable data. I feel like this project could be easy if I have the right data and question (my prof also suggested starting with the data, as it could make things easier).

submitted by /u/Pleasant_Weakness_72
[link] [comments]

Best Way To Find Resident Names From A List Of Addresses?

I have a list of addresses (including city, state, ZIP, latitude, and longitude) for a specific area, and I need to find the resident names associated with them.

I’ve already used Geocodio to get latitude and longitude, but I haven’t found a good way to pull in names. I’ve heard that services like Whitepages, Melissa Data, or Experian might work, but I’m not sure which is best or how to set it up.

Does anyone have experience with this? Ideally, I’d love a tool or API that can batch process the list. Open to paid or free solutions!
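Whichever provider ends up fitting, the batch step itself is straightforward. Here's a minimal sketch assuming a hypothetical `lookup_resident()` wrapper around whatever reverse-append API is chosen (the function body and the CSV column names are placeholders, not a real provider's API):

```python
import csv
import time

def lookup_resident(address: str, city: str, state: str, zip_code: str) -> str:
    """Placeholder: call your chosen provider's reverse-append API here
    (e.g. Whitepages, Melissa) and return the resident name."""
    raise NotImplementedError

with open("addresses.csv", newline="") as src, \
     open("addresses_with_names.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames + ["resident_name"])
    writer.writeheader()
    for row in reader:
        # Column names below are assumptions; match them to your file's header.
        row["resident_name"] = lookup_resident(
            row["address"], row["city"], row["state"], row["zip"]
        )
        writer.writerow(row)
        time.sleep(0.1)  # stay under the provider's rate limit
```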

submitted by /u/Ljr1014
[link] [comments]

Movies That Were Added On Streaming Services

Hey,

I'm building my own dataset about movies that were added later to streaming services (like Netflix, Hulu, Disney+, etc.). I've found some useful datasets on Kaggle that include the date on which a specific movie was added to Netflix, for example. I need to find the dates for the other movies in my dataset, across all the other streaming services they were added to. Does anyone have any idea where I can find this? When I search for a specific movie on Amazon Prime, for example, I can't find the date it was added to the platform.

Thanks.

submitted by /u/Porcoddio45
[link] [comments]

Paid Product Discovery Call – Dataset Procurement Protocol

Hey all!

I am building a data procurement protocol to make it easier for researchers to access proprietary datasets. We’re in the end stages of designing the UI/UX and are looking for more data points regarding pain points in the dataset procurement process. We’re offering $25 USD to anyone who can spare 15 minutes to talk about their experiences purchasing proprietary datasets. Some of the questions to expect include “Can you walk me through your typical data procurement process from identifying a need to acquiring the data?” and “Are there any specific improvements or innovations you’re hoping to implement in your data sourcing approach?”

Send me a DM if you’re interested and I’ll send you a calendly to pick a time!

submitted by /u/EmetResearch
[link] [comments]

Hello, I’m New To Datasets And Would Like To See Whether It’s Possible To Filter A Dataset From Huggingface Before Downloading It.

Hello everyone. I'm currently trying to find a more or less complete corpus of data that is completely public domain or under a free software/culture license: something like a bundle of Wikipedia, Stack Overflow, the Gutenberg Project, and maybe some GitHub repositories for good measure. I found that RedPajama is painfully close to that, but not quite:

- It includes the Common Crawl and C4 datasets, which are decidedly not completely open-source.
- It includes the arXiv dataset, which might work for my purposes, but it contains both open-source and proprietary-licensed papers, so it would need filtering before I proceed.
- It had to drop the Gutenberg dataset parser because of issues with it accidentally fetching copyrighted content (!!)

So, what I would like to do with RedPajama is:

- Fetch Wikipedia, like usual, but also add other wiki projects like Wikinews and Wiktionary, and languages other than English, for completeness (as we're ditching C4)
- Fetch more of the Stack Overflow data to compensate for the lack of C4
- Fix the Gutenberg parser so it can actually download the public-domain books from there; alternatively, download the Wikibooks dataset instead
- Filter the arXiv dataset to remove anything not under a public-domain, CC-BY, or CC-BY-SA license, preferably before downloading each individual paper

Is it possible to do that as a Hugging Face script, or do I need to do some manual pruning after downloading the entire RedPajama dataset instead?
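For what it's worth, the Hugging Face `datasets` library can stream a subset and filter records as they arrive, so nothing has to be fully downloaded first. A minimal sketch (the `arxiv` subset name matches the RedPajama dataset card, but the exact metadata field holding license info is an assumption to verify against the dataset's schema):

```python
from datasets import load_dataset

# Stream only the arXiv subset instead of downloading all of RedPajama.
ds = load_dataset(
    "togethercomputer/RedPajama-Data-1T", "arxiv",
    split="train", streaming=True, trust_remote_code=True,
)

# Assumption: each record carries license info in its metadata;
# check the actual field names on the dataset card first.
ALLOWED = ("public domain", "cc-by", "cc-by-sa")

def is_open(example):
    meta = str(example.get("meta", "")).lower()
    return any(lic in meta for lic in ALLOWED)

for paper in filter(is_open, ds):
    ...  # write the accepted records to disk
```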

submitted by /u/csolisr
[link] [comments]

Multimodal Terror Propaganda Repository Research

Looking for data on terror propaganda, ideally multimodal, e.g. social media images, video, audio, text, or others. It should be recent and ideally have a time component. The specific group behind the content is not that relevant; the more, the better. There are a number of issues around this type of data, but as I am getting desperate, I am grateful for whatever. I am looking to run some ML models for sentiment classification tasks, so I need a few thousand observations. Cheers!

submitted by /u/SixMight
[link] [comments]

Looking For Options To Curate Or Download A Pre-Curated Dataset Of PubMed Articles On Evidence-Based Drug Repositioning

To be clear, I am not looking for articles on the topic of drug repositioning, but articles that contain evidence of different drugs (for example, metformin in one case) having the potential to be repurposed for a disease other than their primary known mechanism of action or target disease (for example, metformin for Alzheimer's). I need to be able to curate or download a dataset already curated like this. Any leads? Please help!

So far, I have found multiple ways I could curate such a database, using the available APIs, Entrez, etc. That's good, but before I put in the effort, I want to make sure there is no other way, like a dataset already curated for this purpose on Kaggle or something.
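If it does come down to self-curation, Biopython's Entrez module handles the PubMed side. A minimal sketch (the query string is only an illustration; real curation would need carefully designed terms, likely per drug-disease pair):

```python
from Bio import Entrez

Entrez.email = "you@example.com"  # required by NCBI

# Illustrative query only; real curation would craft terms per
# drug-disease pair (e.g. "metformin AND Alzheimer's").
query = '"drug repositioning"[Title/Abstract] OR "repurposed"[Title/Abstract]'

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
ids = Entrez.read(handle)["IdList"]
handle.close()

# Fetch abstracts for the matching PMIDs.
handle = Entrez.efetch(db="pubmed", id=",".join(ids),
                       rettype="abstract", retmode="text")
print(handle.read())
handle.close()
```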

For context, I am creating a RAG/LLM model that would understand connections between drugs and diseases other than the target ones.

submitted by /u/LukewarmTakesOnly
[link] [comments]