Category: Datatards

Here you can observe the biggest nerds in the world in their natural habitat, longing for data sets. Not that it isn’t interesting; I’m interested. Maybe they know where the chix are. But what do they need it for? World domination?

Needed Fb, Insta And Twitter Comment Dataset For Sentiment Analysis

I’m currently working on a project to develop an application that can fetch the most recent posts from a provided company’s Facebook, Instagram, and Twitter profiles. The application also needs to perform sentiment analysis on the comments for these posts and create a notification system to alert users if any negative comments are detected.

I need to train the model on datasets from Facebook, Instagram and Twitter, but I can’t find what I need on GitHub/Kaggle.
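The notification step the OP describes can be sketched independently of the trained model. Here is a minimal lexicon-based stand-in: the word lists are toy placeholders, and a model trained on a real labeled comment dataset would replace `sentiment_score` entirely.

```python
# Minimal lexicon-based sentiment check for flagging negative comments.
# The word lists are illustrative placeholders, not a real sentiment lexicon.

NEGATIVE_WORDS = {"terrible", "awful", "scam", "worst", "hate", "broken"}
POSITIVE_WORDS = {"great", "love", "excellent", "amazing", "good"}

def sentiment_score(comment: str) -> int:
    """Crude polarity score: positive word hits minus negative word hits."""
    words = comment.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

def flag_negative(comments):
    """Return comments scoring below zero, i.e. candidates for an alert."""
    return [c for c in comments if sentiment_score(c) < 0]

alerts = flag_negative(["I love this product", "This is a terrible scam"])
```

Swapping in a trained classifier only changes `sentiment_score`; the flagging and notification plumbing around it stays the same.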

submitted by /u/DeVoe69

AI Books4 Dataset For Training LLMs Further

What?

More than 400,000 fiction and non-fiction book full-texts. Multiple languages, curated, deduplicated.

More than 6,000,000 scholarly publications, magazines, and manuals full-texts. Multiple languages, curated, deduplicated.

150,000,000 metadata records

Format

Zstd-compressed file of JSON lines, one per book/publication.

abstract, content – description and content in Markdown format

issued_at – time of issuing of the object (not of the record itself)

metadata – ISBNs, publishers, series, etc.

id – identifiers in external systems, if applicable (e.g., DOI)

Other fields should be self-descriptive.

Download:

magnet:?xt=urn:btih:a904e660355c49006b2e7d43893d31bf3c2be9cc&dn=libstc2.jsonl.zst&tr=udp://tracker.opentrackr.org:1337/announce&tr=https://tracker1.ctix.cn:443/announce&tr=udp://open.demonii.com:1337/announce
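Reading the dump is then a matter of streaming the decompressed file line by line. A sketch of the per-record parsing, using only the field names listed above; the inner structure of the sample record (e.g. `dois` inside `id`) is an assumption for illustration, and real use would wrap the `.zst` file with the third-party `zstandard` package’s `stream_reader` to obtain the lines.

```python
import json

# Each line of the (decompressed) file is one JSON record.
# Field names follow the dataset description; the sample record's inner
# structure is assumed for illustration only.

def parse_record(line: str) -> dict:
    """Pull the documented fields out of one JSON-lines record."""
    rec = json.loads(line)
    return {
        "id": rec.get("id"),
        "issued_at": rec.get("issued_at"),
        "abstract": rec.get("abstract"),
        "metadata": rec.get("metadata", {}),
    }

sample = '{"id": {"dois": ["10.1000/xyz"]}, "issued_at": 1700000000, "abstract": "...", "metadata": {"isbns": []}}'
record = parse_record(sample)
```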

submitted by /u/JohnTheMelancholic

I Will Create Free Data Pipeline + Analytics Dashboard For You

I am an experienced data engineer and I have three free days next week.

If you have a dataset for which you would like to create a data pipeline for continuous ingestion, and you would like a dashboard built and/or AI-based Q&A on top of that, I am available to help. I will take on the project if it is interesting enough and if you can benefit from it – for free :).

The dashboard/Q&A would be made available on dataflick.dev’s free tier.

Let’s see if there are some interesting use cases.

submitted by /u/Such-Cartographer750

Looking For Synonym Database In Sqlite

Hi all,
I’m looking to program a fun CLI tool in Rust that takes a string and replaces each of its words with a random synonym. I plan to implement this using a sqlite3 package to query an already existing SQLite database containing a bunch of synonyms.
The only issue is that I can’t seem to find such a database anywhere, and writing one by hand sounds like a terribly daunting task 😅

Would somebody be able to help me find this?
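The schema such a database needs can be tiny: one row per (word, synonym) pair. A sketch in Python’s stdlib `sqlite3` with made-up seed data; the same schema and queries would work unchanged from Rust via the `rusqlite` crate.

```python
import random
import sqlite3

# Tiny in-memory synonym database: one (word, synonym) row per pair.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE synonyms (word TEXT, synonym TEXT)")
conn.executemany(
    "INSERT INTO synonyms VALUES (?, ?)",
    [("happy", "glad"), ("happy", "joyful"), ("fast", "quick")],
)

def replace_words(text: str, rng: random.Random) -> str:
    """Replace each word with a random synonym, or keep it if none exists."""
    out = []
    for word in text.split():
        rows = conn.execute(
            "SELECT synonym FROM synonyms WHERE word = ?", (word.lower(),)
        ).fetchall()
        out.append(rng.choice(rows)[0] if rows else word)
    return " ".join(out)

result = replace_words("the fast car", random.Random(0))
```

Populating the table is the hard part, which is exactly the OP’s question; the lookup itself is a one-liner.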

submitted by /u/7turtlereddit

Finding Industry Employment Data Broken Down By Age

I’m trying to find info on employment by sector and age but am having a hard time finding it.

I’d like to get a breakdown of where young people work in Austin, TX, to compare with El Paso, TX – just to get some ideas on why El Paso loses so many young people to other cities and what kinds of industries are attracting them.

I’ve found good data on different job sectors from the US Bureau of Labor Statistics, but it doesn’t break down by age range: https://www.bls.gov/oes/current/oes_12420.htm

submitted by /u/asarcosghost

Looking For Car Theft Data Either City, State, Or National

Hi, I’m looking for a dataset on car thefts. I’m looking for make/model, time of theft, location, recovered (y/n), and details if possible. This is for a school project that I hope becomes a helpful tool to mitigate car thefts.

I reached out to the FBI and local PD but haven’t received a response. I don’t care much where the dataset comes from, but I am prioritizing datasets that include the location of each theft.

submitted by /u/iamaguesttoo

Popular Streaming Services (e.g. Netflix, Amazon Prime, Disney+, etc.) Metadata

I’m looking to do a python-based data analysis and visualisation project. I was looking to focus on the data and metadata of most, if not all, available movies and TV series provided by the most popular streaming services.

I see most online projects use this kaggle source: https://www.kaggle.com/datasets/shivamb/netflix-shows/data

As nice as it is, it’s not as up to date as I would have liked, as it only goes up to 2021.

Is anyone aware of any other public, free dataset similar to the above which could fit my purpose?

I’m aware there are many sites such as https://flickmetrix.com/ and https://flixable.com/ which seem to have a large amount of movie data, but I can’t seem to find their source and/or whether they have shared it publicly.

Thank you

submitted by /u/the_forgettable

Open Source Data Sharing Project For Research Labs / Individuals

Hey guys! I have noticed that there is not much in the realm of open-source data-sharing services, so I created a Django REST / React app that allows for upload, download, reviewing, etc., of files. Not sure if it would be useful to people. Also, please feel free to add features. This is meant to be an open-source project that allows research labs / people to share and review datasets without needing to pay for any online subscriptions. https://github.com/lxaw/DataDock

submitted by /u/AGenericBackup

Data Labeling In Spreadsheets Vs Labeling Software?

Was talking with some of my classmates from undergrad and discussing our jobs/research. Something that we all still complain about is labeling data in spreadsheets.

Looked around online and found a whole host of data labeling tools from open source options (LabelStudio) to more advanced enterprise SaaS (Snorkel AI, Scale AI). Yet, no one I knew seemed to be using these solutions.

I kinda get it from an ease-of-use/cost standpoint – as an undergrad researcher, it was way easier to just paste data into a spreadsheet and send it to my lab. But I’m currently considering doing a much larger body of work. Would love to hear people’s experiences with these other tools, and what they liked/didn’t like.

For context, doing a bunch of Large Language Model output labeling in the medical space (n = ~2000?).
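Moving from a spreadsheet to a tool like Label Studio is often just a matter of reshaping rows into the tool’s task-import JSON (a list of objects with a `"data"` key). A sketch under assumed column names – `output` and `label` are placeholders for whatever the spreadsheet actually contains:

```python
import csv
import io
import json

# Assumed spreadsheet export: one LLM output per row plus an existing label.
csv_text = """output,label
The patient should rest.,safe
Take 900mg immediately.,unsafe
"""

def csv_to_tasks(text: str) -> list:
    """Reshape CSV rows into a list of task objects with a "data" key."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        {"data": {"text": row["output"], "prior_label": row["label"]}}
        for row in reader
    ]

tasks = csv_to_tasks(csv_text)
payload = json.dumps(tasks)  # ready to upload as a task-import file
```

At n ≈ 2000 the spreadsheet still works, but a dedicated tool buys you label schemas, inter-annotator agreement, and an export format you don’t have to clean afterwards.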

submitted by /u/ninepancakez

Need A College Dataset For An AI I’m Making

Hello!

I have spent hours looking for a dataset that includes information on college courses plus a brief description of each course.

I have had some luck finding thorough datasets specific to certain colleges. Perhaps I can just use those and call it good; I assume most colleges have roughly the same courses, though some differ slightly.

But before I continue my journey, I wanted to see if this community knows of any decent datasets covering college information – including, but definitely not limited to, the majors and a brief description of each major.

submitted by /u/sumanila

Recommend Me A Dataset For Hands On Project

Hey there, I am learning Apache Spark and AWS cloud. I am planning to build an ETL project using Glue. I want to perform transformations using Spark, but I haven’t come across any good dataset. It’s not that there are no datasets, but I want a big one with thousands of rows and fewer than 10 columns. I have found some myself, like UFO sightings and World Bank data, but they are either too big or don’t have a good source. Have any fellow redditors worked on something similar, or do you have a good recommendation?

submitted by /u/datastoner

[self-promotion] ICYMI: You Can Now Get Notified When Any New Code Is Released For A Given Paper Or Topic!

ICYMI: You can now get notified when any new code is released for a given paper or topic! Just install the code finder extension (Chrome: https://chromewebstore.google.com/detail/ai-code-finder-for-papers/aikkeehnlfpamidigaffhfmgbkdeheil | Firefox: https://addons.mozilla.org/en-US/firefox/addon/code-finder-catalyzex/ | Edge: https://microsoftedge.microsoft.com/addons/detail/get-papers-with-code-ever/mflbgfojghoglejmalekheopgadjmlkm), click on any bell/alert icon you come across while browsing the web, and follow the next steps on the screen 🙂 Also, with alerts you can get the latest developments in your area of interest delivered straight to your inbox. Author alerts: be the first to know when an author releases new papers.

submitted by /u/fullerhouse570

How To Price Image Data For Data Monetization?

I’m currently researching how satellite imagery data (or any other type of Image data), especially hyperspectral and multispectral data, is priced by different companies. I’m particularly interested in how these companies determine the cost for various sectors like agriculture, mining, and environmental monitoring.

Here’s some context:

Service Tiers: Companies often offer different service tiers (e.g., tasking, archive access, subscription models).

Resolution and Coverage: Pricing seems to vary based on image resolution (e.g., 5-meter vs. sub-meter) and the area covered.

Applications: Different use cases might influence pricing (e.g., crop health monitoring, yield prediction, soil analysis).

Technology: Advances in satellite technology, such as deployable optics, might impact cost.

I’ve seen companies like Wyvern Space, Planet Labs, and Pixxel offering these services but haven’t found detailed public pricing information.

Could anyone share insights or resources on:

– General pricing strategies for satellite imagery (and image data in general), and any approximate numbers?

– How factors like resolution, coverage area, and application affect pricing?

– Any case studies or examples from companies in this field?

Thanks in advance for your help!

submitted by /u/sidhulogy

DataSet For Training Models For Detecting Levels Of Depression

Hi everyone! I wish to create a dataset with phrases depicting various levels of depression.

I am aware that I could easily scrape reddit posts to create a dataset, but I wish to create it using a model, which could give me an endless supply of “human-like” phrases mimicking actual people describing their depression.

I was thinking of maybe scraping some medical journals for symptoms of depression and related issues, and then creating a model which takes these symptoms and generates “human-like” phrases related to them, but I am not sure how I could implement this.
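Before reaching for a generative model, the symptom-to-phrase idea can be prototyped with plain templates. A sketch where both the symptom list and the sentence templates are placeholders (the real symptoms would come from the medical literature the OP mentions, not from this toy list):

```python
import random

# Placeholder symptom phrases and sentence templates -- illustration only,
# not clinical content.  A generative model would replace the templates.
SYMPTOMS = ["trouble sleeping", "loss of interest in hobbies", "constant fatigue"]
TEMPLATES = [
    "Lately I've been dealing with {s} almost every day.",
    "I can't shake this {s}, and it's wearing me down.",
]

def generate_phrases(n: int, rng: random.Random) -> list:
    """Fill random templates with random symptoms to get varied phrases."""
    return [rng.choice(TEMPLATES).format(s=rng.choice(SYMPTOMS)) for _ in range(n)]

phrases = generate_phrases(3, random.Random(1))
```

A language model fine-tuned or prompted on the symptom list would then play the role of `TEMPLATES`, trading the templates’ predictability for more natural variation.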

Any help would be appreciated. Thanks a lot!

submitted by /u/CutDangerous127

I’m Having Troubles Finding Economic Data About The Democratic People’s Republic Of Korea (North Korea) – Bachelor Thesis

Hi, I’m Paula

I’m working on my bachelor’s thesis and need to find some reliable economic data on North Korea. It’s pretty tricky to locate good sources for this, so I thought I’d ask if you have any suggestions on where to look or who to talk to. I’m looking for data spanning from 1960 to 2023, covering the following indicators:

GDP at constant prices

Investment (Gross Fixed Capital Formation, GFCF)

State intervention: public spending as a percentage of GDP

Country openness: the sum of exports plus imports divided by GDP ((X+M)/GDP)

Real exchange rate

Economic structure (GDP by sector)
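The openness indicator above is simple arithmetic once the components are in hand. A worked example with made-up numbers, purely to show the (X+M)/GDP computation:

```python
# Openness indicator: (exports + imports) / GDP.
# The numbers below are invented solely to illustrate the arithmetic.

def openness(exports: float, imports: float, gdp: float) -> float:
    """Trade openness ratio (X + M) / GDP."""
    return (exports + imports) / gdp

ratio = openness(exports=12.0, imports=18.0, gdp=100.0)  # 0.30
```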

Sorry if this is not the right place to post this, but I’m quite lost and don’t know where else to look. I already have some of the data, but it’s either not for all years or incomplete. I’ve also checked the Bank of Korea and World Bank data, but most of it only covers a few years or doesn’t go back very far.

submitted by /u/Fluffy-Advice4967

Seeking Dataset For Internet Traffic Analysis (Malicious Vs. Legitimate)

I’m currently working on my bachelor’s thesis, which aims to build a classification model to differentiate between malicious and legitimate internet traffic. I’m trying to gather the data on my own, but I’m unable to collect the amount of data needed to train a decent model. I need a dataset containing internet traffic labeled as either malicious or legitimate (binary classification).

The dataset should ideally include features commonly associated with internet traffic analysis, such as IP addresses, timestamps, protocols, packet sizes, etc. Any additional contextual information would be highly beneficial.
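Whatever dataset turns up, the features listed above have to become numeric vectors before training. A sketch of that encoding step; the field names and the protocol codes are assumptions about what such a dataset might contain, not any specific dataset’s schema.

```python
# Turn labeled flow records into numeric feature vectors for a binary
# classifier.  Field names and encodings are illustrative assumptions.

PROTO_CODES = {"tcp": 0, "udp": 1, "icmp": 2}

def to_features(flow: dict) -> list:
    """Encode one flow record as [protocol_code, packet_size, duration]."""
    return [
        PROTO_CODES.get(flow["protocol"], -1),  # -1 for unknown protocols
        float(flow["packet_size"]),
        float(flow["duration"]),
    ]

flows = [
    {"protocol": "tcp", "packet_size": 1500, "duration": 0.8, "label": "legitimate"},
    {"protocol": "udp", "packet_size": 60, "duration": 12.0, "label": "malicious"},
]
X = [to_features(f) for f in flows]
y = [1 if f["label"] == "malicious" else 0 for f in flows]
```

From here any off-the-shelf classifier takes `X` and `y` directly; the hard part remains finding enough labeled flows, which is the OP’s actual question.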

If you know of any publicly available datasets or have access to such data, including well-done synthetic datasets, please let me know.

submitted by /u/Ortzadar