Hi friends, I’m looking for a dataset of videos that can be used to understand human talking/chatting; any suggestions would be very much appreciated. Thanks.
submitted by /u/Character-Size-3083
[link] [comments]
Here you can observe the biggest nerds in the world in their natural habitat, longing for data sets. Not that it isn’t interesting, I’m interested. Maybe they know where the chix are. But what do they need it for? World domination?
I’m currently in search of datasets that contain historical cyberattacks and their features. More specifically, I am looking at these columns: Type of Malware, Attack Vector, Purpose, Attacker or Groups, Damages Done (in USD or number of people affected), Type of Sector, and Size of the Organization Affected. Any recommendations or sources where I can find such datasets?
submitted by /u/Much_Pineapple_6027
Hi, I am planning to create a personalized learning system as the major project for my computer science degree, and I am having difficulty finding proper datasets. I need 10,000+ records on students (preferably higher education) for the project.
submitted by /u/No_Development2058
Looking for business related data sets for tableau practice. Not too worried if I have to pay for access, just looking for something high quality that I can use to pair along with some research. Ultimate goal is to showcase data visualization skills with business related data.
submitted by /u/ChoiceChicken
Looking for a dataset containing cyclone/storm damage to apply machine learning. All the damage data that I can find is a single number for each event. Ideally, I would like to know the damage for each event split by region (where a region could be a postcode/zip code, suburb, etc.). To specifically describe what I am after:
Time period: At least the last 20 years, but the more the better.
Country: Preferably Australia, but happy for it to be any other country if that country has the required data available.
Event: As mentioned in the title, interested in cyclones and storms. Note, I use the term cyclone to include events such as hurricanes, typhoons, etc.
Damage: This could be total economic damage, recovery cost, lives lost, casualties, or any other reasonable metric.
Granularity: This is the most important feature I am after. The more granular the better. Ideally the damage data would be by postcode/zip code, though perhaps that is too much to hope for, so I will take what I can get.
Thanks in advance!
submitted by /u/Nanoputian8128
Hi guys.
I have developed a website https://twitter.cworld.ai
You can search for a Twitter user and download all of their tweets there.
The tweets can be exported in three formats:
raw txt: every tweet is separated by two newlines (“\n\n”).
alpaca: an Alpaca-format JSON file. The instruction is fixed (“play a role”); the input is the user’s name, and it may contain their intro.
origin tweets json: the original tweets as a JSON file.
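For reference, a single record in the Alpaca export might look something like this (the field contents are my guess from the description above, not the site’s actual output):

```json
{
  "instruction": "play a role",
  "input": "username (possibly followed by the user's intro)",
  "output": "the user's tweets, separated by newlines"
}
```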
submitted by /u/Separate-Awareness53
Just looking for a cool dataset I can throw into Python and do a multiple regression on. Ideally, just to add to my GitHub for a job application. What would you do? This is for an entry-level DS position and they want to see a couple of projects.
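Once a dataset is chosen, the regression itself is only a few lines; a minimal sketch with scikit-learn on synthetic stand-in data (the coefficients and shapes here are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data: y = 5 + 3*x1 - 2*x2, noise-free.
rng = np.random.default_rng(42)
X = rng.random((200, 2))
y = 5 + 3 * X[:, 0] - 2 * X[:, 1]

# Fit an ordinary least-squares multiple regression.
model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # recovers [3, -2] and 5
```

For a portfolio piece, statsmodels’ OLS summary (coefficients, p-values, R²) usually reads better in a notebook than bare scikit-learn attributes.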
submitted by /u/PSKGM
I want to get hold of threaded communication that happens at work.
I have taken a look at,
Mailing lists, but emails are elaborate, and I specifically want to train a model on shorter day-to-day conversations.
IRC archives, but they don’t contain information about which message is being replied to.
Are there any open platforms/datasets you have come across where I can find regular day-to-day chats?
submitted by /u/lambainsaan
Hi everyone, in case you’re working on some projects based on web-scraped data from e-commerce fashion websites, you can buy them on databoutique.com for a few dollars. Available websites: Zara, Mango, and H&M for fast fashion; Gucci, Prada, Balenciaga, Farfetch, and more for luxury.
submitted by /u/Pigik83
Hi all,
Is there a dataset of electric vehicle reviews? That is, reviews covering why people don’t want to buy one, what issues they want fixed, their concerns, and so on.
If not, is it OK if I create a Google Form to collect this data and post it here? I will of course make the data public after collecting it.
submitted by /u/rayofhope313
I need a dataset that ideally has a list of different stocks and their values as a time series, with a 0 or 1 label during the periods of manipulation.
I’ve searched around a fair bit and can’t find anything. So if that doesn’t exist, is there a site I can go to for a list of manipulation cases, so I can collect and label the data myself?
submitted by /u/MisinformedOwl
Twitter thread about what is in it https://twitter.com/paulnovosad/status/1664269036946067457
submitted by /u/cavedave
Hey Everyone!
I’m having issues attempting to decode the information provided by the CDC. I downloaded the Mortality Multiple Cause File for 2021, and the .txt file is not only over 2GB but also incomprehensible. I followed the accompanying .pdf file and was even more confused by its “List of File Data Elements and Tape Locations”: how am I supposed to use the file to comprehend a list of codes upon codes upon codes? Especially when the .txt file has no structure, and when I try to follow a top-down approach, the codes don’t seem to match.
I wanted to ask if there is a common approach to this, or if I am missing something?
Additional Info:
I am using R for statistical analysis. I wanted the raw data for this reason. I attempted to convert the .txt file to a .csv file format using Python, and it helped by structuring the data a little, but I still don’t know what I am looking at in terms of what it all means.
This is how the rows look now: 11 7101 F1080 422210 4D1 2021U7CN C851129 039 13 0511I509 21I518 31I513 41C851 61M481 05 C851 I509 I513 I518 M481 100 01 184005949020
I would appreciate any, and all help. Thank you all very much in advance.
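A note on the format: the mortality file is fixed-width rather than delimited, and the “Tape Locations” in the PDF are 1-based character positions saying where each field starts and ends, so each line has to be sliced by position rather than split on a separator. A minimal Python sketch (the column positions and field names below are illustrative placeholders, not the real 2021 layout; take the actual positions from the PDF):

```python
import io

import pandas as pd

# Hypothetical layout: the real positions come from the
# "List of File Data Elements and Tape Locations" section of the PDF.
colspecs = [(0, 2), (2, 6), (6, 7)]  # 0-based, half-open column slices
names = ["resident_status", "education", "sex"]

sample = "117101F"  # one fake fixed-width record
df = pd.read_fwf(io.StringIO(sample), colspecs=colspecs,
                 names=names, dtype=str)
print(df.iloc[0].to_dict())
```

Since the analysis is in R anyway, readr::read_fwf with fwf_positions() does the same slicing natively, so the CSV detour may not be needed.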
submitted by /u/Meece156
I have been working on a project related to tsunamis and I wanted a graph for it, but I am not able to find anything. Please help me.
submitted by /u/Howl_Beast
Hi Everyone,
The Spotify dataset contains the number of premium users, the number of ad-supported users, total monthly active users (MAUs), total revenues, cost of revenue, and gross profit.
Use the dataset: https://www.kaggle.com/datasets/mauryansshivam/spotify-revenue-expenses-and-its-premium-users
It lists Spotify’s revenue, expenses, and premium users since 2017.
submitted by /u/AsgardiansLoki
Hey everyone,
I have a dataset that contains the positions of 9 flies for each frame. I want to build a behavior classifier based on this data, but I’m not sure how to approach the problem.
Sample: https://drive.google.com/file/d/1W960Z92f1im80o1l6FveWXBQI5883iRx/view?usp=sharing
My goal is to create an input that takes 9 rows at once, where each row represents the position of one fly, and then learn from it by computing the distances between the flies’ body parts to determine whether they are touching, grooming, or avoiding each other.
Additionally, I would like to consider past frames while predicting current frame outputs. Does anyone have suggestions on how to approach this problem? Are there any similar models or approaches that already exist for this?
I’m open to using various machine learning models such as decision trees, support vector machines, or even deep learning models.
If you have any insights or resources that could help me get started, please let me know! Thanks in advance.
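A common starting point for this kind of problem is to turn the raw positions into pairwise-distance features per frame and then stack a short window of past frames so the model sees recent history. A sketch with made-up shapes (it assumes one (x, y) point per fly; with several tracked body parts per fly the same broadcasting applies over more columns, and the window length W is an arbitrary choice):

```python
import numpy as np

# Hypothetical data: pos[t, i] = (x, y) of fly i at frame t.
T, N = 100, 9
rng = np.random.default_rng(0)
pos = rng.random((T, N, 2))

# Pairwise distances between all flies at every frame via broadcasting.
diff = pos[:, :, None, :] - pos[:, None, :, :]
dist = np.linalg.norm(diff, axis=-1)   # shape (T, N, N)

# Keep each unordered pair once, then stack a window of past frames.
W = 5                                  # frames of temporal context
iu = np.triu_indices(N, k=1)           # 36 unique fly pairs
pairwise = dist[:, iu[0], iu[1]]       # shape (T, 36)
windows = np.stack([pairwise[t - W:t].ravel() for t in range(W, T)])
print(windows.shape)                   # one feature row per frame
```

Fixed-length rows like these feed directly into the decision trees or SVMs mentioned above; a recurrent or temporal-convolution model would instead consume the per-frame features and learn the history itself.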
submitted by /u/SahilSingh2402
Can someone please create a dataset of news articles about ‘USA-China relations’? The dataset must have 5-10 years’ worth of articles and contain each article’s name, date, and a short description.
submitted by /u/raks1811
Hello data people!
I’m interested in any datasets of specific humans (identifying details not needed) that include their specific date of birth AND date of death, i.e., to be able to perform inference procedures on exact ages at death (computed from the specific dates).
Thank you in advance! I searched a bit but didn’t find exactly those specifics.
submitted by /u/ghabibi
Hello, everyone! May I know if anyone here has a dataset of mushroom yield production that includes temperature and humidity data? We need at least 1,500 data points for our simulation as part of our capstone project. Thank you.
submitted by /u/Ill-Moose4794
I’m looking to scrape the full text of all the proposed bills from the 117th Congress. I want to run the data through NVIVO for content analysis. I tried just downloading all the texts individually from Congress.gov, but I am looking to have all 15,224 documents available for analysis, so the one-by-one approach is really unrealistic. I haven’t been able to find this data in a pre-existing dataset, but any assistance would be greatly appreciated!
Of note, I have tried utilizing the Congress.gov API, but I can’t figure out how to get all proposed texts. I then tried to run a Python script in Google Colab, but I kept getting a “gaierror” that I couldn’t resolve. I’ve also tried ProPublica and govtrack.us, but I couldn’t find a bulk data download option, only a bulk data query for viewing. I would still have to individually download each bill.
Reference Python Script:
# I removed my API key for privacy purposes, but I assure you it was in the script when I ran it
import requests
import json

def get_bill_data(congress_number):
    # Fetches the text versions of a single bill (H.R. 1) for the given congress.
    base_url = "https://api.congress.gov/v3"
    endpoint = "/bill/{}/hr/1/text".format(congress_number)
    api_key = "[SQUATTINGFOX_API_KEY]"
    url = base_url + endpoint
    headers = {
        "X-Api-Key": api_key,
        "Content-Type": "application/json"
    }
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        return response.json()
    print("Error retrieving bill data. Status Code:", response.status_code)
    return None

def save_bill_data(data, output_file):
    with open(output_file, "w") as file:
        json.dump(data, file)

congress_number = "117"
output_file = "bills_data.json"
bill_data = get_bill_data(congress_number)
if bill_data:
    save_bill_data(bill_data, output_file)
    print("Bill data saved to", output_file)
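On getting all proposed texts rather than just H.R. 1: the congress.gov v3 API paginates its list endpoints with limit/offset query parameters (limit caps at 250 per page, as I read the docs), so the usual pattern is to walk the /bill/{congress} list page by page and then fetch each bill’s /text resource. A hedged sketch of the paging loop; verify the response keys against what your key actually returns. (Also, a “gaierror” is a DNS-resolution failure from the socket layer, so that was likely a Colab networking hiccup rather than a bug in the script.)

```python
import requests

API_KEY = "DEMO_KEY"  # swap in your real key
BASE = "https://api.congress.gov/v3"

def page_params(offset, limit=250):
    # Query parameters for one page of list results.
    return {"api_key": API_KEY, "format": "json",
            "limit": limit, "offset": offset}

def list_bills(congress):
    """Yield bill summaries for an entire congress, one page at a time."""
    offset = 0
    while True:
        resp = requests.get(f"{BASE}/bill/{congress}",
                            params=page_params(offset), timeout=30)
        resp.raise_for_status()
        bills = resp.json().get("bills", [])
        if not bills:
            return
        yield from bills
        offset += len(bills)
```

Each yielded summary should carry the bill type and number needed to build the per-bill /text URL.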
submitted by /u/squattingfox
I am trying to find a dataset with the cost of education in Brazil to look at the demand-side effects of the FUNDEF and FUNDEB programs. I have already found the link above, but I have not been able to extract the data. Does anyone have some experience with that?
submitted by /u/Cultural-Ad-2470
Hey all, I am hoping to learn more about how the big data industry works: buying datasets, where you go to find them, how much they cost, etc.
I’d appreciate any advice, or even just a direction to head in. I’ve spoken to Snowflake and Datarade already, but they don’t have much insight into what kind of data is actually being purchased or why (apparently, anyway).
submitted by /u/Crumbedsausage
Interesting idea. I’d give it a clap.
The original site is down. It says the data is from the CDC: https://www.cdc.gov/std/statistics/default.htm?CDC_AA_refVal=https%3A%2F%2Fwww.cdc.gov%2Fstd%2Fstats%2Fdefault.htm If you are not on mobile and can find the right source for the actual data, please comment.
submitted by /u/cavedave
I’ve got a couple of NOAA datasets where lat/long were provided as well as the weather station names. But I cannot for the life of me get the lat/long converted to a city (tried geopy, geopandas, and a slew of other things).
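One workaround when online reverse geocoders are rate-limited or flaky for bulk station lists: snap each lat/long to the nearest entry in an offline city gazetteer (the free GeoNames cities dump works well for this). A sketch with a toy three-city table, where the cities and coordinates are just placeholders:

```python
import math

# Toy gazetteer; in practice, load thousands of rows from GeoNames.
CITIES = {
    "Seattle": (47.61, -122.33),
    "Denver":  (39.74, -104.99),
    "Miami":   (25.76, -80.19),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_city(lat, lon):
    # Brute-force nearest neighbour over the gazetteer.
    return min(CITIES, key=lambda c: haversine_km(lat, lon, *CITIES[c]))

print(nearest_city(47.0, -122.0))
```

For thousands of stations, scipy.spatial.cKDTree over the gazetteer makes the lookup fast; a KD-tree on raw lat/long is slightly distorted at high latitudes, but fine for city-level matching.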
submitted by /u/sureshakerdood
Hello everyone!
I am planning to test an AI language model for bias, using SHAP and a lexical analyzer, and thus need a dataset I can feed into it. My preferred bias would be gender bias, e.g., a set of statements classified as either biased or not biased. However, if such a dataset does not exist, I am open to other suggestions or ways to create such a dataset manually/with AI support.
I am really grateful for any hints/help!
Cheers
submitted by /u/obeseelk
I have been on the struggle bus trying to find a dataset on the Morse Fall Risk Assessment tool. I would love some assistance! Thank you all in advance!
submitted by /u/Trabes023