What’s the easiest way to get an accurate, up-to-date NBA dataset? I’d like to put this structured data into PostgreSQL.
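For the PostgreSQL side, here is a minimal sketch of the loading step. The table layout and field names are illustrative assumptions, not any particular source's schema; in practice you would pull game records from a source such as the community nba_api package and execute these statements through a driver like psycopg2.

```python
# Sketch: shaping NBA game rows for a PostgreSQL table.
# Schema and field names are illustrative assumptions.

GAMES_DDL = """
CREATE TABLE IF NOT EXISTS games (
    game_id    TEXT PRIMARY KEY,
    game_date  DATE NOT NULL,
    home_team  TEXT NOT NULL,
    away_team  TEXT NOT NULL,
    home_pts   INTEGER,
    away_pts   INTEGER
);
"""

# Upsert keeps the table current when a game's score is filled in later.
INSERT_SQL = """
INSERT INTO games (game_id, game_date, home_team, away_team, home_pts, away_pts)
VALUES (%s, %s, %s, %s, %s, %s)
ON CONFLICT (game_id) DO UPDATE
    SET home_pts = EXCLUDED.home_pts,
        away_pts = EXCLUDED.away_pts;
"""

def to_row(game):
    """Convert one raw game record (a dict) into a parameter tuple
    matching the placeholders in INSERT_SQL."""
    return (
        game["game_id"],
        game["date"],
        game["home"],
        game["away"],
        game.get("home_pts"),  # None until the game is played
        game.get("away_pts"),
    )
```

With psycopg2 this would be roughly `cur.executemany(INSERT_SQL, map(to_row, games))` after running `GAMES_DDL` once; the `ON CONFLICT` upsert lets you re-run the loader daily without duplicating rows.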
submitted by /u/Safe-Worldliness-394
Here you can observe the biggest nerds in the world in their natural habitat, longing for datasets. Not that it isn’t interesting; I’m interested myself. Maybe they know where the chix are. But what do they need it for? World domination?
Does anyone have the USAID GHSC-PSM Health Commodity Delivery Dataset that they could send me? I need it for a thesis I’m doing and I’m not sure how to get it now that it’s been taken down.
submitted by /u/Public-Consequence62
My background is in insights and market research. I’m currently job hunting and seeing a lot of roles in audience insights and marketing research, which I don’t have direct experience in. I was thinking about doing some small projects to include in my applications to show I have transferable skills, but I’m struggling to find open-source data to work with. Does anyone have any suggestions? Thanks so much.
submitted by /u/belledamesans-merci
Howdy folks,
I’m based in the States. I’m just wondering if anyone knows of data out there that could show when car models tend to have particular services or breakdowns at particular mileages, and what those services or items tend to be.
I’m looking at this retrospectively: I’m not trying to predict or project what services will be needed at future mileages, but want something that would actually SHOW at what mileage a particular model has PREVIOUSLY received particular services/repairs or had breakdowns.
Does anyone know if anything like this exists or is available?
submitted by /u/WhatsTheAnswerDude
I found it difficult to find such data. I’ve only found one website, but I would have to pay (WARN Tracker).
I’m especially interested in layoffs at big tech corporations (Meta, Intel, etc.).
submitted by /u/Flying_Trying
Has anyone ever used datasets from trainingdata.pro or applied to their student program (https://trainingdata.pro/university)? I’m interested in one of their datasets (or potentially a combination of two) for my thesis project, and I’m curious how long they take to answer and whether you’ve had a good experience with them.
submitted by /u/anonymousD1812
Hi everyone,
I’m currently working on training a 2D virtual try-on model, specifically something along the lines of TryOnDiffusion, and I’m looking for datasets that can be used for this purpose.
Does anyone know of any datasets suitable for training virtual try-on models that allow commercial use? Alternatively, are there datasets that can be temporarily leased for training purposes? If not, I’d also be interested in datasets available for purchase.
Any recommendations or insights would be greatly appreciated!
Thanks in advance!
submitted by /u/Straight-Piccolo5722
I would like to create a database with historical soccer results and odds. Since I have no programming background, I had thought about Excel or Google Sheets. The question is, how do I get the data? I have heard of web scraping and of using an API. There are some on RapidAPI, e.g. from Sofascore, but they have limits in the free version. I imagined columns like: country, league, date, season, round, home team, away team, goals home, goals away, half-time goals home/away, odds 1 X 2, Elo home/away.
ChatGPT suggested Google Sheets, using Google Apps Script there to call the API, but I just can’t get my head around the endpoints. Furthermore, I want the results from the last day(s) to be fetched automatically or on command, as well as upcoming games with odds for the next 7 days.
How can I implement this? What ideas do you have? Thanks a lot.
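The fiddly part is usually the flattening step between a nested API response and a spreadsheet row. A sketch of that step is below; the input field names are made up for illustration, since each API (Sofascore via RapidAPI, etc.) uses its own names, so the lookups would need adjusting.

```python
import csv

# Column order matching the wished-for sheet layout.
COLUMNS = ["country", "league", "date", "season", "round",
           "home_team", "away_team", "goals_home", "goals_away",
           "ht_goals_home", "ht_goals_away",
           "odds_1", "odds_x", "odds_2", "elo_home", "elo_away"]

def flatten_match(m):
    """Turn one nested match record (field names are hypothetical)
    into a flat row in COLUMNS order."""
    return [
        m["country"], m["league"], m["date"], m["season"], m["round"],
        m["home"]["name"], m["away"]["name"],
        m["score"]["ft"][0], m["score"]["ft"][1],
        m["score"]["ht"][0], m["score"]["ht"][1],
        m["odds"]["1"], m["odds"]["x"], m["odds"]["2"],
        m["elo"]["home"], m["elo"]["away"],
    ]

def append_rows(path, matches):
    """Append flattened matches to a CSV, which Google Sheets can import."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerows(flatten_match(m) for m in matches)
```

The same shape works in Google Apps Script: fetch JSON with `UrlFetchApp`, flatten each match into an array in this column order, and `appendRow` it to the sheet; a time-driven trigger can run that daily.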
submitted by /u/PokerMurray
It seems 2024 US General election data should be published but I’m not seeing it posted in the usual spots. I see a request from three months ago that stated the data should be available after a few months. Am I just missing something? Does anyone have a lead or am I just impatient?
submitted by /u/SquiggleQuotient
I’m working on an econometrics paper for my college course. I am aiming to reproduce the results of the following paper:
Incentives, time use and BMI: The roles of eating, grazing and goods by Daniel S. Hamermesh
I want to reproduce these results with more modern and accurate measures than BMI in mind, but I am having trouble finding the data. I’d appreciate any help you can offer.
submitted by /u/seventydaily
Hello Everyone,
These data are needed for a student, but they are unable to find/download them. The CDC’s website currently only lists up to phase 8. Does anyone know where, or whether, this dataset can be located?
submitted by /u/Suspicious-One-1260
Since becoming a parent, I’ve been doing a lot of work on building computer vision models to track infants in cribs. Recently I’ve started trying to make models and datasets that are more generalized and not just for my kid. It turns out this is pretty difficult, since there aren’t a lot of datasets made for tracking infants in cribs.
I made a first attempt at producing a synthetic dataset that can be used to bootstrap a model. The idea is you’d either supplement the synthetic data with a small subset of real data, or something else like transfer learning. The dataset was made using path tracing, so it looks a little bit better than some of the other synthetic datasets on infants that I’ve seen (links on my GitHub repo).
Relevant Links:
https://github.com/tay10r/infant-detection-dataset https://www.kaggle.com/datasets/tay10r/synthetic-infant-dataset
It’ll be a week or so before the full dataset is done rendering (10k images). I’m traveling over the weekend so I was only able to upload a subset of the dataset (a little over 100 images).
Currently I use a model I trained on about 2,000 labeled images of my kid to analyze sleep patterns. I’m hoping this dataset, perhaps after a few improvements, will help produce more general models for this type of work. I’m curious whether anyone else finds this interesting or practical. Let me know what you think!
submitted by /u/taylorcholberton
Does anyone know where I could get a dataset (preferably over 200 rows long) of different songs with the corresponding artist and genre, preferably in CSV format? I need it for a project in my computer science class and can’t find any datasets. The reason for the CSV format is that I need to use it with JavaScript code on code.org.
submitted by /u/Zanman2000
Hey amazing people! First post here! Today, I’m excited to announce that you can now train your own reasoning model with just 5GB VRAM for Qwen2.5 (1.5B) using our open-source project Unsloth: https://github.com/unslothai/unsloth
GRPO is the algorithm behind DeepSeek-R1 and how it was trained. You need a dataset of about 500 question-answer pairs and a reward function, and you can then start the whole process!
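A minimal sketch of what that "question-answer pairs plus reward functions" setup can look like. The function names here are illustrative only, not Unsloth's actual API; the real TRL-style reward-function signatures are in the Unsloth docs and notebooks.

```python
# A tiny question-answer dataset in the shape GRPO training consumes.
dataset = [
    {"question": "What is 7 * 8?", "answer": "56"},
    {"question": "What is 12 + 30?", "answer": "42"},
]

def correctness_reward(completion, answer):
    """Reward 1.0 when the gold answer appears in the model's completion.
    GRPO compares rewards across a group of sampled completions, so even
    this crude signal is enough to push the model toward correct answers."""
    return 1.0 if answer in completion else 0.0

def format_reward(completion):
    """Small bonus for emitting a chain of thought before the final answer,
    here detected by a closing </think> tag."""
    return 0.5 if "</think>" in completion else 0.0
```

In practice you stack several such functions (correctness, format, length) and GRPO ranks the sampled completions by their summed rewards.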
This allows any open LLM like Llama, Mistral, Phi, etc. to be converted into a reasoning model with a chain-of-thought process. The best part about GRPO is that it doesn’t matter much whether you train a small or a large model: the smaller model fits in more, faster training in the same time, so the end result will be very similar! You can also leave GRPO training running in the background on your PC while you do other things!
Due to our newly added Efficient GRPO algorithm, this enables 10x longer context lengths while using 90% less VRAM than every other GRPO LoRA/QLoRA (fine-tuning) implementation, with no loss in accuracy. With a standard GRPO setup, Llama 3.1 (8B) training at 20K context length demands 510.8GB of VRAM. However, Unsloth’s 90% VRAM reduction brings the requirement down to just 54.3GB in the same setup. We leverage our gradient checkpointing algorithm, which we released a while ago: it smartly offloads intermediate activations to system RAM asynchronously while being only 1% slower. This shaves a whopping 372GB of VRAM, since we need num_generations = 8. We can reduce this memory usage even further through intermediate gradient accumulation. Use our GRPO notebook with 10x longer context on Google’s free GPUs: the Llama 3.1 (8B) Colab GRPO notebook.
Blog with more details on the algorithm, the maths behind GRPO, issues we found, and more: https://unsloth.ai/blog/grpo
GRPO VRAM Breakdown:
Metric                                      Unsloth              TRL + FA2
Training Memory Cost (GB)                   42GB                 414GB
GRPO Memory Cost (GB)                       9.8GB                78.3GB
Inference Cost (GB)                         0GB                  16GB
Inference KV Cache for 20K context (GB)     2.5GB                2.5GB
Total Memory Usage                          54.3GB (90% less)    510.8GB
Also, we spent a lot of time on our guide (with pics) covering everything about GRPO plus reward functions/verifiers, so I’d highly recommend you read it: docs.unsloth.ai/basics/reasoning
Thank you so so much for reading! 😀
submitted by /u/yoracale
I am doing a business project and I want it to relate to Korea or Japan, but I can’t find much data on many aspects; mostly just K-dramas or pollution, and I want more business-related topics.
submitted by /u/PhysicalWorldliness5
So guys, I’m cooked and urgently need a kicking video dataset, just simple kicking. I’ve looked all over the internet and couldn’t find one, so this is my last resort. Please help me.
submitted by /u/AccomplishedSnow5004
I am a journalism student looking for Hinge datasets to analyze dating patterns. Hinge lets users export their personal data including likes sent and received, matches, conversations, etc. If someone has a dataset of multiple users or is willing to share their own data please let me know. If sharing personal data, I could anonymize your name in my findings if you prefer. Thanks in advance!
submitted by /u/cappingaf
I’m exploring how people discover D2C brands and want to improve search/filtering experiences in large directories. To do this, I’m looking for well-structured datasets related to:
- D2C brand directories (with categories, tags, or attributes)
- E-commerce product databases with metadata
- Consumer search behavior for brands/products
If you know of any publicly available datasets that could help, I’d love to hear about them! Also, if you have tips on structuring datasets for better discoverability, feel free to share.
Thanks in advance!
submitted by /u/Mobile_Candidate_926
Does anyone here have image datasets of microplastics in fish meat?
submitted by /u/HOOD_Phant0m
In rugby, when you score a try you get to kick for an extra 2 points from a spot in line with where you scored the try. The closer to the center of the pitch, the easier the kick. But how much easier? Does being 5 meters closer increase the probability by 5%, say?
The data seems to be in Opta, but that’s expensive: https://www.bbc.com/sport/rugby-union/articles/cx2gn3z2l72o
So do you know of a dataset of kicker position x, y, and whether the kick was scored?
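Before any modelling, the geometry alone says a lot. Below is a sketch of the angle the posts subtend from a kick position, assuming the standard 5.6 m between rugby union goalposts; given a dataset of (x, y, scored), a logistic regression of success on distance and this angle is the usual way to answer "how much easier per 5 meters".

```python
import math

POST_WIDTH = 5.6  # meters between rugby union goalposts

def kick_angle(x, y):
    """Angle (in radians) the goalposts subtend from a kick taken at
    lateral offset x (0 = in line with the center of the posts) and
    y meters out from the try line. A wider angle means an easier kick."""
    left = math.atan2(x - POST_WIDTH / 2, y)
    right = math.atan2(x + POST_WIDTH / 2, y)
    return right - left
```

From 30 m out, the angle from dead center is noticeably larger than from 25 m toward the touchline, which is the effect a fit on real kick data would put a number on.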
submitted by /u/cavedave
…I tried to find a decent autism dataset a few days ago and the blurb at the top of the page said, “Due to the policies of the Trump administration,…” What is going on?
submitted by /u/KryptonSurvivor
Have information like website name, email, phone number, country, social profiles, etc.
submitted by /u/racingdann
Hello,
I’m looking for help finding or building a dataset that captures new ICE/police job postings by state. My hypothesis is that we are going to see an increase in the number of these openings over the year, and I’m keen on tracking the trend; I think it may be a useful leading barometer.
Does anyone know of a database that already tracks job listings by industry by state on a more granular scale that would be useful in this case?
If not, maybe we start with California, Texas, Arizona, Florida, and NY?
I am completely new to this but am interested in seeing this trend so any help is appreciated.
submitted by /u/Powder9
I am really a weather geek and I am looking for historical temperature data (preferably via an easy-to-use API) per location at hourly granularity.
I’d like to use queries in scripts (e.g. python) and visualize data.
Reason for hourly: I’d like to know the highest, lowest, and average temperature, but not (Tmax+Tmin)/2; rather, the proper average over all hourly readings. Also, I’d like to plot average temperature profiles for different locations.
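To make the distinction concrete, here is a small sketch comparing the (Tmax+Tmin)/2 shortcut with the proper mean of 24 hourly readings; the two diverge whenever the daily temperature curve is asymmetric, e.g. a short afternoon spike over an otherwise cool day.

```python
def daily_stats(hourly):
    """Summarize one day of hourly temperature readings: Tmax, Tmin,
    the common (Tmax+Tmin)/2 midrange shortcut, and the proper mean."""
    return {
        "tmax": max(hourly),
        "tmin": min(hourly),
        "midrange": (max(hourly) + min(hourly)) / 2,
        "mean": sum(hourly) / len(hourly),
    }

# Example: 20 hours at 10 °C with a 4-hour spike to 30 °C.
# The midrange says 20 °C; the true hourly mean is about 13.3 °C.
spiky_day = [10.0] * 20 + [30.0] * 4
```

One free source worth checking is Open-Meteo, whose historical weather API serves hourly temperature by latitude/longitude and, as far as I know, needs no key for non-commercial use.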
Weather Underground has just that, but no API (free for the end-user); the data is only reachable by manually clicking through it. In the past, I have exported data via the clipboard, but that is too exhausting once the dataset exceeds a few days/locations.
submitted by /u/segdy
Hi!!
Can anyone please give me links or database suggestions for a research paper on “How do firearm prohibition and relinquishment laws for individuals with a history of domestic violence impact female firearm-related fatalities?” Any 5-year range is perfectly good, but preferably in the 21st century, with records covering all 50 states and firearm-related deaths perpetrated by intimate partners!
This will really help my teammates and me! It’s for our master’s, and we are trying to get a good study out there! THANK YOU
submitted by /u/Puzzleheaded_Cup8780
I want to run backtests on a momentum investing strategy.
So I’m looking for a dataset with a daily list of S&P 500 constituents, their price for each day, and any relevant events (such as stock splits or company mergers/splits). I bought such a dataset in 2014 for $49 (covering 1963-2014), but the company that sold it to me is no longer in business.
Preferably usable in Node.js; my Python is a bit rusty.
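To make the target concrete, here is a rough sketch of the momentum screen such a dataset would feed (in Python, though the logic ports directly to Node.js). The 12-1 month lookback and top-decile cut are common conventions rather than requirements, and the prices are assumed split-adjusted, which is why the corporate-events column matters.

```python
def momentum_score(prices):
    """Trailing 12-1 month return from a list of daily closes (oldest
    first, at least 252 entries). The most recent ~21 trading days are
    skipped, a common guard against short-term reversal."""
    return prices[-21] / prices[-252] - 1.0

def top_decile(scores):
    """Given {ticker: momentum score} for one rebalance date's
    constituent list, return the top 10% of tickers by score."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[: max(1, len(ranked) // 10)]
```

A backtest then loops over rebalance dates, scores only the tickers that were constituents on that date (to avoid survivorship bias), holds the top decile, and tallies the next period's returns.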
submitted by /u/SaltBat6229
Hi all,
I am a current Social Work PhD student interested in the child welfare system (investigations of abuse/neglect and foster care), especially the experiences of the caseworkers themselves. I need a dataset to analyze for one of my courses and am in the process of requesting restricted data from the US Department of Health and Human Services’ Children’s Bureau. With everything going on, I am getting a little nervous that it may be pulled from the site or my request denied, so I’d like to have a backup. Is anyone aware of any public datasets focusing on the child welfare system that I could look at?
I am looking for a dataset from 2019 or later.
Thank you in advance for your help!!
submitted by /u/ssdgm23