Category: Datatards

Here you can observe the biggest nerds in the world in their natural habitat, longing for data sets. Not that it isn’t interesting; I’m interested. Maybe they know where the chix are. But what do they need it all for? World domination?

Looking For Someone With A Statista Premium Subscription

Hey everyone,

I hope you’re all doing well. I’m currently working on a startup in the gaming industry and I’m looking for some specific data that is available on Statista. However, I don’t have a premium subscription, and unfortunately the data I need is not available in the free version.

So, I was wondering if anyone here has a Statista Premium subscription and would be willing to help me out. I know it’s a long shot, but I thought I’d give it a try.

I don’t want to take up too much of your time, but if you’re able to help, I would be extremely grateful.

Thank you for reading this far, and I hope you have a great day!

submitted by /u/saltpeppermint

Looking For A Dataset With Both Book ISBNs And Genre(s)

I need to do some data visualization work with books, and the dataset from Goodreads is almost perfect for what I need to do.

However, it doesn’t have any genres listed. Is there an existing dataset, which I can use in conjunction with this one, that also lists genres? It doesn’t need to line up with all 10,000 books in the Goodreads set, but a decent overlap would help.

Any help would be greatly appreciated.

Edit: An English equivalent of this is what I’m trying to find.
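
In case it helps, here is a rough sketch of what the join could look like once a genre source turns up. The file names and column names ("isbn", "genre") are placeholders, not the actual Goodreads schema, so adjust to whatever the real files use:

```python
# Minimal sketch: attach genres to the Goodreads 10k books by joining on ISBN.
# "books.csv" and "genres.csv" are placeholder filenames; column names are assumptions.
import pandas as pd

books = pd.read_csv("books.csv", dtype={"isbn": str})    # Goodreads 10k dump
genres = pd.read_csv("genres.csv", dtype={"isbn": str})  # whatever genre source you find

# Normalize ISBNs so hyphenation/casing differences don't break the join.
for df in (books, genres):
    df["isbn"] = df["isbn"].str.replace("-", "", regex=False).str.upper()

merged = books.merge(genres[["isbn", "genre"]], on="isbn", how="left")
print(f"{merged['genre'].notna().mean():.0%} of books matched a genre")
```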

submitted by /u/jakehenderson01

Datasets With Notes, Quick Thoughts, Reminders?

I’m participating in a study on ways in which different people write their thoughts, lecture notes, reminders, and other short-form texts that are usually not meant to be shared.

Does anyone know of datasets that could be helpful here? One of our goals is to do some clustering analysis and determine the main “forms” of notes people use. We also want to find out how often people write multiple notes on the same topic, along with other interesting patterns.

Any suggestions are appreciated!
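
For the clustering step itself, something like TF-IDF plus k-means over the raw note text is the usual starting point. A minimal sketch (the toy notes and the cluster count are made up; a real dataset would replace both):

```python
# Minimal sketch of the clustering idea: vectorize short notes with TF-IDF,
# then group them with k-means to surface recurring "forms" of notes.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

notes = [
    "buy milk, eggs, bread",
    "lecture 4: gradient descent converges if the learning rate is small",
    "call dentist tomorrow 9am",
    "idea: app that tracks plant watering",
]

X = TfidfVectorizer(stop_words="english").fit_transform(notes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for note, label in zip(notes, labels):
    print(label, note)
```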

submitted by /u/smthamazing

Banned Books Across U.S. State Prisons

With book bans rising in popularity, The Marshall Project compiled a list of 50,000 titles that are banned in 19 states. They’re currently cleaning additional lists from other states to add to the data.

(Un)surprisingly, Florida bans the most titles, at over 20,000. Georgia bans the fewest, at 28. Where a reason is given at all, it’s hard to wrap your head around how something like Coding for Parents could pose a threat to security (Wisconsin).

Source: https://www.themarshallproject.org/2022/12/21/prison-banned-books-list-find-your-state

View the Data: https://app.gigasheet.com/spreadsheet/Banned-Books-in-U-S–Prisons/7b6b282b_a6d1_48bc_9df2_71b27f9ab107
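
If you export the sheet and want a first look at it, a per-state count is an easy starting point. This is only a sketch: the filename and the column names ("state", "title") are guesses, so check the actual export before running it:

```python
# Rough first pass: count distinct banned titles per state.
# "prison_banned_books.csv" and the column names are assumptions about the export.
import pandas as pd

bans = pd.read_csv("prison_banned_books.csv")
per_state = bans.groupby("state")["title"].nunique().sort_values(ascending=False)
print(per_state.head(10))
```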

submitted by /u/Adorable-Kitchen-919

I Am Looking For A Very Specific Dataset Used In The Paper Short-Term Variations And Long-Term Dynamics In Commodity Prices (2000)

I am trying to replicate the model in this paper, and to make sure it works I would like to apply it to the same data as the original.

There are two datasets used. One is the weekly prices of 5 NYMEX crude oil futures contracts from 1/2/1990 to 2/17/1995. The paper says these were made public by Knight-Ridder Financial, a company that has since ceased to exist.

The other is a set of crude oil prices from Enron Capital, a company that has also since ceased to exist.

I doubt I could obtain the second dataset, but I was wondering if anyone had suggestions on where I could find the first dataset from Knight-Ridder Financial. I have tried accessing their website through the Internet Archive, but I wasn’t able to find anything there, nor was I able to locate the original publication.

Bloomberg is not an option for me right now either.

Full reference: Schwartz, E. and Smith, J.E., 2000. Short-term variations and long-term dynamics in commodity prices. Management Science, 46(7), pp.893-911.

submitted by /u/horux123

I Am Stuck In A Bit Of A Pickle Looking For A Dataset.

My math class has an end-of-year coding project that uses the basic plotting tools in pandas to analyze and review a dataset of my own choosing. I’m pretty okay at coding and I won’t struggle to set everything up once I have it planned out. Problem is, all my classmates have picked the cool stuff like weather patterns, temperature changes correlated with CO2 increase, and other easy targets.

I would like to stand out a bit. Do you have an interesting dataset that I can use in Python without doing any sorting beyond the obvious x and y values? I am not an expert at dataset analysis, so I can only use pandas, and only datasets stored as .csv files.

I’m getting slightly stressed over this project as the deadline creeps closer and closer. So if you have an old coding project from a class where you learned about comparing graphs and looking for correlations, it would be a huge help to point me to it.
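
For what it’s worth, the whole workflow the assignment describes fits in a few lines of pandas. The filename and column names below are placeholders for whatever dataset you end up picking:

```python
# Minimal sketch: load a CSV, scan correlations, and plot two columns against each other.
# "mydata.csv", "year", and "value" are placeholders; swap in your real file and columns.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("mydata.csv")
print(df.describe())                 # quick summary of every numeric column
print(df.corr(numeric_only=True))    # scan for interesting relationships

df.plot(x="year", y="value", kind="line")
plt.show()
```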

submitted by /u/RevolutionaryAd4161

Hey Guys, Check Out This Dope Map I Found! It Shows All The Locations Of Corn On The Cob Street Vendors In Mexico City. Perfect For Anyone Craving Some Delicious Elotes Or Esquites. Can’t Wait To Try Them All Out!

Hey guys, stumbled upon this sweet dataset the other day. You can export it to KML for some serious parsing and analysis. It’s the crowd-sourced geolocation of every damn corn on the cob vendor in Mexico City! How cool is that? I challenge y’all to train a neural network on it and see what kind of insights you can get. Let’s get cracking, folks!
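
If anyone takes the bait, getting a KML export into a DataFrame is straightforward with the standard library. A minimal sketch, assuming a normal KML structure with Placemark/coordinates elements (the filename is made up):

```python
# Minimal sketch: pull vendor names and coordinates out of a KML export into pandas.
# "elotes.kml" is a hypothetical filename; assumes standard KML 2.2 Placemark markup.
import xml.etree.ElementTree as ET

import pandas as pd

NS = {"kml": "http://www.opengis.net/kml/2.2"}

def kml_to_dataframe(path: str) -> pd.DataFrame:
    tree = ET.parse(path)
    rows = []
    for placemark in tree.getroot().iter(f"{{{NS['kml']}}}Placemark"):
        name_el = placemark.find("kml:name", NS)
        coord_el = placemark.find(".//kml:coordinates", NS)
        if coord_el is None or not coord_el.text:
            continue
        lon, lat, *_ = coord_el.text.strip().split(",")  # KML stores lon,lat[,alt]
        rows.append({
            "name": name_el.text if name_el is not None else None,
            "lat": float(lat),
            "lon": float(lon),
        })
    return pd.DataFrame(rows)

df = kml_to_dataframe("elotes.kml")
print(df.head())
```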

submitted by /u/JulieJas

Anyone Have Any Experience Downloading League Of Legends Data Sets Like Na.op.gg?

Hi everyone.

I was wondering if anyone on this sub has experience working with or downloading solo- and duo-queue League of Legends data. Is it possible to export from na.op.gg, or does Riot maybe have an API I can get it from?

Ideally I would like to wrangle the data so I can separate my solo-queue games from my duo-queue games, get some stats, and expose my duo partner.

Anyone have experience with this, or think it’s possible?

EDIT: I can use Python with things like pandas, NumPy, etc. for some simple data wrangling and analysis.
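
Riot does run a developer API (developer.riotgames.com) with a personal dev key. A rough sketch of pulling your own match history via the match-v5 endpoints follows; the routing region and paths are from memory, so verify them on the portal before relying on this:

```python
# Rough sketch, assuming Riot's match-v5 API: fetch recent match IDs by PUUID,
# then download each match JSON. Endpoint paths/regions should be verified on
# developer.riotgames.com; the key and PUUID below are placeholders.
import requests

API_KEY = "RGAPI-..."       # personal dev key from the developer portal
HEADERS = {"X-Riot-Token": API_KEY}
PUUID = "your-puuid-here"   # obtainable via the account/summoner endpoints

ids = requests.get(
    f"https://americas.api.riotgames.com/lol/match/v5/matches/by-puuid/{PUUID}/ids",
    params={"start": 0, "count": 20}, headers=HEADERS, timeout=10,
).json()

matches = [
    requests.get(
        f"https://americas.api.riotgames.com/lol/match/v5/matches/{mid}",
        headers=HEADERS, timeout=10,
    ).json()
    for mid in ids
]

# Each match JSON lists all ten participants, so filtering games where both you
# and your duo partner appear lets you split solo-queue from duo-queue stats.
print(len(matches), "matches fetched")
```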

submitted by /u/ebscodingjourney

Scraping Google Trends Data In 2023?

The famous 429 error when mass-scraping Google Trends has me stuck. I have a list of around 30k keywords I want data on, but I don’t want to wait out the timeouts.

I’m using pytrends and have tried rotating proxies, but at this traffic volume renting them drives my costs way too high. I also tried multiprocessing with a unique Tor circuit per keyword, but I get authentication errors from Google; those can be sorted out by including some identity headers, which then quickly become invalid due to rate limiting.

Does anyone have a workaround/working code for this? Multiple Google accounts with programmatic login and getting the headers from there, followed by injecting them into pytrends requests? I’d be grateful if you could share your experiences. Thanks!
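
For reference, the slow-but-safe baseline most people fall back to is batching and throttling rather than beating the rate limiter. A sketch with pytrends’ built-in retry options is below; the delay values are guesses you would need to tune, and at 30k keywords this approach will simply take a long time:

```python
# Sketch of a gentler pytrends loop: batches of up to 5 keywords, built-in retries,
# and a pause between payloads to stay under the 429 rate limit. The sleep value
# is a guess to tune; the keyword list is a placeholder for the real 30k list.
import time

import pandas as pd
from pytrends.request import TrendReq

keywords = ["keyword one", "keyword two", "keyword three"]
pytrends = TrendReq(hl="en-US", tz=360, retries=3, backoff_factor=1.0)

frames = []
for i in range(0, len(keywords), 5):   # pytrends allows at most 5 terms per payload
    batch = keywords[i:i + 5]
    pytrends.build_payload(batch, timeframe="today 5-y")
    df = pytrends.interest_over_time().drop(columns="isPartial", errors="ignore")
    frames.append(df)
    time.sleep(10)                     # crude throttle between requests

result = pd.concat(frames, axis=1)
print(result.head())
```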

submitted by /u/thefoque

Need Scientific Computing Power For Your Research? Got A Big Dataset To Iterate Over? BOINC Can Get You Teraflops Computing Power Absolutely Free!

For those unfamiliar with it, BOINC is the Berkeley Open Infrastructure for Network Computing. It is a free-software, volunteer-computing infrastructure focused on science, with over 15 active projects. There are teraflops of computing power available to you for absolutely free. If you are working on problems that can be done in a distributed or parallel manner, YSK about it.

The BOINC server software works with any app you have (such as a protein simulator) and can handle all the workunit creation, delivery, and validation. You can run the server as a Docker container and distribute your app as a pre-compiled binary or inside a VirtualBox image to instantly work across platforms. BOINC supports not only 32- and 64-bit Windows/OS X/Linux hosts, but ARM and Android as well, plus GPU acceleration on both Nvidia and AMD cards. It’s also open source, so you can modify it to suit your use case. For small projects, you can run the BOINC server on a $10/month VPS or a spare laptop in a closet; for larger projects, the memory and storage needs will obviously scale with complexity.

Once you have your server up (or beforehand, if you need to secure a guarantee of computation before investing development resources), you can approach Science United and Gridcoin for your guaranteed computation (“crunching”). Neither of these mechanisms requires you to be affiliated with a university or other institution; they just require that you are doing interesting scientific research.

Science United is a platform run by the BOINC developers which connects volunteer-computing participants to BOINC projects. Once they add you to their list, thousands of volunteers around the globe will immediately start crunching data for your project, giving you many teraflops of power. Science United is particularly good for smaller projects that don’t have large, ongoing workloads or only have sporadic work.

Gridcoin is a cryptocurrency (founded 2013, not affiliated with the BOINC developers) which incentivizes people to crunch workunits for you. They currently reward most active BOINC projects (with their permission) and hand out approximately $500 USD equivalent per month to your “crunchers”; the actual value of the computation you receive is much higher than this. All of this happens without you ever needing to do anything beyond running a BOINC server. There are some requirements you must meet, such as having a large amount of work to be done (being an ongoing project), but they can direct petaflops of power your way and have a procedure to “pre-approve” your project before it’s done being developed.

BOINC can also be used to harvest under-utilized compute resources on your campus or in your company. It can be installed on those machines and set to compute only while they are idle, so it doesn’t slow anything down during use.

Famous research institutes and major universities across the world use BOINC. World Community Grid, the Large Hadron Collider, Rosetta, University of Texas, and the University of California are a handful of the big names that use BOINC for work distribution.

Relevant links:

/r/BOINC4Science

http://boinc.berkeley.edu

submitted by /u/makeasnek