Curious why I would ever use R instead of Python for data-related tasks.
submitted by /u/Nickaroo321
Here you can observe the biggest nerds in the world in their natural habitat, longing for data sets. Not that it isn't interesting; I'm interested. Maybe they know where the chix are. But what do they need it for? World domination?
I’m trying to find out the relative probabilities of the relationship between the abuser and the victim in CSA cases. Since no one seems to extract the data in ways that will help, I need to find a source of anonymous data that has relevant fields.
A culled data set where specific records have been removed because they could lead to identifying the people is fine. I do not need to know the location, nor the economic class. Because I’m looking for low probability events, it needs to be a substantial size. I’d like a hundred thousand records of events, where the following fields are known for each event.
Specifics:
For victim: Victim’s age when abuse started, sex, nature of the abuse.
For abuser: Abuser's age when abuse started, sex, relationship to victim.
Relationship: 1st-degree relative (parent, brother, sister), 2nd-degree relative (uncle/aunt, cousin, grandparent), neighbour, family friend, authority figure (coach, minister, teacher, scoutmaster, employer, etc.)
So far attempts to find data on places like pubmed have resulted in:
Only an abstract is available without payment. Papers are only summaries of other papers/reports. Datasets are not open to the general public. Datasets have substantial price tags on them. Datasets are extremely selective. Data does not link abuser and victim in a 1:1 manner.
I come from a CSA background. I was molested at age 3. My sister says that mom said it was a neighbour two doors down.
I think even then my mom was covering up. It happened multiple times over a period of, I think, at least a couple of months and less than a year and a half. The multiple times pose a logistical problem. The two people who had the best access were my mother during the day (stay-at-home mom) and my brother (age 13) during the night (separate bedroom at the opposite end of the house, in the basement).
I can’t confront them. Mom is dead. Brother is deep in Alzheimers.
While stats on this won’t form an absolute answer, they form grist for the mill.
submitted by /u/Canuck_Voyageur
Greetings. I have an upcoming project that involves using PySpark. The guidelines indicate a dataset of about 7-8 GB at the minimum. The project is graded less on analysis and more on the complexity of the data and the range of methods and techniques we employ for data manipulation and processing. I'm looking for a flexible and large financial dataset; it can be anything from stock market data, economic indicators, consumption data, text data, etc. Please direct me towards such datasets.
submitted by /u/ComprehensiveAd1629
Similar to this dataset on F1, but for GT3 racing. I'm trying to break into the field of analytics, so any help sourcing such data would be extremely helpful.
submitted by /u/thepunnman
Hey everyone, can anyone suggest a dataset where I can learn KPI creation? I want to learn how to create growth percentages, last year's sales, last month's sales, and net and gross sales for the present year as well as last year, along with other similar KPIs; to learn, I need the kind of dataset where I can actually do those calculations.
I tried to find one on Kaggle, but there are only simple datasets there; until now, all I've done is drag and drop on a dashboard.
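Any transactional sales table works for practicing these, since the mechanics are the same everywhere. Here is a minimal sketch, with made-up numbers and field names, of two of the KPIs mentioned above (net sales and year-over-year growth percentage):

```python
# Hypothetical monthly sales records: (year, month, gross_sales, returns)
records = [
    (2022, 1, 1000.0, 50.0), (2022, 2, 1200.0, 80.0),
    (2023, 1, 1300.0, 100.0), (2023, 2, 1100.0, 60.0),
]

def net_sales(rows, year):
    # Net sales = gross sales minus returns, for the given year
    return sum(g - r for y, m, g, r in rows if y == year)

def yoy_growth_pct(rows, year):
    # Growth % = (this year's net sales - last year's) / last year's * 100
    current, previous = net_sales(rows, year), net_sales(rows, year - 1)
    return (current - previous) / previous * 100

print(net_sales(records, 2023))                 # 2240.0
print(round(yoy_growth_pct(records, 2023), 2))  # 8.21
```

The same grouping logic extends to "last month's sales" and the rest by filtering on (year, month) instead of year.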
submitted by /u/Akhand_P_Singh
Hi, does anyone have an Excel file of the data from the International Country Risk Guide (ICRG)? It's really urgent; I need it for my thesis and would appreciate it if someone has it. Thanks.
submitted by /u/jasmn1
Hi all! This is probably a stupid question, but please someone just clarify for me without shaming me lol. If I am attempting to complete a research project (for a quantitative research methods class) on fear of crime and media consumption, can I take two different datasets and combine them in order to run analysis on the variables? Or does anyone know of any good survey datasets that contain variables I could use (or recode and use)? I apologize for the dumb question, but my professor has been no help. Thanks in advance!
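Not a dumb question. Whether two datasets can be combined at the individual level depends on sharing a key (the same respondents appearing in both surveys); without one, you can only compare aggregates such as yearly or regional rates. A tiny sketch of the individual-level join, with hypothetical IDs and fields:

```python
# Two hypothetical survey tables keyed by respondent ID
fear = {101: "high", 102: "low", 103: "medium"}   # id -> fear of crime
media = {101: 4.5, 102: 1.0, 104: 3.0}            # id -> hours of news per day

# Inner join: only respondents present in BOTH datasets survive
merged = {rid: (fear[rid], media[rid]) for rid in fear.keys() & media.keys()}
print(merged)
```

If the two surveys have no common respondents, merging rows like this is not meaningful; the usual fallback is to aggregate each dataset by a shared grouping variable (year, state, etc.) and join on that instead.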
submitted by /u/laurenmarie184_
Hello all.
I have spent the entire year of 2023 collecting data on my day-to-day life. I have collected everything I could think of, including quantitative variables like exercise, sleep amount, sex, etc., and qualitative ones like my own feelings and overall happiness. It is my ultimate goal to determine what in my life makes me happier, but there are plenty of other analyses that could be done with this dataset. Please feel free to take a look! If anyone does any interesting analysis please comment the results and/or DM me.
The dataset is pretty extensive… take a look.
https://docs.google.com/spreadsheets/d/1mi1vzfOQ2CpddAQQI25ACBixot2Xs5z-nO5qx91L12c/edit?usp=sharing
submitted by /u/tsawsum1
Are there any open-source data sources or APIs that could be used to pull data related to environmental factors and sustainability information from coordinates?
submitted by /u/Fast_Whole_8492
Hello All,
As per the title, I am looking to pull data on all stocks that traded on the NASDAQ in 2023. I can get only partial attributes from Yahoo. I need:
– Outstanding shares per day (can't get this from Yahoo; Bloomberg is asking for a fee)
– For each ticker: daily opening, closing, high, and low price
– Industry
submitted by /u/BBjayjay
I am in dire straits and I need help.
I haven't had any luck finding any sources that have both the datasets and the authors' user information; I've only found tweets that have been identified as real or fake news without the user information. I want to know if such a dataset exists before I go and purchase a developer account at X. I'm a student right now, and $100 USD would make things pretty tight for me.
Thank you all in advance
submitted by /u/Ok-ButterscotchBabe
The database has entries like this, each one of them being a full chess game:
[‘e2e4’, ‘g8f6’, ‘d2d4’, ‘g7g6’, ‘c2c4’, ‘f8g7’, ‘b1c3’, ‘e8g8’, ‘e2e4’, ‘d7d6’, ‘f1e2’, ‘e7e5’, ‘e1g1’, ‘b8c6’, ‘d4d5’, ‘c6e7’, ‘c1g5’, ‘h7h6’, ‘g5f6’, ‘g7f6’, ‘b2b4’, ‘f6g7’, ‘c4c5’, ‘f7f5’, ‘f3d2’, ‘g6g5’, ‘a1c1’, ‘a7a6’, ‘d2c4’, ‘e7g6’, ‘a2a4’, ‘g6f4’, ‘a4a5’, ‘d6c5’, ‘b4c5’, ‘f5e4’, ‘c4e3’, ‘c7c6’, ‘d5d6’, ‘c8e6’, ‘c3e4’, ‘d8a5’, ‘e2g4’, ‘e6d5’, ‘d1c2’, ‘a5b4’, ‘e4g3’, ‘e5e4’, ‘c1b1’, ‘b4d4’, ‘b1b7’, ‘a6a5’, ‘g3f5’, ‘f8f5’, ‘e3f5’]
e2e4 means the piece on e2 (the pawn) moved to e4. The problem is, I have no way of knowing which piece is moving where. For example, "g7h8" means the piece on g7 moved to h8, but unless I replay all the previous moves I have no way of knowing which piece that is.
How can I transform this into a more understandable dataset?
I'm not sure this is the right sub to ask this; if it isn't, I'd appreciate it if you could tell me where to ask.
PS: I've checked the chess library in Python but I haven't found anything.
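For what it's worth, the `python-chess` package can replay UCI moves like these (`Board.push_uci`) and emit human-readable SAN via `Board.san`, handling castling, en passant, promotions, and disambiguation. As a dependency-free illustration of the underlying idea, here is a toy tracker (hypothetical helper names; it ignores en passant and promotions) that just follows which piece sits on which square:

```python
def start_position():
    # square name ("a1".."h8") -> piece description
    board = {}
    back = ["rook", "knight", "bishop", "queen", "king", "bishop", "knight", "rook"]
    for i, f in enumerate("abcdefgh"):
        board[f + "1"], board[f + "8"] = "white " + back[i], "black " + back[i]
        board[f + "2"], board[f + "7"] = "white pawn", "black pawn"
    return board

def replay(moves):
    board, log = start_position(), []
    for mv in moves:
        src, dst = mv[:2], mv[2:4]
        piece = board.pop(src)
        capture = dst in board
        board[dst] = piece
        # castling: when the king jumps two files, relocate the rook too
        if piece.endswith("king") and abs(ord(src[0]) - ord(dst[0])) == 2:
            rank = src[1]
            if dst[0] == "g":  # kingside
                board["f" + rank] = board.pop("h" + rank)
            else:              # queenside
                board["d" + rank] = board.pop("a" + rank)
        log.append(f"{piece} {src}-{dst}" + (" (capture)" if capture else ""))
    return log

print(replay(["e2e4", "g8f6", "d2d4"]))
# ['white pawn e2-e4', 'black knight g8-f6', 'white pawn d2-d4']
```

Running each game through something like this once and storing the annotated move list gives a dataset you can read without replaying anything.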
submitted by /u/Aston28
I am doing my final year project, a KNN predictive model for tomato growth rate. In my proposal I proposed using cm/week as the growth metric, but I am open to changing it.
I'm kindly looking for collaborators and mentors, and for a dataset that has temperature, soil moisture, pH, EC, humidity, and/or NPK values against either yield or some growth rate metric.
submitted by /u/RMGwinji
I'm trying to train a model on images of bananas using three classes: green (unripe), ripe, and overripe. The catch is that the bananas should be in farms, still on their trees and not yet picked. If someone has an idea, please let me know! Thank you kindly in advance!
submitted by /u/LoanNice
I'm working on a dynamic access control policy generation model using machine learning and I need a real dataset to evaluate its performance. Is there any access control policy dataset consisting of entities (devices, users), operations, roles, attributes, and permission policies?
I have only been able to find datasets with network traffic and access logs, without any actual permission policies. A dataset related to IoT (Internet of Things) or WSN (Wireless Sensor Networks) would be ideal. If anyone knows a good dataset, that would be a big help. Also, please mention the source paper (if any) so I can cite it.
submitted by /u/binodmx
I am working on a side project and I am looking for a dataset/database where I can search all animals by size and/or weight.
Right now I'm not that picky about how complete it is, but having those attributes is key.
Thanks!!
I'm reviewing the links here: https://www.doi.gov/library/internet/animals
but so far none seem to be what I want.
submitted by /u/thisfunnieguy
Hi,
I am looking for a pressure transient dataset for pipelines or facilities. Is there an online source I could use?
TIA
submitted by /u/jph1022
As per the title, I'd like to request a dataset of the median age of the world's cities.
So far I have only been able to find this stat for individual countries, not a comprehensive dataset listing each city's median age.
submitted by /u/Private_Capital1
Looking for sources, particularly for the West African market, but happy with other sources too. Not too concerned about granularity (by IP, location, etc.); anything useful works. I know Facebook and Netflix collect some of this data, but I'm not sure how to buy or get it from them. I assume other platforms, like Google (YouTube), do as well. Thanks in advance!
submitted by /u/cr_re
Hello everyone.
I’ve been tasked with finding a suitable dataset for our latest project, which involves training a model to recognize the number of wheels on vehicles. Despite extensive research, I haven’t been able to locate a suitable dataset online.
If anyone knows of a dataset or has access to one that could fulfill our requirements, I would greatly appreciate it if you could share the link or provide any assistance in obtaining it.
submitted by /u/Otherwise-Big-5537
Hello! I need a dataset that contains monthly average temperatures at different latitudes, going as far back as the 1900s. Where can I find something like this?
Also, I saw monthly temperature anomaly data on NOAA's Climate at a Glance tool, which is expressed with respect to the 1901-2000 average. However, I cannot seem to find the 1901-2000 average data itself. Do any of you know where I can find it? (https://www.ncei.noaa.gov/access/monitoring/climate-at-a-glance/global/time-series)
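If you can find any absolute monthly temperature series, the baseline is just a per-calendar-month average over the reference years, and anomalies fall out by subtraction. A sketch with made-up numbers (the real series would come from NOAA's files):

```python
# Hypothetical absolute series: (year, month) -> mean temperature in deg C
series = {
    (1901, 1): 13.9, (1902, 1): 14.1, (1903, 1): 14.0,
    (2020, 1): 15.1,
}

def climatology(series, start, end):
    # average each calendar month over the baseline years [start, end]
    months = {}
    for (year, month), temp in series.items():
        if start <= year <= end:
            months.setdefault(month, []).append(temp)
    return {m: sum(v) / len(v) for m, v in months.items()}

baseline = climatology(series, 1901, 2000)        # the "1901-2000 average"
anomaly = series[(2020, 1)] - baseline[1]         # Jan 2020 vs. baseline Jan
print(round(anomaly, 2))  # 1.1
```

This is why the baseline is often not published separately: given either the absolute series or the anomalies plus the baseline, the third quantity is recoverable.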
I really appreciate the help!
submitted by /u/Many_Flatworm_1372
Basketball fans 📢 Does anyone know of a compiled dataset of each team's roster in the March Madness tournament?
submitted by /u/runandstuff
This NPR article says “the [Gaza] health ministry’s official database was made public in October”, but I have been unable to find it. Does anyone know where to get access to this?
submitted by /u/-DonQuixote-
Hello, I am currently working on a university project and need historical price data for graphics cards from Jan 1st 2019 to Mar 1st 2024. I want to compare the prices of Nvidia and AMD cards over those years, and I am looking for day-to-day data to examine price increases for specific cards. Does anyone know where I can find such datasets, or can point me in the right direction? I have wanted to avoid web scraping up to this point, but won't dismiss the idea if it comes to it. Thank you!
submitted by /u/DrGoneDirty
If I have 10,000 records with fields like CashAdvance, InterestRate, CreditScore, and LoanTerm, plus whether the loan defaulted or not (boolean 1/0), how do I find all the permutations and combinations of ranges of these attributes where the default rate was <10%? For example:
Bin 1: credit score 652-673, advance amount 23-27K, interest rate 12-15%, term 3-7 months: 8% defaulted loans.
Bin 2: credit score 625-632, advance amount 32-42K, interest rate 2-5%, term 6-9 months: 5% defaulted loans.
Bin 3: credit score 682-693, advance amount 13-17K, interest rate 2-4%, term 1-2 months: 4% defaulted loans.
Bin 4: credit score 692-721, advance amount 74-95K, interest rate 15-17%, term 8-10 months: 9% defaulted loans.
And so on. My question is: how do I find these ranges for all of the attributes above, where the default rate is low, without creating them manually?
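One standard way to avoid hand-picking ranges is to discretize each attribute into quantile bins, group records by the combination of bins, and keep the combinations whose default rate is under the threshold (a decision tree trained on the default flag is another option, since its splits find the cut points for you). A sketch on synthetic data, with hypothetical field names:

```python
import random
import statistics
from collections import defaultdict

# Synthetic records standing in for the real loan table
random.seed(0)
records = [
    {
        "CreditScore": random.randint(600, 750),
        "AdvanceAmt": random.randint(10_000, 100_000),
        "InterestRate": random.uniform(2, 17),
        "TermMonths": random.randint(1, 12),
        "Default": random.random() < 0.12,  # ~12% base default rate
    }
    for _ in range(10_000)
]

def quartile_bin(value, cuts):
    # 0-3 depending on which quartile the value falls into
    return sum(value > c for c in cuts)

fields = ["CreditScore", "AdvanceAmt", "InterestRate", "TermMonths"]
cuts = {f: statistics.quantiles([r[f] for r in records], n=4) for f in fields}

# group the default flags by the 4-tuple of quartile bins
groups = defaultdict(list)
for r in records:
    key = tuple(quartile_bin(r[f], cuts[f]) for f in fields)
    groups[key].append(r["Default"])

# keep combinations with enough volume and a default rate under 10%
low_default = {
    key: sum(flags) / len(flags)
    for key, flags in groups.items()
    if len(flags) >= 20 and sum(flags) / len(flags) < 0.10
}
print(len(low_default), "low-default bin combinations found")
```

The minimum-volume filter matters: with 4 attributes cut into quartiles there are 256 combinations, and tiny bins will show 0% or very low default rates purely by chance.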
submitted by /u/southbeacher
Hello,
I do not care about the domain of the dataset; I just need it to include demographics, user behavior (in whatever form), and user opinions regarding some product or service.
Specifically, I'm hoping that one or more of these columns will have text entries. The demographics should cover both numerical and categorical variables.
Thank you!
submitted by /u/Grand_Comparison2081
I can't visualize the dataset correctly. I've tried to convert the MATLAB script into a Python script, but this is the result:
https://drive.google.com/file/d/1kzA7mNC4th8nbJh4iGoaZJB_xV4HO7r_/view?usp=sharing
and this is the adapted script:
import os
import numpy as np
import matplotlib.pyplot as plt

def load_tiny_images(ndx, filename=None):
    if filename is None:
        filename = 'Z:/Tiny_Images_Dataset/data/tiny_images.bin'
        # filename = 'C:/atb/Databases/Tiny Images/tiny_images.bin'
    sx = 32  # side size
    Nimages = len(ndx)
    nbytes_per_image = sx * sx * 3
    img = np.zeros((sx * sx * 3, Nimages), dtype=np.uint8)
    pointer = (np.array(ndx) - 1) * nbytes_per_image
    # read data
    with open(filename, 'rb') as f:
        for i in range(Nimages):
            f.seek(pointer[i])  # move to the beginning of the image
            img[:, i] = np.frombuffer(f.read(nbytes_per_image), dtype=np.uint8)
    # MATLAB stores arrays column-major; without order='F' the images come out scrambled
    img = img.reshape((sx, sx, 3, Nimages), order='F')
    return img

def show_images(images):
    N = images.shape[3]
    fig, axes = plt.subplots(1, N, figsize=(N, 1))
    if N == 1:
        axes = [axes]
    for i, ax in enumerate(axes):
        ax.imshow(images[:, :, :, i])
        ax.axis('off')
    plt.show()

img = load_tiny_images(list(range(1, 11)))
show_images(img)
What am I missing? Is anyone able to open it correctly with Python?
Just for completeness, this is the original MATLAB code (I'm a total zero at MATLAB):
function img = loadTinyImages(ndx, filename)
% Random access into the file of tiny images.
% It goes faster if ndx is a sorted list.
% Input:
%   ndx = vector of indices
%   filename = full path and filename
% Output:
%   img = tiny images [32x32x3xlength(ndx)]

if nargin == 1
    filename = 'Z:\Tiny_Images_Dataset\data\tiny_images.bin';
    % filename = 'C:\atb\Databases\Tiny Images\tiny_images.bin';
end

% Images
sx = 32;
Nimages = length(ndx);
nbytesPerImage = sx*sx*3;
img = zeros([sx*sx*3 Nimages], 'uint8');

% Pointer
pointer = (ndx-1)*nbytesPerImage;
offset = pointer;
offset(2:end) = offset(2:end)-offset(1:end-1)-nbytesPerImage;

% Read data
[fid, message] = fopen(filename, 'r');
if fid == -1
    error(message);
end
frewind(fid)
for i = 1:Nimages
    fseek(fid, offset(i), 'cof');
    tmp = fread(fid, nbytesPerImage, 'uint8');
    img(:,i) = tmp;
end
fclose(fid);

img = reshape(img, [sx sx 3 Nimages]);

% load in first 10 images from 79,302,017 images
img = loadTinyImages([1:10]);
Needless to say, nothing works in MATLAB either: it gives me a path error I have no idea how to resolve, it shows no images, etc. I can't learn MATLAB now, so I'd like to read this huge bin file with Python. Am I that much of a fool?
Thanks a lot in advance for any help, and sorry about my English.
submitted by /u/AstroGippi
Struggling to find anything dating back earlier than 2000, so I'm wondering if anyone knows of a publicly available dataset that might be able to help.
submitted by /u/scoooberman