{"id":36748,"date":"2025-11-25T19:27:05","date_gmt":"2025-11-25T18:27:05","guid":{"rendered":"https:\/\/www.graviton.at\/letterswaplibrary\/synthetic-created-a-3-million-instance-dataset-to-equip-ml-models-to-trade-better-in-blackswan-events\/"},"modified":"2025-11-25T19:27:05","modified_gmt":"2025-11-25T18:27:05","slug":"synthetic-created-a-3-million-instance-dataset-to-equip-ml-models-to-trade-better-in-blackswan-events","status":"publish","type":"post","link":"https:\/\/www.graviton.at\/letterswaplibrary\/synthetic-created-a-3-million-instance-dataset-to-equip-ml-models-to-trade-better-in-blackswan-events\/","title":{"rendered":"[Synthetic] Created A 3-million Instance Dataset To Equip ML Models To Trade Better In Blackswan Events."},"content":{"rendered":"<p><!-- SC_OFF --><\/p>\n<div class=\"md\">\n<p>So I recently wrapped up a project where I trained an RL model to backtest on 3 years of synthetic stock data, and it generated 45% returns overall in real-market backtesting.<\/p>\n<p>I decided to push it a lil further and include black swan events. Now the dataset I used is too big for Kaggle, but the second dataset is available <a href=\"https:\/\/www.kaggle.com\/datasets\/toocool69\/synthetic-stock-price-data-1m-instances\">here<\/a>.<\/p>\n<p>I&#8217;m working on a smaller version of the model to bring it soon, but looking for some feedback here about the dataset construction. 
<\/p>\n<\/div>\n<p><!-- SC_ON -->   submitted by   <a href=\"https:\/\/www.reddit.com\/user\/Legitimate_Monk_318\"> \/u\/Legitimate_Monk_318 <\/a> <br \/> <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1p6k676\/synthetic_created_a_3million_instance_dataset_to\/\">[link]<\/a><\/span>   <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1p6k676\/synthetic_created_a_3million_instance_dataset_to\/\">[comments]<\/a><\/span><\/p><div class='watch-action'><div class='watch-position align-right'><div class='action-like'><a class='lbg-style1 like-36748 jlk' href='javascript:void(0)' data-task='like' data-post_id='36748' data-nonce='65e0e39b87' rel='nofollow'><img class='wti-pixel' src='https:\/\/www.graviton.at\/letterswaplibrary\/wp-content\/plugins\/wti-like-post\/images\/pixel.gif' title='Like' \/><span class='lc-36748 lc'>0<\/span><\/a><\/div><\/div> <div class='status-36748 status align-right'><\/div><\/div><div class='wti-clear'><\/div>","protected":false},"excerpt":{"rendered":"<p>So I recently wrapped up a project where I trained an RL model to backtest on 
3&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[85],"tags":[],"class_list":["post-36748","post","type-post","status-publish","format-standard","hentry","category-datatards","wpcat-85-id"],"_links":{"self":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/36748","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/comments?post=36748"}],"version-history":[{"count":0,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/36748\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/media?parent=36748"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/categories?post=36748"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/tags?post=36748"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
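The post doesn't say how the black swan events were injected into the synthetic series, so as a point of comparison for the feedback discussion, here is a minimal sketch of one common approach: geometric Brownian motion for the normal regime, plus rare large negative jumps (a simplified Merton-style jump-diffusion). All function names and parameter values here are hypothetical, not taken from the actual dataset.

```python
import numpy as np

def synthetic_prices(n_days=756, s0=100.0, mu=0.05, sigma=0.2,
                     crash_prob=1 / 252, crash_mean=-0.25, crash_std=0.10,
                     seed=0):
    """Simulate ~3 years of daily prices: GBM log-returns plus rare
    negative jumps standing in for black swan events."""
    rng = np.random.default_rng(seed)
    dt = 1 / 252  # one trading day, in years
    # normal diffusion component of the daily log-return
    log_ret = ((mu - 0.5 * sigma**2) * dt
               + sigma * np.sqrt(dt) * rng.standard_normal(n_days))
    # rare jump component: on average ~1 crash per year of trading days
    jumps = rng.random(n_days) < crash_prob
    log_ret[jumps] += rng.normal(crash_mean, crash_std, jumps.sum())
    # exponentiate the cumulative log-return to get a positive price path
    return s0 * np.exp(np.cumsum(log_ret))

prices = synthetic_prices()
```

One design question this raises for the dataset: whether crash frequency and severity were calibrated to historical events (e.g. matching empirical tail statistics) or sampled from fixed distributions like the one above, since that choice strongly affects how an RL agent generalizes to real drawdowns.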