{"id":24177,"date":"2023-11-26T13:27:11","date_gmt":"2023-11-26T12:27:11","guid":{"rendered":"https:\/\/www.graviton.at\/letterswaplibrary\/dataset-for-programming-mistakes-from-all-experience-levels\/"},"modified":"2023-11-26T13:27:11","modified_gmt":"2023-11-26T12:27:11","slug":"dataset-for-programming-mistakes-from-all-experience-levels","status":"publish","type":"post","link":"https:\/\/www.graviton.at\/letterswaplibrary\/dataset-for-programming-mistakes-from-all-experience-levels\/","title":{"rendered":"Dataset For Programming Mistakes From All Experience Levels"},"content":{"rendered":"<p><!-- SC_OFF --><\/p>\n<div class=\"md\">\n<p>I am building a project, and I want to fine-tune an LLM to incorporate it as a chatbot.<\/p>\n<p>The chatbot will deliver feedback to students who submit solutions to programming exercises. I want to train it to follow a specific feedback style: never giving the correct answer explicitly, not answering questions unrelated to the domain, and offering hints when a student asks for them.<\/p>\n<p>I couldn&#8217;t find a dataset close to what I need. 
Obviously I will need to clean any dataset I find to match my needs.<\/p>\n<p>If you know of any dataset that might help, or any way I can automate the generation of a mock dataset, please share it. ChatGPT has limitations, and I wasn&#8217;t able to make it generate the number of examples I need.<\/p>\n<\/div>\n<p><!-- SC_ON -->   submitted by   <a href=\"https:\/\/www.reddit.com\/user\/iTsObserv\"> \/u\/iTsObserv <\/a> <br \/> <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/18491o0\/dataset_for_programming_mistakes_from_all\/\">[link]<\/a><\/span>   <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/18491o0\/dataset_for_programming_mistakes_from_all\/\">[comments]<\/a><\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>I am building a project and I want to fine-tune an LLM to incorporate it as 
a&#8230;<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[85],"tags":[],"class_list":["post-24177","post","type-post","status-publish","format-standard","hentry","category-datatards","wpcat-85-id"],"_links":{"self":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/24177","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/comments?post=24177"}],"version-history":[{"count":0,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/24177\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/media?parent=24177"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/categories?post=24177"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/tags?post=24177"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}