{"id":27124,"date":"2024-04-26T20:27:42","date_gmt":"2024-04-26T18:27:42","guid":{"rendered":"https:\/\/www.graviton.at\/letterswaplibrary\/domain-tagged-specific-text-generation-datasets-for-language-models\/"},"modified":"2024-04-26T20:27:42","modified_gmt":"2024-04-26T18:27:42","slug":"domain-tagged-specific-text-generation-datasets-for-language-models","status":"publish","type":"post","link":"https:\/\/www.graviton.at\/letterswaplibrary\/domain-tagged-specific-text-generation-datasets-for-language-models\/","title":{"rendered":"Domain-tagged\/specific Text Generation Datasets For Language Models"},"content":{"rendered":"<p><!-- SC_OFF --><\/p>\n<div class=\"md\">\n<p>I want to investigate parameter-efficient fine-tuning (PEFT) methods (LoRA, bottleneck adapters, etc.) in the context of generative LLMs in different domains. I started reading the PEFT literature to find established benchmarks for my project. I saw people using datasets like SQuAD, E2E dataset, and XSum. Despite addressing multiple domains, there are no tags for the domain of each sample. I would need to have this information for my project. I could just use one dataset as one domain but the datasets I found do not usually have specific domains but contain samples from different domains. To summarize I would need datasets that<\/p>\n<p>require a generative model (e.g. question answering with open answers, not multiple-choice)<\/p>\n<p>cover a specific domain (sports, medicine, science, law, etc.) or contain this information as a feature for every sample<\/p>\n<\/div>\n<p><!-- SC_ON -->   submitted by   <a href=\"https:\/\/www.reddit.com\/user\/beanswithoutjeans\"> \/u\/beanswithoutjeans <\/a> <br \/> <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1cds8od\/domaintaggedspecific_text_generation_datasets_for\/\">[link]<\/a><\/span>   <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1cds8od\/domaintaggedspecific_text_generation_datasets_for\/\">[comments]<\/a><\/span><\/p><div class='watch-action'><div class='watch-position align-right'><div class='action-like'><a class='lbg-style1 like-27124 jlk' href='javascript:void(0)' data-task='like' data-post_id='27124' data-nonce='614a020375' rel='nofollow'><img class='wti-pixel' src='https:\/\/www.graviton.at\/letterswaplibrary\/wp-content\/plugins\/wti-like-post\/images\/pixel.gif' title='Like' \/><span class='lc-27124 lc'>0<\/span><\/a><\/div><\/div> <div class='status-27124 status align-right'><\/div><\/div><div class='wti-clear'><\/div>","protected":false},"excerpt":{"rendered":"<p>I want to investigate parameter-efficient fine-tuning (PEFT) methods (LoRA, bottleneck adapters, etc.) 
submitted by /u/beanswithoutjeans: https://www.reddit.com/r/datasets/comments/1cds8od/domaintaggedspecific_text_generation_datasets_for/