{"id":30505,"date":"2024-09-21T11:27:49","date_gmt":"2024-09-21T09:27:49","guid":{"rendered":"https:\/\/www.graviton.at\/letterswaplibrary\/word2vec-data-set-with-object-definitions\/"},"modified":"2024-09-21T11:27:49","modified_gmt":"2024-09-21T09:27:49","slug":"word2vec-data-set-with-object-definitions","status":"publish","type":"post","link":"https:\/\/www.graviton.at\/letterswaplibrary\/word2vec-data-set-with-object-definitions\/","title":{"rendered":"Word2vec Data Set With Object Definitions?"},"content":{"rendered":"<p><!-- SC_OFF --><\/p>\n<div class=\"md\">\n<p>Does anybody know of a word2vec model that is trained on object definitions? Perhaps something trained on an encyclopedia? I can&#8217;t seem to find anything online.<\/p>\n<p>My ideal scenario would be that it finds similarities between, say, &#8220;rollercoaster&#8221; and its constituent parts (metal, tracks, moving fast, speed), etc.<\/p>\n<p>Or between &#8220;Saturn&#8221; and (rings, space, stars, gas, yellow, huge).<\/p>\n<p>It&#8217;s a little more complex than the above examples, but I&#8217;m pretty solid on the approach, so I&#8217;ve simplified it for ease.<\/p>\n<p>If there are none trained on an encyclopedia, would Wikipedia be a suitable dataset for this kind of use case?<\/p>\n<p>(Before anyone says the obvious; I know that Wikipedia is an &#8220;online encyclopedia,&#8221; but as you all know, it goes way further than that. 
There are wiki pages for all sorts of games, events like natural disasters, etc., and I&#8217;m worried that those might taint the data pool.)<\/p>\n<\/div>\n<p><!-- SC_ON -->   submitted by   <a href=\"https:\/\/www.reddit.com\/user\/notquitehuman_\"> \/u\/notquitehuman_ <\/a> <br \/> <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1flylxt\/word2vec_data_set_with_object_definitions\/\">[link]<\/a><\/span>   <span><a href=\"https:\/\/www.reddit.com\/r\/datasets\/comments\/1flylxt\/word2vec_data_set_with_object_definitions\/\">[comments]<\/a><\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>Does anybody know of a word2vec model that is trained on object definitions? 
Perhaps something trained on&#8230;<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[85],"tags":[],"class_list":["post-30505","post","type-post","status-publish","format-standard","hentry","category-datatards","wpcat-85-id"],"_links":{"self":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/30505","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/comments?post=30505"}],"version-history":[{"count":0,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/posts\/30505\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/media?parent=30505"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/categories?post=30505"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.graviton.at\/letterswaplibrary\/wp-json\/wp\/v2\/tags?post=30505"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}