Large Language Model Hacking: Quantifying The Hidden Risks Of Using LLMs For Text Annotation

tl;dr: with the right prompt, you can get almost any result you want out of LLM-annotated data.
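A minimal sketch of the kind of sensitivity the post describes: given labels produced by the same model on the same texts under two different prompt wordings, measure how often the label flips. The prompts, labels, and `flip_rate` helper here are all hypothetical illustrations, not from the linked paper.

```python
def flip_rate(labels_a, labels_b):
    """Fraction of items whose label changed between two prompt variants."""
    assert len(labels_a) == len(labels_b), "label lists must align item-by-item"
    flips = sum(a != b for a, b in zip(labels_a, labels_b))
    return flips / len(labels_a)

# Hypothetical annotations of the same 10 texts under two prompt wordings:
# a neutral instruction vs. a leading one that nudges toward "pos".
prompt_neutral = ["pos", "neg", "pos", "neg", "pos", "pos", "neg", "pos", "neg", "pos"]
prompt_leading = ["pos", "pos", "pos", "neg", "pos", "pos", "pos", "pos", "pos", "pos"]

print(flip_rate(prompt_neutral, prompt_leading))  # 0.3
```

Even a 30% flip rate on a binary task is enough to move an aggregate statistic (e.g. "share of positive texts") to nearly whatever value the prompt author prefers.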

submitted by /u/cavedave
