mozz@mbin.grits.dev to Technology@beehaw.org · 3 months ago
Someone got Gab's AI chatbot to show its instructions (mbin.grits.dev)
sweng@programming.dev · 3 months ago (edited)
You are wrong: https://stackoverflow.com/questions/76451205/difference-between-instruction-tuning-vs-non-instruction-tuning-large-language-m
teawrecks@sopuli.xyz · 3 months ago
Ah, TIL about instruction fine-tuning. Thanks, interesting thread.
Still, as I understand it, if the model has seen an input, then it always has a non-zero chance of reproducing it in the output.
sweng@programming.dev · 3 months ago
No. Consider a model that has been trained on a bunch of inputs, where each corresponding output was "yes" or "no". Why would it suddenly reproduce something completely different, something that coincidentally happens to be the input?
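Both points can be made concrete with a toy softmax. A language model's output layer assigns a strictly positive probability to every token in the vocabulary (the exponential of any finite logit is never zero), so verbatim reproduction of an input is never strictly impossible, but fine-tuning on a narrow output set like "yes"/"no" concentrates nearly all of the probability mass on those tokens. A minimal sketch with made-up logits, not taken from any real model:

```python
import numpy as np

# Hypothetical next-token logits for a tiny 5-token vocabulary.
# Token 0 = "yes", token 1 = "no"; the rest are arbitrary other tokens.
logits = np.array([10.0, 8.0, -5.0, -9.0, -14.0])

# Numerically stable softmax: exp(x) > 0 for every finite x,
# so no token ever gets a probability of exactly zero.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(probs)              # "yes"/"no" hold almost all of the mass
print(probs.min() > 0.0)  # True: even unlikely tokens remain possible
print(probs[:2].sum())    # ~0.9999997: mass concentrated on yes/no
```

With greedy or low-temperature decoding those residual probabilities effectively never surface, which is the practical force of sweng's objection, while teawrecks' point holds in the strict mathematical sense.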