(frozen comment) Re: misinformation correction

(Anonymous) 2024-05-05 08:05 pm (UTC)(link)
same anon again

im trying to correct misinformation, so im going to be extra pedantic about my own. sometimes generative ai will closely reproduce images in its training set, but this is rare. its usually due to overfitting, which happens when a very popular image (like a famous painting, think starry night) appears in a training data set many times over, or when there are very few images available for a very specific prompt (say theres only one photograph of some public figure). this mostly happens with public domain images that have already been reproduced thousands of times, and model producers also consider it undesirable. i dont want to be taken as saying it never happens, and i dont want to spread further misinformation. my point is just that its not the norm for any given output to closely resemble just one or two training images
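
the duplication effect can be sketched with a toy model. everything here (the image names, the counts, the "model") is made up for illustration, not taken from any real training set or generator:

```python
import random
from collections import Counter

# hypothetical training set: one famous image duplicated 1000 times,
# plus 100 images that each appear only once
training_set = ["starry_night"] * 1000 + [f"photo_{i}" for i in range(100)]

# a maximally overfit "model" just resamples its training data;
# real generators interpolate between examples, but heavy duplication
# pushes them toward this behavior for the duplicated item
def overfit_sample(data, rng):
    return rng.choice(data)

rng = random.Random(0)
samples = Counter(overfit_sample(training_set, rng) for _ in range(10_000))

# fraction of outputs that closely match the duplicated image
print(samples["starry_night"] / 10_000)
```

the duplicated image dominates the outputs (its expected share is 1000/1100), while any single-copy image almost never comes back out, which is the "not the norm for any given image" point above
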

(frozen comment) Re: misinformation correction

(Anonymous) 2024-05-06 01:18 am (UTC)(link)
this is really fascinating! im always down to learn more about this kind of thing. i hope people dont do the aforementioned wasp-ass-shoving