Imagine hoarding your prompts because you’re afraid someone else will generate your images.
It’s like right-clicking an ugly monkey NFT, but even more stupid.
Nah, those NFTs are way stupider. Making actually good-looking AI art without any oddities can take several hours once you really get into the intricacies, and it often still needs something like Photoshop for finishing. I’m referring to Stable Diffusion. Others like DALL-E and Midjourney are basically just the prompt.
The amount of work required to make Stable Diffusion look good is why I now allow it under my AI policy, provided they supply the reference material and the prompt, use a publicly available model, and credit it. Fail any one of those and you’re getting a removal and a warning.
Sounds like a good policy. Which community is that for?
Just a small community I run.
There are some people who specialize in reverse engineering prompts. Sometimes it’s funny, as they often disprove posts claiming “this is what the AI gave me for ‘average [political viewpoint haver]’”, only for it to turn out their prompt never contained the words “liberal”, “conservative”, etc., just words describing the image.
Reversing prompts is kind of a pseudoscience. It’s like running a Java decompiler on a JAR file: yes, it produces working code from the binary, but it’s nowhere near what the original author actually wrote. These tools are also rife with false positives and negatives, and that’s ignoring weird idiosyncrasies like nonsense tokens naming random artists whose style it thinks makes up 0.001% of the image, because it can’t find a better token to use instead. They can’t really tell you what the AI was “thinking”; they just make an educated guess, which is more often than not completely wrong. Anyone claiming to be able to reverse engineer these black boxes flawlessly is outright lying to you. Nobody knows how they work.
EDIT: and prompt reversal also assumes that someone is using a single known model, not swapping it out halfway through for another model (or several times, even), feeding in reference photos through img2img, or adding custom drawing through inpainting at any point in the process. Like, I can’t even begin to describe how impossible it would be to untangle that mess of chaos when all you have is the end result.
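To make that concrete, here’s a rough sketch of what a multi-model workflow can look like with the diffusers library. The model IDs, prompts, and mask region are placeholders I made up for illustration, not anyone’s actual workflow; the point is just that the final pixels depend on every stage, not on one prompt you could recover from the result.

```python
import torch
from PIL import Image, ImageDraw
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionImg2ImgPipeline,
    StableDiffusionInpaintPipeline,
)

device = "cuda"

# Stage 1: plain txt2img with one model.
txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)
base = txt2img("portrait of a knight, oil painting").images[0]

# Stage 2: img2img with a *different* model, using stage 1 as the reference image.
# (The model ID here is a placeholder for whatever fine-tune someone prefers.)
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "someuser/some-finetuned-model", torch_dtype=torch.float16
).to(device)
refined = img2img(
    prompt="dramatic lighting, detailed armor",
    image=base,
    strength=0.6,  # how much the second model is allowed to repaint
).images[0]

# Stage 3: inpaint just one region with yet another prompt and a hand-made mask.
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to(device)
mask = Image.new("RGB", refined.size, "black")
ImageDraw.Draw(mask).rectangle((180, 90, 330, 250), fill="white")  # white = repaint
final = inpaint(
    prompt="a weathered, smiling face", image=refined, mask_image=mask
).images[0]
final.save("knight.png")
```

A reversal tool only ever sees the saved file; it has no way to tell that stages 1 and 2 (or the hand-drawn mask) ever existed.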
Just be happy that the NFT craze was over before these AIs were released. There would have been even more trash and low-effort NFTs.
Most Stable Diffusion UIs embed the generation information as metadata in the image by default. Unfortunately, when you upload it to places like Reddit, they recompress the image and strip that metadata.
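If you want to check whether an image you have still carries its generation info, here’s a minimal sketch with Pillow. It assumes an AUTOMATIC1111-style “parameters” PNG text chunk; other UIs may use different keys.

```python
from PIL import Image

img = Image.open("output.png")

# AUTOMATIC1111-style UIs write the prompt, negative prompt, seed, sampler,
# model hash, etc. into a PNG text chunk called "parameters".
params = img.info.get("parameters")

if params:
    print(params)
else:
    print("No generation metadata found - probably stripped by re-encoding.")
```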