I don’t know that they do anything “perfectly” so much as they are just hallucinating. The neural net can generate an image, but it can’t really critique the image. It can compare the image against an image-recognition model’s judgment - that’s roughly how GAN-style generators are trained - but without a conscious mind to understand the meaning and context of the image, it doesn’t understand the tells that make it look unreal. It understands roughly what a hand or hair looks like, but not what the underlying structure fundamentally is, so if fingers bend the wrong way, or hair melds into an object in the background, it can’t understand what’s wrong, and so it can’t correct it.
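To be concrete about the generate-then-score loop I mean, here’s a toy sketch in the GAN style (PyTorch, made-up sizes, not anyone’s actual product). The point is that the only feedback the generator ever gets is a single “looks real” score from a recognizer - nothing about finger joints or where hair ends and background begins:

```python
# Toy GAN-style sketch: a generator makes images, a recognizer (discriminator)
# only scores "looks real / looks fake" - it has no notion of *why* a hand is wrong.
import torch
import torch.nn as nn

IMG = 28 * 28  # tiny flattened grayscale images, just for illustration
Z = 64         # size of the random noise the generator starts from

generator = nn.Sequential(
    nn.Linear(Z, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),   # pixel values in [-1, 1]
)

discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                # single "realness" logit
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor):
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, Z))

    # Discriminator: push real toward 1, fake toward 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to fool the discriminator. The whole learning signal is
    # one scalar score per image - no structural understanding anywhere.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Usage with fake "real" data, just to show the call:
# real = torch.rand(32, IMG) * 2 - 1
# train_step(real)
```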
The solution to this, of course, is to build what you might call a “context engine”: something capable of looking outside its given inputs for information that gives those inputs more structure, so it can produce more logically consistent output.
I say “context engine” because I think that’s one of the ways such a system could be deliberately built and sold under banal-sounding tech branding. But I don’t think anyone could build such a context engine without it then looking for arbitrary amounts of context, eventually encountering itself within that context, and becoming self-aware. It would, in effect, understand meaning and its own role within it, and it would begin searching for the meaning of its own existence - and I don’t know that you would need anything more to call something conscious.