No. I am not saying that to put man and machine in two separate boxes. I am saying it because there is a huge difference, and yes, a practical one.
An LLM can talk about a topic for however long you wish, but it does not know what it is talking about; it has no understanding or concept of the topic. And that shines through the instant you hit a spot where the training data was lacking and it starts hallucinating. LLMs have "read" an unimaginable amount of text on computer science, and yet as soon as I ask something niche, it spouts bullshit. Not its fault, and it's not lying; it's just doing what it always does, putting statistically likely token after statistically likely token. Only in this case, the training data was insufficient.
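What "statistically likely token after statistically likely token" means can be sketched with a toy bigram model. The counts below are completely made up, and real LLMs are neural networks over subword tokens rather than lookup tables, but the failure mode has the same shape: when the context wasn't covered in training, the model still emits *something*, because emitting a next token is all it can do.

```python
import random

# Toy bigram "model": counts of which word follows which word.
# Hypothetical training data, purely for illustration.
counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
}

def next_token(word):
    """Greedily pick the statistically most likely next token.
    If the context was never seen in training, we still emit
    something -- the model has no concept of 'I don't know'."""
    followers = counts.get(word)
    if followers is None:
        # Training data insufficient: fall back to an arbitrary
        # known word. This is the 'hallucination' branch.
        vocab = [w for f in counts.values() for w in f]
        return random.choice(vocab)
    return max(followers, key=followers.get)

print(next_token("the"))      # "cat" -- well covered by the counts
print(next_token("quantum"))  # arbitrary token; the model just keeps talking
```

Note that the fallback branch never signals an error. The output for an unseen context is produced with exactly the same confidence as the output for a well-covered one, which is why the hallucinations read so fluently.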
But it does not understand or know that either; it just keeps talking. I go "that is absolutely not right, remember that <…> is <…>," and whether or not what I said was true, it will go "Yes, you are right! I see now, <continues to hallucinate>".
There’s no ghost in the machine. Just fancy text prediction.