While GPT-4 might not churn out top-notch books just yet, this tech is getting better and will be a major part of how we interact with the world and each other in the future.
I’d like Lemmy to still be relevant in a few years. We shouldn’t shy away from new tech.
this tech is getting better and will be a major part of how we interact with the world and each other in the future.
Well that’s bloody terrifying, yeah? The tech is still young and people are already using it to avoid learning new things. It just churns out text that sounds correct but often isn’t and people just take it at its word. It has no more understanding of the text it shits out than a toddler who has learned to swear.
Look, mate, people are fuckin’ lazy and if there’s a shortcut, they’re gonna take it. They’re not gonna fact-check the magic bot, because that’s work and defeats the purpose of using the magic answer-bot in the first place. I do not look forward to a world where intellectual curiosity dies at the hands of “let me just ask a bot to do it for me”. That’s not even getting into its use by malicious actors to influence the gullible or stupid, or the massive carbon footprint of all the compute resources it takes to run those models.
I’d like Lemmy to still be relevant in a few years. We shouldn’t shy away from new tech.
Agree on the first part, but heavily disagree on the second when it comes to bot-generated crap.
What really strikes me here is that your perspective seems so disconnected from the experience I have had working with AI, which is a power tool that drastically enhances your capabilities in advanced cognitive tasks.
Since ChatGPT came out last year I have learned:
Advanced PowerShell and basic Python (the first is more useful for my job)
In just the last 3 weeks (since I got GPT-4) I have learned:
How to work a Linux command terminal, something I have been struggling with for 2+ years
How to set up and work with both Arch- and Debian-based systems
How to work Docker through the CLI and how to heavily customize many of the servers catering to the needs of my home network. This includes some advanced reprogramming of how some of my smart devices behave, something I have wanted to do for over 3 years.
I have also gotten many compliments at work for my emerging ability to quickly create scripts that automate tedious tasks, giving us more time to think about and improve our workflow rather than always trying to finish a never-ending backlog.
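To give an idea of the kind of small script I mean, here is a rough sketch in Python that wraps the Docker CLI to restart my home-network containers. Purely illustrative: the “home-” naming convention is made up for this example, and this is not one of my actual work scripts.

```python
import subprocess

def docker(*args: str) -> str:
    """Run a docker CLI command and return its output."""
    result = subprocess.run(
        ["docker", *args], capture_output=True, text=True, check=True
    )
    return result.stdout

# Restart every running container whose name starts with "home-"
# ("home-" is a made-up naming convention for this example).
for name in docker("ps", "--format", "{{.Names}}").split():
    if name.startswith("home-"):
        print(f"Restarting {name}")
        docker("restart", name)
```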
This thing has supercharged my life as a computer enthusiast. I never had a teacher who was capable of teaching me in such a customized manner: at my own tempo, in whatever structure I request, and regardless of how stupid my question might be.
But you are correct that there are clear pitfalls when working with AI. I have used these tools enough that I believe I know how to handle them; some notes:
The user is always the brain behind the creative process.
Like you said, “It has no more understanding of the text it shits out than a toddler who has learned to swear.”
The uploader of the post you linked also stated it himself: “it is a tool”, not a genie that does all the work for you.
AI enhances your knowledge. Ten times zero is still zero…
Setting up Linux servers on my home network is something I had been trying and failing to do for a while now (mostly because I am entirely self-taught), but I understood it well enough to know whether ChatGPT’s output is realistic at all. I am always directing it to do what I planned to do, and I never copy its work without first understanding what it actually does.
Know the limitations.
There are some topics that current AI is much better at than others; in my experience that’s coding and computers. Planning a holiday trip? I tried, and it’s really not that good.
Break it down, use what you learned, build something better:
Handwrite an email -> have ChatGPT reason about what it thinks I am trying to say -> have ChatGPT rewrite it to better reflect what I am trying to say -> read and understand what it did -> discard the previous drafts and write a final one.
For someone who was horrible at writing emails pre-ChatGPT, that is real progress: my boss has now started asking me to craft standardized emails to be sent in bulk.
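If you want to script that loop instead of pasting into the chat window, a rough sketch could look like the following (assuming the official openai Python package; the model name and prompts are placeholders, and the final email is still written by you):

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(instruction: str, text: str) -> str:
    """Send one instruction plus my draft to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

draft = open("draft_email.txt").read()  # the handwritten email
intent = ask("In one paragraph, what is the writer trying to say?", draft)
rewrite = ask(f"Rewrite this email so it clearly says: {intent}", draft)
print(rewrite)  # read it, understand what changed, then write the final version yourself
```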
Now to address the original post, which really is just a low-quality, cut-and-dried standard reply from ChatGPT.
I am gonna go out on a limb and say OP’s comment that it is just a tool is probably a recent realization. The first week of using these models they do indeed feel a bit like magic know-it-all boxes, but just like Altman stated, this feeling fades quickly. You realize that if you actually want to create something of real quality (swindlers will swindle), you are going to have to remain in charge and understand which parts of your tools you can and can’t rely on.
I believe there is only one way to learn this, and that is for people to use and learn this technology for themselves. I hope I am wrong about the next line, but I extrapolate that AI is very much a case of “get in the motorboat now, or pedal behind forever”, because things are going to start to move really fast.
Do you think humans don’t just “churn out text” at a basic level? Do you think humans don’t make mistakes or don’t confabulate? The confusion is not that we think too lowly of LLMs, but rather that we think we are very special as humans. LLMs like GPT-4 have “sparks of AGI” in them; they are not just stochastic parrots. There are several academic papers that prove LLMs understand meaning. GPT-4, the full model, not the crippled chat variant for consumers, demonstrated superhuman coding abilities when researchers probed its abilities. It passed Google and Amazon coding interviews with a score of 100 out of 100 and finished the 2-hour assignment in 3 minutes, surpassing all human candidates. And it took that long (3 minutes) only because a human had to paste in the tasks.
You know what, mate, I give up. 🤦‍♂️
Ask ChatGPT everything. Learn nothing, develop no new skills of your own, and let’s see what you regret on your deathbed. I’m willing to bet most people are going to regret not learning to paint, write, compose music, play an instrument, etc., rather than regret all the things they didn’t get a bot to do for them.
GPT doesn’t prevent you from doing whatever you want.
I like AI’s immense power, but using it to replace human interaction with corporation-controlled LLM API bots is a ridiculous idea.
A major part of how we interact, not a replacement for human interaction, and definitely not a centralized corporate AI in charge.
My vision of what interaction could look like on Lemmy with AI tools (with a few more years of progress):
Instant summaries on long posts
Live fact-checking with additional sources
Complete translations that maintain sentiment
Advanced spell check and alternative grammar suggestions live while typing
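As a rough sketch of what just the instant-summaries piece could look like (purely hypothetical; the model name is a placeholder, and in practice an instance would want a model it controls plus a clear machine-generated label):

```python
from openai import OpenAI

client = OpenAI()  # placeholder: any hosted or locally run model with a compatible API

def summarize(post_text: str, max_words: int = 60) -> str:
    """Return a short, neutral summary of a long post."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": f"Summarize the following forum post in at most {max_words} words. "
                           "Stay neutral and keep the author's main point.",
            },
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content

# A Lemmy client could show summarize(long_post_text) next to long posts,
# clearly labeled as machine-generated.
```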
Imagine if everyone had a small Wikipedia genie on their shoulder, telling you on demand about whatever subject you’re writing about. We all know Wikipedia has mistakes and that some expert-level stuff really is best left to experts. I tend to go back and forth with Google a lot if I want to get the details in a post right, and it has the same problems. But in general Wikipedia and the internet are much more right than the average single person. For some stuff I would rather have a transparent, trusted AI provide the details than a random internet stranger who may only claim to have done research, or worse, has malicious goals to spread misinformation.