• 0 Posts
  • 8 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • Garuda.

    I’d never used Arch or Arch derivatives, but if this is the experience, I understand the memes a little more.

    The package management is easy and very up to date. I like the Btrfs snapshots, and it had everything game-related available right out of the box. My Nvidia graphics card, which was the one thing I couldn’t get working on Ubuntu, performed as well as or better than it did under Windows.

    The only thing that didn’t work for me was ZFS, but because everything else was working so well, I just went another route.


  • Longtime user of every OS, but I’ve been using Linux since the days of Mandrake in ‘96. I switched to Debian shortly thereafter, though mostly as a server/SDN device, then had a long spell on Ubuntu starting with 8.something. While I don’t use Linux on the desktop as my primary work OS, I do use it daily.

    Recently, annoyed with Windows, which I only booted up for gaming, I gave gaming on Linux a try. It’s been mostly flawless, even when the games aren’t Linux-native. Hilariously, Ubuntu was awful and I couldn’t get it working, so I’ve switched to something more gaming-specific and couldn’t be happier.






  • You’re conferring a level of agency where none exists.

    It appears to “understand.” It appears to be “knowledgeable.”

    But LLMs do neither of those things.

    Take this note from an OpenAI dev:

    It’s that these models have leveraged so much data that they’ve been able to map out relationships between words (or images) in such a way as to be able to generate what seem like new versions of those things.

    I grant you that an LLM has more base-level knowledge than any one human, but again, this is thanks to a terrifyingly large dataset and a design that means it can access this data reasonably reliably.

    But it is still a prediction model. It just has more context, a better design, and (most importantly) more data, which lets it make predictions at a level never before seen.

    If you’ve ever had a chance to play with a model at a level where you can control some of its basic parameters, it offers a glimpse into just how much of a prediction machine it really is (the toy sketch at the end of this comment illustrates the idea).

    My favourite game for a while was to give Midjourney a wildly vague prompt but crank the chaos up to 100 (literally the --chaos flag at its highest value) to see what kinds of wild connections exist but get filtered out during “normal” use.

    The same went for the GPT-3.5 API in the “early days” - you could request multiple versions of the response and see the sausage being made, to a very small degree.

    None of this takes away from the sense of magic in using these tools. It just helps frame what’s going on under the hood.
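
    To make the “prediction machine with knobs you can turn” point concrete, here is a rough toy sketch in Python. It is not any real model or API - the candidate words and scores are made up - it just shows softmax sampling with a temperature knob and several sampled candidates, the same kind of controls as Midjourney’s chaos setting or the temperature and n parameters on the GPT-3.5 API.

```python
# Toy illustration only: next-token prediction as sampling from a
# probability distribution, with a temperature knob. The candidate
# words and their scores are invented for the example.
import math
import random

# Hypothetical scores ("logits") a model might assign to candidate
# next words after the prompt "The cat sat on the".
logits = {"mat": 4.0, "floor": 3.1, "roof": 2.2, "moon": 0.5, "spreadsheet": -1.0}

def sample_next_word(logits, temperature=1.0):
    """Turn scores into probabilities (softmax) and draw one word.

    Low temperature -> almost always the top-scoring word ("mat").
    High temperature -> the low-probability oddities start to appear.
    """
    scaled = {w: s / temperature for w, s in logits.items()}
    biggest = max(scaled.values())
    weights = {w: math.exp(s - biggest) for w, s in scaled.items()}
    total = sum(weights.values())
    probs = {w: v / total for w, v in weights.items()}
    words = list(probs.keys())
    return random.choices(words, weights=list(probs.values()), k=1)[0]

# Draw several candidate continuations (like asking an API for n > 1
# responses) at a low and a high temperature.
for temperature in (0.2, 1.5):
    samples = [sample_next_word(logits, temperature) for _ in range(5)]
    print(f"temperature={temperature}: {samples}")
```

    At low temperature you get the safe, most-probable word almost every time; at high temperature the weird low-probability connections start showing up, which is the same effect as cranking chaos or temperature on the real tools.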