

But it’s not obvious either. When I say ‘apt install firefox’, especially after adding their repository to sources.list, I’d expect to get a .deb from Mozilla. Silently overriding my commands rubs me very much the wrong way.
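For what it’s worth, an apt pin along these lines should make apt actually prefer the Mozilla .deb. This is just a sketch, assuming the repository in question is Mozilla’s own packages.mozilla.org one and that a file under /etc/apt/preferences.d is in play:

```
# /etc/apt/preferences.d/mozilla
# Make Mozilla's repo win over the Ubuntu archive, which would
# otherwise pull in the firefox snap transition package.
Package: *
Pin: origin packages.mozilla.org
Pin-Priority: 1000
```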
If only there were some alternative other than throwing my old stuff in the bin.
Edit: I missed the ‘un’ on ‘unsupported’. It’s supposed to be a joke.
Ubuntu or something based on it
I would not recommend Ubuntu, especially in this case. System updates, snapd mostly, have gone downhill and it’s nearly impossible to avoid reboots for extended periods. Debian seems to be as solid as it’s always been.
I just use xargs -n1. Or -exec with find.
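Something like this, just to show the shape of it (the file names are made up):

```
# run the command once per argument instead of batching everything
printf '%s\n' a.log b.log c.log | xargs -n1 gzip

# or let find call the command directly; {} is replaced by each match
find . -name '*.log' -exec gzip {} \;
```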
I don’t bother to take out the screws. I just drill a handful of holes through the whole thing. Or, if you’re really paranoid, a MAP torch is enough to melt the whole thing (don’t breathe the smoke).
Small thing, but I really like it: I have an ~/autoclean_tmp directory on most of the hosts I use as a desktop. Then in crontab I have a find command which automatically deletes files that are 7 days or older. I can throw in stuff I download from the internet, files I copy from other hosts, random text files from setting up new stuff and so on, and they just vanish after a while.
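The crontab line is roughly this (a sketch; the path and schedule are just what I happen to use as an example):

```
# every night at 03:00, delete anything in ~/autoclean_tmp that
# hasn't been modified in 7 full days or more
0 3 * * * find "$HOME/autoclean_tmp" -mindepth 1 -mtime +6 -delete
```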
Jolla had a similar concept back in 2013. I had one, and back then it was a really, really nice phone. Maybe not in the sense that flagship models from the big vendors were, but I really enjoyed the UI, and the modular options were a huge selling point, at least for myself. Then they started working on a tablet, which failed on pretty much all fronts, and the whole company practically disappeared.
I don’t know if an equivalent exists on the fediverse, but r/itsaunixsystem is available on $that_other_platform.
With Linux, the scale alone makes it pretty difficult to maintain any kind of fork. A handful of individuals just can’t compete with a global effort, and it’s pretty well understood that the power Linux has comes from those globally spread devs working towards a common goal. So, should the Linux Foundation cease to exist tomorrow, I’d bet that something similar would rise to take its place.
For the respect/authority side, I don’t really know. Linux is important enough for governments too, so maybe some entity run by the United Nations or something similar could do it?
I’ve worked with both kinds of companies. My current one doesn’t really care about the bus factor, but right now, for me personally, that’s just a bonus, as after every project it would be even more difficult to onboard someone into my position. And then I’ve worked with companies that actively hire people to improve the bus factor. When done correctly that’s a really, really good thing. And when it’s done badly it just grinds everything down to almost a halt, as people spend their time in nonsensical meetings and writing documentation no one really cares about.
Balancing that equation is not an easy task, and people who are good at it deserve every penny they’re paid for it. And, again just for me, if I get run over by a bus tomorrow, then it’s not my problem anymore, and as the company doesn’t really care about that, I won’t either.
Nothing is perfect but “fundamentally broken” is bullshit.
Compared to how things used to work when Ubuntu came to life, it really is fundamentally broken. I’m not the oldest beard around, but I have personally updated both Debian and Ubuntu from an obsolete release to a current one with very few hiccups along the way. Apt/dpkg is just so good that you could literally bring a decade-old installation up to date with almost no effort. The updates ran whenever I chose them to and didn’t break production servers when unattended upgrades were enabled. This is very much not the case with Ubuntu today.
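For reference, the traditional release upgrade is nothing more exotic than this (a sketch; the release names are just examples, and on a real box you’d read the release notes and check sources.list.d too):

```
# point apt at the new release
sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
# refresh the package lists and let apt sort out the rest
sudo apt update
sudo apt dist-upgrade
```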
Hatred for a piece of tech simply because other people said it’s bad, therefore it must be.
I realize that this isn’t directly because of my comment, but there’s plenty of evidence even in this chain that the problems go way deeper than a few individuals ranting over the net that snap is bad. As I already said, it’s objectively worse than the alternatives we’ve had since the 90s. And the way Canonical bundles snap with apt breaks that very long tradition where you could just rely on the fact that, when running a stable distribution, ‘apt-get dist-upgrade’ wouldn’t break your system. And even if it did, you could always fix it manually and get the thing back up to speed. And this isn’t just an old guy ranting about how things were better in the past, as you can still get that very reliable experience today, just not with snapd.
Auto updating is not inherently bad.
I’m not complaining about auto updates. They are very useful and nice to have, even for advanced users. The problem is that even if the snap notification says that ‘software updates now’, it often really doesn’t. Restarting the software, and in some cases even running a manual update, still brings up the notification that the very same software I updated a second ago needs to restart again to update. Rinse and repeat, while losing your current session over and over again.
Also, there’s absolutely no indication of whether anything is actually being done. The notification just nags that I need to stop what I’m doing RIGHT NOW and let the system do whatever it wants instead of the tools I’ve chosen to work for me. I don’t want or need forced interruptions to my workflow, but when I do have a spare minute to stop working, I expect the update process to actually trigger at that very second, not after some random delay, and I also want a progress bar or something to indicate when things are complete and I can resume doing whatever I had in mind.
it just can’t be a problem to postpone snap updates with a simple command.
But it is. The “<your software> is updating now” message just interrupts pretty much everything I’ve been doing, and at that point there’s no way to stop it. And after the update process has finally finished, I pretty much need to reboot to regain control of my system. This is a problem which applies to everybody, regardless of their technical skills.
My computer is a tool, and when I need to actively fight that tool to not interrupt whatever I’m doing, it rubs me very much the wrong way. No matter if it’s just browsing the web, or writing code for the next best thing ever, or watching YouTube, I expect the system to be stable for as long as I want it to be. Then there’s a separate time slot when the system can update and maybe break itself in the process, but I control when that time slot exists.
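For reference, I assume the ‘simple command’ the parent means is one of snapd’s hold options, something along these lines (depending on the snapd version):

```
# hold all snap refreshes indefinitely (newer snapd)
sudo snap refresh --hold
# or postpone them until a given timestamp (older mechanism)
sudo snap set system refresh.hold="2025-12-31T03:00:00Z"
```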
There’s not a single case that I’ve encountered where snap actually solved a problem I had, and there are plenty of times when it was either annoying or just straight up caused more problems. Systemd at least has some advantages over SysVInit, but snap doesn’t even have that.
As mentioned, I’m not the oldest Linux guy around, but I’ve been running Linux for 20+ years, and for ~15 of those it has kept butter on my bread, and snapcraft is easily the most annoying thing that I’ve encountered over that period.
You act as if Snap was bad in any way. Proprietary backend does not equal bad.
I don’t give a rat’s ass if the things I use are proprietary or not. FOSS is obviously nice to have, but if something else does the job better I’m all for it, and I have paid for several pieces of software. But Ubuntu and snap (which are running on the thing I’m writing this with) are just objectively bad. Software updates are even more aggressive than on Windows today, and even if I try to work with the “<this software> updates in X days, restart now to update” notifications, it just doesn’t do what it says it would/should. And once the package is finally updated, the nagging notification returns in a day or two.
Additionally, snap and/or Ubuntu has bricked at least two of my installations in the last few years, Canonical’s solution has broken apt/dpkg in a very fundamental way, and it most definitely has caused way more issues with my Linux stuff over the years than anything else, systemd included.
Trying to twist that into an elitist point of view about FOSS (of which there are plenty, obviously) is misleading and just straight up false. Snapcraft and its implementation are just broken on so many levels and have pushed me away from Ubuntu (and derivatives). Way back when Ubuntu started to gain traction it was a really welcome distribution, and I was a happy user for at least a decade, but as things are now it’s either Debian (mostly for servers) or Mint (on desktops) for me. Whenever I have the choice I won’t even consider Ubuntu as an option, both commercially at work and for my personal things.
I did quickly check the files in update.zip, and it looks like they’re tarballs embedded in a shell script, plus image files containing pretty much the whole operating system on the thing.
You can extract those even without a VM, do whatever you want with the files and package them back up. So you can override version checks and you can inject init.d scripts, binaries and pretty much anything else into the device, including changing passwords in /etc/shadow and so on.
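Roughly how I’d go at it, as a sketch (the offset and file names here are made up; binwalk is just one tool that can locate the embedded archives):

```
# find where the embedded tarballs start inside the updater script
binwalk updater.sh

# carve one out at the offset binwalk reported (12345 is made up)
dd if=updater.sh of=payload.tar.gz bs=1 skip=12345

# unpack, modify (init scripts, /etc/shadow, ...), repack
mkdir -p payload && tar -xzf payload.tar.gz -C payload
tar -czf payload-modified.tar.gz -C payload .
```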
I don’t know how the thing actually operates, but if it isn’t absolutely necessary I’d leave the bootloader (appears to be U-Boot) and the kernel untouched, as messing those up might end with a bricked device. Then the easy options are gone and you’ll need to try to gain access by other means, like interfacing directly with the storage on the device (which most likely means opening the thing up and wiring something like an Arduino or a serial cable to it).
But beyond that, once you override the version checks, it should be possible to upload the same version number over and over again until you have what you need. After that you just need suitable binaries for the hardware/kernel, likely some libraries from the same package, and an init script, and you should be good to go.
The other way you can approach this is to look for web server configurations in the image and see if there are any vulnerabilities (like Apache running as root with an insecure script on top of it that lets you inject system files over HTTP), which might be the safest route, at least for a start.
I’m not really experienced with things like this, but I know a thing or two about Linux, so do your homework before attempting anything, good luck and have fun while tinkering!
The statement is correct: rsync by itself doesn’t use SSH if you run it as a daemon, and if you trigger rsync over SSH then it doesn’t use the daemon but instead starts rsync with the UID of the SSH user.
But you can run rsyncd bound only to localhost and connect to it over an SSH tunnel. That way you get the benefits of the rsync daemon and still have an encrypted connection via SSH.
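A minimal sketch of that setup (the module name, paths and local port are just examples):

```
# /etc/rsyncd.conf on the server: listen on localhost only
#   address = 127.0.0.1
#   [backup]
#       path = /srv/backup
#       read only = false

# on the client: tunnel the rsync port over ssh, then talk to the daemon
ssh -N -L 8730:127.0.0.1:873 user@server &
rsync -av ./data/ rsync://localhost:8730/backup/
```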
Not that it’s really relevant for the discussion, but yes. You can do that, with or without chroot.
That’s obviously not the point, but we’re already comparing apples and oranges with chroot and containers.
I have absolutely zero insight into how the foundation and its financing work, but in general it tends to be easier to greenlight a one-time expense than a recurring monthly payment. So it might be just that: a year’s salary up front to get the gears running again, and some time to fit the ‘infinite’ running cost into plans/forecasts/everything.
I live in Europe. No unpaid overtime here, and productivity requirements are reasonable, so no way to blame my tools for that. And even if my laptop OS broke itself completely, I’d still be productive while reinstalling it, as keeping my tools in running shape is also in my job description. So, as long as I’m not just scratching my balls and scrolling Instagram reels all day long, that’s not a concern.
I’m currently more of a generic sysadmin than a Linux admin, as I do both. But the ‘other stuff’ at work revolves around Teams, Office, Outlook and things like that, so I’m running Win11 with WSL and it’s good enough for what I need from a workstation. There’s technically a policy in place that only Windows workstations are supported, but I suppose I could run Linux (and I have a separate laptop for Linux-only stuff). In the current environment it’s just not worth the hassle, specifically since I need to maintain Windows servers too.
So, I have my terminals, Firefox and whatever else I need, and I also have the mandated office suite and malware protection/IDR/IDS by the book, and in my mindset I’m using company tools for company jobs. If they take longer or could be more efficient or whatever, it’s not my problem. I’ll just browse my (personal) phone while the throbber spins on the screen, and I get paid to do that.
If I switched to Linux I’d need to personally take care of keeping my system up to spec, and I wouldn’t have any kind of helpdesk available should I ever need one. So it’s just simpler to stick with what the company provides, and if it’s slow then it’s not my headache; I’ve accepted that mindset.
The package file, no matter if it’s rpm, deb or something else, contains a few things: the files for the software itself (executables, libraries, documentation, default configuration), dependencies on other packages (as in: to install software A you also need to install library B) and installation scripts for the package. There’s also some metadata, info for uninstallation and things like that, but that’s mostly irrelevant for the end user.
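You can poke at all of that yourself with dpkg-deb (the file name here is just an example):

```
# show the metadata: package name, version, dependencies and so on
dpkg-deb -I some-package.deb

# list the files the package would install
dpkg-deb -c some-package.deb
```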
And then you need a suitable package manager. Like dpkg for deb packages, rpm (the program) for rpm packages and so on. That’s why you mostly can’t install Debian packages on Fedora or the other way around. But derivative distributions, like Kubuntu and Lubuntu, use Ubuntu packages and just have a different default package selection and default configuration. Technically it would be possible to build a Kubuntu package which depends on some library version that isn’t in Lubuntu, and then the packages wouldn’t be compatible, but I’m almost certain that for those specific two it’s not the case.
And then there are things like Linux Mint, which was originally based on Ubuntu, but at least at some point they had builds based on both Debian and Ubuntu and thus different package selections. So there are a ton of nuances here, but for the most part you can ignore them; just follow the documentation for your specific distribution and you’re good to go.
True, but more often than not Mozilla will have newer packages in their repository than any distribution does. And the main problem still is that Ubuntu changed apt and threw snap into the mix where it doesn’t belong.