Wait, which list of filtered IPs are you even talking about? The list in the article is a list of unique kernel versions, not IPs.
I’m not sure why you say it’s “artificially” inflated. Non-Linux systems are also affected.
this will affect almost nobody
Is that really true? From https://www.evilsocket.net/2024/09/26/Attacking-UNIX-systems-via-CUPS-Part-I/
Full disclosure, I’ve been scanning the entire public internet IPv4 ranges several times a day for weeks, sending the UDP packet and logging whatever connected back. And I’ve got back connections from hundreds of thousands of devices, with peaks of 200-300K concurrent devices.
They can, and are being made. E.g. the state of accessibility on GNOME.
I think you are replying to the wrong person?
I did not say it helps with accuracy. I did not say LLMs will get better. I did not even say we should use LLMs.
But even if I did, none of your points are relevant for the Firefox use case.
Wikipedia is no less reliable than other content. There’s even academic research about it (no, I will not dig for sources now, so feel free to not believe it). But factual correctness only matters for models that deal with facts: e.g. for a translation model it does not matter.
Reddit has a massive amount of user-generated content it owns, e.g. comments. Again, the factual correctness only matters in some contexts, not all.
I’m not sure why you keep mentioning LLMs since that is not what is being discussed. Firefox has no plans to use some LLM to generate content where facts play an important role.
What do you mean “full set if data”?
Obviously you cannot train on 100% of material ever created, so you pick a subset. There is a lot of permissively licensed content (e.g. Wikipedia) and content you can license (e.g. Reddit). While not sufficient for an advanced LLM, it certainly is for smaller models that do not need wide knowledge.
I’d say the main differences are at least
Feel free to assume that, but don’t claim an assumption as a fact.
You recommended using native package managers. How many of them have been audited?
You know what else we shouldn’t assume? That it doesn’t have a security feature. And we shouldn’t then go around posting that incorrect assumption as if it were a fact. You know, like you did.
There is no general copyright issue with AIs. It depends entirely on the training material (if even then), so it’s not possible to make blanket statements like that. Banning a technology because a particular implementation is problematic makes no sense.
I’m confused why you think it would be anything else, and why you are so dead set on this. Repos include a signing key. There is an option to skip signature checking. And you think that signature checking is not used during downloads, despite this?
Ok, here are a few issues related to signatures being checked by default when downloading:
https://github.com/flatpak/flatpak/issues/4836
https://github.com/flatpak/flatpak/issues/5657
https://github.com/flatpak/flatpak/issues/3769
https://github.com/flatpak/flatpak/issues/5246
https://askubuntu.com/questions/1433512/flatpak-cant-check-signature-public-key-not-found
https://stackoverflow.com/questions/70839691/flatpak-not-working-apparently-gpg-issue
Flatpak repos are signed and the signature is checked when downloading.
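To make that concrete, here is a minimal sketch (the repo name, URL and key file are placeholders, not a real repo): adding a remote with a GPG key makes downloads from it signature-checked by default, while skipping verification is an explicit opt-out.

    # Add a remote with an explicitly imported signing key;
    # pulls from it are then GPG-verified by default:
    flatpak remote-add --gpg-import=example.gpg example https://repo.example.com/repo

    # Disabling verification requires explicitly opting out:
    flatpak remote-add --no-gpg-verify example-unverified https://repo.example.com/repo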
It’s OK to be wrong. Dying on this hill seems pretty weird to me.
From the page:
It is recommended that OSTree repositories are verified using GPG whenever they are used. However, if you want to disable GPG verification, the --no-gpg-verify option can be used when a remote is added.
That is talking about downloading as well. Yes, you can turn it off, but you can usually do the same with native package managers, e.g. pacman: https://wiki.archlinux.org/title/Pacman/Package_signing
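For comparison, this is roughly what the pacman side of that looks like (the value shown is Arch’s shipped default, per the wiki page above):

    # /etc/pacman.conf: the stock default requires signed packages
    SigLevel = Required DatabaseOptional

    # Changing it to "SigLevel = Never" would disable verification,
    # the moral equivalent of flatpak's --no-gpg-verify.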
That doesn’t seem to be true? https://flatpak-testing.readthedocs.io/en/latest/distributing-applications.html#gpg-signatures
In what way don’t they “securely download” ?
Why host it locally in that case, and why host it on a Pi? Seems rather restrictive for that use case.
For number 4, consider switching to e.g. KDE, which is an alternative desktop environment you can install in Debian.
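If you want to try it without reinstalling, something like this should work on Debian (task-kde-desktop is the tasksel metapackage; double-check the name for your release):

    # Install KDE Plasma alongside the current desktop:
    sudo apt install task-kde-desktop
    # Then pick the Plasma session from the login screen.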
If you reinstall, consider Kubuntu, which is Ubuntu but with the KDE desktop. Search for screenshots first so you know if it is something you like.
Number 2 is by design. Running as root is extremely dangerous, and passwordless sudo is not much better. You can, of course, allow sudo without a password by editing the /etc/sudoers file, but be conscious of the security implications (any program you run would essentially have full access to everything, without you ever knowing).
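If you decide to do it anyway, the usual form is something like this (“alice” is a placeholder username; always edit via visudo so a syntax error can’t lock you out of sudo):

    # Edit with: sudo visudo -f /etc/sudoers.d/99-nopasswd
    # Grants alice passwordless sudo for ALL commands; see the caveat above.
    alice ALL=(ALL:ALL) NOPASSWD: ALL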
You would be vulnerable on Windows if you were running CUPS, which you probably are not. But CUPS is not tied to Linux: it is commonly used on e.g. the BSDs, and Apple maintains their own fork for macOS (I have not heard anything about that one being vulnerable, though).
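If you want to check your own exposure on a Linux box, something along these lines should do it (standard systemd and iproute2 tools; adjust for your distro):

    # Is cups-browsed running, and is anything listening on UDP 631?
    systemctl status cups-browsed
    sudo ss -ulpn 'sport = :631'

    # If you don't rely on network printer discovery, this closes the vector:
    sudo systemctl disable --now cups-browsed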