Are you using zfs?
You can’t trust any full disk encryption without one, because only a TPM can verify that your bootloader and initrd haven’t been tampered with.
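As a hedged sketch of what that looks like in practice: systemd-cryptenroll can bind a LUKS volume to the TPM so it only unlocks when the measured boot state matches (the device path is an example):

```
# Bind an existing LUKS volume to the TPM; PCR 7 measures Secure Boot state.
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3
# Then add tpm2-device=auto to the volume's options in /etc/crypttab.
```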
What’s the problem with that script? That’s such a basic use case and not very hard to do at all in systemd.
Where do you struggle with it? Can we maybe help with something?
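Without knowing the script, here’s a minimal sketch assuming it just needs to run on a schedule; unit names, paths and the calendar spec are all placeholders:

```
# Create a oneshot service that wraps the script.
sudo tee /etc/systemd/system/myscript.service >/dev/null <<'EOF'
[Unit]
Description=Run myscript

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myscript.sh
EOF

# Create a timer that triggers the service daily.
sudo tee /etc/systemd/system/myscript.timer >/dev/null <<'EOF'
[Unit]
Description=Run myscript daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now myscript.timer
```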
Replace the Debian apt sources with Ubuntu ones, do a system upgrade, and install the ubuntu-desktop package; now you have Ubuntu.
It’s been a while since I have done this, but it’s totally possible.
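Roughly, and heavily hedged (the release name and mirrors are examples; expect to resolve package conflicts by hand and test on a throwaway machine first):

```
# Back up the old sources, then point apt at Ubuntu.
sudo cp /etc/apt/sources.list /etc/apt/sources.list.debian.bak
sudo tee /etc/apt/sources.list >/dev/null <<'EOF'
deb http://archive.ubuntu.com/ubuntu jammy main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu jammy-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu jammy-security main restricted universe multiverse
EOF

sudo apt update
sudo apt full-upgrade          # pulls the system over to Ubuntu packages
sudo apt install ubuntu-desktop
```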
We did this transition from Ubuntu to Debian at work with thousands of workstations.
It requires a bit of time and testing but it’s possible.
Not really. You can still use dm-integrity under a normal raid and get checksumming and normal performance, which is better and faster than using btrfs.
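For reference, a hedged sketch of what that looks like under an md mirror (device names are examples; integritysetup ships with cryptsetup):

```
# Add a checksumming layer on each member device.
sudo integritysetup format /dev/sdb1
sudo integritysetup open /dev/sdb1 int-sdb1
sudo integritysetup format /dev/sdc1
sudo integritysetup open /dev/sdc1 int-sdc1

# Build the mirror on top; a checksum failure surfaces as a read error,
# so md can repair from the healthy leg.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
  /dev/mapper/int-sdb1 /dev/mapper/int-sdc1
```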
But in any case, I’d recommend just going with zfs because it has all the features and is plenty fast.
From the Arch wiki:
Disabling CoW in Btrfs also disables checksums. Btrfs will not be able to detect corrupted nodatacow files. When combined with RAID 1, power outages or other sources of corruption can cause the data to become out of sync.
No thanks
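For context, “disabling CoW” here usually means the nodatacow mount option or the +C file attribute; the path below is just an example:

```
sudo chattr +C /srv/vm-images   # new files here skip CoW, and with it checksums
lsattr -d /srv/vm-images        # shows the 'C' flag
```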
If you are planning to have any kind of database with regular random writes, stay away from btrfs. It’s roughly 4-5x slower than zfs and will slowly fragment itself to death.
I’m migrating a server from btrfs to zfs right now for this very reason. I have multiple large MySQL and SQLite tables on it and they have accumulated >100k file fragments each and have become abysmally slow. There are lots of benchmarks out there that show that zfs does not have this issue and even when both filesystems are clean, database performance is significantly higher on zfs.
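If you want to check your own setup: filefrag shows the damage, and on the zfs side a dataset tuned to the database’s page size helps. A sketch, assuming a pool named “tank” (16k matches InnoDB’s default page size):

```
# Count extents on an existing btrfs-hosted table file.
sudo filefrag /var/lib/mysql/ibdata1

# Dataset tuned for database workloads.
sudo zfs create -o recordsize=16k -o compression=lz4 \
  -o logbias=throughput tank/mysql
```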
If you don’t want a CoW filesystem, then XFS on LVM raid for databases or ext4 on LVM for everything else is probably fine.
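A hedged sketch of that layout (disks, size and names are examples):

```
# LVM raid1 volume with XFS on top.
sudo vgcreate vgdata /dev/sdb /dev/sdc
sudo lvcreate --type raid1 -m1 -L 200G -n dblv vgdata
sudo mkfs.xfs /dev/vgdata/dblv
```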
Most of the time, it’s enough to copy the whole EFI partition to the new machine and update whatever boot entries are in there to point to the right new partitions.
If you’re switching to something like zfs, it’s a bit more involved: you need to boot a live Linux and chroot into the new “/” with /boot mounted and /dev, /proc and /sys bind-mounted into the chroot.
Then you can run the distro-appropriate command to reinstall/update grub into the EFI partition; it will usually take care of adding the right drivers.
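Sketched out, with device names and paths as examples (the live system needs to be able to mount the new root, e.g. have zfs support; for a zfs root you’d `zpool import -R /mnt` instead of the plain mount):

```
sudo mount /dev/nvme0n1p2 /mnt              # new root
sudo mount /dev/nvme0n1p1 /mnt/boot/efi     # EFI partition
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt

# Inside the chroot, on Debian/Ubuntu:
grub-install --target=x86_64-efi --efi-directory=/boot/efi
update-grub
```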
Btrfs has been in the mainline kernel since 2.6.29; that’s 14 years ago, my friend 😃
It’s been included in every major distro for a long, long time.
I disagree. You usually just need to get /boot and your EFI things right on the new disk, rsync stuff over, fix any references to the old disks in /etc/fstab and maybe your grub config, and you’re done. I have done this migration >10 times over the years onto different filesystems, partition layouts and raid configurations, and it’s never been particularly hard.
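The rsync step, sketched with example paths (-x stays on one filesystem; -HAX preserves hardlinks, ACLs and xattrs):

```
sudo rsync -aHAXx --info=progress2 / /mnt/newroot/

# Then point /etc/fstab (and the grub config, if it names devices)
# at the new partition UUIDs.
sudo blkid
sudoedit /mnt/newroot/etc/fstab
```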
Pretty much every alerting system I know also has a filter option to only apply automated discovery rules to certain filesystem types.
But yes, most don’t filter squashfs or read-only mounted snapshots by default, and it sucks.
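One hedged example: Prometheus node_exporter can exclude filesystem types from discovery (flag name as of node_exporter 1.x; older versions call it --collector.filesystem.ignored-fs-types):

```
node_exporter \
  --collector.filesystem.fs-types-exclude='^(squashfs|tmpfs|overlay)$'
```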