cultural reviewer and dabbler in stylistic premonitions

  • 11 Posts
  • 128 Comments
Joined 3 years ago
Cake day: January 17th, 2022

  • security updates are for cowards, amirite? 😂

    seriously though, Debian 7 stopped receiving security updates a couple of years prior to the last time you rebooted, and there have been a lot of exploitable vulnerabilities fixed between then and now. do your family a favor and replace that mailserver!

    From the 2006 modification times, i wonder: did you actually start off with a 3.1 (sarge) install, upgrade it to 7 (wheezy), and then stop upgrading at some point? if so, personally i would be tempted to try continuing to upgrade it all the way to bookworm, just to marvel at debian stable’s stability… but only after moving its services to a fresh system :)
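    a rough sketch of what that would look like (illustrative only: old releases live on archive.debian.org, you would repeat the dist-upgrade step once per release along the wheezy → jessie → stretch → buster → bullseye → bookworm path, and you should read each release's upgrade notes before each hop):

        # point apt at the archive for the release you're currently on (wheezy shown here);
        # archived releases have expired signatures, so validity checking may need to be relaxed
        echo 'deb http://archive.debian.org/debian wheezy main' > /etc/apt/sources.list
        apt-get -o Acquire::Check-Valid-Until=false update && apt-get upgrade

        # switch sources.list to the *next* release and dist-upgrade to it
        sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
        apt-get -o Acquire::Check-Valid-Until=false update && apt-get dist-upgrade

        # reboot into the new kernel, then repeat the previous two steps for each subsequent release
        reboot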

  • Arthur Besse@lemmy.ml to 196@lemmy.blahaj.zone · firefox rule (edited, 2 months ago)

    I work in tech, and I don’t understand people’s obsession with having all their RAM free at all times.

    If you don’t use it, why do you have it?

    Windows (not the best OS, but the one I know the most about) will lie to you about how much memory you have that’s free. It puts data in RAM as cache. In the event you need that data, it’s already loaded in RAM. Usually this is stuff like DLLs and executables for programs.

    There’s a difference between “free” memory, and “available” memory.

    Linux and macOS do the same, although I wouldn’t call it lying per se :)

    There is certainly a lack of understanding of the difference between free and available RAM. TLDR: yes, free RAM is indeed wasted RAM.

    If you actually have a lot of free RAM, it’s probably because you either booted or freed a lot of RAM very recently. After using your computer for a while, most of your available RAM should not be free but rather in use as page cache and other caches.
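    on Linux you can see the distinction directly with free (the numbers here are invented for illustration):

        $ free -h
                       total        used        free      shared  buff/cache   available
        Mem:            15Gi       4.2Gi       618Mi       310Mi        10Gi        10Gi
        Swap:          2.0Gi          0B       2.0Gi

        # "free" is memory nothing is using at all; "available" is an estimate of how much
        # could be handed out to programs without swapping, which includes most of the
        # buff/cache column (i.e. the page cache and friends).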

    After a program has just read and/or written more data from disk than will fit in available RAM, the kernel’s page cache (which is typically the bulk of that not-free-but-available memory) should be mostly populated by the most recent of those operations. This means that if that program (or any other program) reads those files again before they are evicted from the cache by other activity, it will not need to wait for the disk and will get the data back much faster.
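    you can watch this happen with any file that fits in RAM (the path is just an example); the second read should be dramatically faster because it is served from the page cache rather than the disk:

        $ time cat /var/tmp/big-file.bin > /dev/null    # first read: limited by the disk
        $ time cat /var/tmp/big-file.bin > /dev/null    # second read: served from the page cache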

    However, managing all of this is the kernel’s job, and the not-free-but-available RAM being used for page cache is not (in any OS, as far as I know, though I mostly know Linux) attributed to the program(s) responsible for putting things there.

    So, when people complain about an application using 40% of their RAM, it is not necessarily because they misunderstand free-vs-available RAM: the used number for an application does not include the portion of the system’s not-free-but-available RAM which that application is also responsible for occupying.

    (If you want to know which programs and/or which files are responsible for occupying your page cache… on Linux at least, it is not really possible without instrumenting your kernel. The kernel is just tracking blocks. There are several tools which will let you see which blocks of a given file are cached, but there isn’t a reverse mapping from blocks to files.)
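    (for the forward direction, i.e. "given a file, how much of it is currently cached?", tools like vmtouch or util-linux’s fincore will answer it; assuming they are installed, and with example paths:)

        $ vmtouch -v /var/tmp/big-file.bin    # prints a map of which pages of the file are resident
        $ fincore /var/log/syslog             # prints resident pages vs. total size, per file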


  • (disclaimer: this information might be years out of date but i think it is still accurate?)

    SSH doesn’t have a null cipher, and even if it did, using it still wouldn’t make an SSH tunnel as fast as a plain TCP connection, because SSH has its own windowing mechanism, which is actually what is slowing you down. Doing the cryptography at line speed should not be a problem on a modern CPU.

    Even though SSH tunnels on your LAN are probably faster than your internet connection (albeit slower than plain TCP connections on the LAN), SSH’s windowing overhead will also slow down transfers over the internet (versus rsync or anything else over bare TCP), because higher latency makes the problem worse. (Whenever the window is full, the connection just sits there, not transmitting anything…)
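    if you want to see the overhead for yourself, a crude comparison might look like this (netcat flags vary between implementations, and the hostname and port are placeholders):

        # raw TCP: on the receiving machine
        nc -l 9000 > /dev/null
        # on the sending machine, push 1 GiB of zeroes and note the rate dd reports
        dd if=/dev/zero bs=1M count=1024 | nc receiver.example 9000

        # same payload through an SSH channel, for comparison
        dd if=/dev/zero bs=1M count=1024 | ssh receiver.example 'cat > /dev/null'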

    So, to answer OP’s question:

    • if you want to rsync over SSH, you usually don’t need a daemon (or to specify --rsh=ssh as that is the default).
    • if the reason you want to use the rsync daemon is performance, then you don’t want to use SSH; you’ll need to open a port for it.
    • besides performance, there are also some rsync features which are only available in “daemon mode”. if you want to use those, you have at least 3 options:
      • open a port for your rsync daemon, and don’t use SSH (bonus: you also get the performance benefit. downside: no encryption.)
      • set up an SSH tunnel and tell the rsync client it is connecting to a daemon on localhost (see the sketch after this list)
      • look at man rsync and read the section referred to by this:
        • The remote-shell transport is used whenever the source or destination path contains a single colon (:) separator after a host specification. Contacting an rsync daemon directly happens when the source or destination path contains a double colon (::) separator after a host specification, OR when an rsync:// URL is specified (see also the USING RSYNC-DAEMON FEATURES VIA A REMOTE-SHELL CONNECTION section for an exception to this latter rule).
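    as an illustration of the second option (the module name, paths, and local port here are made up; the daemon’s default port is 873):

        # on the server: a minimal /etc/rsyncd.conf defining one module, then start the daemon
        #   [backups]
        #       path = /srv/backups
        #       read only = false
        rsync --daemon

        # on the client: forward a local port to the daemon over SSH...
        ssh -N -L 8730:localhost:873 user@server &

        # ...and point the rsync client at the forwarded port, using daemon (rsync://) syntax
        rsync -av ./photos/ rsync://localhost:8730/backups/

    (the man page section quoted above covers the third option: combining user@server::module with --rsh=ssh spawns a single-use daemon over the SSH connection, giving you daemon features without opening a new port.)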

    HTH.

  • Funny that the blog calls it a “failed attempt at a backdoor” while neglecting to mention that the grsec post (which it links to, and acknowledges as the source of the story) had been updated months prior to explicitly refute that characterization:

    5/22/2020 Update: This kind of update should not have been necessary, but due to irresponsible journalists and the nature of social media, it is important to make some things perfectly clear:

    Nowhere did we claim this was anything more than a trivially exploitable vulnerability. It is not a backdoor or an attempted backdoor, the term does not appear elsewhere in this blog at all; any suggestion of the sort was fabricated by irresponsible journalists who did not contact us and do not speak for us.

    There is no chance this code would have passed review and be merged. No one can push or force code upstream.

    This code is not characteristic of the quality of other code contributed upstream by Huawei. Contrary to baseless assertions from some journalists, this is not Huawei’s first attempt at contributing to the kernel, in fact they’ve been a frequent contributor for some time.

  • Arthur Besse@lemmy.ml to 196@lemmy.blahaj.zone · Transmasc Godzilla Rule (edited, 3 months ago)

    After a minute of research I’m inclined to believe Godzilla egg-laying only happened in Roland Emmerich’s 1998 film.

    Here is some contemporary reporting about it: https://www.chicagotribune.com/1998/05/19/godzilla-lays-an-egg-does-this-surprise-you/

    Big, buff and bodacious, he’s so cool he can even reproduce himself–or herself. Turns out, Godzilla’s a hermaphrodite.

    Consistent with the mythology, this giant lizard is a mutant by-product of nuclear radiation. As the only member of its species to have survived a bomb test in French Polynesia, Godzilla must assume male and female reproductive functions to maintain the lineage.

    Why Godzilla feels compelled to travel all the way to Manhattan to lay its eggs is a mystery not clearly explained in the script, but, like any Sinatra fan, the monster probably thought, “If I can make it there, I’ll make it anywhere.” So, it was off to New York, New York, where–like the Knicks–the creature lays a lot of eggs in Madison Square Garden.

    see also: https://fictionhorizon.com/how-does-godzilla-reproduce/