I thought I’d make this thread for all of you out there who have questions but are afraid to ask them. This is your chance!

I’ll try my best to answer any questions here, but I hope others in the community will contribute too!

  • SineIraEtStudio@midwest.social

    Mods, perhaps a weekly post like this would be beneficial? It would lower the barrier to entry with some readily available support, and help retain converts.

      • Arthur Besse@lemmy.ml (mod)

        Ok, I just stickied this post here, but I am not going to manage making a new one each week :)

        I am an admin at lemmy.ml and was actually only added as a mod to this community so that my deletions would federate (because there was a bug where non-mod admin deletions weren’t federating a while ago). The other mods here are mostly inactive and most of the mod activity is by me and other admins.

        Skimming your history here, you seem alright; would you like to be a mod of /c/[email protected]?

        • Cyclohexane@lemmy.ml (OP, mod)

          Please feel free to make me a mod too. I am not crazy active, but I think my modest contributions will help.

          And I can make this kind of post on a biweekly or monthly basis :) I think weekly might be too often since the post frequency here isn’t crazy high

        • d3Xt3r@lemmy.nz (mod)

          Thanks! Yep, I mentioned you directly seeing as all the other mods here are inactive. I’m on c/linux practically every day, so happy to manage the weekly stickies and help out with the moderation. :)

  • I Cast Fist@programming.dev

    Why does it feel like Linux infighting is the main reason it never takes off? It’s always “distro X sucks”, “installing from Y is stupid”, “any system running Z should burn”.

    • johannesvanderwhales@lemmy.world

      Linux generally has a higher (perceived?) technical barrier to entry, so people who opt to go that route often have strong opinions on exactly what they want from it. Not to mention that technical discussions in general are often centered around deciding what the “right” way to do a thing is. That said, regardless of how the opinions are stated, options aren’t a bad thing.

      • wolf@lemmy.zip

        This.

        It is a ‘built-in’ social problem: only people who care enough to switch to Linux do it, and these people are pre-selected to have strong opinions.

        Exactly the same can be observed in all kinds of alternative projects; for example, alternative housing projects usually die from infighting, because everyone has their own definition of how things should work.

    • bloodfart@lemmy.ml

      Because you don’t have an in-person user group and only interact online, where the same person calling all Mandrake users fetal alcohol syndrome babies doesn’t turn around and help those exact people figure out their smb.conf or trade Sopranos episodes with them at the LAN party.

    • ipkpjersi@lemmy.ml

      Linux users are often very passionate about the software they put on their computers, so they tend to argue about it. I think the customization and the sheer number of choices scare off a lot of beginners, but the main reason is the lack of out-of-the-box compatibility with Windows software. People generally want to use the software they are used to.

    • msch@feddit.de

      It did take off, just not so much on the Desktop. I think those infights are really just opinions and part of further development. Having choices might be a great part of the overall success.

      • I Cast Fist@programming.dev

        just not so much on the Desktop

        Unix already had a significant presence in server computers during the late 80s, migrating to Linux wasn’t a big jump. Besides, the price of zero is a lot more attractive when the alternative option costs several thousand dollars

        • MonkeMischief@lemmy.today

          the price of zero is a lot more attractive when the alternative option costs several thousand dollars

          Dang, I WISH. Places that constantly beg for donations like public libraries and schools will have Windows-everything infrastructure “because market share”. (This is what I was told when I was interviewing for a library IT position)

          They might have gotten “lucky” with a grant at some point, but having a bank of 30+ computers for test-taking that do nothing but run MS Access is a frivolous budget waste, and basically building your house on sand, when those resources could go to, I dunno… paying teachers, maybe?

          • Trainguyrom@reddthat.com

            Licensing is weird especially in schools. It may very well be practically free for them to license. Or for very small numbers of computers they might be able to come out ahead by only needing to hire tech staff that are competent with Windows compared to the cost of staff competent with Linux. Put another way, in my IT degree program every single person in my graduating class was very competent as a Windows admin, but only a handful of us were any good with Linux (with a couple actively avoiding Linux for being different)

    • Cyclohexane@lemmy.ml (OP, mod)

      Doesn’t feel like that to me. I’ll need to see evidence that that is the main reason. It could be but I just don’t see it.

      • I Cast Fist@programming.dev

        I mean, Wayland is still a hot topic, as are snaps and flatpaks. Years ago it was how the GTK2 to GTK3 upgrade messed up Gnome (not unlike the python 2 to 3 upgrade), some hardcore people still want to fight against systemd. Maybe it’s just “the loud detractors”, dunno

        • Cyclohexane@lemmy.ml (OP, mod)

          Why would one be discouraged by the fact that people have options and opinions on them? That’s the part I’m not buying. I don’t disagree that people do in fact disagree and argue. I don’t know if I’d call it fighting. People being unreasonably aggressive about it are rare.

          I for one am glad that people argue. It helps me explore different options without going through the effort of trying every single one myself.

          • billgamesh@lemmy.ml

            I’m using Wayland right now, but still use X11 sometimes. I love the discussion and different viewpoints. They are different protocols, with different strengths and weaknesses. People talking about it is a virtue, in my opinion.

            • ObliviousEnlightenment@lemmy.world

              I can only use x11 myself. The drivers for Wayland on nvidia aren’t ready for prime time yet, my browser flickers and some games don’t render properly. I’m frankly surprised the KDE folks shipped it out

            • Captain Aggravated@sh.itjust.works

              Seeing as I’m on Mint Cinnamon and using an Nvidia card, I’ve never even tried to run Wayland on this machine. It seems to work okay on the little Lenovo I put Fedora GNOME on. X11 is still working remarkably well for me, and I’m looking forward to the new features in Wayland once the last few kinks are worked out.

            • MonkeMischief@lemmy.today

              I like the fact that I can exercise my difficulty with usage commitment by installing both and switching between them :D.

              Wayland is so buttery smooth it feels like I just upgraded my computer for free…but I still get some window Z-fighting and screen recording problems and other weirdness.

              I’m glad X11 is still there to fall back on, even if it really feels janky from an experience point of view now.

              • billgamesh@lemmy.ml

                For me, it’s building software from source on musl. Just one more variable to contend with

  • DosDude👾@retrolemmy.com

    Is there a way to remove having to enter my password for everything?

    Wake computer from Screensaver? Password.
    Install something? Password.
    Updates (the biggest one; updates should, in my opinion, just work without a password, because staying up to date is important for security)? Password.

    I understand sudo needs a password, but all the other stuff I just want off. The frequency is ridiculous. I don’t ever leave my house with my computer, and I don’t want to enter a password for my wife every time she wants to use it.

    • lemmyreader@lemmy.ml

      I understand sudo needs a password

      You can configure sudo to not need a password for certain commands. Unfortunately the syntax and documentation for that are not easily readable. Doas, which can be installed and used alongside sudo, is easier.
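
      For example, a rule that skips the password only for the update commands might look like this (a sketch: “alice” and the apt paths are placeholders, and the file should only ever be edited with visudo):

      alice ALL=(ALL) NOPASSWD: /usr/bin/apt update, /usr/bin/apt upgrade

      The rough doas equivalent in /etc/doas.conf would be:

      permit nopass alice as root cmd apt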

      For software updates you can go for unattended-upgrades, though if you turn off your computer while it is upgrading software, you may have to fix the broken pieces.

      • DosDude👾@retrolemmy.com

        I’ve tried unattended-upgrades once. And I couldn’t get it to work back then. It might be more user friendly now. Or it could just be me.

        • lemmyreader@lemmy.ml

          It’s not really user friendly, at least not as I know it. But it’s useful for servers, and for desktop computers that stay on for a long time. It would be a matter of enabling or disabling it with sudo dpkg-reconfigure unattended-upgrades, granted that you have the unattended-upgrades package installed. In that case I’m not sure when the background updates will start, though according to the Debian wiki the time for this can be configured.
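
          On Debian/Ubuntu, the enabled state boils down to two lines in a small apt config file; a minimal sketch (see the Debian wiki for the full set of options):

          # /etc/apt/apt.conf.d/20auto-upgrades
          APT::Periodic::Update-Package-Lists "1";
          APT::Periodic::Unattended-Upgrade "1";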

          But with Ubuntu a desktop user should be able to configure software updated to be done automatically via a GUI. https://help.ubuntu.com/community/AutomaticSecurityUpdates#Using_GNOME_Update_Manager

    • Nibodhika@lemmy.world

      I understand sudo needs a password,but all the other stuff I just want off.

      Sudo doesn’t need a password; in fact, I have it configured not to ask on the computers that don’t leave the house. To do this, open the /etc/sudoers file (or a file inside /etc/sudoers.d/) and add a line like:

      nibodhika ALL=(ALL:ALL) NOPASSWD:ALL
      

      You probably already have a similar line, either for your user or for a certain group (usually wheel); you just need to add the NOPASSWD part.

      As for the other parts: you can configure the computer to not lock the screen (just turn it off), and for updates it depends on the distro/DE, but having passwordless sudo allows you to update via the terminal without a password (although it should be possible to configure the GUI to work passwordless too).

    • shadowintheday2@lemmy.world

      You can configure this behavior for CLI, and by proxy could run GUI programs that require elevation through the CLI:

      https://wiki.archlinux.org/title/Sudo#Using_visudo

      Defaults passwd_timeout=0 (avoids long-running processes/updates timing out while waiting for the sudo password)

      Defaults timestamp_type=global (makes the typed password and its expiry valid for ALL terminals, so you don’t need to retype sudo’s password for everything you open afterwards)

      Defaults timestamp_timeout=10 (change to any number of minutes you wish)

      The last one may be the difference between having to type the password every 5 minutes and 1-2 times a day. Make sure you take the security implications into account.

    • teawrecks@sopuli.xyz

      For wake from screensaver/sleep, this should be configurable. Your window manager is locking your session, so you probably just need to turn that option off.

      For installations and updates, I suspect you’re used to Windows-style UAC where it just asks you Yes or No for admin access in a modal overlay. As I understand it, this is easier said than done on linux due to an insistence on never running GUI applications as admin, which makes sense given how responsibilities are divided and the security and technical challenges involved. I will say, I agree 100% that this is a serious area that’s lacking for linux, but I also (think I) understand why no one has implemented something similar to UAC. I’ll try to give the shortest version I can:

      All programs (on both Windows and Linux) are run as a user. It’s always possible for any program to have a bug that gives another program the opportunity to hijack it and start executing arbitrary, malicious code as that user. For this reason, the philosophical stance on all OSes is: if that’s going to happen, let’s at least not hand out admin access to the whole machine if we can avoid it - so let’s run as much as possible as an unprivileged user.

      On Linux, the kernel-level processes and the admin (root-level) account are fundamentally detached from running anything graphical. This means that it’s very hard to securely, and generically, pop up a window with just a Yes or No box to grant admin-level permissions. You can’t trust the window manager - it’s also unprivileged - but even if you could, it might be designed in a supremely insecure way and allow just any app with a window to see and interact with any other app’s windows (Xorg). So it’s not safe to just pop up a simple Yes/No box: any other unprivileged application could request root permissions and then click Yes itself before you even see the prompt. Polkit is possible because even if another app can press OK, you still need to enter the password (it’s not clear to me how you keep other unprivileged apps from seeing the keystrokes typed into the polkit prompt).
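
      As a concrete illustration, polkit ships a helper called pkexec that performs exactly this kind of password-gated elevation (assuming polkit is installed, which it is on most desktop distros):

      # pops up the polkit authentication dialog, then runs the command as root
      pkexec cat /etc/shadow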

      On windows, since the admin/kernel level stuff is so tightly tied to the specific GUI that a user will be using, it can overlay its own GUI on top of all the other windows, and securely pop in to just say, “hey, this app wants to run as admin, is that cool?” and no other app running in user mode even knows it’s happening, not even their own window manager which is also running unprivileged. The default setting of UAC is to just prompt Yes/No, but if you crank it to max security you get something like linux (prompt for the password every time), and if you crank it to lowest security you get something closer to what others are commenting (disable the prompt, run things as root, and cross your fingers that nothing sneaks in).

      I do think that this is a big deal when it comes to the adoption of linux over windows, so I would like to see someone come up with a kernel module or whatever is needed to make it happen. If someone who knows linux better than me can correct me where I’m wrong, I’d love to learn more, but that is how I understand it currently.

  • cosmicrookie@lemmy.world

    In the terminal, why can’t I paste a command that I have copied to the clipboard with the regular Ctrl+V shortcut? I have to actually use the mouse and right-click, then select paste.

    (Using Mint cinnamon)

    • r0ertel@lemmy.world

      Old timer here! As many others replying to you indicate, Ctrl+C means SIGINT (interrupt the running program). Many have offered Ctrl+Shift+C, but back in my day we used Shift+Insert (paste) and Ctrl+Insert (copy). They still work today, but Linux has two clipboard buffers, and Shift+Insert works against the primary one.

      As an aside, on Wayland, you can use wl-paste and wl-copy in your commands, so git clone "$(wl-paste)" will clone whatever repo you copied to your clipboard. I use this one all the time

      • Trainguyrom@reddthat.com

        so git clone "$(wl-paste)" will clone whatever repo you copied to your clipboard. I use this one all the time

        That’s a lot of confidence in not accidentally grabbing a leading/trailing space or unformatted text. I never trust that I’ve copied clean text, and almost exclusively use Ctrl+Shift+V to paste without formatting.

    • Captain Aggravated@sh.itjust.works

      In Terminal land, Ctrl+C has meant Cancel longer than it’s meant copy. Shift + Insert does what you think Ctrl+V will do.

      Also, there’s a separate thing that exists in most window managers called the primary buffer, which is separate from the clipboard. Try this: highlight some text in one window, then open a text editor and middle-click in it. Ta-da! Reminder: this has absolutely nothing to do with the clipboard; if you have Ctrl+X’d or Ctrl+C’d something, this won’t overwrite it.
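
      If you want to poke at both buffers from a shell on X11, xclip makes the distinction visible (a sketch; requires the xclip package, and under Wayland the wl-clipboard tools mentioned elsewhere in this thread play the same role):

      # put text into the primary selection (pasted with middle-click)
      echo "hello primary" | xclip -selection primary
      # put text into the clipboard (pasted with Ctrl+Shift+V in most terminals)
      echo "hello clipboard" | xclip -selection clipboard
      # read either one back
      xclip -selection primary -o
      xclip -selection clipboard -o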

    • Cyclohexane@lemmy.mlOPM

      The terminal world had Ctrl+C and Ctrl+(many other characters) reserved for other things before those combinations ever became standard for copy/paste. For this reason, Ctrl+Shift+C (copy) and Ctrl+Shift+V (paste) are used instead.

    • baseless_discourse@mander.xyz

      In most terminals (GNOME Terminal, Blackbox, Tilix, etc.) you can actually override this behavior by changing the keyboard shortcut. Blackbox even has a simple toggle that enables Ctrl+C/V copy-paste.

      GNOME Console is the only terminal I know of that doesn’t allow you to change this.

    • ArcaneSlime@lemmy.dbzer0.com

      Try Ctrl+Shift+V; IIRC in the terminal Ctrl+V is used as some other shortcut (and probably has been since before it was standard for “paste”, I’d bet).

      Also, Linux uses two clipboards, IIRC: the Ctrl+C/V one and the right-click copy/paste one are two distinct clipboards.

    • wewbull@feddit.uk

      …because that would make Ctrl+C Cut/Copy, and that would be really bad: it would kill whatever was running.

      So copy becomes Ctrl+Shift+C, and paste got moved in the same way for consistency.

      • maxxxxpower@lemmy.ca

        I use Ctrl+C to copy far more often than to break a process or something. I demand that Ctrl+Shift+C be reconfigured! 😀

    • u_die_for_elmer@lemm.ee

      Use Ctrl+Shift+V to paste and Ctrl+Shift+C to copy in the terminal. It’s this way because Ctrl+C in the terminal breaks out of the currently running process.

    • Elsie@lemmy.ml

      Ctrl+Shift+V is what you should do. Ctrl+V is used by shells, I believe, for inserting characters literally without any sort of evaluation. I don’t remember the specifics, but yes: Ctrl+Shift+V to paste.

    • Pesopes@lemm.ee

      Ctrl+V is already a shortcut for something else (I don’t even know what), but to paste, just add Shift: Ctrl+Shift+V.

      (Also a beginner btw)

  • Blizzard@lemmy.zip

    Why do programs install somewhere instead of asking me where to?

    EDIT: Thank you all, well explained.

    • NaN@lemmy.sdf.org

      Because Linux and the programs themselves expect specific files to be placed in specific places, rather than a bunch of files in a single program directory like you have on Windows or (hidden away) on macOS.

      If you compile programs yourself you can choose to put things in different places. Some software is also built to be more self contained, like the Linux binaries of Firefox.

      • krash@lemmy.ml

        Actually, Windows puts 95% of its files in a single directory, and sometimes you get a surprise DLL in your \system[32] folder.

    • Julian@lemm.ee

      Someone already gave an answer, but the reason it’s done that way is because on Linux, generally programs don’t install themselves - a package manager installs them. Windows (outside of the windows store) just trusts programs to install themselves, and include their own uninstaller.

    • shadowintheday2@lemmy.world

      You install program A; it needs and installs libpotato. Later you install program B, which depends on libfries, and libfries depends on libpotato - but since you already have libpotato installed, only program B and libfries are installed. The intelligence behind this is called a package manager.

      On Windows, when you install something, it usually installs itself as a standalone thing and complains/breaks when dependencies are not met - e.g. having to install Visual C++ 2005-202x for games, the JRE for Java programs, etc.

      Instead of making you install everything that you need to run something complex, the package manager does this for you and keeps track of where the files are.

      And each package manager/distribution has an idea of where those files should be stored.
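
      You can watch that bookkeeping directly; on a Debian-based system, for instance (vlc and libc6 are just arbitrary examples):

      # what does this package depend on?
      apt-cache depends vlc
      # which packages depend on this library?
      apt-cache rdepends libc6
      # which package owns a given file on disk?
      dpkg -S /bin/ls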

    • penquin@lemm.ee

      I wish every single app installed in the same directory. Would make life so much easier.

        • penquin@lemm.ee

          Not all. I’ve had apps install in /opt, and Flatpaks install in /var of all places. Some apps install in /etc/share/applications

          • teawrecks@sopuli.xyz

            In /etc? Are you sure? /usr/share/applications has your system-wide .desktop files (while ~/.local/share/applications has user-level ones, kinda analogous to installing a program to AppData on Windows). And .desktop files could be interpreted at a high level as an “app”, even though they’re really just a simple description of how to advertise and launch an application from a GUI of some kind.

            • penquin@lemm.ee

              OK, that was wrong. I meant /usr/share/applications. Still, more than one place.

              • teawrecks@sopuli.xyz

                The actual executables shouldn’t ever go in that folder though.

                Typically packages installed through a package manager stick everything in their own folder in /usr/lib (for libs) and /usr/share (for any other data). Then they either put their executables directly in /usr/bin or symlink over to them.

                That last part is usually what results in things not living in a consistent place. A package might have something that qualifies as both an executable and a lib, so they store it in their lib folder, but symlink to it from bin. Or they might not have a lib folder, and just put everything in their share folder and symlink to it from bin.
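
                You can trace this from a shell; for example (firefox is only illustrative, and the exact paths vary by distro):

                # where is the executable on my PATH?
                which firefox
                # follow any symlinks to where it really lives
                readlink -f /usr/bin/firefox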

        • Ramin Honary@lemmy.ml

          They do! /bin has the executables, and /usr/share has everything else.

          Apps and executables are similar but separate things. An app is a concept used in GUI desktop environments: a user-friendly front end to one or more executables in /usr/bin that is presented by the desktop environment (or app launcher) as a single thing. On Linux these apps are usually defined in a .desktop file. The apps installed by the Linux distribution’s package manager are typically in /usr/share/applications, and each one points to one of the executables in /usr/bin or /usr/libexec. You could even have two different “apps” launch a single executable, each one using different CLI arguments to give the appearance of different apps.
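
          A minimal .desktop file is just a few lines of INI-style text; a hypothetical example (OpenTTD is used only because it comes up below):

          [Desktop Entry]
          Type=Application
          Name=OpenTTD
          Exec=openttd
          Icon=openttd
          Categories=Game;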

          The desktop environment you use might be reconfigured to display apps from multiple sources. You might also install apps from FlatHub, Lutris, Nix, Guix, or any of several other package managers. This is analogous to how in the CLI you need to set the “PATH” environment variable. If everything is configured properly (and that is not always the case), your desktop environment will show apps from all of these sources collected in the app launcher. Sometimes you have the same app installed by multiple sources, and you might wonder “why does Gnome shell show me OpenTTD twice?”

          For end users who install apps from multiple other sources besides the default app store, there is no easy solution, no one agreed-upon algorithm to keep things easy. Windows, Mac OS, and Android all have the same problem. But I have always felt that Linux (especially Guix OS) has the best solution, which is automated package management.

    • Julian@lemm.ee

      /bin, since that will include the basic programs (bash, ls, etc.).

    • SmashFaster@kbin.social

      There is no direct equivalent; system32 is just a collection of libraries, exes, and confs.

      Some of what others have said is accurate, but to explain a bit further:

      Longer explanation:


      system32 is just some folder name the MS engineers came up with back in the day.

      Linux on the other hand has many distros, many different contributors, and generally just encourages a … better … separation for types of files, imho.

      The Linux filesystem is well defined, if you are inclined to research more about it.
      Understanding the core principles will make understanding virtually everything else about “linux” easier, imho.

      https://tldp.org/LDP/intro-linux/html/sect_03_01.html

      tl;dr; “On a UNIX system, everything is a file; if something is not a file, it is a process.”

      The basics:

      • /bin - base level executables, ls, mv, things like that
      • /sbin - super-level-only (root) executables, parted, reboot, etc
      • /lib - Somewhat self-explanatory, holds libraries, lots of things put their libs here, including linux kernel modules, /lib/modules/*, similar to system32’s function of holding critical libraries
      • /etc - Configuration lives here, generally speaking, /etc/<application name> can point you in the right direction, typically requires super-user (root) to edit
      • /usr - “User installed” software, which can be a murky definition in today’s world, but lots of stuff ends up here for installed software, manuals, icon files, executables

      Bonus:

      • /opt - A special location, generally third-party, bundled-style software likes to use this, Java for instance, but historically some admins use it as the “company location”, meaning internally developed software would live there.
      • /srv - Largely subjective, but myself and others I know use it for partitions that are outside the primary disk, for instance we use /srv/db for database volumes, /srv/www for web-data volumes, /srv/Media for large-file storage, etc, etc

      For completeness:

      • /home - You’ll find your user directories here, personally, this is my directory I backup, I don’t carry much more with me on most systems.
      • /var - “Variable data”, basically meaning any data that will likely grow over time, eg: /var/log
    • NaN@lemmy.sdf.org

      Don’t think there is.

      system32 holds files that are in various places in Linux, because Windows often puts libraries with binaries and Linux shares them.

      The bash in /bin depends on libraries in /lib for example.
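
      You can check that yourself with ldd, which lists the shared libraries a binary links against:

      # on a typical system this prints libc, libtinfo, and the dynamic loader
      ldd /bin/bash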

    • ogeist@lemmy.world

      For the memes:

      sudo rm -rf /*

      This deletes everything and is the most popular linux meme

      The same “expected” functionality:

      sudo rm -rf /bin/*

      This deletes the main binaries. You kinda can recover from here, but I have never done it.

    • Captain Aggravated@sh.itjust.works

      As in, the directory in which much of the operating system’s executable binaries are contained?

      They’ll be spread between /bin and /sbin, which might be symlinks to /usr/bin and /usr/sbin. Bonus points: /boot.

  • neidu2@feddit.nl

    What’s the difference between /bin and /usr/bin and /usr/local/bin from an architectural point of view? And how does sbin relate to this?

  • Tovervlag@feddit.nl

    Ctrl+Alt+F1, F2, etc. Why do these desktops/CLIs exist? What was their intended purpose, and what do people use them for today? Is it just legacy, or does it still serve a purpose?

    • d3Xt3r@lemmy.nzM

      To add to what @bloodfart wrote, the history of TTYs (or virtual consoles) goes all the way back to the early days of computing and teletypewriter machines.

      In the old days, computers were gigantic, super expensive, and operated in batch mode. Input was often provided through punched cards or magnetic tape, and output was printed on paper. As interactive computing developed, the old teletypewriters (aka TTYs) were repurposed from telecommunication, to serve as interactive terminals for computers. These devices allowed operators to type commands and receive immediate feedback from the computer.

      With advancements in technology, physical teletypewriters were eventually replaced by electronic terminals - essentially keyboards and monitors connected to the mainframe. The term “TTY” persisted, however, now referring to these electronic terminals.

      When Unix came out in the 70s, it adopted the TTY concept to manage multiple interactive user sessions simultaneously. As personal computing evolved, particularly with the introduction of Linux, the concept of virtual consoles (VCs) was introduced. These were software implementations that mimicked the behavior of physical terminals, allowing multiple user sessions to be managed via a single physical console. This was particularly useful in multi-user and server environments.

      This is also where the term “terminal” or “console” originates from btw, because back in the day these were physical terminals/consoles, later they referred to the virtual consoles, and now they refer to a terminal app (technically called a “terminal emulator” - and now you know why they’re called an “emulator”).

      With the advent of graphical interfaces, there was no longer a need for a TTY to switch user sessions, since you could do that via the display manager (logon screen). However, TTYs are still useful for offering a reliable fallback when the graphical environment fails, and also as a means to quickly switch between multiple user sessions, or for general troubleshooting. So if your system hangs or crashes for whatever reason - don’t force a reset, instead try jumping into a different TTY. And if that fails, there’s REISUB.

      • Tovervlag@feddit.nl

        thanks, I enjoyed reading that history. I usually use it when something hangs on the desktop as you said. :)

    • bloodfart@lemmy.ml

      Each one is a virtual terminal, and you can use them just like any other terminal. They exist because the easiest way to put some kind of interactive display up is to just write text to a framebuffer, and that’s exactly what your computer does when it boots and shows all that scrolling stuff. The different ones are just different framebuffers that the video card is asked to display when you push Ctrl+Alt+F<number>. You can add more or disable them altogether if you like.
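
      On systemd distros, the number of those VTs is managed by logind; a sketch of the relevant knobs (check logind.conf(5) before changing anything):

      # /etc/systemd/logind.conf
      [Login]
      # how many VTs get a getty spawned on demand
      NAutoVTs=6
      # keep one VT reserved for a fresh getty
      ReserveVT=6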

      Years ago my daily driver was a relatively tricked-out Compaq laptop, and I used a combination of the highest mode set I could get, tmux, and a bunch of curses-based utilities to stay out of X for as much of the time as I could.

      I mean, each vt had a slightly different colored background image, the text colors were configured, it was slick.

      I used to treat them like multiple desktops.

      With libcaca I was even able to watch movies on it without x.

      I still use them when X breaks, which did happen last year, to my surprise. If your adapter supports a VESA mode that’s appropriate for your monitor, then you can use one with very fresh-looking fonts and have everything look clean. Set yourself a background image and you’re off to the races with ncurses programs.

    • ArcaneSlime@lemmy.dbzer0.com

      If your system is borked, sometimes you can boot into those and fix it. I’m not yet good enough to utilize that myself though; I’m still fairly new to Linux too.

    • Elsie@lemmy.ml

      They are TTYs: terminals your computer spawns at boot time that you can use. Their intended purpose is really whatever you need them for. I use them if I somehow mess up my display configuration and need to access a terminal but can’t launch my DE/WM.

    • Presi300@lemmy.world

      Mostly for headless systems, servers and such. That, and debugging: if your desktop breaks/quits working for some reason, you need some way to run multiple things at once…

  • stammi@feddit.de

    Thank you for this nice thread! My question: what is Wayland all about? Why would I want to use it and not any of the older alternatives?

    • d3Xt3r@lemmy.nzM

      In addition to the other replies, one of the main draws of Wayland is that it’s much less susceptible to the screen tearing and jerky movements that you might sometimes experience on X11 - like when you’re dragging windows around or doing something graphics/video heavy. Wayland just feels much smoother and more responsive overall. Other draws include support for modern monitor/GPU features like variable refresh rates, HDR, mixed DPI scaling and so on. And there’s plenty of stuff still in the works along those lines.

      Security is another major draw. Under X11, any program can directly record what’s on your screen, capture your clipboard contents, monitor and simulate keyboard input/output - without your permission or knowledge. That’s considered a huge security risk in the modern climate. Wayland on the other hand employs something called “portals”, that act as a middleman and allow the user to explicitly permit applications access to these things. Which has also been a sore point for many users and developers, because the old way of doing these things no longer works, and this broke a lot of apps and workflows. But many apps have since been updated, and many newer apps have been written to work in this new environment. So there’s a bit of growing pains in this area.

      In terms of major incompatibilities with Wayland - XFCE is still a work-in-progress but nearly there (should be ready maybe later this year), but some older DE/WMs may never get updated for Wayland (such as OpenBox and Fluxbox). Gnome and KDE work just fine though under Wayland. nVidia’s proprietary drivers are still glitchy/incomplete under Wayland (but AMD and Intel work fine). Wine/Proton’s Wayland support is a work-in-progress, but works fine under XWayland.

      Speaking of which, “XWayland” is kind of a compatibility layer which can run older applications written for X11. Basically it’s an X11 server that runs inside Wayland, so you can still run your older apps. But there are still certain limitations: if you’ve got a keyboard macro tool running under XWayland, it’ll only work for other X11 apps and not the rest of your Wayland desktop. So ideally you’d want to use an app which has native Wayland support. And for some apps, you may need to pass special flags to enable Wayland support (eg: Chrome/Chromium-based browsers), otherwise they’ll run under XWayland. So before you make the switch to Wayland, you’ll need to be aware of these potential issues/limitations.
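
      For Chromium-based browsers, for instance, the switches currently look like this (flag names have changed before and may change again):

      # let the browser pick Wayland when available, X11 otherwise
      chromium --ozone-platform-hint=auto
      # or force native Wayland
      chromium --ozone-platform=wayland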

    • NoisyFlake@lemm.ee

      Because there is only one alternative (Xorg/X11), and it’s pretty outdated and not really maintained anymore.

      For now it’s probably still fine, but in a couple of years everything will probably use Wayland.

    • nyan@sh.itjust.works

      Wayland has better support for some newer in-demand features, like multiple monitors, very high resolutions, and scaling. It’s also carrying less technical debt around, and has more people actively working on it. However, it still has issues with nvidia video cards, and there are still a few pieces of uncommon software that won’t work with it.

      The only alternative is X. Its main advantage over Wayland is network transparency (essentially it can be its own remote client/server system), which is important for some use cases. And it has no particular issues with nvidia. However, it’s essentially in maintenance mode—bugs are patched, but no new features are being added—and the code is old and crufty.

      If you want the network transparency, have an nvidia card (for now), or want to use one of the rare pieces of software that doesn’t work with Wayland/XWayland, use X. Otherwise, use whatever your distro provides, which is Wayland for most of the large newbie-friendly distros.

      • d3Xt3r@lemmy.nz (mod)

        The network transparency thing is no longer a limitation with Wayland btw, thanks to PipeWire and Waypipe.

    • atzanteol@sh.itjust.works

      It’s… complicated. Wayland is the heir apparent to Xorg. Xorg is a fork of the older XFree86, which was based on the X11 standard.

      X11 goes back… a long time. It’s been both a blessing and a liability at times. The architecture dates back to a time of multi-user systems and thin clients. It also pre-dates GPUs. Xorg has been updating and modernizing it for decades but there’s only so much you can do while maintaining backward compatibility. So the question arose: fix X or create something new? Most of the devs opted for the later, to start from scratch with a replacement.

      I think they bit off a bit more than they could chew, and they seemed to think they could push around the likes of nvidia. So it’s been a bumpy road, and it will likely continue to be a bit bumpy for a while. But eventually things will move over.

    • AMDIsOurLord@lemmy.ml

      Because the older alternatives are hacky, laggy, buggy, and quite fundamentally insecure. X.Org’s whole architecture is a mess; you practically have to go around the damn thing to get anything done (GLX). It should’ve been killed in 2005 when desktop compositing was starting to grow, but the FOSS community has a way of not updating standards fast enough.

      Hell, that’s kinda the reason OpenGL died a slow death; had GL3 been released properly, it would’ve changed everything.

  • noughtnaut@lemmy.world

    How the hell do I set up my NAS (Synology) and laptop so that I have certain shares mapped when I’m on my home network - AND NOT freeze up the entire machine when I’m not???

    For years I’ve been un/commenting a couple of lines in my fstab but it’s just not okay to do it that way.

      • noughtnaut@lemmy.world

        Aha, interesting, thank you. So setting nofail and a timeout of, say, 5s should work… but then when I try to access the share, will it attempt to remount it?

        • atzanteol@sh.itjust.works

          Look up “automount”. You can tell Linux to watch for access to a directory and mount it on demand.
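
          A sketch of what that looks like in fstab with systemd’s automount support (server, export, and mount point are placeholders; see systemd.mount(5) and nfs(5) for the options):

          # /etc/fstab: mount on first access, drop after 60s idle, never block boot
          nas.local:/volume1/share  /mnt/nas  nfs  noauto,x-systemd.automount,x-systemd.idle-timeout=60,_netdev,nofail,soft  0  0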

        • ipkpjersi@lemmy.ml

          This is also what I’d like to know, and I think the answer is no. I want to have NFS not wait indefinitely to reconnect, but when I reconnect and try going to the NFS share, have it auto-reconnect.

          edit: This seemed to work for me, without waiting indefinitely, and with automatic reconnecting, as a command (since I don’t think bg is an fstab option, only a mount command option): sudo mount -o soft,timeo=10,bg serveripaddress:/server/path /client/path/

    • Possibly linux@lemmy.zip

      You could simply use a graphical tool to mount it. Nautilus has this built in, and I’m sure other tools do as well.

    • bloodfart@lemmy.ml

      A user login script could do it. Have it compare the wireless SSID and mount the share if it matches. If you set the entry in fstab to noauto, it’ll leave it alone till something says to mount it.
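
      A rough sketch of such a script (assumes NetworkManager’s nmcli, an SSID of “HomeWifi”, and an fstab entry for /mnt/nas marked noauto; all of those are placeholders):

      #!/bin/sh
      # mount the NAS share only when we're on the home network
      ssid=$(nmcli -t -f active,ssid dev wifi | awk -F: '$1=="yes"{print $2}')
      if [ "$ssid" = "HomeWifi" ]; then
          mount /mnt/nas
      fi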

  • HATEFISH@midwest.social

    How can I run a sudo command automatically on startup? I need to run sudo alsactl restore to stop my microphone from playing in my own headphones after every reboot. Surely I can delegate that to the system somehow?

    • baseless_discourse@mander.xyz

      If you run a systemd distro (which is most distros: Arch, Debian, Fedora, and most of their derivatives), you can create a service file, which will autostart as root on startup.

      The service file /etc/systemd/system/<your service>.service should look like:

      [Unit]
      Description=some description
      
      [Service]
      ExecStart=alsactl restore
      
      [Install]
      WantedBy=multi-user.target
      

      then

      systemctl enable <your service>.service --now
      

      you can check its status via

      systemctl status <your service>.service
      

      you will need to change <your service> to your desired service name.

      For details, read: https://linuxhandbook.com/create-systemd-services/

      • HATEFISH@midwest.social

        This one seemed perfect, but nothing lasts after the reboot for whatever reason. If I manually re-enable the service it’s all good, so I suspect there’s no issue with the below - I added the After=multi-user.target line after the first time it didn’t hold through a reboot.

        
        [Unit]
        Description=Runs alsactl restore to fix microphone loop into headphones
        After=multi-user.target
        [Service]
        ExecStart=alsactl restore
        
        [Install]
        WantedBy=multi-user.target
        

        When I run a status check it shows it deactivates as soon as it runs

        Apr 11 20:32:24 XXXXX systemd[1]: Started Runs alsactl restore to fix microphone loop into headphones.
        Apr 11 20:32:24 XXXXX systemd[1]: alsactl-restore.service: Deactivated successfully.
        
          • HATEFISH@midwest.social

            It seems to have no effect either way. Originally I attempted it without, then after it didn’t hold through a reboot and some further reading, I added the After= line to ensure the service isn’t trying to start before it’s possible.

            I can manually enable the service with or without the After= line, with the same result of it actually working. It just doesn’t hold after a reboot.

            • baseless_discourse@mander.xyz

              That is interesting. BTW, I assume that command doesn’t run forever, i.e. it terminates relatively soon? That could be why the service shows as deactivated, not because it didn’t run. You can try adding ; echo "command terminated" at the end of ExecStart to see whether it terminated; you can also try echoing the exit code to debug.

              If the program you use has a verbose mode, you can also try turning it on to see if there is any error. EDIT: indeed, alsactl restore --debug

              There is also a possibility that this service runs before the device you need to restore is loaded, in which case it won’t have any effect.

              On a related note, did you install the program via your package manager, and what distro are you running? Sometimes SELinux will block a program from running, but in that case the error message would say permission denied, instead of your message.

    • Hiro8811@lemmy.world

      Try pavucontrol; it has an option to lock settings, plus it’s a neat app to reach for when you need to customise audio settings. You could also add your user to the group that has access to the mic.

    • Cyclohexane@lemmy.ml (OP, mod)

      Running something at start-up can be done multiple ways:

      • look into /etc/rc.d/rc.local
      • systemd (or whatever init system you use)
      • cron job (see the sketch below)
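
      For the cron route, the @reboot directive covers this case (a sketch; the alsactl path varies by distro, so check it with command -v alsactl):

      # add to root's crontab via: sudo crontab -e
      @reboot /usr/sbin/alsactl restore
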
    • wolf@lemmy.zip

      You got some good answers already; here is one more option: create a *.desktop file that runs sudo alsactl, and copy the *.desktop file to ~/.config/autostart. (You might need to configure sudo to run alsactl without a password.)

      IMHO the cleanest option is systemd.
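
      A sketch of such an autostart file (the filename and Exec line are illustrative, and it needs the passwordless sudo rule mentioned above):

      # ~/.config/autostart/alsactl-restore.desktop
      [Desktop Entry]
      Type=Application
      Name=Restore ALSA state
      Exec=sudo alsactl restore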

  • jack@monero.town

    Why are Debian-based systems still so popular for desktop usage? The lack of package updates creates a lot of unnecessary issues which were already fixed by the devs.

    Newer (not bleeding-edge) packages have verifiably fewer issues, e.g. when comparing the packages of a Debian and a Fedora distro.

    That’s why I don’t recommend Mint.

    • wolf@lemmy.zip

      Debian desktop user here, and I would happily switch to RHEL on the desktop.

      I fully agree, outdated packages can be very annoying (running a netbook with disabled WIFI sleep mode right now, and no, backported kernel/firmware don’t solve my problem.)

      For some years, I used Fedora (and I still love the community and have high respect for it).

      Fedora simply does not work for me:

      • Updated packages can/did break compatibility for stuff I need to get things done. Fine if Linux is your hobby, not acceptable if you need to deliver something.
      • In industry, you are often not on the most recent packages of development environments (if you are lucky, you are only a few months or years behind), so having the most recent packages in Fedora helps me exactly zero.
      • With Debian’s 2-year release cycle (and more years of support), I can upgrade to the next version when it is appropriate for me (= 1-2 days when there is a slow week and the worst bugs have already been found).
      • My setup/desktop is heavily customized and fully automated via IaC; no motivation to tweak this stuff constantly (rolling) or every 6-12 months (Fedora).
      • From time to time I have to use software packages from 3rd parties; with Fedora, I might be one update away from breaking those packages because of version incompatibilities (yes, I might pin a version of something to keep a 3rd-party package working, but that might break Fedora updates through direct and transitive dependencies).
      • I once had a cheap netbook for travel with an infamous chipset bug concerning sleep modes, which would be triggered by some kernels. You can imagine how it is to run Fedora, where you get frequent kernel updates and the bug will or won’t be triggered after double-digit minutes of work.

      Of course, I could now start playing around with containerizing everything I need for work somehow and run something like Silverblue; perhaps I might do it someday. But then I would again need to update my IaC every 6-12 months, would have to take care of overlays AND containers, etc.

      When people go ‘rolling’ or ‘Fedora’, they simply choose a different set of problems. I am happy we have choice, and that I can choose the trouble I have to live with.

      On a more positive note: this also shows how far Linux has come along. I always play around with the latest/BETA Fedora GNOME/KDE images in a VM, and seriously don’t feel I am missing anything in Debian stable.

    • ⸻ Ban DHMO 🇦🇺 ⸻@aussie.zone

      This is where I see atomic distros like Silverblue becoming the new way to get reliable systems with up-to-date packages. Because the base system is standardised, there can be a lot more QA, as there is a lot less entropy in the installed system. Plus free rollbacks if something goes wrong. You don’t get that by default on Debian.

      Distrobox can be used to install other programs (including GUI apps); I currently run Steam in a Distrobox container on Silverblue, and VS Code with all of my development stuff in another one. And of course I use Flatpaks from Flathub where I can; these are more stable than distro packages imo (when official), as the developers are targeting a single platform with defined library versions - not whatever ancient version Debian has, or the latest which appeared on Arch very soon after release.

      I’ve tried Debian a couple of times, but it’s just too out of date. I like new stuff, and when developing I need new stuff; it’s not good enough to just install the development/unsupported versions of Debian. It’s probably great for servers, but I think atomic distros will be taking over that space as well, eventually.

      • Trainguyrom@reddthat.com

        Distrobox can be used to install other programs (including GUI apps)

        I need to play around with that sometime. Is it a chroot or a privileged container, or is it a sandboxed container with limited access? How’s hardware acceleration in those?

        • ⸻ Ban DHMO 🇦🇺 ⸻@aussie.zone

          It’s just a podman/docker container. I’m pretty sure it is unprivileged (you don’t need root). I’ve tried it on both NVIDIA (RTX 3050 Mobile) and AMD (Radeon RX Vega 56), and setting up the distrobox through BoxBuddy (a nice GUI app that makes management easy) I didn’t need to do anything to get the graphics drivers working. I only mentioned BoxBuddy because I haven’t set one up from the command line, so I don’t know if it does any initial setup. I haven’t noticed any performance issues (yet).

      • jack@monero.town

        You should definitely check out Bazzite; it’s based on Fedora Atomic and has Steam on the base image. Image and Flatpak updates are applied automatically in the background - no need to wait for the update on next boot. Media codecs and necessary drivers are installed by default.

        The Bazzite image also builds directly on the upstream Fedora Atomic image, just with quality-of-life changes added and optimizations for gaming.

        • ⸻ Ban DHMO 🇦🇺 ⸻@aussie.zone

          It looks pretty good; I’ve been planning on installing it on another computer for use as a media centre. I probably wouldn’t use it as my main image, as I’m not a huge fan of their customised GNOME experience (I quite like vanilla GNOME, with maybe a system tray extension). But I must admit, watching some of the videos by the creator of Bazzite and ublue got me interested in this atomic desktop thing again.

    • Cyclohexane@lemmy.ml (OP, mod)

      Unlike other commenters, I agree with you. Debian based systems are less suitable for desktop use, and imo is one of the reasons newcomers have frequent issues.

      When installing common applications, newcomers tend to follow the Windows way of downloading an installer or a standalone executable from the Internet. They often do not stick with the package manager. This can cause breakage, as Debian might expect you to have certain versions of programs that are different from what the installer from the Internet expects. A rolling release distro is more likely to have the versions that Internet installers expect.

      To answer your question, I believe Debian-based distros are popular for desktop because they were already popular for server use before the Linux desktop was significant.

      • Nibodhika@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        9 months ago

        That’s a bad example. “Debian is bad because people use it wrong and it breaks” is not a really strong argument; the same can be said about every other distro.

        I believe Debian based distros are popular because Ubuntu used to be very beginner friendly back in the early 2000s, while other distros not so much. Then a lot of us started with it, and many never switched or switched and came back.

        • Cyclohexane@lemmy.mlOPM
          link
          fedilink
          arrow-up
          1
          ·
          9 months ago

          Debian is not bad. It is just not suitable for newcomers using it on the desktop. I think my arguments support this stance.

    • jdnewmil@lemmy.ca
      link
      fedilink
      arrow-up
      3
      ·
      9 months ago

      Noob question?

      You do seem confused though… Debian is both a distribution and a packaging system… the Debian Stable distribution takes a very conservative approach to updating packages, while Debian Sid (unstable) is more up-to-date but also more likely to break. While an individual package may be more stable when fully updated, other packages that depend on it generally lag and “break” until they are updated to adapt to the underlying changes.

      But the whole reason Debian-based distros exist is that some people think they can strike a better balance between newness and stability. It turns out, though, that there is no optimal balance that satisfies everyone.

      Mint is a fine distro… but if you don’t like it, that is fine for you too. The only objection I have to your objection is that you seem to be throwing the baby out with the bathwater… the Debian packaging system is very robust and is not intrinsically unlikely to be updated.

      • jack@monero.town
        link
        fedilink
        arrow-up
        0
        ·
        edit-2
        9 months ago

        Noob question?

        Should I have made a new post instead?

        You do seem confused though… Debian is both a distribution and a packaging system…

        Yes, Debian is a popular distro built on Debian packages. My concern is about the distro’s update policy.

        But the whole reason debian-based distros exist is because some people think they can strike a better balance between newness and stability.

        Debian is pure stability, not a balance between stability and newness. If you mean Debian-BASED distros in particular, which try to introduce more newness with custom repos, I don’t think that is a good strategy for striking a balance: the additional custom repos quickly become too outdated as well, and they can’t account for the outdatedness of every single Debian package.

        you seem to be throwing the baby out with the bathwater… the debian packaging system is very robust and is not intrinsically unlikely to be updated.

        Yes, I don’t understand/approve of the philosophy behind Debian’s update policy. It doesn’t make sense to me for desktop usage. The technology of the package system, however, is great, and apt is very fast.

        • KISSmyOSFeddit@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          9 months ago

          Debian is a balance between stability and newness.
          If you want to see what pure stability looks like, try Slackware.

    • LoreleiSankTheShip@lemmy.ml
      link
      fedilink
      English
      arrow-up
      3
      ·
      9 months ago

      As someone not working in IT and not very knowledgeable on the subject, I’ve had way less issues with Manjaro than with Mint, despite reading everywhere that Mint “just works”. Especially with printers.

      • Nibodhika@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        9 months ago

        Yeah, Manjaro just works, until it doesn’t. Don’t get me wrong, I love Manjaro, used it for years, but if it breaks it’s a pain in the ass to fix. It’s also hard to get help, because the Arch community will just reply with “not Arch, not my problem” even if it’s a generic error, and the Manjaro community is not as prominent.

        I could also mention them letting their SSL certificate expire, which doesn’t inspire a lot of trust, but they haven’t done that in a while.

    • AMDIsOurLord@lemmy.ml
      link
      fedilink
      arrow-up
      2
      ·
      9 months ago

      Debian systems are verified to work properly without subtle config breakages. You can run Debian practically unattended for a decade and it’ll chug along. For people who prefer their device to actually work, and not just be a maintenance princess, it’s ideal.
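
      And Debian’s stock tooling makes the unattended part easy (a short sketch; unattended-upgrades is the standard Debian package for this):

      sudo apt install unattended-upgrades
      sudo dpkg-reconfigure -plow unattended-upgrades   # enables automatic security updates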

      • jack@monero.town
        link
        fedilink
        arrow-up
        0
        arrow-down
        1
        ·
        9 months ago

        Okay, I get that it’s annoying when updates break custom configs. But I assume most newbs don’t want to make custom dotfiles anyway. For those people, having the newest features would be more beneficial, right?

        Linux Mint is advertised to people who generally aren’t willing to customize their system.

        • AMDIsOurLord@lemmy.ml
          link
          fedilink
          arrow-up
          3
          ·
          9 months ago

          Having a stable base helps. Also, config breakage can happen without user intervention; see Gentoo’s or Arch’s NOTICE updates.

        • Nibodhika@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          9 months ago

          Breakage can happen without user intervention on other distros too; there are some safeguards around it, but it happens. Also, new users are much more likely than an experienced person who knows what they’re doing to edit their configs because a random guy on the Internet did it, and a lot more likely not to realize that this can break the system during an upgrade.

    • bloodfart@lemmy.ml
      link
      fedilink
      arrow-up
      0
      arrow-down
      2
      ·
      9 months ago

      Because people have the opposite experience and outlook from what you wrote.

      I’m one of those people.

      I’m surprised no one brought up the xz thing.

      Debian was specifically targeted by a complex and nuanced multi-pronged attack involving social engineering and very good obfuscation. It was defeated because stable (12, mind you, not even 11, which is still in lots of use) moves so slowly that the attack was found while it was still in unstable.

      • Cyclohexane@lemmy.mlOPM
        link
        fedilink
        arrow-up
        3
        ·
        9 months ago

        This is not a good argument imo. It was a miracle that the xz vulnerability was found so fast, and that should not be assumed to be the norm. The developer had been contributing to the codebase for 2 years, and their code had already landed in Debian stable iirc. There’s still no certainty that that code has no other vulnerabilities. Some vulnerabilities in the past were caught decades after their introduction.

        • Possibly linux@lemmy.zip
          link
          fedilink
          English
          arrow-up
          1
          ·
          9 months ago

          It’s not a miracle, it is just probability. When you have enough eyes on something, you are bound to catch bugs and problems.

          Debian holds back because its primary goal is to be stable, reliable, and consistent. It has been around longer than pretty much everything else, and it can run for decades without issue. I read an article about a university that still had its original Debian install from the ’90s. It was on newer hardware, but they had just copied over the files.

          • Cyclohexane@lemmy.mlOPM
            link
            fedilink
            arrow-up
            2
            ·
            9 months ago

            Lots of eyes is not enough. As I mentioned earlier, there are many popular programs found on most machines, some of them actually user-facing (unlike xz), where vulnerabilities were caught months, years, and sometimes decades later. xz is the exception, not the rule.

    • Possibly linux@lemmy.zip
      link
      fedilink
      English
      arrow-up
      0
      arrow-down
      3
      ·
      9 months ago

      I’m not sure what planet you are on but Debian is more stable and secure than anything I have ever tested. Maybe Debian gets a bad rap because of Ubuntu.

      • Cyclohexane@lemmy.mlOPM
        link
        fedilink
        arrow-up
        2
        ·
        9 months ago

        I disagree. Stable, yes, but stable as in unchanging (including bug-for-bug compatibility), which imo is not what most users want. It is what server admins want, though. Most newbie desktop users don’t realize this about Debian-based systems, and it is one of the sources of the trouble they experience.

        Debian tries to be secure by backporting security fixes, but they just cannot feasibly do this for all software, and last I checked, there were unaddressed vulnerabilities in Debian’s versions of some software that had not yet been backported (and they had been known for a while). I’m happy to look up the source for you if you’re interested.
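
        If you want to check for yourself, debsecan is the usual tool for listing known CVEs against the packages on a Debian system (a sketch; the flags are from memory, so check its man page):

        sudo apt install debsecan
        debsecan --suite bookworm --format summary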

      • wolf@lemmy.zip
        link
        fedilink
        English
        arrow-up
        1
        ·
        9 months ago

        Debian is for sure not more secure than most other distributions/operating systems (though that might be true for what you tested).

        Not even mentioning the famous Debian weak SSH key fuck-up (oops), Debian is notoriously understaffed when it comes to backporting security patches for everything that is not the kernel/web server/Python etc. (and even there I would not be too sure), and don’t get me started on services being started and ports being opened on a mere apt install.
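
        (For anyone bitten by that auto-start behaviour: Debian’s documented policy-rc.d mechanism can veto it. A minimal sketch:)

        # exit code 101 tells invoke-rc.d not to start services during package installs
        printf '#!/bin/sh\nexit 101\n' | sudo tee /usr/sbin/policy-rc.d
        sudo chmod +x /usr/sbin/policy-rc.d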

  • Godort@lemm.ee
    link
    fedilink
    arrow-up
    5
    ·
    9 months ago

    Maybe not a super beginner question, but what do awk and sed do and how do I use them?

    • mumblerfish@lemmy.world
      link
      fedilink
      arrow-up
      7
      ·
      9 months ago

      This is 80% of my usage of awk and sed:

      “ugh, I need the 4th column of this print out”: command | awk '{print $4}'

      Useful for getting pids out of a ps command you applied a bunch of greps to.
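
      For example (the process name is made up; the bracket trick just keeps grep from matching its own process, and $2 is the PID column in ps aux):

      ps aux | grep '[f]irefox' | awk '{print $2}'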

      “hm, if I change all ‘this’ to ‘that’ in the print out, I get what I want”: command | sed "s/this/that/g"

      Useful for a lot of things, like “I need to change the urls in this to that” or whatever.
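
      For URLs it helps that sed lets you pick a different delimiter, so you don’t have to escape all the slashes (the file name is made up):

      sed 's#http://old.example.com#https://new.example.com#g' links.txt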

      Basically the rest I have to look up.

    • neidu2@feddit.nl
      link
      fedilink
      arrow-up
      1
      ·
      edit-2
      9 months ago

      Probably a bit narrow, but my usecases:

      • awk: modify STDIN before it goes to STDOUT. Example: only print the 3rd word of each line
      • sed: run a regex on every line (quick sketches of both below).
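
      Quick sketches of both (the file name is just an example):

      awk '{print $3}' notes.txt            # print only the 3rd word of each line
      sed -E 's/colour/color/g' notes.txt   # run a regex replacement on every line
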
    • Ramin Honary@lemmy.ml
      link
      fedilink
      English
      arrow-up
      1
      ·
      edit-2
      9 months ago

      Awk is a programming language designed for reading files line by line. It finds lines by a pattern and then runs an action on that line if the pattern matches. You can easily write a 1-line program on the command line and ask Awk to run that 1-line program on a file. Here is a program to count the number of “comment” lines in a script:

      awk 'BEGIN{comment_count=0;} /^[[:space:]]*[#]/{comment_count++;} END{print(comment_count);}' file.sh
      

      It is a good way to inspect the content of files, especially log files or CSV files. But Awk can do some fairly complex file-editing operations as well, like collating multiple files. It is a complete programming language.

      Sed works similarly to Awk, but it is much simpler and designed mostly around CLI usage. The pattern language is similar to Awk’s, but the commands are usually just one or two letters representing actions like “print the line” or “copy the line to the in-memory buffer” or “dump the in-memory buffer to output.”
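
      A couple of examples of those one-letter commands (the file name is made up): p prints a line, h copies the line into the hold buffer, and G appends the hold buffer after the current line. The classic reverse-a-file one-liner combines them:

      sed -n '/ERROR/p' app.log    # print only lines matching a pattern
      sed -n '1!G;h;$p' app.log    # reverse the file using the hold buffer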