
dr_robot

@dr_robot@kbin.social


dr_robot,

Looks perfect! Exactly what I was looking for. Thanks!

dr_robot,

I wish :( The city centers are very walkable and there's plenty of safe bicycle infrastructure, but cars are still very clearly the dominant mode of transport. Every weekend there are queues for the parking garages in every part of the city.

dr_robot,

WireGuard easily supports dual-stack configuration on a single interface, but the VPN server must also have IPv6 enabled. I use AirVPN and get both IPv4 and IPv6 over a single WireGuard tunnel. In addition to the ::/0 route, you also need a static IPv6 address for the WireGuard interface. This address must be provided to you by ProtonVPN.

If that's not possible, the only solution is to entirely disable IPv6.
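For reference, the dual-stack part is only a couple of lines in the client config; a minimal sketch, with placeholder keys and addresses (your provider assigns the real ones):

```
[Interface]
PrivateKey = <client-private-key>
# IPv4 address plus the static IPv6 address the provider assigns you
Address = 10.2.0.2/32, fd7d:76ee:e68f:a993::2/128

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route all IPv4 and IPv6 traffic through the tunnel
AllowedIPs = 0.0.0.0/0, ::/0
```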

dr_robot,

A few simple rules keep it quite manageable for me:

  • Firstly, I do not run anything critical myself. I cannot guarantee that I will have time to resolve issues as they come up. Therefore, I tolerate a moderate risk of a borked update.
  • All servers run the same OS. Therefore, I don't have to resolve different issues for different machines. There is then the risk that one update will take them all out, but see my first point.
  • That OS is stable, in my case Debian, so updates are rare and generally safe to apply without much thought.
  • Run as little as possible on bare metal and avoid third-party repos or downloading individual binaries unless absolutely necessary. Complex services should run in containers and be updated by updating the container image.
  • Run unattended-upgrades on all of them. I deploy the configuration via Ansible. Since they all run the same OS, I only need to figure out the right configuration once and then it's just a matter of using Ansible to deploy it everywhere. I do blacklist kernel updates on my main server, because it has ZFS through DKMS on it so it's too risky to blindly apply.
  • Have postfix set up so that unattended-upgrades can email me when a reboot is required. I reboot only when I know I'll have some time to fix anything that breaks. For the blacklisted packages I will get an email that they've been held back, so I know that I need to update manually. (A configuration sketch follows this list.)
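As a concrete sketch, the relevant bits of /etc/apt/apt.conf.d/50unattended-upgrades look roughly like this (the address is a placeholder; the blacklist applies only to the ZFS machine):

```
// Email a report whenever packages are installed or held back
Unattended-Upgrade::Mail "admin@example.com";
Unattended-Upgrade::MailReport "on-change";

// ZFS machine only: never touch the kernel automatically, since the
// DKMS module rebuild is too risky to apply blindly
Unattended-Upgrade::Package-Blacklist {
    "linux-image-";
    "linux-headers-";
};
```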

This has been working great for me for the past several months.

For containers, I rely on Podman auto-update and systemd. Actually, it's my own script that imitates auto-update's behavior, because I had issues with Podman pulling images that were not new but that nevertheless triggered restarts of the containers. However, I lock the major version number and check and update major versions manually. Major version updates stung me too much in the past when I'd apply them after a long break.
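For reference, the stock mechanism I'm imitating is just a label plus a timer (container name and image are placeholders):

```sh
# Opt the container in, then let the shipped timer drive periodic pull-and-restart
podman run -d --name nextcloud --label io.containers.autoupdate=registry docker.io/library/nextcloud:27
systemctl --user enable --now podman-auto-update.timer
```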

dr_robot,

Correct. And getting the right configuration is pretty easy; Debian has good defaults. The only changes I make are configuring it to send emails to me when updates are installed. These emails also tell you in the subject line if you need to reboot, which is very convenient. As I said, I also blacklist kernel updates on the server that uses ZFS, as recompiling the modules causes inconsistencies between kernel and user space until a reboot. If you set up emails, you will also know when these updates are ready to be installed, because you'll be notified that they're being held back.

So yea, I strongly recommend unattended-upgrades with email configured.

Edit: you can also make it reboot itself if you want to. Might be worth it on devices that don't run anything very important and that can handle downtime.
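If you want that, it's two more lines in the same config file (the time is an example):

```
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "02:00";
```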

dr_robot,

I expose my services to the web via my own VPS proxy :) I simply run only very few of them, use 2FA when supported, keep them up to date, run each service as rootless Podman, have a very verbose logcheck set up in case the container environment gets compromised, and allow only ports 80 and 443. Very importantly, truly sensitive data (documents and such) is encrypted at rest, so that even if my services are compromised that data remains secure.

For SSH, I have set up a separate Raspberry Pi as a WireGuard server into my home network. Therefore, for any SSH management I first connect via this WireGuard connection.
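The firewall policy this implies is simple; a minimal nftables sketch of the idea, assuming the tunnel interface is wg0:

```
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iifname "lo" accept
        tcp dport { 80, 443 } accept      # the only publicly exposed ports
        iifname "wg0" tcp dport 22 accept # SSH reachable solely over WireGuard
    }
}
```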

Trying to find a modern alternative to my emacs workflow (kbin.social)

I've been using emacs since 2010. I use doom emacs now, but I have written my own overcomplicated config at one point in the past. I've grown used to it, but sometimes when emacs chokes on some input due to its single threaded nature I have time to wonder if there's something better for me out there....

dr_robot,

Most open-source VPN protocols, afaik, do not obfuscate what they are, because they're not designed to work in the presence of a hostile network operator; they only encrypt the user data. That is, they will carry information in their headers identifying them as such-and-such VPN protocol, but the data payload will be encrypted.

You can open up Wireshark and see for yourself. Wireshark can very easily recognize and even filter WireGuard packets regardless of port number. I've used it to debug my firewall setups.
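You don't even need the GUI for this; for example:

```sh
# Wireshark's dissector keys on the protocol's message format, not the port,
# so this matches WireGuard traffic even on a non-standard port
tshark -i eth0 -Y wireguard
```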

In the past when I needed a VPN in such a situation, I had to resort to a paid option where the VPN provider had their own protocol which did try to obfuscate the nature of the protocol.

dr_robot,

Thanks for your reply! One thing I'm struggling with in networkd is hysteresis: toggling the interface down and then back up does not do what I expect. Setting the interface down does not clear the configuration, and setting the interface up does not reconfigure the interface; I have to run reconfigure for that. I was hoping that the declarative approach of networkd would make it easy to predict interface state and configuration.

This does make sense because configuration is not the same as operational state. However, what would the equivalent of ifdown (set interface down and remove configuration) and ifup (set interface up and reconfigure) be using networkd and networkctl? This kind of feature would be useful for me to test config changes, debug networking issues, disconnect part of the network while I'm making some changes, etc.
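To make the asymmetry concrete, this is how the networkctl verbs behave for me (interface name is an example):

```sh
networkctl down wg0          # sets the link down, but does not clear the configuration
networkctl up wg0            # sets the link up, but does not reapply the .network file
networkctl reconfigure wg0   # what I actually have to run to reapply the configuration
networkctl reload            # re-reads the config files on disk
```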

dr_robot,

Thanks for this useful reply! I think I'll just need to closely examine my setup and figure out if I really need the ability to up/down interfaces like I described or whether the more persistent approach of networkd is actually more suitable for me. Sometimes I just want to reproduce behaviour that I've used before, but may not actually need.

New community for Navidrome (discuss.tchncs.de)

After using Funkwhale for some time, I’ve recently set up Navidrome as a music streaming server and I’m well impressed by the easy installation and the quality of the software. Seeing that the official community channels are all outside the fediverse I have decided to create a Lemmy community for folks interested in the...

dr_robot,

I subscribed. I use Navidrome since it has a slick UI and supports the Subsonic API. Having both in one is great.

dr_robot,

Thanks for your reply! Out of curiosity, what made you go with Prometheus over Zabbix and check_mk in the end? Those two seem to be heavily recommended.

dr_robot,

Thanks a lot for these tips! Especially about using the upstream deb.

dr_robot,

Maintaining legacy options is always extra overhead or something to work around when implementing new features. I suspect that they've concluded that not enough people use it anymore to justify the overhead.

dr_robot,

I deploy as much as I possibly can via Ansible. Then the Ansible code serves as the documentation. I also keep the underlying OS the same on all machines to avoid different OS conventions. All my machines run Debian. The few things I cannot express in Ansible, such as network topology, I draw a diagram for in draw.io, but that's it.

Also, why not automate the certificate renewal with certbot? I have two reverse proxies and they renew their certificates themselves.
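On Debian, for example, the certbot package already ships a systemd timer, so renewal is hands-off once it works:

```sh
systemctl list-timers certbot.timer   # the packaged timer that runs renewal checks
certbot renew --dry-run               # verify the renewal path without touching real certs
```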

dr_robot,

Why not have the reverse proxy also do renewal for the SMTP relay certificate and just rsync it to the relay? For a while I had one of my proxies do all the renewals and the other would rsync it.
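A minimal sketch of that, assuming a certbot deploy hook on the proxy (hostnames and paths are placeholders):

```sh
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/push-to-relay.sh
# Runs after each successful renewal: copy the fresh cert to the relay and reload it.
# -L follows the symlinks in live/ so the relay receives real files.
rsync -aL /etc/letsencrypt/live/mail.example.com/ relay.example.com:/etc/postfix/tls/
ssh relay.example.com systemctl reload postfix
```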

dr_robot,

Plasma is amazing. It has been my DE of choice for years now. So happy I'm donating to the project.

dr_robot,

That's because podman-compose is not a goal for the project IIRC. Therefore, it will never be feature complete. They encourage using systemd or other tools to manage the pods. It seems that podman-compose is just not an enterprise use case.

Edit: so if docker-compose is important then yea, stick to docker. I moved to using systemd instead. Podman can generate the systemd unit files for you.
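For example (container name is a placeholder; newer Podman versions recommend Quadlet for this instead):

```sh
podman generate systemd --new --files --name nextcloud   # writes container-nextcloud.service
systemctl --user daemon-reload
systemctl --user enable --now container-nextcloud.service
```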

dr_robot,

Well, that's just not true. WSL indeed is not Linux, but it does have several of the advantages of Linux.

It is not good if you want a home desktop solution, because that's not what it's there for. However, if you need to use Windows for something, e.g., at work for full Outlook and MS Office compatibility (access through the web is not great), but need Linux for dev work, then WSL is great.

In short, I'd say WSL is there if you want to do dev work on Linux, but everything else on Windows.

dr_robot,

ZFS send to a pair of mirrored HDDs on the same machine every hour, and a daily restic backup to S3 storage. Every six months I test and verify the cloud backup.
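Roughly, the two jobs look like this (pool, dataset, and bucket names are placeholders, as is the previous-snapshot bookkeeping):

```sh
# Hourly: incremental snapshot send to the mirrored pool on the same machine
now=$(date +%Y-%m-%d-%H)
zfs snapshot tank/data@"$now"
zfs send -i tank/data@last tank/data@"$now" | zfs receive backup/data

# Daily: off-site restic run to S3
restic -r s3:s3.example.com/my-backups backup /tank/data
```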

dr_robot,

In addition to what you mentioned, set up logcheck to email you unexpected logs. It does require a bit of time and fine-tuning to make it ignore expected logs, but in terms of security measures it's very powerful. I get an email every time I log in, incorrectly type my sudo password, etc. Sounds very verbose, but it also means it's verbose when something unexpected is happening, which is what you want security-wise. A nice side effect of having to craft the regexes of what logs to ignore is that I know better what's running on my server :)
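The ignore rules are plain extended regexes; for example, a local rule like this one could go in /etc/logcheck/ignore.d.server/local (hostname, user, and subnet are placeholders):

```
^\w{3} [ :0-9]{11} myhost sshd\[[0-9]+\]: Accepted publickey for myuser from 192\.0\.2\.[0-9]+ port [0-9]+
```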

dr_robot,

I recommend fastmail.com, though they do have some shortcomings that you need to consider, such as the fact that they're based in Australia (a Five Eyes country) and have servers in the USA. Their advantages are a slick interface, a fantastic app based on JMAP, and just generally being super convenient: they allow catch-all addresses, masked emails, custom domains, etc.

dr_robot,

I already posted that I recommend fastmail elsewhere in this thread, but you raised so many good points that it reminded me of some extra points :)

Fastmail offers granular, per-app passwords – I have a single password which has read-only access to IMAP in order to back up all the data on a timer. This feature is missing from many (many) other email providers - using the 80/20 rule, if they even offer it it’s a single password with full access (Mailfence, for example)

Since this community is about selfhosting, I think it's worth pointing out that this is AMAZING for selfhosting. I have all my selfhosted services sending e-mail via Fastmail's SMTP. With per-app passwords I don't need to store my normal e-mail password, and the apps can be limited to SMTP only (so no read access). And in case of compromise you can revoke permissions with per-app granularity.
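For instance, a service's msmtp config using such a password could look like this (addresses and password are placeholders):

```
# ~/.msmtprc: SMTP-only app password, so a compromised service can't read mail
account        fastmail
host           smtp.fastmail.com
port           465
tls            on
tls_starttls   off
auth           on
user           me@example.com
password       app-specific-password-here
from           service@example.com

account default : fastmail
```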

Fastmail offers full CardDAV (contacts) and CalDAV (calendar) access, which makes plugging it into any other app that supports this very easy - their DNS wizard helps you set up the service records. I use “DavX5” on my Android to sync all Contacts and Calendar outside of using the Fastmail app (which is a self contained app on Android, it’s not too bad)

Fastmail has become my contacts app now - it's really great to have all your e-mail and contacts in the same place. The contacts don't even need to have an e-mail address - I have a lot of contacts stored for whom I only have a phone number. I sync to Android using the same DavX5 app and then immediately have these contacts in WhatsApp and Signal.

dr_robot,

I don't think it's just metadata that's leaking, though. I would say it's the entire content of the connection: if the reverse proxy terminates the secure connection, it decrypts the data, which is then available unencrypted on the VPS. Outside of the VPS instance the traffic remains entirely encrypted.

Admittedly this decrypted data is not easy to access - you would need to have root access and be able to capture the traffic from within the VPS. But a VPS provider has this kind of access - as they run the hypervisor, they have direct access to the RAM (and possibly even a much easier way to just log in as root into the VPS itself). I think you do have to trust the VPS provider not to peek into the VPS itself. As long as you're paying for the service, that's probably a safe assumption.

dr_robot,

No, I'm not concerned. This is just a theoretical exercise so that I can understand the trade-offs I'm making.

Edit: The certificate transparency monitoring sounds interesting. Did not know about that.

dr_robot,

If it were just storage/RAM scraping, then that could be solved with SSL pass-through: the reverse proxy would not decrypt the traffic and would forward it, still encrypted, to the home server. I was actually setting that up a few hours ago. However, since the VPS provider owns the IP address of the VPS, they can simply obtain their own certificate for the domain. After all, Let's Encrypt verifies your control of the domain by, for example, your ability to answer a challenge at the IP address the domain resolves to. Therefore, even if the certificates aren't on the VPS, the fact that I am redirecting traffic via their IP address makes me vulnerable to a malicious provider.

The "hobby exercise" was just to indicate that this is not for work and that I'm interested in an answer beyond "you need to trust your provider" which I do :) I agree, these are important questions! And they're also interesting!

dr_robot,

Apparently yes! Based on another comment in this thread: https://certificate.transparency.dev/monitors/.

dr_robot,

You can limit which CA’s will offer certificates for your domain with the CAA record in DNS.

Yea, I already have that.
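For anyone else reading, the record looks like this (domain is a placeholder):

```
example.com.  3600  IN  CAA  0 issue "letsencrypt.org"
```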

You can also at least detect if someone else creates a certificate for your domain if you watch the certificate transparency logs.

Did not know this before today, but now I have it switched on!

dr_robot,

Thanks for the suggestion! That is also doable with Nginx's SSL pass-through. However, it is still vulnerable to the VPS provider obtaining a certificate. But indeed, a combination of redirecting encrypted traffic (SSL pass-through or iptables) with cert monitoring does appear to be emerging as a solution.

BTW, I prefer SSL pass-through over iptables, because I do keep one endpoint on the VPS: my static website, which also needs a cert. With SSL pass-through I can terminate connections to the static website while redirecting all other connections, since Nginx can pre-read the destination domain from the SNI. With iptables I would need two IP addresses to distinguish the connections.
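A sketch of what I mean, using Nginx's stream module with ssl_preread (names and addresses are placeholders):

```nginx
stream {
    # Route by SNI without decrypting: the static site terminates locally,
    # everything else is passed through, still encrypted, to the home server
    map $ssl_preread_server_name $backend {
        www.example.com  127.0.0.1:8443;  # local vhost holding the static site's cert
        default          10.0.0.2:443;    # home server over the tunnel
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
```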

dr_robot,

I originally used this too, but in the end had to write my own Python script that basically does the same thing and is also triggered by systemd. The problem I had was that for some reason Podman sometimes thinks there is a new image, but when it pulls, it just gets the old image. This would then trigger restarts of the containers, because auto-update doesn't check whether it actually downloaded anything new. I didn't want those restarts, so I had to write my own script.

Edit: I do lock the major version manually though, e.g. Nextcloud 27, and check once a month if I need to bump it. I do this manually in case the upgrade needs an intervention.
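The core of the script is just a digest comparison; a simplified sketch (image and unit names are placeholders):

```python
#!/usr/bin/env python3
"""Restart a container's systemd unit only when the pulled image digest changed."""
import subprocess

IMAGE = "docker.io/library/nextcloud:27"
UNIT = "container-nextcloud.service"

def digest() -> str:
    # Ask Podman for the digest of the locally stored image
    out = subprocess.run(
        ["podman", "image", "inspect", "--format", "{{.Digest}}", IMAGE],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

before = digest()
subprocess.run(["podman", "pull", "-q", IMAGE], check=True)
# Unlike podman auto-update, only restart if the pull actually changed anything
if digest() != before:
    subprocess.run(["systemctl", "--user", "restart", UNIT], check=True)
```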
