I can’t think of anything that specifically uses ssh, but Syncthing would do this, though for passwords I’m more inclined towards bitwarden.
With this concept in mind, I recently put together a VDI setup for someone who spends half the year in one location and half in another. The idea is he’ll have a thin client at each location and connect to the same session wherever he is.
I’m doing this via a VM on Proxmox and SPICE. Maybe there’s some idea in there you could use.
In that case, I’m sure you’ll enjoy it. I’ve been reading a little bit before I go to bed and learning a lot that I glossed over when I set up my own mail server years ago. He and Allan Jude wrote some ZFS books as well that I keep coming back to, picking up new tricks each time.
I get pretty much anything Michael Lucas writes. The information is always great and his writing style is fun to read.
Important to note: it’s not a step-by-step guide you can copy and paste to have a mail server running. It’s all about understanding everything that goes into it.
Hey, sorry for the late response—I missed the reply coming in.
I like docker volumes for multiple nodes because there’s no guarantee that multiple systems will have the same directory structure to bindmount, but moving volumes between nodes is relatively straightforward config-wise, which is a reason you’d use them in k8s.
As for latency in streaming: I think of latency-sensitive operations as mostly things that need two-way communication. So, for example, if you wanted to play a game over a network, you’d need the controls to respond to your input immediately. Or if you’re making a VoIP call, you’d want the two sides of the conversation to be in sync. On the other hand, a video stream doesn’t typically download in real time. The file fills a buffer on your computer ahead of you watching it. So the downloading isn’t happening synchronously with your watching unless there’s a serious network bottleneck.
I can’t think of a way offhand to match your scenario, but I’ve heard ideas suggested that come close. This is exactly the type of question you should ask at practicalzfs.com.
If you don’t know it, that’s Jim Salter’s forum (author of sanoid and syncoid) and there are some sharp ZFS experts hanging out there.
I’ve only ever tinkered with openmediavault, so I’m by no means an expert, but there is a ZFS plugin available. Here’s a forum post that may help: https://forum.openmediavault.org/index.php?thread/7633-howto-instal-zfs-plugin-use-zfs-on-omv/
That `fruit` argument is so that Samba plays nicely with Apple’s SMB client implementation.
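For reference, here’s a minimal sketch of what that looks like in a share definition (the share name and path are made up; the `fruit` options shown are common choices, not the only valid ones):

```ini
[media]
   path = /srv/media
   # vfs_fruit improves compatibility with macOS SMB clients
   # (Finder metadata, resource forks, etc.); it should be
   # loaded together with streams_xattr
   vfs objects = catia fruit streams_xattr
   fruit:metadata = stream
   fruit:model = MacSamba
```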
I don’t think there’s a right answer for most of these, but here are my thoughts.
Data: I almost always prefer bind mounts. I find them easier to manage for data that I’ll need to deal with (e.g. with backups). Docker volumes make a lot of sense to me when you start dealing with multiple nodes and central management, where you want to move containers between nodes (like a swarm).
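As a sketch, the two approaches side by side in a compose file (the image, paths, and volume name here are just illustrative):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      # bind mount: a host path I control, easy to back up directly
      - /srv/media:/media:ro
      # named volume: Docker-managed, easier to move with the
      # container when you're juggling multiple nodes
      - jellyfin-config:/config

volumes:
  jellyfin-config:
```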
Cache: streaming video isn’t super latency sensitive, so I can’t think of a need for this type of caching. With multiple users hitting the web interface all the time it might help, but I think I’d do that caching in my reverse proxy instead.
User: I don’t use the *arr stack, but I’d imagine that suite of applications and Jellyfin all need to handle the same files, so I’d be inclined to use the same user (or at least group) on all of them.
DLNA: this is a feature I don’t make much use of, but it allows for Jellyfin to serve media to devices that don’t run a Jellyfin client. It’s an open standard for media sharing among local devices. I don’t think I would jump through any hoops for it unless you have a use, but the default setup won’t get in your way.
Hope that helps a little.
That will be totally doable, but there’s no one way to set up every service. Some you’ll install from the repository (like nginx or HAProxy or samba). Others you’d have to clone from git (like netbox or dokuwiki). Others have entirely different methods. So, unfortunately, it’ll be a lot of reading the documentation.
In general, I prefer unprivileged LXC to a full VM unless there’s some specific requirement that countermands that preference (like running an appliance or a non-Linux OS).
What I tend to do is create a new container for each service (unless there’s a related stack). If the service runs on Docker, I’ll install that right inside the container and manage it with docker compose. By installing Docker directly from get.docker.com instead of the built-in packages, it pretty much works all the time.
Since each service is in its own container, restoring backups is pretty service-specific. If you wanted some kind of central control plane for docker, you could check out swarm mode.
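For what it’s worth, the Docker install inside each container is just the convenience script (worth reviewing the script before piping anything to a shell):

```shell
# Inside the unprivileged LXC container:
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

# After that, each service is just a compose file:
# docker compose up -d
```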
In my state (Vermont), the Secretary of State has an RSS feed that basically presents the results as an XML file. I’m using that to make some local results spreadsheets. Could be other states have similar things.
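The parsing side is pretty simple with the standard library. Here’s a sketch assuming per-candidate elements in the XML; the tag and attribute names below are made up, so you’d adjust them to the real feed’s schema:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical feed structure -- the real schema will differ.
SAMPLE = """\
<results>
  <race name="Governor">
    <candidate name="Smith" votes="1200"/>
    <candidate name="Jones" votes="980"/>
  </race>
</results>
"""

def feed_to_rows(xml_text):
    """Flatten race/candidate elements into spreadsheet rows."""
    root = ET.fromstring(xml_text)
    rows = []
    for race in root.iter("race"):
        for cand in race.iter("candidate"):
            rows.append((race.get("name"),
                         cand.get("name"),
                         int(cand.get("votes"))))
    return rows

def rows_to_csv(rows):
    """Render rows as CSV text ready to drop into a spreadsheet."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["race", "candidate", "votes"])
    writer.writerows(rows)
    return buf.getvalue()

print(rows_to_csv(feed_to_rows(SAMPLE)))
```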
I do this with HAProxy and keepalived. My DNS servers resolve my domains to a single virtual IP that keepalived manages. If one HAProxy node goes down, the other picks right up.
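The keepalived side is small. A sketch of one node’s config, where the VIP, interface, and password are all placeholders:

```
vrrp_instance HAPROXY_VIP {
    state MASTER            # BACKUP on the second node
    interface eth0
    virtual_router_id 51
    priority 100            # set lower on the backup node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.0.2.10/24       # the VIP your DNS records point at
    }
}
```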
And this is one of the few things I’ve got set up with Ansible, so deploying and making changes is pretty easy.