Hi programmers,

I work from two computers: a desktop and a laptop. I often interrupt my work on one computer and continue on the other, where I don’t have access to the uncommitted progress from the first. Frustrating!

Potential solution: use git to auto-save progress.

I’m posting this to get feedback. Maybe I’m missing something and this is overcomplicated?

Here is how it could work:

Creating and managing the separate branch

Alias git commands (such as git checkout) so that I am always on a branch called “[branch]-autosave”, where [branch] is the branch I intend to be on; the autosave branch always branches from it, and if it doesn’t exist it is created.
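A minimal sketch of that wrapper, assuming a hypothetical shell function gco standing in for the aliased git checkout:

```shell
#!/bin/sh
# Hypothetical wrapper standing in for an aliased `git checkout`:
# always land on the matching autosave branch, creating it from the
# base branch if it does not exist yet.
gco() {
    base="$1"
    autosave="${base}-autosave"
    # Create the autosave branch off the base branch if it is missing.
    if ! git show-ref --verify --quiet "refs/heads/${autosave}"; then
        git branch "${autosave}" "${base}"
    fi
    git checkout "${autosave}"
}
```

Running gco main would then leave you on main-autosave, branched from main.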

Handling commits

Whenever I make a real commit, the autosave branch would be squash-merged into the underlying branch.
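The squash step could look roughly like this, wrapped in a hypothetical finalize_commit function (the branch-naming scheme is the one described above):

```shell
#!/bin/sh
# Sketch of the "real commit" step: squash everything on the autosave
# branch into one commit on the base branch, then re-point the autosave
# branch at the new tip. The function name is hypothetical.
finalize_commit() {
    base="$1"
    msg="$2"
    autosave="${base}-autosave"
    git checkout "${base}"
    git merge --squash "${autosave}"       # stage autosave work as one change
    git commit -m "${msg}"                 # one clean commit on the base branch
    git branch -f "${autosave}" "${base}"  # autosave now branches from the new tip
    git checkout "${autosave}"
}
```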

Autosave functionality

I use neovim as my editor, but this could work for other editors.

I will write an editor hook that will always pull the latest from the autosave branch before opening a file.

Another hook will commit and push to origin whenever a file is saved from the editor.

This way, when I get on any of my devices, it will sync the changes pushed from the other device automatically.
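The two hooks could call small shell helpers like these (function names are hypothetical; an editor such as neovim would invoke them from its open and save events):

```shell
#!/bin/sh
# Hypothetical helpers an editor hook could call (e.g. neovim
# autocommands on opening and on writing a buffer).

# Before opening a file: pull the latest autosave state from origin.
autosave_pull() {
    git pull --rebase origin "$(git rev-parse --abbrev-ref HEAD)"
}

# After saving a file: commit everything and push it to origin.
autosave_push() {
    git add -A
    git commit --quiet -m "autosave: $(date -u +%Y-%m-%dT%H:%M:%SZ)" || true
    git push origin "$(git rev-parse --abbrev-ref HEAD)"
}
```

The `|| true` just swallows the error when there is nothing new to commit, so a no-op save doesn’t abort the hook.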

Please share your thoughts.

  • demesisx@infosec.pub
    28 days ago

    I do this on NixOS. I have a NAS at home where I store most of the files I work on. My computers are internally immutable and almost all the files that change reside solely on the NAS as NFS shares. All of my computers are configured to auto-mount one of its folders at boot. NixOS sees that as an internal drive.
    Then I simply navigate to the project folder, where I have a flake and a .envrc file containing the command “use flake .”, which makes direnv use Nix to provision the dependencies automatically. Whenever I save, those changes are reflected on all computers.

    I also like to version control everything with git, and this method allows that transparently.

    The only part I’m missing is getting permissions to align between all the computers accessing that same folder. Sometimes I have to create a temp folder and use rsync to keep it up to date with any changes. If anyone has any pointers, I’m all ears. It rarely gets in my way but does rear its head sometimes. Otherwise, this setup is perfect when I’m at home.

    • leetnewb@beehaw.org
      25 days ago

      I use rclone to mount the Linux NAS from my Linux and Windows computers - SFTP backend is usually fine. Then I am uniformly reading/writing the NAS files as the local NAS user.
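Assuming an rclone remote named nas has already been set up with rclone config, the mount might look like this (remote name and paths are illustrative):

```shell
# Sketch: mount the NAS over SFTP with rclone and cache writes locally.
rclone mount nas:projects ~/nas-projects \
    --vfs-cache-mode writes \
    --daemon
```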

  • NegativeLookBehind@lemmy.world
    28 days ago

    Write code on a machine you can remote into from each computer? Fewer commits, possibly fewer reverts, less chance of forgetting to git pull after switching machines…idk.

    • matcha_addict@lemy.lolOP
      28 days ago

      I have considered this approach, but there are several things I had issues with.

      • There is still a degree of latency. It’s not a deal breaker, but it is annoying.
      • Clipboard programs don’t work: they copy to the remote host’s clipboard. I bet there’s a solution to this, but I couldn’t find it in the limited time I spent looking.
      • In the rare case the host is unreachable, I am kinda screwed. Not a deal breaker since it’s rare, but the host has to be always on, whereas the git solution only requires it to be on when it syncs.

      To address the issues you brought up:

      • fewer commits: this would be resolved by squashing every time I make a real commit; the autosave commits will be wiped. If I really hated commits, I could just amend instead of committing, but I’d rather have the history.
      • forgetting to git pull: the hooks I described will take care of that, so I won’t ever have to worry about forgetting again.
      • Strykker@programming.dev
        24 days ago

        Your git solution still has all of these issues, as you need the git server to be alive. For number 3, use something like rsync to keep a local copy that is backed up, if you are concerned about the file share being offline.

        • matcha_addict@lemy.lolOP
          24 days ago

          I don’t need the client computers to be alive, only the central server (which could be github.com for example, so not even a server I manage).

  • GetOffMyLan@programming.dev
    28 days ago

    Honestly, I would just commit your in-progress work, then on the other machine check it out and reset to the previous commit.

    Then you have your in-progress work on the new machine with no random commits.

    You could set up an alias that commits with the message “switching machines” and pushes.

    Similarly, have one that pulls and resets.
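Those two aliases might be sketched like this (the names park and unpark are made up; the commit message is the one suggested above):

```shell
# "park" saves and pushes in-progress work; "unpark" pulls it on the
# other machine and drops the temporary commit, keeping the changes
# uncommitted in the working tree.
git config --global alias.park '!git add -A && git commit -m "switching machines" && git push'
git config --global alias.unpark '!git pull && git reset HEAD~1'
```

Then it is git park before leaving one machine and git unpark after sitting down at the other.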

  • Kissaki@programming.dev
    28 days ago

    I would consider four approaches.

    1. Commit and push manually and deliberately

    I commit changes early and often anyway. I also push regularly, seeing the remote as a safe and remote (as in backup) baseline and reference state.

    The question would be: do I switch while I’m still exploring things in the workspace, without committing before moving away, and do I want those changes on the other PC? If so, this would not be enough.

    2. Auto-push all local git references into a separate space on the git remote

    Git branches are refs, commit pointers, just like other refs are. And they can be put under arbitrary paths. refs/heads/ holds branches. I can replicate and regularly update all my branches under refs/pcreplica/laptop/*. And then on the other PC, list or fetch those, individually, or all of them, regularly automatically, or manually.

    git push origin 'refs/heads/*:refs/pcreplica/laptop/*'
    git ls-remote origin
    git fetch origin 'refs/pcreplica/laptop/*:refs/laptop/*'
    

    3. Auto-push the/a local branch like you suggested

    My concern here would be: is only one branch enough? Is only the current branch enough?

    4. Remoting into the other system

    Are the systems both online? Can I remote into / connect into it when need be?

  • MajorHavoc@programming.dev
    28 days ago

    I set that up, once. It went poorly for me. Git behaves much better, for me, when used thoughtfully and manually.

    What I now do instead is work on certain projects on an SSH-accessible host. This gives the same benefit of having my last state easily accessible, without causing noise in my development tools such as git.

  • Matty_r@programming.dev
    28 days ago

    I used to have a similar situation; I used VS Code remote development to effectively work from any machine. Another thing I tried was using Nextcloud to watch the working directory, which automatically synchronized files when they changed.

  • Nighed@feddit.uk
    24 days ago

    For my personal projects I somehow ended up with git in a OneDrive-synced folder: it carries over the general changes, and then I explicitly commit and push to get things to GitHub etc.

  • xmunk@sh.itjust.works
    28 days ago

    Git doesn’t need to have a single pull source. It’s probably worth just configuring the visibility on each machine so you can do peer pulls.

    I don’t hate the idea of autocommitting in that scenario, though.
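A peer pull like that might be sketched as a small helper (the function name, remote name, and example URL are all illustrative; any URL git understands, ssh:// or a local path, works):

```shell
#!/bin/sh
# Sketch: fetch a branch straight from a peer machine's repo instead
# of a central remote.
peer_pull() {
    url="$1"
    branch="$2"
    # Register (or update) the peer as a remote, then fetch and merge.
    git remote add peer "$url" 2>/dev/null || git remote set-url peer "$url"
    git fetch peer
    git merge "peer/${branch}"
}
# e.g. peer_pull ssh://user@laptop/home/user/src/project main
```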