Dear HN,

I’m excited to showcase a personal project. It has helped me quite a bit with my home lab, and I hope it can help you with yours too! ffmpeg-over-ip has two components: a server and a client. You run the server in an environment with access to a GPU and a locally installed version of ffmpeg; the client only needs network access to the server, and no GPU or ffmpeg locally.

Both the client and the server need a shared filesystem for this to work (so the server can write output to it and the client can read from it). In my use case, SMB works well if your (GPU) server is a Windows machine; NFS works really well for Linux setups.
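
For example (the hostnames and paths here are made up), an NFS setup on the Linux side might look like:

  # on the machine exporting the media (e.g. in /etc/exports on the NAS):
  #   /export/media  192.168.1.0/24(rw,sync)
  # then, on both the client and the GPU server:
  sudo mount -t nfs nas.local:/export/media /media

The paths don't even have to match on both sides, since the config's rewrite rules can translate them.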

This utility can be useful in a number of scenarios:

- You find passing through a (v)GPU to your virtual machines complicated

- You want to use the same GPU for ffmpeg in multiple virtual machines

- Your server has a weak GPU, so you want to use the GPU from your gaming machine

- Your GPU drivers in one OS are not as good as in another (the AMD RX 6400 never worked for me in Linux, but did in Windows)

I’ve posted some instructions in the GitHub package README; please let me know if they are unclear in any way and I’ll try to help!

Here's the link: https://github.com/steelbrain/ffmpeg-over-ip

  • steelbrain 5 hours ago |
    The latest release[1] on GitHub should have binaries for almost every platform combination here. If you don't find a binary for your environment, you can probably just run the JavaScript files directly and it'll be fine.

    If you are wondering why the binaries are so large, it's because they are packaged-up Node.js binaries. I tried to learn a compile-to-native language to rewrite this in, so you wouldn't have to download such bloated binaries, but didn't get far. I learned Swift and still have a WIP branch up for it[2]. I gave up after learning that there's no well-maintained Windows HTTP server for Swift.

    I'm currently on a journey to learn Rust, so maybe one day, when I do, you'll see the binary sizes drop.

    [1]:https://github.com/steelbrain/ffmpeg-over-ip/releases/tag/v3...
    [2]:https://github.com/steelbrain/ffmpeg-over-ip/tree/swift-lang

    • Cyph0n 2 hours ago |
      Go would be a good fit for this kind of application. But Rust is a great choice too.

      Keep up the good work!

  • steelbrain 5 hours ago |
    Here[1] is the original HN comment that inspired me to do a Show HN :)

    [1]:https://news.ycombinator.com/item?id=41205253

    • toomuchtodo 2 hours ago |
      Thank you for the Show HN!
  • steelbrain 5 hours ago |
    There is an existing solution in the community called rffmpeg[1], but it did not work for me. It seemed too heavyweight for what I was trying to do. It requires sudo access, global configuration files (in /etc/), and, most importantly, this, which is a deal-breaker for me:

    > Note that if hardware acceleration is configured in the calling application, the exact same hardware acceleration modes must be available on all configured hosts, and, for fallback to work, the local host as well, or the ffmpeg commands will fail.

    I wanted to mix and match Windows and Linux, and it was clear rffmpeg wasn't going to work for me.

    One plus rffmpeg does have is that it supports multiple target hosts, so it's useful if you want some load-balancing action. You could do the same with ffmpeg-over-ip by selecting the servers dynamically, but rffmpeg does make it easier out of the box.

    [1]:https://github.com/joshuaboniface/rffmpeg

  • leshokunin 3 hours ago |
    Sounds super interesting. Maybe the people currently using Tdarr would prefer something like this. I could also imagine something like Plex or Jellyfin making use of this tech and offloading transcoding. Hope this takes off.
    • steelbrain 3 hours ago |
      Thanks! I developed this primarily for Plex & Jellyfin after struggling with Tdarr myself. For people running Plex/Jellyfin in containers, it's as simple as mounting the client binary at the ffmpeg path (using docker -v) and adding the config somewhere accessible (also using docker -v? lots of options here); see the sketch below.
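
      A rough sketch for Jellyfin (the paths here are assumptions: check your image for the real ffmpeg location, and the README for where the client reads its config):

        # mount the client binary over the container's ffmpeg, plus the config and media
        docker run -d \
          -v /opt/foip/client:/usr/lib/jellyfin-ffmpeg/ffmpeg \
          -v /opt/foip/client-config.json:/etc/ffmpeg-over-ip/config.json \
          -v /media:/media \
          jellyfin/jellyfin
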
      • leshokunin 2 hours ago |
        Maybe you could write guides and post them on the various Synology and self-hosting subreddits. I could see this getting traction.
      • blue_cookeh an hour ago |
        Does this work well with Plex, and if so, which binary are you replacing? Last I looked, they used a customised fork of ffmpeg, which meant replacing it was more awkward. It would be a nice way to avoid passing a GPU through to a virtual machine.
  • ptspts 3 hours ago |
    What problem does it solve?

    How to use it? Do you have example commands?

    How is video data transferred between the client and the server?

    Would it be possible to connect with SSH to a Linux server, using SFTP with FUSE or Samba with port forwarding for file sharing? This way the server could be zero-configuration (except that ffmpeg has to be installed, but the executable can also be transferred over SSH).

    • steelbrain 3 hours ago |
      > What problem does it solve?

      It's going to solve different problems for different people. For me, most recently, I wanted to use the powerful GPU in my gaming machine for transcoding on my Plex server, which only has an integrated GPU.

      > How to use it? Do you have example commands?

      The GitHub repository should have instructions on how to use it. The client usage (once you set up the configuration) is the same as ffmpeg, so anything ffmpeg ... becomes ffmpeg-over-ip-client ... -- you need a server running on the machine with the GPU, and then the client anywhere network-accessible.
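
      For example (the paths and encoder are made up, assuming an NVIDIA card on the server):

        ffmpeg-over-ip-client -i /media/input.mkv -c:v h264_nvenc /media/output.mp4

      The command runs on the client, but the actual transcode happens on the GPU machine; both sides see the files through the shared mount.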

      > How is video data transferred between the client and the server?

      The server and client only exchange commands, stdout/stderr, etc. The transcoded files themselves are transferred over the network mount. The README of the repository has more details, but essentially you'll want a shared filesystem between the two.

      > Would it be possible to connect with SSH to a Linux server, using SFTP with FUSE or Samba with port forwarding for file sharing? This way the server could be zero-configuration (except that ffmpeg has to be installed, but the executable can also be transferred over SSH).

      Configuration should be pretty straightforward, but let me know if you try it and find it difficult. A template configuration file is provided that you can edit to fit your setup. You can absolutely do this with port forwarding, even over the internet, provided the filesystem mount over the network can keep up.
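
      A minimal sketch of that setup (the port and paths are my assumptions, not the project's defaults):

        # forward the ffmpeg-over-ip server port to the local machine
        ssh -N -L 8080:localhost:8080 user@gpu-server &
        # mount the server's media directory locally over SFTP/FUSE
        sshfs user@gpu-server:/media /media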

  • VWWHFSfQ 3 hours ago |
    I used to use dvd::rip [1] (written in Perl), which was sort of a similar concept: deploy transcode jobs onto a cluster of servers accessing a shared filesystem (NFS, SMB, etc.). Worked really well. I think it used gstreamer though. I set up a homelab of a bunch of Pentium 3s that I salvaged from a PC recycler behind my work. They just had a big pile of obsolete computers covered by a tarp. I grabbed a few chassis with working motherboards, and then scrounged around for the best Intel CPUs I could find, and memory sticks. I put together a fun little DVD-ripping factory with those machines.

    [1] https://www.exit1.org/dvdrip/

  • slt2021 2 hours ago |
    This is really not much different from SSHing into your GPU server and running ffmpeg there -- a very roundabout way to execute a remote command on a server.

    I don't mean to discourage you, but it is possible to replace your entire repo with a simple bash alias:

      # ssh appends the remaining arguments to the remote command
      alias ffmpeg-over-ip='ssh myserver ffmpeg'
    • steelbrain 2 hours ago |
      This comment gave me flashbacks to another comment I read a while ago: https://news.ycombinator.com/item?id=9224

      If your use case is solved by an alias, that's really good! I am glad you can use an alias. My use case required a bit more, so I wrote this utility and am sharing it with my peers.

      • slt2021 2 hours ago |
        Dropbox had a cutting-edge file synchronization algorithm; they solved the problem of syncing large files over unreliable networks. There was clear engineering IP they developed. (https://dropbox.tech/infrastructure/rewriting-the-heart-of-o...)

        I looked over your source code and just saw a bash wrapper with a web server, so no significant IP. Potential innovations, like distributed transcoding or sharding/partitioning the transcoding pipeline for speed-ups, are missing.

        It's just a bash wrapper; that's why I commented about the bash alias.

        I don't mean to sound like a jerk, but I was honestly looking for some innovation around ffmpeg.

        • steelbrain 2 hours ago |
          No offense taken. I appreciate you explaining your message further.

          There's no significant IP in this utility; it's something I wrote for a use case, and it works well for that use case. I ran the server side on a Windows machine, and I did not want to set up a full-blown SSH server and expose it over the network for this use case.

          Another thing was logging. The way logging is currently set up really hits the sweet spot of debuggability for me. Lastly, the rewrites: I've used the config to rewrite incoming codecs to something the machine supports.

          This is a purpose-built utility that does one job and IMO does it fairly well. It's definitely not as complex as Dropbox, but also not as simple as an ssh alias. I appreciate you sharing the alias code (not just the comment), so if some of our peers have use cases that it can solve, they are welcome to use it as well!

      • KolmogorovComp 2 hours ago |
        > My usecase required a bit more so I wrote this utility and am sharing it with my peers

        Can you expand on that?

        • steelbrain 2 hours ago |
          For sure! One piece of software I was working with hardcoded which codecs it would use based on the operating system it was running on. The rewrites section of the configuration handles more than just file paths; I've used it to rewrite incoming codec requests.
    • amelius 2 hours ago |
      I suppose this only works if you have some shared filesystem. Or does this work with piping too?
      • slt2021 2 hours ago |
        The original poster's project also requires a shared filesystem.

        As for the bash-ssh solution, you don't need a shared FS if you don't need intermediate results. You can use scp to get the final result after transcoding has finished. Something like:

          alias ffmpeg-over-ip='ssh myserver ffmpeg'
          alias download-results='scp "myserver:/tmp/output/*" .'

          ffmpeg-over-ip <args> /tmp/output/out.mkv && download-results

        My meta point is: before engineering something in a programming language, hand-rolling web servers with auth and workers, first try to implement your system with bash scripts.

        Martin Kleppmann built an entire database using just a couple of bash functions in his book "Designing Data-Intensive Applications".
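
        From memory, his example is roughly:

          db_set () { echo "$1,$2" >> database; }
          db_get () { grep "^$1," database | sed -e "s/^$1,//" | tail -n 1; }

        which is enough to append key-value pairs and read the latest value for a key back.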

    • asveikau an hour ago |
      I would add screen or tmux to that, because you may run a long job that you want to get back to after a connection drop.
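
      For example (session name made up):

        ssh -t myserver tmux new-session -A -s transcode

      The -A flag attaches to the session if it already exists, so after a dropped connection you just re-run the same command.
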
  • jauntywundrkind 2 hours ago |
    Not to steal thunder (nice! Well done!), but this also reminded me to go check in on https://kyber.media (currently a landing page), an ffmpeg streaming project from Mr. ffmpeg himself (I think?), Jean-Baptiste Kempf. He had a LinkedIn update two weeks ago mentioning the effort! Yay! https://www.linkedin.com/posts/jbkempf_playruo-the-worlds-fi...

    Submission from 6 months ago, https://news.ycombinator.com/item?id=39929602 https://www.youtube.com/watch?v=0RvosCplkCc

    • steelbrain 2 hours ago |
      Very cool! Thank you for sharing!
  • qwertox 2 hours ago |
    IDK, this lacks a lot of examples and explanation of what exactly it is for. Is it for remote transcoding only?

    Because if so, the word transcoding appears neither in this Show HN nor in the GitHub README.

    And I can't think of any other use for this than to perform hardware-assisted transcoding on a remote machine.

    Apparently it has nothing to do with OpenGL or CUDA, which are the primary uses for a GPU. And ffmpeg itself has more use cases than just transcoding files.

  • NavinF 2 hours ago |
    > need a shared filesystem for this to work

    Oh oof. I thought removing that requirement would be the whole point of something named "FFmpeg-over-IP". A shared filesystem usually involves full trust between machines, bad network error handling, and setting things up by hand (different config on every distro).

    • steelbrain an hour ago |
      I hear you. If your use case doesn't require live streaming of the converted file, a sibling comment may fit it: https://news.ycombinator.com/item?id=41745593
      • NavinF an hour ago |
        Ah, unfortunately my use case is similar to yours: use a Windows desktop to transcode files stored on a Linux NAS. My files are ~100GB, so encoding multiple files in parallel would waste a lot of space and unnecessarily burn write cycles.
        • steelbrain an hour ago |
          FWIW, you can run an SMB server from within a Docker container (on the Linux side). I forget which one I used, but it makes the setup painless, and you can configure different auth strategies as well. Network errors (a little bit of packet loss) are generally handled by the underlying OS, and in the case of Windows, it can use multiple network paths simultaneously to give you the aggregate bandwidth.
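
          For example (naming one popular image here, not necessarily the one I used; the share/user flags are from memory, so double-check them against the image's docs):

            # serve /media as an SMB share named "media" with one user
            docker run -d --name samba -p 445:445 \
              -v /media:/share \
              dperson/samba -u "user;pass" -s "media;/share;yes;no;no;user"
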
  • Am4TIfIsER0ppos an hour ago |
    I can encode faster than I can upload. Might be useful if you have gigabit to a computer more powerful than the one in your home.