This is like IT security 101.
> A quick IP search revealed that it was registed with a Chinese ISP and located in the Russian Federation.
Welcome to the Internet. Follow me. Always block China and Russia from any of your services. You are never going to need them and the only thing that can come from them is either a remote attack or unauthorized data sent to them.
But does this add much security if bad actors are using VPNs?
At any rate, blocking China and Russia isn't ever presented as a foolproof defence, it just raises the barrier to attack.
Either way, nothing is foolproof. It is part of a solid defense in depth [1].
[1] https://en.m.wikipedia.org/wiki/Defense_in_depth_(computing)
The guy doesn't have a software background.
This is basically the problem with the 'everyone-should-code' approach - a little knowledge can be dangerous.
This prompts a reflection that, as an industry, we should do a better job of providing solid foundations.
When I check tutorials on how to drill in the wall, there is (almost) no warning about how I could lose a finger doing so. It is expected that I know I should be careful around power tools.
How do we make some information part of the common sense? "Minimize the surface of exposure on the Internet" should be drilled into everyone, but we are clearly not there yet.
As a society we already have the structures set up: the author would have been more than welcome to attend a course or a study programme in server administration that would prepare them to run their own server.
Even I wouldn't venture into exposing a server to the internet and maintaining it in my free time, and that is with a postgraduate degree in an engineering field and more than 20 years of experience.
It's not hackable though in the original sense of the word, so not interesting to the crowd at HN. Docker is, for everybody, good and bad.
I definitely believe it is for everyone to have a NAS server, a home assistant, or a NUC set up to run some docker containers.
Just don't let them accept connections from the internet.
For most normal home setups it is actually super hard to make them accept incoming requests, as you need to set up port forwarding or put the server in front of your router.
The default is that the server is not reachable from the internet.
I find this to be extremely sad.
Unlike welding or diving, there is no inherent physical risk to life and limb in running a server. I should be able to stand up a server, leave it running unattended and unadministered, and then come back to it 20 years later to find it happily humming along unpwned. The fact that this isn't true isn't due to any sort of physical inevitability; it's just because we, the collective technologists, are shit at what we do.
The issue outlined in the article happened because the author deliberately opened their service to the public internet. Replacing Linux with FreeBSD wouldn't have prevented the compromise.
The whole point of my comment is that it's only "ridiculous" because of path dependency and the choices that we have made. There's no inherent need for this to be true, and to think otherwise is just learned helplessness.
Sandstorm has been part of my self-hosted stack since it was a start-up, and it has worked for a decade with virtually zero attention, and no exploits I am aware of.
If there are other hosted apps that want a really easy on-ramp for new users: packaging for sandstorm is an easy way to create one.
I don’t think imperfection is a choice we’ve made. I think imperfection is part of our nature.
That said, the current state of software development is absolutely a choice, and a shockingly poor one in my opinion.
> stand up a server and leaving it running, unattended and unadministered
as opposed to what my proposition was: maintaining a server with active access from the internet.
Just what you describe I do myself: I have several home servers running, but none accept incoming connections from the internet, and the attack surface is much smaller.
good news! there is no inherent risk to life or limb because you left your server exposed. As OP discovered, you might come back to find it running a crypto miner. and that's just really not that big of a deal. maybe we're not all shit at what we do, but rather we have appropriately valued the seriousness of the risks involved, and made the decision that locking everything down to be impossible to hack isn't actually worth the trade-offs to usability, convenience, and freedom.
you can leave your iPad running, unattended, and unadministered for 20 years if that's what you wanted, and come back to find it un-pwned.
The best way to learn is to do. Sure, you might make some mistakes along the way, but fuck it. That’s how you learn.
Most general guides regarding Docker, on the other hand, mention not to expose containers directly to the internet, and if a container has to be exposed, to do so behind a reverse proxy.
I see this mentioned everywhere in the comments here, but they seem to miss that the author explicitly wanted it to be exposed, and the compromise would have happened regardless of whether the traffic went directly to the container or via a reverse proxy.
The proper fix for OP is to learn about private networks, not put a reverse proxy in front and still leave it running on the public internet...
I think the analogy and the example work better when the warning is that you should be careful when drilling in walls because there may be an electrical wire that will be damaged.
We didn't make the guides better, we made the tradespeople make it so any novice can't burn down the house by not following a poorly written tutorial.
Slightly pedantic point of order: you mean to say every stud finder for sale these days, not in existence, for the old stud finders still exist.
Okay, that's all. Carry on.
Best to just say "most" or "some" to cover all corner cases :)
That sucks, because that means anything built not to that standard (which I guess is a US one?) could lead the person to hurt themselves/the house.
One doesn't exclude the other, and most likely both are needed if you're aiming to actually eliminate the problem as well as you can.
Here is the fundamental confusion: programming is not an industry, it is a (ubiquitous) type of tooling used by industries.
Software itself is insecure in its tooling and in its deployment. So we now have a security industry struggling to improve software.
Some software companies are trying to improve but software in the $cloud is just as big a mess as software on work devices and personal devices.
Things that can potentially cause big trouble, like circular saws, have pretty elaborate protection mechanisms built in. Railings in high places, seatbelts, and safety locks come to mind, along with countless unmentioned ones protecting those who pay attention and those who don't alike. Of course, decades of serious accidents prompted these measures, and mostly it is regulated now rather than being a courtesy of the manufacturer; other industries matured to this level. Probably the IT industry still needs some growing up, and fewer children playing adults - some kicking in the ass for making such rubbishly dangerous solutions. Less magic, more down-to-earth reliability.
However, security is hard and people will drop interest in your project if it doesn't work automatically within five minutes.
The hard part is at what experience level the warnings can stop. Surely developer documentation doesn't need the "docker exposes ports by default" lesson repeated every single time, but there are a _lot_ of "beginner" tutorials on how to set up software through containers that ignore any security stuff.
For instance, when I Google "how to set up postgres on docker", this article was returned, clearly aimed at beginners: https://medium.com/@jewelski/quickly-set-up-a-local-postgres... This will set up an easily guessable password on both postgres and pgadmin, open to the wider network without warning. Not so bad when run on a VM or Linux computer, quite terrible when used for a small project on a public cloud host.
The problems caused by these missing warnings are almost always the result of lacking knowledge about how Docker configures its networks, or how (Linux) firewalls in general work. However, most developers I've met don't know or care about these details. Networking is complicated beyond the bare basics and security gets in the way.
With absolutely minimal impact on usability, all those guides that open ports to the entire internet can just prepend 127.0.0.1 to their port definitions. Everyone who knows what they're doing will remove them when necessary, and the beginners need to read and figure out how to open ports if they do want them exposed to the internet.
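In a compose file that is literally just a prefix on the mapping (a minimal sketch; the image and port are arbitrary):

    services:
      db:
        image: postgres:16
        ports:
          # was: "5432:5432"  (binds 0.0.0.0, i.e. every interface)
          - "127.0.0.1:5432:5432"   # reachable from the host only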
> why didn't somebody stop me?!
I'm not sure if "the industry" has a problem with relaying the reality that: the internet is full of malicious people that will try to hack you.
My takeaway was closer to: the author knew better but assumed some mix of 1) no one would provide incomplete information, 2) I'm not a target, and 3) containers are magic and are safe. I say that because they admit as much immediately following.
> Ofcourse I password protected it, but seeing as it was meant to be temporary, I didn't dive into securing it properly.
But I sympathize with OP. He is not a developer and it is sad that whatever software engineers produce is vulnerable to script kiddies. Exposing a database or any server with a good password should not be exploitable in any way. C and C++ have been failing us for decades, yet we continue to use such unsafe stacks.
I'm not sure — what do C and C++ have to do with this?
Of course all languages can produce insecure binaries, but C/C++ buffer overflows and similar vulnerabilities are likely what AlgebraFox refers to.
I'm aware of that, but the C/C++ thing seemed more like a rant, hence my question.
I've searched up the malware and it doesn't seem to use memory exploitation. Rust is not going to magically protect you against any security issue caused by cloud misconfiguration.
Well, even when these exposed services are not built to cause harm or provide admin privileges, like all software they tend not to be memory safe. This gives a lucky attacker a way in from just exposing a single port on the network. I can see where comments on memory-unsafe languages fit in here, although vulnerabilities such as XSS also apply no matter what language we build software with.
If I'm not mistaken, there's a self-hosted alternative that lets you run the core of Tailscale's service yourself if you're interested in managing WireGuard.
The main reason I haven't jumped into hosting wireguard rather than using Tailscale is mainly because I reach for Tailscale to avoid exposing my home server to the public internet.
It works over UDP so it doesn't even send any acknowledgement or error response to unauthenticated or non-handshake packets.
Tailscale allows you to connect to your home network without opening a port to allow incoming connections.
You could write a similar rant about any development stack and all your rants would be 100% unrelated with your point: never expose a home-hosted service to the internet unless you seriously know your shit.
Ever wondered why Windows still dominates Linux on the average PC desktop? This is why.
Should the recommendation rather be "don't expose anything from your home network publicly unless it's properly secured"?
> This was somewhat releiving, as the latest change I made was spinning up a postgres_alpine container in Docker right before the holidays. Spinning it up was done in a hurry, as I wanted to have it available remotely for a personal project while I was away from home. This also meant that it was exposed to the internet, with open ports in the router firewall and everything. Considering the process had been running for 8 days, this means that the infection occured just a day after creating the database. None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet. Ofcourse I password protected it, but seeing as it was meant to be temporary, I didn't dive into securing it properly.
Seems like they opened up a postgres container to the Internet (IIRC docker does this whether you want to or not, it punches holes in iptables without asking you). Possibly misconfigured authentication or left a default postgres password?
Yes, but so what? Getting access to a postgres instance shouldn't allow arbitrary execution on the host.
> IIRC docker does this whether you want to or not, it punches holes in iptables without asking you
Which is only relevant if you run your computer directly connected to the internet. That's a dumb thing to do regardless. The author probably also opened their firewall or forwarded a port to the host, which Docker cannot do.
> it was exposed to the internet, with open ports in the router firewall
Upvoted because you're right that the comments in this thread have nothing to do with what happened here.
The story would have been no different if OP had created an Alpine Linux container and exposed SSH to the internet with SSH password authentication enabled and a weak password.
It's nothing to do with Docker's firewalling.
What? The story would have been VERY different, obviously that's asking for trouble. Opening a port to your database running in a docker container is not a remote execution vulnerability, or if it is, the article is failing to explain how.
See https://www.postgresql.org/docs/current/sql-copy.html#id-1.9...
Specifically the `filename` and `PROGRAM` parameters.
And that is documented expected out of the box behaviour without even looking for an exploit...
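To spell it out (a sketch of the documented behaviour, not necessarily the exact commands used in this attack): any superuser, or member of pg_execute_server_program, can do something like

    -- runs the command as the OS user the server runs as (often "postgres",
    -- or even root in a badly built container) and pulls the output into SQL
    CREATE TEMP TABLE cmd_output (line text);
    COPY cmd_output FROM PROGRAM 'id';
    SELECT * FROM cmd_output;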
If the break-in happened as you explain, the article would also mention that:
* the attacker gained access to the postgres user or equally privileged user
* they used specific SQL commands to execute code
* would have not claimed the vulnerability was about docker containers and exposed ports
And the takeaway would not be "be careful with exposing your home server to the internet", but would be "anyone with admin privileges to postgres is able to execute arbitrary code".
The article never properly explains how the attack happened. Having a port exposed to the internet on any container is a remote execution vulnerability? What? How? Nobody would be using docker in that case.
The article links to a blog post as a source on the vulnerability, but the linked post is a general "how to secure" article; there is nothing about remote code execution.
In this case, it seems like Docker provided a bit of security by keeping the malware sandboxed in the container, as opposed to infecting the host (which would have been the case had the user just run the DB on bare metal and opened the same ports).
Also, had it been a part of the host distro, postgres may have had selinux or apparmor restrictions applied that could have prevented further damage apart from a dump of the DB...
OP explicitly forwarded a port in Docker to their home network.
OP explicitly forwarded their port on their router to the Internet.
OP may have run Postgres as root.
OP may have used a default password.
OP got hacked.
Imagine having done these same steps on a bare metal server.
1. postgres would have a sane default pg_hba disallowing remote superuser access.
2. postgres would not be running as root.
3. postgres would not have a default superuser password, as it uses peer authentication by default.
4. If run on a redhat-derived distro, postgres would be subject to selinux restrictions.
And yes, all of these can be circumvented by an incompetent admin.
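For reference, a stock distro pg_hba.conf looks roughly like this (paraphrased; exact methods vary between distros and Postgres versions):

    # TYPE  DATABASE  USER      ADDRESS       METHOD
    local   all       postgres                peer
    local   all       all                     peer
    host    all       all       127.0.0.1/32  scram-sha-256
    host    all       all       ::1/128       scram-sha-256
    # no "host ... 0.0.0.0/0" line, so remote connections are rejected outright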
- Missing feature: do not connect when on certain SSIDs.
- Bug: when the WG connection is active and I put my phone in flight mode (which I do every night), it drains the battery from full to almost empty during the night.
I'm very surprised by this omission as this feature exists on the official iOS client.
Plus, you can obfuscate that too by using a random port for Wireguard (instead of the default 51820): if Wireguard isn't able to authenticate (or pre-authenticate?) a client, it'll act as if the port is closed. So, a malicious actor/bot wouldn't even know you have a port open that it can exploit.
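On the server that is a one-line change in the wg-quick config (a sketch; the port, addresses and keys are placeholders):

    [Interface]
    # anything other than the default 51820
    ListenPort = 48213
    PrivateKey = <server-private-key>
    Address = 10.8.0.1/24

    [Peer]
    PublicKey = <client-public-key>
    AllowedIPs = 10.8.0.2/32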
The only thing missing on the client is Split DNS. With my IPSec/IKEv2 setup, I used a configuration profile created with Apple Configurator plus some manual modifications to make DNS requests for my internal stuff go through the tunnel and DNS requests for everything else go to the normal DNS server.
My compromise for WireGuard is that all DNS goes to my home network but only packets destined for my home subnets go through the tunnel.
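In wg-quick terms that compromise lives entirely in the client config: DNS points at the home resolver, while AllowedIPs is limited to the home subnets (all addresses and names here are placeholders):

    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.8.0.2/32
    # all DNS goes to the resolver at home...
    DNS = 192.168.1.1

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = home.example.org:51820
    # ...but only traffic for the home subnets uses the tunnel
    AllowedIPs = 192.168.1.0/24, 10.8.0.0/24
    PersistentKeepalive = 25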
Whilst it's easy to point to the common sense needed, perhaps we need to have better defaults. In this case, the Postgres images should only permit the CLI, and nothing else.
This doesn't make any sense. Running something in a container doesn't magically make it "secure." Where does this misconception come from?
When docker first appeared, a lot of people explaining docker to others said something along the lines "It's like a fast VM you can create with a Dockerfile", leading a bunch of people to believe it's actually not just another process + some more stuff, but instead an actual barrier between host/guest like in a proper VM.
I remember talking about this a lot when explaining docker to people in the beginning, and how they shouldn't use it for isolation, but now after more than a decade with that misconception still being popular, I've lost energy about it...
I usually expose ports like `127.0.0.1:1234:1234` instead of `1234:1234`. As far as I understand, it still punches holes this way but to access the container, an attacker would need to get a packet routed to the host with a spoofed IP SRC set to `127.0.0.1`. All other solutions that are better seem to be much more involved.
You can explain this to them, they don't care, you can even demonstrate how you can access their data without permission, and they don't get it.
Their app "works" and that's the end of it.
Ironically enough even cybersecurity doesn't catch them for it, they are too busy harassing other teams about out of date versions of services that are either not vulnerable, or already patched but their scanning tools don't understand that.
Sysadmins were always the ones who focused on making things secure, and for a bunch of reasons they basically don’t exist anymore.
EDIT: what guidelines did I break?
I guess it's fine if you get rid of sysadmins and have dev splitting their focus across dev, QA, sec, and ops. It's also fine if you have devs focus on dev, QA, code part of the sec and sysadmins focus on ops and network part of the sec. Bottom line is - someone needs to focus on sec :) (and on QAing and DBAing)
True, but over the last twenty years, simple mistakes by developers have caused so many giant security issues.
Part of being a developer now is knowing at least the basics on standard security practices. But you still see people ignoring things as simple as SQL injection, mainly because it's easy and they might not even have been taught otherwise. Many of these people can't even read a Python error message so I'm not surprised.
And your cybersecurity department likely isn't auditing source code. They are just making sure your software versions are up to date.
And you go home at 5pm and had a good work day.
My team where I work is responsible for sending frivolous newsletters via email and sms to over a million employees. We use an OTP for employees to verify they gave us the right email/phone number to send them to. Security sees "email/sms" and "OTP" and therefor, tickets us at the highest "must respond in 15 minutes" priority ticket every time an employee complains about having lost access to an email or phone number.
Doesn't matter that we're not sending anything sensitive. Doesn't matter that we're a team of 4 managing more than a million data points. Every time we push back security either completely ignores us and escalates to higher management, or they send us a policy document about security practices for communication channels that can be used to send OTP codes.
Security wields their checklist like a cudgel.
Meanwhile, our bug bounty program, someone found a dev had opened a globally accessible instance of the dev employee portal with sensitive information and reported it. Security wasn't auditing for those, since it's not on their checklist.
Security heard “otp” and forced us through a 2 month security/architecture review process for this sign-off feature that we built with COTs libraries in a single sprint.
We pushed back and initially they agreed with us and gave us an exception, but about a year later some compliance audit told them it was no longer acceptable and we had to change it ASAP. About a year after that they told us it needed to be ten characters alphanumeric and we did a find and replace in the code base for "verification code" and "otp" and called them verification strings, and security went away.
> My team where I work is responsible for sending frivolous newsletters via email and sms to over a million employees.
"frivolous newsletters" -- Thank you for your honesty!Real question: One million employees!? Even Foxconn doesn't have one million employees. That leaves only Amazon and Walmart according to this link: https://www.statista.com/statistics/264671/top-50-companies-...
They might be a third party service for companies to send mail to _their_ employees
Wow, this really hits home. I spend an inordinate amount of time dealing with false positives from cybersecurity.
Docker, AWS, Kubernetes, some wrapper they've put around Kubernetes, a bunch of monitoring tools, etc.
And none of it will be their main job, so they'll just try to get something working by copying a working example, or reading a tutorial.
If you have your own firewall rules, docker just writes its own around them.
    $ nc 127.0.0.1 5432 && echo success || echo no success
    no success
Example snippet from docker-compose:

DB/cache (e.g. Postgres & Redis, in this example Postgres):

    [..]
    ports:
      - "5432:5432"
    networks:
      - backend
    [..]

App:

    [..]
    networks:
      - backend
      - frontend
    [..]

networks:
  frontend:
    external: true
  backend:
    internal: true

That option has nothing to do with the problem at hand.
https://docs.docker.com/reference/compose-file/networks/#ext...
@globular-toast was not suggesting an iptables setup on a VM, instead they are suggesting to have a firewall on a totally different device/VM than the one running docker. Sure, you can do that with iptables and /proc/sys/net/ipv4/ip_forward (see https://serverfault.com/questions/564866/how-to-set-up-linux...) but that's a whole new level of complexity for someone who is not an experienced network admin (plus you now need to pay for 2 VMs and keep them both patched).
The problem here is the user does not understand that exposing 8080 on an external network means it is reachable by everyone. If you use an internal network between database and application, cache and application, and application and reverse proxy, and put proper auth on the reverse proxy, you're good to go. Guides do suggest this. They even explain Let's Encrypt for the reverse proxy.
Security isn't just an at the edge thing.
And I don't see any reason why having to allow a postgres or apache or whatever run through Docker through your firewall is any more confusing than allowing them through your firewall when installed via APT. It's more confusing that the firewall DOESN'T protect Docker services like everything else.
This is daunting because:
Take 50 random popular open source self-hostable solutions and the instructions are invariably: normal bare installation or docker compose.
So what’s the ideal setup when using podman? Use compose anyway and hope it won’t be deprecated, or use SystemD as Podman suggests as a replacement for Compose?
https://github.com/containers/podman/blob/main/docs/tutorial...
And many, if not nearly all, examples of docker-compose descriptor files don't care about that. Images that use different networks for exposed services and backend services (db, redis, ...) are the rare exception.
Maybe someone more knowledgeable can comment.
I encountered it with Docker on NixOS and found it confusing. They have since documented this behavior: https://search.nixos.org/options?channel=24.11&show=virtuali...
After moving from bare to compose to docker-compose to podman-compose and bunch of things in-between (homegrown Clojure config-evaluators, ansible, terraform, make/just, a bunch more), I finally settled on using Nix for managing containers.
It's basically the same as docker-compose except you get to do it with proper code (although Nix :/ ) and as a extra benefit, get to avoid YAML.
You can switch the backend/use multiple ones as well, and relatively easy to configure as long as you can survive learning the basics of the language: https://wiki.nixos.org/wiki/Docker
Worth noting the tradeoffs, but I agree using Nix for this makes life more pleasant and easy to maintain.
Does it? I'm pretty sure you're able to run Nix (the package manager) on Arch Linux for example, I'm also pretty sure you can do that on things like macOS too but that I haven't tested myself.
Or maybe something regarding this has changed recently?
edit: I actually never checked, but I guess nothing stops home-manager or nix-darwin from working too, but I don't think either supports running containers by default. EOD all NixOS does is make a systemd service which runs `docker run ..` for you.
Nowadays I just ask genAI to convert docker-compose to one of the above options and it almost always works.
UPD: hmm, seems quite promising - https://chat.mistral.ai/chat/1d8e15e9-2d1a-48c8-be3a-856254e...
For "production" (my homelab server), I switched from docker compose to podman quadlets (systemd) and it was pretty straightforward. I actually like it better than compose because, for example, I can ensure a containers dependencies (e.g. database, filesystem mounts) are started first. You can kind of do that with compose but it's very limited. Also, systemd is much more configurable when it comes to dealing service failures.
I have a tailscale container, and a traefik container. Then I use labels with all my other containers to expose themselves on Traefik.
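For anyone who hasn't seen that pattern, it looks roughly like this in a compose file (Traefik v2-style labels; the service name, hostname and network are made up, and Traefik itself is assumed to be attached to the same `proxy` network):

    services:
      whoami:
        image: docker.io/traefik/whoami:latest
        networks:
          - proxy
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.whoami.rule=Host(`whoami.example.internal`)"
          - "traefik.http.services.whoami.loadbalancer.server.port=80"
        # no "ports:" section: nothing is published on the host,
        # only Traefik on the shared network can reach the container

    networks:
      proxy:
        external: true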
There is a bunch of software that makes this easier than trivial too, one example: https://github.com/g1ibby/auto-vpn/
1. Have some external firewall outside of the Docker host blocking the port
2. Explicitly tell Docker to bind to the Tailscale IP only
Does it? I think it only happens if you specifically enumerate the ports. You do not need to enumerate the ports at all if you're using Tailscale as a container.
Worse, the linked bug report is from a DECADE ago, and the comments underneath don't seem to show any sense of urgency or concern about how bad this is.
Have I missed something? This seems appalling.
As someone says in that PR, "there are many beginners who are not aware that Docker punches the firewall for them. I know no other software you can install on Ubuntu that does this."
Anyone with a modicum of knowledge can install Docker on Ubuntu -- you don't need to know a thing about ufw or iptables, and you may not even know what they are. I wonder how many machines now have ports exposed to the Internet or some random IoT device as a result of this terrible decision?
[0] https://docs.docker.com/engine/network/packet-filtering-fire...
As for it not being explicitly permitted, no ports are exposed by default. You must provide the docker run command with -p, for each port you want exposed. From their perspective, they're just doing exactly what you told them to do.
Personally, I think it should default to giving you an error unless you specified which IPs to listen on, but this is far from as big an issue as people make it out to be.
The biggest issue is that it is a ginormous foot gun for people who don't know Docker.
Maybe it's the difference between "-P" and "-p", or specifying "8080:8080" instead of just "8080", but there is a difference, especially since one wouldn't be reachable outside of your machine and the other would, in the worst case, try to bind 0.0.0.0.
If you just run a container, it will expose zero ports, regardless of any config made in the Docker image or container.
The way you're supposed to use Docker is to create a Docker network, attach the various containers there, and expose only the ports on specific containers that you need external access to. All containers in any network can connect to each other, with zero exposed external ports.
The trouble is just that this is not really explained well for new users, and so ends up being that aforementioned foot gun.
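A rough sketch of that with the plain CLI (names and the web image are placeholders):

    # one user-defined network shared by the containers
    docker network create appnet

    # database: attached to the network, no -p flag, so no host port at all
    docker run -d --name db --network appnet \
      -e POSTGRES_PASSWORD=change-me postgres:16

    # web app: same network, and the only published port, bound to loopback
    docker run -d --name web --network appnet \
      -p 127.0.0.1:8080:8080 example/some-web-app

    # inside the network, "web" reaches the database at hostname "db", port 5432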
For people unfamiliar with Linux firewalls or the software they're running: maybe. First of all, Docker requires admin permissions, so whoever is running these commands already has admin privileges.
Docker manages its own iptables chain. If you rely on something like UFW that works by using default chains, or its own custom chains, you can get unexpected behaviour.
However, there's nothing secret happening here. Just listing the current firewall rules should display everything Docker permits and more.
Furthermore, the ports opened are the ones declared in the command line (-p 1234) or in something like docker-compose declarations. As explained in the documentation, not specifying an IP address will open the port on all interfaces. You can disable this behaviour if you want to manage it yourself, but then you would need some kind of scripting integration to deal with the variable behaviour Docker sometimes has.
From Docker's point of view, I sort of agree that this is expected behaviour. People finding out afterwards often misunderstand how their firewall works, and haven't read or fully understood the documentation. For beginners, who may not be familiar with networking, Docker "just works" and the firewall in their router protects them from most ills (hackers present in company infra excluded, of course).
Imagine having to adjust your documentation to go from "to try out our application, run `docker run -p 8080 -p 1234 some-app`" to "to try out our application, run `docker run -p 8080 -p 1234 some-app`, then run `nft add rule ip filter INPUT tcp dport 1234 accept;nft add rule ip filter INPUT tcp dport 8080 accept;` if you use nftables, or `iptables -A INPUT -p tcp --dport 1234 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT; iptables -A INPUT -p tcp --dport 8080 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT` if you use iptables, or `sudo firewall-cmd --add-port=1234/tcp;sudo firewall-cmd --add-port=8080/tcp; sudo firewall-cmd --runtime-to-permanent` if you use firewalld, or `sudo ufw allow 1234; sudo ufw allow 8080` if you use UFW, or if you're on Docker for Windows, follow these screenshots to add a rule to the firewall settings and then run the above command inside of the Docker VM". Also don't forget to remove these rules after you've evaluated our software, by running the following commands: [...]
Docker would just not gain any traction as a cross-platform deployment model, because managing it would be such a massive pain.
The fix is quite easy: just bind to localhost (specify -p 127.0.0.1:1234 instead of -p 1234) if you want to run stuff on your local machine, or an internal IP that's not routed to the internet if you're running this stuff over a network. Unfortunately, a lot of developers publishing their Docker containers don't tell you to do that, but in my opinion that's more of a software product problem than a Docker problem. In many cases, I do want applications to be reachable on all interfaces, and having to specify each and every one of them (especially scripting that with the occasional address changes) would be a massive pain.
For this article, I do wonder how this could've happened. For a home server to be exposed like that, the server would need to be hooked to the internet without any additional firewalls whatsoever, which I'd think isn't exactly typical.
I sympathize with your reluctance to push a burden onto the users, but I disagree with this example. That's a false dichotomy: whatever system-specific commands Docker executes by default to allow traffic from all interfaces to the desired port could have been made contingent on a new command parameter (say, --open-firewall). Removing those rules could have also been managed by the Docker daemon on container removal.
This is the default for most aspects of Docker. Reading the source code & git history is a revelation of how badly things can be done, as long as you burn VC money for marketing. Do yourself a favor and avoid all things by that company / those people; they've never cared about quality.
At this point docker should be considered legacy technology, podman is the way to go.
But regardless of software used, it would have led to the same conclusion, a vulnerable service running on the open internet.
Edit: just confirmed this to be sure.
$ podman run --rm -p 8000:80 docker.io/library/nginx:mainline
$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
595f71b33900 docker.io/library/nginx:mainline nginx -g daemon o... 40 seconds ago Up 41 seconds 0.0.0.0:8000->80/tcp youthful_bouman
$ ss -tulpn | rg 8000
tcp LISTEN 0 4096 *:8000 *:* users:(("rootlessport",pid=727942,fd=10))
Relatedly, a lot of systems in the world either don't block local network addresses, or block an incomplete list, with 172.16.0.0/12 being particularly poorly known.
Trying to get ping to ping `0.0.0.0` was interesting
$ ping -c 1 ""
ping: : Name or service not known
$ ping -c 1 "."
ping: .: No address associated with hostname
$ ping -c 1 "0."
^C
$ ping -c 1 ".0"
ping: .0: Name or service not known
$ ping -c 1 "0"
PING 0 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms
$ ping -c 1 "0.0"
PING 0.0 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms
It’s well-intentioned, but I honestly believe that it would lead to a plethora of security problems. Maybe I am missing something, but it strikes me as on the level of irresponsibility of handing out guardless chainsaws to kindergartners.
If you have opened up a port in your network to the public, the correct assumption is to direct outside connections to your application as per your explicit request.
Why am I running containers as a user that needs to access the Docker socket anyway?
Also, shoutout to the teams that suggest easy setup running their software in a container by adding the Docker socket into its file system.
More confusingly, firewalld has a different feature to address the core problem [1] but the page you linked does not mention 'StrictForwardPorts' and the page I linked does not mention the 'docker-forwarding' policy.
I configured iptables and had no trouble blocking WAN access to docker...
In addition to that there's the default host in daemon.json plus specifying bindings to local host directly in compose / manually.
What happens when you deny access through UFW and permit access through Docker depends entirely on which of the two firewall services was loaded first, and software updates can cause them to reload arbitrarily so you can't exactly script that easily.
If you don't trust Docker at all, you should move away from Docker (i.e. to podman) or from UFW (i.e. to firewalld). This can be useful on hosts where multiple people spawn containers, so others won't mess up and introduce risks outside of your control as a sysadmin.
If you're in control of the containers that get run, you can prevent container from being publicly reachable by just not binding them to any public ports. For instance, in many web interfaces, I generally just bind containers to localhost (-p 127.0.0.1:8123:80 instead of -p 80) and configure a reverse proxy like Nginx to cache/do permission stuff/terminate TLS/forward requests/etc. Alternatively, binding the port to your computer's internal network address (-p 192.168.1.1:8123:80 instead of -p 80) will make it pretty hard for you to misconfigure your network in such a way that the entire internet can reach that port.
Another alternative is to stuff all the Docker containers into a VM without its own firewall. That way, you can use your host firewall to precisely control what ports are open where, and Docker can do its thing on the virtual machine.
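The reverse-proxy half of the localhost approach above is only a few lines of nginx (a sketch; hostnames, ports and certificate paths are placeholders, and the basic-auth block is optional):

    server {
        listen 443 ssl;
        server_name app.example.org;

        ssl_certificate     /etc/letsencrypt/live/app.example.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/app.example.org/privkey.pem;

        # optional extra gate in front of the app itself
        auth_basic           "restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;

        location / {
            # the container is published on 127.0.0.1:8123 only
            proxy_pass http://127.0.0.1:8123;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }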
well, that's what I opened with: https://news.ycombinator.com/item?id=42601673
problem is, I was told in https://news.ycombinator.com/item?id=42604472 that this protection is easier to work around than I imagined...
No other server software that I know of touches the firewall to make its own services accessible. Though I am aware that the word being used is "expose". I personally only have private IPs on my docker hosts when I can and access them with WireGuard.
this secondary issue with docker is a bit more subtle, it's that they don't respect the bind address when they do forwarding into the container. the end result is that machines one hop away can forward packets into the docker container.
for a home user the impact could be that the ISP can reach into the container. depending on risk appetite this can be a concern (salt typhoon going after ISPs).
more commonly it might end up exposing more isolated work related systems to related networks one hop away
Upd: thanks for a link, looks quite bad. I am now thinking that an adjacent VM in a provider like Hetzner or Contabo could be able to pull it off. I guess I will have to finally switch remaining Docker installations to Podman and/or resort to https://firewalld.org/2024/11/strict-forward-ports
If there's defense in depth, it may be worth checking out L2 forwarding within a project for unexpected pivots an attacker could use. We've seen this come up in pentests.
I work on SPR, we take special care in our VPN to avoid these problems as well, by not letting docker do the firewalling for us. (one blog post on the issue: https://www.supernetworks.org/pages/blog/docker-networking-c...).
as an aside there's a closely related issue with one-hop attacks with conntrack as well, that we locked down in October.
> by running docker images that map the ports to my host machine
If you start a docker container and map port 8080 of the container to port 8080 on the host machine, why would you expect port 8080 on the host machine to not be exposed?
I don't think you understand what mapping and opening a port does if you think that when you tell docker to expose a port on the host machine that it's a bug or security issue when docker then exposes a port on the host machine...
docker supports many network types, vlans, host attached, bridged, private, etc. There are many options available to run your containers on if you don't want to expose ports on the host machine. A good place to start: If you don't want ports exposed on the host machine then probably should not start your docker container up with host networking and a port exposed on that network...
Regardless of that, your container host machines should be behind a load balancer w/ firewall and/or a dedicated firewall, so containers poking holes (because you told them to and then got mad at it) shouldn't be an issue
Unless you're trying to do one of those designs that cloud vendors push to fully protect every single traffic flow, most people have some kind of very secure entry point into their private network and that's sufficient to stop any random internet attacks (doesn't stop trojans, phishing, etc). You have something like OpenSSH or Wireguard and then it doesn't matter how insecure the stuff behind that is, because the attacker can't get past it.
As a rule of thumb, I will gladly pass on Tor traffic, but no exit node, and I understand if network admins want to block entry node, too. It is a decision everyone who maintains a network has to make themselves.
The reason I block it is also the same reason I block banana republics like CN and RU: these don't prosecute people who break the law with regards to hacking. Why should one accept unrestricted traffic from these?
In the end, the open internet was once a TAZ [1] and unfortunately with the commercialization of the internet together with massive changes in geopolitics the ship sailed.
[1] https://en.m.wikipedia.org/wiki/Temporary_Autonomous_Zone
The separation of control and function has been a security practice for a long time.
Port 80 and 443 can be open to the internet, and in 2024 whatever port wireguard uses. All other ports should only be accessible from the local network.
With VPS providers this isn't always easy to do. My preferred VPS provider, however, provides a separate firewall, which makes that easier.
Been using it for years and it’s been solid.
Is this reckless? Reading through all this makes me wonder if SSHFS (instead of NFS) with limited scope might be necessary.
Suppose you have the media server in its own VLAN/Subnet, chances are good that the firewall is instrumental in enforcing that security boundary. If any part of the layer-7 attack surface is running on the firewall... you probably get the idea.
I learned how to do everything through SSH - it is really an incredible Swiss Army knife of a tool.
This exact thing happened to a friend, but there was no need to exploit obscure memory safety vulnerabilities, the password was “password”.
My friend learnt that day that you can start processes in the machine from Postgres.
`Ofcourse I password protected it, but seeing as it was meant to be temporary, I didn't dive into securing it properly.`
Really? Nor any Docker guides warning in general about exposing a container to the Internet?
My guess is that the attacker figured out or used the default password for the superuser. A quick lookup reveals that a pg superuser can create extension and run some system commands.
I think the takeaway here is that the pg image should autogenerate a strong password or not start unless the user defines a strong one. Currently it just runs with "postgres" as the default username and password.
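Until the image enforces that, the workaround is on the user's side: feed it a generated secret rather than anything guessable (a sketch, assuming the commonly used official postgres image and its POSTGRES_PASSWORD variable):

    # generate a strong password and keep a copy for your client
    PGPASS="$(openssl rand -base64 32)"
    echo "$PGPASS" > ~/.pg_superuser_pass && chmod 600 ~/.pg_superuser_pass

    docker run -d --name db \
      -e POSTGRES_PASSWORD="$PGPASS" \
      -p 127.0.0.1:5432:5432 \
      postgres:16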
Takeaway for beginner application hosters (aka "webmasters") is to never expose something on the open internet unless you're 100% sure you absolutely have to. Everything should default to using a private network, and if you need to accept external connections, do so via some bastion host that isn't actually hosted on your network, which reaches into your private network via proper connections.
I even get a weird feeling these days with SSH listening on a public interface. A database server, even with a good password/ACLs, just isn’t a great safe idea unless you can truly keep on top of all security patches.
Edit: It can also happen with the official image: https://www.postgresql.org/about/news/cve-2019-9193-not-a-se...
OP can't prove that. The only way is to scrap the server completely and start with a fresh OS image. If OP has no backup and ansible repo (or anything similar) to configure a new home server quickly, then I guess another valuable lesson was learned here.
Especially when it comes to my home network, I would rather be safe than sorry. How would you even begin to investigate a rootkit since it can clean up after itself and basically make itself invisible?
Particularly when it comes to Kinsing attacks, as there seem to have been rootkits detected in tandem with it, which is exactly what OP got hit by, it seems (although they could only see the coinminer).
My general feeling is that if someone wants to install a hardware rootkit on my extremely boring home servers, it’s highly unlikely that I’ll be able to stop them. I can do best practices (like not exposing things publicly), but ultimately I can’t stop Mossad; on the other hand, I am an unlikely target for anything other than script kiddies and crypto miners.
Sure, but if you already know since before that this specific cryptominer has been found together with rootkits, and you know rootkits aren't as easy to detect, what's your approach to validate if you're infected or not?
Maybe I'm lucky that I can tear down/up my infrastructure relatively easily (thanks NixOS), but I wouldn't take my chances when it's so close to private data.
That's my point – you can do best practices all day long, but short of observing sudden shifts (or long-term trends) in collected metrics, you're not going to be able to notice, let alone defend, against sophisticated attacks. There has been malware that embeds itself into HDD firmware. Good luck.
Especially with a tool you don't have an enterprise-class firewall in front of, security needs to be automatic, not an afterthought.
Um that's pretty ... naïve?
Closing even your VPN port goes a little far. Wireguard for example doesn't even reply on the port if no acceptable signature is presented. Assuming the VPN software is recent, that's as close as you can come to having everything closed.
I am running Immich on my home server and want to be able to access it remotely.
I’ve seen the options of using wireguard or using a reverse proxy (nginx) with Cloudflare CDN, on top of properly configured router firewalls, while also blocking most other countries. Lots of this understanding comes from a YouTube guide I watched [0].
From what I understand, people say reverse proxy/Cloudflare is faster for my use case, and if everything is configured correctly (which it seems like OP totally missed the mark on here), the threat of breaches into to my server should be minimal.
Am I misunderstanding the “minimal” nature of the risk when exposing the server via a reverse proxy/CDN? Should I just host a VPN instead even if it’s slower?
Obviously I don’t know much about this topic. So any help or pointing to resources would be greatly appreciated.
So, parent poster: yes, you are doing it right.
Grandparent: Use a VPN, close everything else.
I'm in the same boat. I've got a few services exposed from a home server via NGINX with a LetsEncrypt cert. That removes direct network access to your machine.
Ways I would improve my security:
- Adding a WAF (ModSecurity) to NGINX - big time investment!
- Switching from public facing access to Tailscale only (Overlay network, not VPN, so ostensibly faster). Lots of guys on here do this - AFAIK, this is pretty secure.
Reverse proxy vs. Overlay network - the proxy itself can have exploitable vulnerabilities. You should invest some time in seeing how nmap can identify NGINX services, and see if those methods can be locked down. Good debate on it here:
https://security.stackexchange.com/questions/252480/blocking...
As such, I’m hosting Immich and am figuring out remote access options. This kind of misses the point of my question.
If you can, then you can just forward the port to your Immich instance, or put it behind a reverse proxy that performs some sort of authentication (password, certificate) before forwarding traffic to Immich. Alternatively you could host your own Wireguard VPN and just expose that to the internet - this would be my preferred option out of all of these.
If you can't forward ports, then the easiest solution will probably be a VPN like Tailscale that will try to punch holes in NAT (to establish a fast direct connection, might not work) or fall back to communicating via a relay server (slow). Alternatively you could set up your own proxy server/VPN on some cheap VPS but that can quickly get more complex than you want it to be.
> forward a port
From what I understand, my Eero router system will let me forward ports from my NAS. I haven’t tested this to see if it works, but I have the setting available in my Eero app.
> forward port to Immich instance
Can you expand on this further? Wouldn’t this just expose me to the same vulnerabilities as OP? If I use nginx as a reverse proxy, would I be mitigating the risk?
Based on other advice, it seems like the self hosted VPN (wireguard) is the safest option, but slower.
The path of least resistance for daily use sounds ideal (RP), but I wonder if the risk minimization from VPN is worth potential headaches.
Thanks so much for responding and giving some insight.
Yeah I wouldn't do this personally, I just mentioned it as the simplest option. Unless it's meant to be a public service, I always try to at least hide it from automated scanners.
> If I use nginx as a reverse proxy, would I be mitigating the risk?
If the reverse proxy performs additional authentication before allowing traffic to pass onto the service it's protecting, then yes, it would.
One of my more elegant solutions has been to forward a port to nginx and configure it to require TLS client certificate verification. I generated and installed a certificate on each of my devices. It's seamless for me in day to day usage, but any uninvited visitors would be denied entry by the reverse proxy.
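In nginx terms that setup is essentially (a sketch; names, paths and the upstream port are placeholders):

    server {
        listen 443 ssl;
        server_name photos.example.org;

        ssl_certificate        /etc/nginx/certs/server.crt;
        ssl_certificate_key    /etc/nginx/certs/server.key;

        # clients must present a certificate signed by my own private CA
        ssl_client_certificate /etc/nginx/certs/my-ca.crt;
        ssl_verify_client      on;

        location / {
            proxy_pass http://127.0.0.1:2283;
            proxy_set_header Host $host;
        }
    }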
However support for client certificates is spotty outside of browsers, across platforms, which is unfortunate. For example HomeAssistant on Android supports it [1] (after years of pleading), but the iOS version doesn't. [2] NextCloud for iOS however supports it [3].
In summary, I think any kind of authentication added at the proxy would be great for both usability and security, but it has very spotty support.
> Based on other advice, it seems like the self hosted VPN (wireguard) is the safest option, but slower.
I think so. It shouldn't be slow per se, but it's probably going to affect battery life somewhat and it's annoying to find it disconnected when you try to access Immich or other services.
[1] https://github.com/home-assistant/android/pull/2526
[2] https://community.home-assistant.io/t/secure-communication-c...
What you have to understand is that having an immich instance on the internet is only a security vulnerability if immich itself has a vulnerability in it. Obviously, this is a big if, so if you want to protect against this scenario, you need to make sure only you can access this instance, and you have a few options here that don't involve 3rd parties like cloudflare. You can make it listen only on the local network, and then use ssh port tunneling, or you can set up a vpn.
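The ssh tunnelling variant is a one-liner (hostname and port are placeholders; 2283 is assumed to be the port Immich listens on locally):

    # forward local port 2283 to the Immich instance on the home box
    ssh -N -L 2283:localhost:2283 user@home.example.org
    # then browse to http://localhost:2283 on the remote machine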
Cloudflare has been spamming the internet with "burglars are burgling in the neighbourhood, do you have burglar alarms?" articles; YouTube is also full of this.
I also use knockd for port knocking to allow the ssh port, just in case I need to log in to my server without having access to one of my devices with Wireguard, but I may drop this since it doesn't seem very useful.
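For reference, the knockd side of that is a short stanza in knockd.conf, along the lines of the stock example shipped with it (the knock sequence and iptables rule here are illustrative, not a recommendation):

    [options]
        UseSyslog

    [openSSH]
        sequence    = 7000,8000,9000
        seq_timeout = 5
        command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
        tcpflags    = syn

    [closeSSH]
        sequence    = 9000,8000,7000
        seq_timeout = 5
        command     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
        tcpflags    = syn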
practically all these attacks require downloading remote files to the server once they gain access, using curl, wget or bash.
Restricting arbitrary downloads from curl, wget or bash (or better, any binary) makes these attacks pretty much useless.
Also these cryptominers are usually dropped to /tmp, /var/tmp or /dev/shm. They need internet access to work, so again, restricting outbound connections per binary usually mitigates these issues.
https://www.aquasec.com/blog/kinsing-malware-exploits-novel-...
Any advice what that looks like for a docker container? My border firewall isn't going to know what binary made the request, and I'm not aware of per-process restrictions of that kind
Unless you want it publicly available and expect people to try and illegitimately access it, never expose it to the internet.