The math of "a decade" seemed wrong to me, since I remembered Docker debuting in 2013 at PyCon US Santa Clara.
Then I found an HN comment I wrote a few years ago that confirmed this:
"[...] I remember that day pretty clearly because in the same lightning talk session, Solomon Hykes introduced the Python community to docker, while still working on dotCloud. This is what I think might have been the earliest public and recorded tech talk on the subject:"
Just being pedantic though. That's about 13 years ago. The lightning talk is fun as a bit of computing history.
(Edit: as I was digging through the paper, they do cite this YouTube presentation, or a copy of it anyway, in the footnotes. And they refer to a 2013 release. Perhaps there was a multi-year delay between the paper being submitted to ACM with this title and it being published. Again, just being pedantic!)
bmitch3020 today at 5:57 PM
I've seen countless attempts to replace "docker build" and the Dockerfile. They often want to give tighter control over the build, sometimes tightly binding to a package manager. But the Dockerfile has endured because of its flexibility. Starting from a known filesystem/distribution, copying some files in, and then running arbitrary commands within that filesystem mirrors so nicely what operations teams have been doing for a long time. And as ugly as that flexibility is, I think it will remain the dominant solution for quite a while longer.
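For reference, that three-step pattern is exactly what a minimal Dockerfile expresses (the base image, paths, and packages below are hypothetical, just to illustrate the shape):

```dockerfile
# Start from a known filesystem/distribution...
FROM debian:bookworm-slim

# ...copy some files in...
COPY ./app /opt/app

# ...then run arbitrary commands within that filesystem.
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*

CMD ["/opt/app/server"]
```

Nothing here is specific to any package manager or language toolchain, which is the flexibility being described.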
mrbluecoat today at 5:22 PM
> Docker repurposed SLIRP, a 1990s dial-up tool originally for Palm Pilots, to avoid triggering corporate firewall restrictions by translating container network traffic through host system calls instead of network bridging.
Genuinely fascinating and clever solution!
tzs today at 8:44 PM
I've not done serious networking stuff for over two decades, and never in as complex an environment as that in the article, so the networking part of the article went pretty much over my head.
What I want to do when running a Docker container on Mac is to be able to have the container have an IP address separate from the Mac's IP address that applications on the Mac see. No port mapping: if the container has a web server on port 80 I want to access it at container_ip:80, not 127.0.0.1:2000 or something that gets mapped to container port 80.
On Linux I'd just use Docker bridged networking and I believe that would work, but on Mac that just bridges to the Linux VM running under the hypervisor rather than to the Mac.
Is there some officially recommended and supported way to do this?
For a while I did it by running WireGuard on the Linux VM to tunnel between that and the Mac, with forwarding enabled on the Linux VM [1]. That worked great for quite a while, but then stopped and I could not figure out why. Then it worked again. Then it stopped.
I then switched to this [2] which also uses WireGuard but in a much more automated fashion. It worked for quite a while, but also then had some problems with Docker updates sometimes breaking it.
It would be great if Docker on Mac came with something like this built in.
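For anyone landing here with the same question: the Linux-side approach mentioned above can be sketched with a macvlan network, which gives each container its own LAN-visible IP with no port mapping. The parent interface, subnet, and image below are assumptions to adapt; and as noted, this works on Linux but not on macOS, where it would only bridge inside the hidden Linux VM.

```shell
# Hypothetical values: adjust the parent interface and subnet to your LAN.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan

# The container gets its own address on the LAN; a web server on port 80
# is then reachable at 192.168.1.50:80 directly, with no -p mapping.
docker run -d --network lan --ip 192.168.1.50 nginx
```

One documented caveat with macvlan: the host itself typically cannot reach the container's address over the parent interface without extra configuration, even though other machines on the LAN can.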
A full decade since we took the 'it works on my machine' excuse and turned it into the industry standard architecture ('then we'll just ship your machine to production').
avsm today at 6:01 PM
An extremely random fact I noticed when writing the companion article [1] to this (an OCaml experience report):
"Docker, Guix and NixOS (stable) all had their first releases
during 2013, making that a bumper year for packaging aficionados."
Now we get coding agent updates every week, but has there been a similar year since 2013 where multiple great projects all came out at the same time?
Back then I didn't foresee the 22GB image our Jupyter/ML stack would become by 2026. There must be a better way.
netrem today at 10:24 PM
With ML and AI now being pushed into everything, images have ballooned in size. Just having torch as a dependency is some multiple gigabytes. I miss the times of aiming for 30MB images.
Have others found this to be the case? Perhaps we're doing something wrong.
zacwest today at 4:57 PM
The historic information in here was really interesting, and a great example of an article rapidly expanding in scope and detail. How they combatted corporate IT "security" software by pretending to be a VPN is quite unexpected.
the__alchemist today at 5:20 PM
I'm optimistic we will succeed in efforts to simplify Linux application/dependency compatibility instead of relying on abstractions that work around it.
brtkwr today at 7:15 PM
I realise Apple containers haven't quite taken off as expected, but their omission from the article stands out. Nice that it mentions alternative approaches like Podman and Kata, though.
rando1234 today at 9:21 PM
Didn't Vagrant/Vagrantfiles precede Docker? Unclear why that would be the key to its success if so.
benatkin today at 10:56 PM
> If you are a developer, our goal is to make Docker an invisible companion
I want it not just to be invisible but to be missing. If you have Kubernetes, including locally with k3s or similar, it won't be used to run containers anyway. However, it still often is used to build OCI images. Podman can fill that gap. It has a Containerfile format with the same syntax, but it is simpler than Docker's build tooling, which now provides build orchestration features similar to earthly.dev that I think are better kept separate.
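A sketch of that daemonless workflow (the image tag and registry are hypothetical; a Containerfile uses ordinary Dockerfile instructions):

```shell
# Build an OCI image with Podman, no Docker daemon required.
# Podman picks up ./Containerfile by default; -f makes it explicit.
podman build -t registry.example.com/myapp:dev -f Containerfile .

# Push it somewhere your k3s cluster can pull from.
podman push registry.example.com/myapp:dev
```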
mberning today at 10:45 PM
I remember being pretty skeptical of "dockerizing" applications when it first became popular. But I've come around to it, if for no other reason than it provided a concept anyone could understand and, more importantly, use. The onramp to using Docker is very gentle.
phplovesong today at 7:38 PM
We have shipped unikernels for the last decade. Zero sec issues so far. I highly recommend looking into the unikernel space for a docker alternative. MirageOS being a good start.
politelemon today at 7:06 PM
Somewhere along the line they started prioritising docker desktop over docker. It's a bit jarring to see new features coming to desktop before it comes to Linux, such as the new sandbox features.
Is there any insight into this? I would have thought the opposite: developers on the platform that made Docker succeed would get the first preview of features.
tsoukiory today at 10:30 PM
I don't speak English.
arikrahman today at 6:41 PM
I'm hoping the next decade introduces more declarative workflows, with Nix and Docker working together to that end.
INTPenis today at 5:45 PM
I thought it was 2014 when it launched? The article says the command line interface hasn't changed since 2013.
heraldgeezer today at 8:16 PM
I still haven't learned it; being in IT, that's so embarrassing. Yes, I know about the 2-3h YouTube tutorials, but just...
1970-01-01 today at 8:29 PM
I now wonder if we'll end up switching it all back to VMs so the LLMs have enough room to grow and adapt.
callamdelaney today at 8:50 PM
The fact that docker still, in 2026, will completely overwrite iptables rules silently to expose containers to external requests is, frankly, fucking stupid.
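For anyone bitten by this, two commonly documented mitigations, sketched here with assumed interface names and subnets: make published ports bind to loopback by default, and filter external traffic in the DOCKER-USER iptables chain, which Docker creates but does not rewrite.

```shell
# Option 1: in /etc/docker/daemon.json, default published ports to loopback
# so -p 8080:80 binds 127.0.0.1:8080 instead of 0.0.0.0:8080:
#   { "ip": "127.0.0.1" }

# Option 2: drop external traffic to containers in the DOCKER-USER chain
# (eth0 and the allowed subnet are assumptions; adjust for your network):
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
```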
brcmthrowaway today at 6:23 PM
I don't use Dockerfile. Am I slumming it?
user3939382 today at 6:40 PM
It solves a practical problem, that's obvious. And on one hand the practical where-we're-at-now is all that matters; that's a legitimate perspective.
There's another one, at least IMHO, that this entire stack from the bottom up is designed wrong and every day we as a society continue marching down this path we're just accumulating more technical debt. Pretty much every time you find the solution to be, "ok so we'll wrap the whole thing and then…" something is deeply wrong and you're borrowing from the future a debt that must come due. Energy is not free. We tend to treat compute like it is.
Maybe I'm in a big club but I have a vision for a radically different architecture that fixes all of this and I wish that got 1/2 the attention these bandaids did. Plan 9 is an example of the theme if not the particular set of solutions I'm referring to.
forrestthewoods today at 8:28 PM
I am so thoroughly convinced that Docker is a hacky-but-functional solution to an utterly failed userspace design.
Linux user space decided to try and share dependencies. Docker obliterates this design goal by shipping dependencies, but stuffing them into the filesystem as-if they were shared.
If you're going to do this then a far far far simpler solution is to just link statically or ship dependencies adjacent to the binary. (Aka what Windows does). Replicating a faux "shared" filesystem is a gross hack.
This is a distinctly Linux problem. Windows software doesn't typically have this issue. Because programs ship their dependencies and then work.
Docker is one way to ship dependencies. So it's not the worst solution in the world. But I swear it's a bad solution. My blood boils with righteous fury anytime anyone on my team mentions they have a 15 minute Docker build step. And don't you damn dare say the fix to Docker being slow is to add more layers of complexity with hierarchical Docker images ohmygodiswear. Running a computer program does not have to be hard I promise!!