~skeeto/public-inbox

Re: w64devkit: a Portable C and C++ Development Kit for Windows

Eric Farmer
Message ID: <CAAuXPZqXQHNTtpzGC2aDL41b1b1VZAwE4h_XB8J+hU4ax=SLvA@mail.gmail.com>

This is great -- thanks for putting together a very useful tool! I plan
to make this a standard part of my Windows sandbox, and to point
others to it as well.

> The main purpose of Docker is to contain and isolate misbehaved software to improve its reliability. Well-behaved, well-designed software benefits little from containers.

I would appreciate any more detailed thoughts you have on this. My
impressions of Docker, from environments where I've encountered it (or
a proposed "need" for it), have been mostly negative, but usually
because there were simpler alternatives, not because I have any real
expert experience with it. Docker isolates conflicting dependencies --
okay, but I have a tendency, to a fault, to want to write code with
*zero* dependencies, or at least only those dependencies already
supported by the existing target environment. This is admittedly easier
to do in some languages (like C and C++) than in others (like Python).
But I struggle with the development approach of pulling a large number
of libraries off the shelf, particularly when the functionality used in
each of them is often extremely simple.

Re: w64devkit: a Portable C and C++ Development Kit for Windows

Message ID: <20200516024839.otdpdt4bidkxaqwx@nullprogram.com>
In-Reply-To: <CAAuXPZqXQHNTtpzGC2aDL41b1b1VZAwE4h_XB8J+hU4ax=SLvA@mail.gmail.com>

Thanks, Eric! You and your cohort were actually on my mind while I was 
working on this, so I'm happy to have hit the mark. BusyBox-w32 was 
already such a hit with them.

> I would appreciate any more detailed thoughts you have on this

First, I must confess: This is me being intentionally provocative. :-)

I've said this, or something close to it, to friends who like Docker. 
They quickly disagree, but I have yet to hear a satisfactory response. 
The most coherent argument is that Docker allows for flexible horizontal 
scaling. This is true, but then my question is, "What the heck are you 
doing that you need to scale so much?"

Hardware is so fast and capable today; why can't one powerful machine 
cut it? A well-written service on modern hardware can handle tens of 
thousands of requests per second without breaking a sweat. From the sorts
of things I see people scaling with Docker, my conclusion is that it's 
nearly all sloppy software engineering. Rather than take the time to do 
it well, they punt on efficiency and throw more hardware at it since 
Docker makes it easy.

So that's been my impression for the past year or so. I've become more 
confident about it since someone more experienced than me recently 
expressed basically the same thoughts:

https://rachelbythebay.com/w/2020/03/07/costly/
https://rachelbythebay.com/w/2020/05/06/scale/
https://rachelbythebay.com/w/2020/05/07/serv/

There are some legitimate cases where that horizontal scale is valid and 
needed — the big players like Google, Netflix, Amazon, etc. — but 
they're not using Docker to do it. They build custom solutions.

The second most coherent argument is ease of deployment. Some languages, 
such as Python and Ruby, don't really have great deployment options. 
Suppose you implement a service in Python. Maybe it's only a couple 
dozen source files, but you've got dependencies too, installed via pip 
or conda or whatever. So ultimately your application is actually a 
sprawl of hundreds of Python source files in a virtual environment, plus 
the CPython installation itself. How do you deploy that? You pack it all 
up in a Docker image.

That solves the problem, but it exactly fits my complaint: Docker is 
just mitigating the lack of good deployment options. The Python service 
is a sprawling virtualenv that isn't well-behaved for deployment. If it 
were C, C++, or Go, you'd statically link it, and your deployment would 
be just a single binary and maybe a configuration file. The dependencies 
of other processes on the same machine don't matter because your own 
dependencies are baked into your binary. Go is especially well-equipped 
for this since all of that is the default.
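
To make that concrete, here's a rough sketch (the file name and the 
"service" are made up for illustration, not taken from any real 
project) of what "the binary is the deployment" looks like in C:

    /* service.c -- hypothetical stand-in for a small network service.
     *
     * Build with a static link so there are no runtime dependencies:
     *   cc -static -O2 -o service service.c
     *
     * Deployment is then just copying the single "service" binary,
     * plus maybe a config file, onto the target machine.
     */
    #include <stdio.h>

    int main(void)
    {
        /* a real service would open a socket and handle requests here */
        puts("service: up and running");
        return 0;
    }

Nothing on the target machine has to match the build environment, since 
everything the program needs is baked into that one file.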

At one point a while back I was thinking about writing an article titled 
something like "ELF is a Container." I was going to argue that a 
statically-linked ELF is the original container image. Docker (and LXC, 
etc.) containers largely exist because of language tooling (CPython, 
etc.) that is unable to target static ELFs. So the workaround is to pack 
it all into container images.

> because there have been simpler alternatives

Agreed. Docker is trendy right now, and so people are excited to use it 
in places where it's not appropriate.

My current use at work *is* appropriate and useful. It's the case of 
mitigating misbehavior of poorly-written software. I'd prefer if the 
software was just not crappy in the first place, but that's mostly out 
of my control.