Security

For a long time now I’ve been worried about the lack of security in my dev environment.

Tools like brew make it absurdly easy to install third-party tools on your machine. On my laptop right now, I’ve got 304 formulae and 30 casks installed, and my /opt/homebrew directory clocks in at 13G. Other package managers are no better: Nix brings you apparent immutability, but all the hashes in the world won’t tell you if something is safe to install, only that its content matches a digest[1]. No matter which package manager you use, you’re not auditing all of the packages you install, so you are in a sense delegating the task of monitoring the package registry to someone else; it could be a somewhat well-defined set of individuals, such as "the Debian maintainers", or something more nebulous, like "the watchers and contributors of the Homebrew repository".
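
If you want to tally your own footprint, garden-variety brew and du invocations will do it:

# Count installed Homebrew formulae and casks.
brew list --formula | wc -l
brew list --cask | wc -l

# Measure the Homebrew prefix (on Apple silicon).
du -sh /opt/homebrew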

And then there are the language ecosystems, each with its own package manager. Sampling a couple of small web apps on my machine, I find one[2] with 33,265 files in its node_modules directory and another[3] with 21,361. Note I said "small". These are not complex apps. They use React and a number of related runtime and dev packages. Taking a non-web example that uses npm (well, Yarn), even my dotfiles repo has 253 files in its node_modules, and that’s a project where I’ve tried to keep my JS dependencies to a bare minimum[4].
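
If you want to horrify yourself with your own numbers, a plain find is all it takes:

# Count files under a project's node_modules directory.
find node_modules -type f | wc -l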

At the GUI level, I’m far less worried, although I recognize that it’s no longer true — if it ever was — that "you don’t have to worry about viruses and malware unless you’re on Windows". I spend most of my time in the terminal or in the browser, and the other apps I use — few in number — are either provided by Apple or some other megacorp (with the attendant security measures, which are imperfect but better than nothing), installed via the App Store (again, with some layer of filtering via the approval process, which presumably includes some kind of static analysis), or are from third parties who have built up my trust over a period of time. None of this is bullet-proof, but the surface area is relatively limited, and it doesn’t feel like the Swiss-cheese command-line environment, with its barn-door-sized holes.

In the dev tooling space, I take cold comfort in knowing that pretty much the only thing standing between me and a devastating compromise of my machine is a "herd effect", a kind of "safety" in numbers. Specifically, I’m relying on the fact that if there is a mass vulnerability propagated via a supply chain attack targeting some widely used dependency, it will probably be detected quickly (via the "many eyeballs" effect[5]), and I’ll be likely to hear about it soon, hopefully in such a way that I (or somebody positioned at a central choke point, such as someone working at a package registry) can mitigate it before it does me any harm.

I’m not sure who I heard say this, but it is an aphorism that security and convenience are always, always in tension with each other. You cannot have full security and utmost convenience at the same time. That is to say, you could air-gap your computer and audit literally every piece of source code you rely on before building and running it; this includes the kernel, the compiler, the libraries you use, the applications — the whole kit and caboodle. But obviously, nobody does this, except perhaps for portions of the security apparatus of powerful nation states. And even for those, can they ever really be sure that somebody hasn’t inserted a self-concealing backdoor somewhere in the build chain? They would like to be, but I don’t think they can ever be 100% sure. So, we’re not going to go "all the way", sacrificing all convenience in the name of security. Common trade-offs include:

  • Storing your passwords and your 2FA tokens in the same password manager (eg. 1Password), in the name of convenience.
  • For that matter, storing all of your passwords in the single basket that is the cloud storage and closed-source application code of a for-profit proprietary software vendor (even if they publish a white paper on their security design).
  • Running closed-source software at all, or open-source software that you haven’t audited and that hasn’t been adequately engineered for security (ie. most of it, with notable exceptions).

One thing I want to explore is running more things in containers, which is easier now that Apple has rolled out a built-in container CLI. Now, containers are a pretty thin layer as far as security is concerned[6], but a thin layer can still be useful in a defense-in-depth approach. The main benefit of containerization is reproducibility: you can specify what your dependencies are, prepare an image with those dependencies, and then run the same code in development and production. If you shift to another machine, you can take the container environment with you and everything will be the same. But while that is the main benefit, it’s not the only one. Installing a dependency in a container is a very different matter from "letting a third-party package shart bits of itself all over your base system". But this is again a security-vs-convenience trade-off: working with containers does involve another layer of abstraction, another set of tooling to learn, and another set of things to go wrong.
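
As a sketch of what that might look like in practice (I’m assuming Docker-style flags here, which Apple’s container CLI broadly mirrors, plus a more or less arbitrary image):

# Run a throwaway shell in a container, exposing only the current
# project directory to it (image name and flags are illustrative).
container run --rm -it --volume "$PWD:/work" node:22 bash

Anything an install script does inside that shell stays within the container’s filesystem and the one mounted directory, modulo the escape vectors mentioned in the footnote.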

Now, it’s pretty clear that I could take those two web apps I mentioned above and containerize them fairly straightforwardly. Containers’ wheelhouse, so to speak, is network services. If the ultimate goal of your service is to expose an app over HTTP listening on a port, then containers are a great fit: the container is a black box and you don’t need to care about what is going on inside it. But what about something like my dotfiles repo? I could certainly create a "base" dev image with all the stuff inside it that I need for a typical project — basically all or most of the stuff that I’m currently spraying all over the system with Homebrew — and then have pretty much all my terminal sessions start with me spinning up a container inside which I’ll do the real work.
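
Concretely, I imagine describing that "base" image with something like the following sketch, where the package list, the dotfiles URL, and the install script are all placeholders rather than a worked-out recipe:

# Dockerfile for a hypothetical "base" dev image.
FROM debian:bookworm-slim

# The kind of tooling that Homebrew currently sprays all over my host.
RUN apt-get update && apt-get install -y --no-install-recommends \
    git curl tmux neovim zsh ripgrep \
    && rm -rf /var/lib/apt/lists/*

# Bake the dotfiles into the image instead of mounting them from the host.
# (Placeholder URL and installer script.)
RUN git clone https://example.com/me/dotfiles.git /root/dotfiles && \
    /root/dotfiles/install.sh

CMD ["zsh"]

Each terminal session would then start by running something like container run --rm -it dev-base:latest on that prebuilt image (again, assuming Docker-style invocation).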

But I’m not sure what the details will look like, or whether it’s worth it for such a marginal additional layer of isolation. Will there be an unacceptable performance hit? Will updating the image be too much of a pain in the butt, even for a guy like me who is all too comfortable exploring the space between security and convenience? Will I still need to keep a bunch of third-party code running outside the container in order to get my daily work done? And if I end up giving the containers access to things like SSH and GPG keys, or Anthropic and other API tokens, won’t I just be back where I started, with a bunch of unaudited and untrusted third-party code running in an environment where it will have access to sensitive things that I would rather keep away from prying eyes and fingers?
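
For the SSH case specifically, there is at least a middle ground that I know of from the Docker world (sketched below with the same assumed Docker-style flags and the hypothetical dev-base image from above): mount the SSH agent’s socket rather than the keys themselves, so that code in the container can ask the agent for signatures but can never read the key material. That narrows the hole rather than closing it, since a malicious process can still request signatures for as long as the socket is mounted:

# Forward the agent socket; no private keys enter the container.
container run --rm -it \
  --volume "$SSH_AUTH_SOCK:/ssh-agent" \
  --env SSH_AUTH_SOCK=/ssh-agent \
  dev-base:latest zsh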

All of these are open questions for now, but I’ll keep you posted if and when I actually start carrying out experiments in this direction[7].


  1. Reproducible builds are a related, useful idea, but while they close one potential gap, they are not a panacea. ↩︎

  2. Masochist. ↩︎

  3. I can’t link to this one as it’s private. ↩︎

  4. And while my npm footprint is small there, I have around 70 submodules in the repo, and the file count across those is over 7.5K. ↩︎

  5. "(1) with many eyes, shallow bugs get caught very quickly, and (2) that the more eyes there are, the more likely it is that some member of the group has sufficiently penetrating vision to catch the deeper-swimming bugs." ↩︎

  6. Attacks may exploit vulnerabilities that allow code inside a container, or within a VM, to break out and access, or otherwise affect, the host machine. And even in the absence of vulnerabilities, in order to be actually useful, containers often need access to network facilities and filesystem contents elsewhere. ↩︎

  7. If "keeping you posted" means pushing stuff to my dotfiles repo, that is. ↩︎

git-rebase's new powers

Woah. Look how many fancy things git rebase -i knows how to do now:

# Commands:
# p, pick <commit> = use commit
# r, reword <commit> = use commit, but edit the commit message
# e, edit <commit> = use commit, but stop for amending
# s, squash <commit> = use commit, but meld into previous commit
# f, fixup [-C | -c] <commit> = like "squash" but keep only the previous
#                    commit's log message, unless -C is used, in which case
#                    keep only this commit's message; -c is same as -C but
#                    opens the editor
# x, exec <command> = run command (the rest of the line) using shell
# b, break = stop here (continue rebase later with 'git rebase --continue')
# d, drop <commit> = remove commit
# l, label <label> = label current HEAD with a name
# t, reset <label> = reset HEAD to a label
# m, merge [-C <commit> | -c <commit>] <label> [# <oneline>]
#         create a merge commit using the original merge commit's
#         message (or the oneline, if no original merge commit was
#         specified); use -c <commit> to reword the commit message
# u, update-ref <ref> = track a placeholder for the <ref> to be updated
#                       to this position in the new commits. The <ref> is
#                       updated at the end of the rebase
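
For example, a hand-crafted todo script that replays a topic branch, recreates its merge, and keeps the branch ref pointing at the rewritten commits might look like this (the hashes are invented for illustration; git itself generates the label/reset/merge lines when you pass --rebase-merges, and the update-ref lines when you pass --update-refs):

label onto

# Replay the topic branch:
pick 1a2b3c4 Add frobnicator
pick 5d6e7f8 Fix frobnicator edge case
update-ref refs/heads/topic
label topic

# Recreate the merge on top of the new base:
reset onto
merge -C 9a8b7c6 topic # Merge branch 'topic'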

Brexit

Thanks, Brexit, for making the process of ordering and receiving a replacement for an aging peripheral so exquisitely painful. In the old days, goods moved from the UK to other EU countries effortlessly. Now, their transit resembles an Odyssean epic.

  • Day 1. Place the order. Get an email from the merchant, asking if I’m really sure I want to proceed with the order, given that I’m going to have to pay import duties when I receive it (and they want to avoid me returning the item when I find out how expensive and painful that’s going to be). Confirm my willingness to run the gauntlet, and see exactly how deep the rabbit hole goes.
  • Day 2. Receive notification from DHL that the order is on the way, with an estimated delivery date 7 days in the future.
  • Day 3. Order arrives in my country.
  • Days 4 and 5. A weekend. All quiet on the Western Front.
  • Day 6. Order arrives in my city.
  • Day 7. Order shows as "detained", but the status page notes that DHL will contact me to obtain the necessary details.
  • Day 8. Order still shows as "detained". Given that DHL hasn’t contacted me yet, I contact them. Receive email from DHL asking me to fill in an authorization for them to handle customs clearance on my behalf. I fill it in and send it back immediately.
  • Day 9. DHL emails me to say they are "going to initiate the corresponding procedures for customs dispatch". Estimated delivery date comes and goes.
  • Days 10 and 11. A weekend. All quiet on the Western Front.
  • Day 12. I receive my daily status update email, informing me that my parcel’s currently in an "exception state".
  • Day 13. Another daily status update ("exception"), and the status page again tells me that DHL needs some information from me and will contact me to let me know what they need; given that they don’t contact me, I contact them, asking them what they need.
  • Day 14. I receive a reply from DHL, indicating that they are still waiting to physically receive the package (which seems false, given all the status updates showing the package’s transit — in DHL’s custody — from the source country to the destination country), and that I should write to the seller to ask them what date they sent it. Finally, later in the day I receive an email from DHL inviting me to pay the import duties on the order, which amount to 32.6% of the cost of the goods. Why is this amount more than the expected 21% (the Value Added Tax rate in my country)? Well, it’s because DHL helps itself to 52.51€ for its "administration" services, in addition to the 9.99€ I paid for shipping in my original order (supposedly, to cover delivery in the span of "3–6 business days"). Mousing over to see the description of what these "administrative services" consist of, I am greeted with the dystopianly inaccurate claim that they refer to "assistance provided by DHL Express to facilitate the *efficient* clearance of shipments through customs processes; this includes documentation management, ensuring regulatory compliance, and handling duties and taxes" (emphasis added).
  • Day 15. Item is actually delivered.

So, in that 15-day itinerary, 9 days were spent with the package sitting on some shelf waiting to be "processed", while I received an irritating bombardment of inaccurate notifications and requests to engage in hoop-jumping. That they would charge me a premium to facilitate this circus, and have the gall to describe it as "efficient", is the most annoying part.