@jashkenas
Last active September 5, 2024 17:03
Why Semantic Versioning Isn't

Spurred by recent events (https://news.ycombinator.com/item?id=8244700), this is a quick set of jotted-down thoughts about the state of "Semantic" Versioning, and why we should be fighting the good fight against it.

For a long time in the history of software, version numbers indicated the relative progress and change in a given piece of software. A major release (1.x.x) was major, a minor release (x.1.x) was minor, and a patch release was just a small patch. You could evaluate a given piece of software by name + version, and get a feeling for how far away version 2.0.1 was from version 2.8.0.

But Semantic Versioning (henceforth, SemVer), as specified at http://semver.org/, changes this to prioritize a mechanistic understanding of a codebase over a human one. Any "breaking" change to the software must be accompanied with a new major version number. It's alright for robots, but bad for us.

SemVer tries to compress a huge amount of information — the nature of the change, the percentage of users that will be affected by the change, the severity of the change (Is it easy to fix my code? Or do I have to rewrite everything?) — into a single number. And unsurprisingly, it's impossible for that single number to contain enough meaningful information.

If your package has a minor change in behavior that will "break" for 1% of your users, is that a breaking change? Does that change if the number of affected users is 10%? Or 20%? How about if, instead, it's only a small number of users that will have to change their code, but the change for them will be difficult? — a common event with the deprecation of unpopular features. Semantic versioning treats all of these scenarios in the same way, even though in a perfect world the consumers of your codebase should be reacting to them in quite different ways.

Breaking changes are no fun, and we should strive to avoid them when possible. To the extent that SemVer encourages us to avoid changing our public API, it's all for the better. But to the extent that SemVer encourages us to pretend like minor changes in behavior aren't happening all the time; and that it's safe to blindly update packages — it needs to be re-evaluated.

Some pieces of software are like icebergs: a small surface area that's visible, and a mountain of private code hidden beneath. For those types of packages, something like SemVer can be helpful. But much of the code on the web, and in repositories like npm, isn't code like that at all — there's a lot of surface area, and minor changes happen frequently.

Ultimately, SemVer is a false promise that appeals to many developers — the promise of pain-free, don't-have-to-think-about-it, updates to dependencies. But it simply isn't true. Node doesn't follow SemVer, Rails doesn't do it, Python doesn't do it, Ruby doesn't do it, jQuery doesn't (really) do it, even npm doesn't follow SemVer. There's a distinction that can be drawn here between large packages and tiny ones — but that only goes to show how inappropriate it is for a single number to "define" the compatibility of any large body of code. If you've ever had trouble reconciling your npm dependencies, then you know that it's a false promise. If you've ever depended on a package that attempted to do SemVer, you've missed out on getting updates that probably would have been lovely to get, because of a minor change in behavior that almost certainly wouldn't have affected you.

If at this point you're hopping on one foot and saying — wait a minute, Node is 0.x.x — SemVer allows pre-1.0 packages to change anything at any time! You're right! And you're also missing the forest for the trees! Keeping a system that's in heavy production use at pre-1.0 levels for many years is effectively the same thing as not using SemVer in the first place.

The responsible way to upgrade isn't to blindly pull in dependencies and assume that all is well just because a version number says so — the responsible way is to set aside five or ten minutes, every once in a while, to go through and update your dependencies, and make any minor changes that need to be made at that time. If an important security fix happens in a version that also contains a breaking change for your app — you still need to adjust your app to get the fix, right?

SemVer is woefully inadequate as a scheme that determines compatibility between two pieces of code — even a textual changelog is better. Perhaps a better automated compatibility scheme is possible. One based on matching type signatures against a public API, or comparing the runs of a project's public test suite — imagine a package manager that ran the test suite of the version you're currently using against the code of the version you'd like to upgrade to, and told you exactly what wasn't going to work. But SemVer isn't that. SemVer is pretty close to the most reductive compatibility check you would be able to dream up if you tried.
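The signature-matching idea above can be illustrated with a toy Python check. Everything here is hypothetical: the `public_api` and `compatibility_report` helpers and the two fake module "versions" are invented for illustration, and a structural check like this still cannot detect the behavioral changes the essay is concerned with — only a test suite could.

```python
import inspect

def public_api(module):
    """Map each public callable's name to its signature string."""
    return {
        name: str(inspect.signature(obj))
        for name, obj in vars(module).items()
        if callable(obj) and not name.startswith("_")
    }

def compatibility_report(old, new):
    """List structural API breaks going from `old` to `new`:
    removed names and changed signatures."""
    old_api, new_api = public_api(old), public_api(new)
    breaks = []
    for name, sig in old_api.items():
        if name not in new_api:
            breaks.append(f"removed: {name}{sig}")
        elif new_api[name] != sig:
            breaks.append(f"changed: {name}{sig} -> {name}{new_api[name]}")
    return breaks

# Two toy "versions" of a package, standing in for real modules.
class v1:
    def get(url): ...
    def post(url, body): ...

class v2:
    def get(url, timeout=30): ...   # signature changed
    # post() was removed
```

Here `compatibility_report(v1, v2)` flags both the removed `post` and the changed `get` signature; a real tool would walk installed package trees rather than toy classes, and would pair this with running the old version's test suite against the new code.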

If you pretend like SemVer is going to save you from ever having to deal with a breaking change — you're going to be disappointed. It's better to keep version numbers that reflect the real state and progress of a project, use descriptive changelogs to mark and annotate changes in behavior as they occur, avoid creating breaking changes in the first place whenever possible, and responsibly update your dependencies instead of blindly doing so.

Basically, Romantic Versioning, not Semantic Versioning.

All that said, okay, okay, fine — Underscore 1.7.0 can be Underscore 2.0.0. Uncle.

(typed in haste, excuse any grammar-os, will correct later)

@robnagler

@bartlettroscoe we are talking past each other, just as we have done before. Neither approach can be implemented by fiat. A key difference is this: every package that ensures backwards compatibility improves downstream reliability, because dependents can upgrade without thinking about compatibility. With semver, there is no such guarantee except for minor releases. With backwards compatibility, software becomes more reliable in general, because upgrades to the "latest and greatest" are seamless and there is less friction so there is more time to maintain the dependent software.

The cost of backwards incompatibility is immense, because it is N×M, where N is the number of packages and M is the number of dependents. The cost of backwards compatibility is linear: N — only the packages themselves need to be maintained. Not to mention that M is much larger than N, so the difference between N and N×M is more than quadratic.
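To make that arithmetic concrete, here is a toy Python illustration of the cost model. The dependency graph and unit costs are invented assumptions: one unit of work per package release, plus one unit per direct dependent when the release is breaking.

```python
# Hypothetical dependency graph: package -> its direct dependents.
dependents = {
    "liba": ["app1", "app2", "app3"],
    "libb": ["app1", "app4"],
    "libc": ["app2", "app3", "app4", "app5"],
}

n_packages = len(dependents)   # N = 3

# Backwards-compatible releases: only the releasing packages do work.
compat_cost = n_packages       # 3

# Breaking releases: every dependency edge also costs a migration.
breaking_cost = n_packages + sum(len(d) for d in dependents.values())
# 3 packages + 9 dependency edges = 12 units of work
```

In this sketch the breaking-release cost (12) already dwarfs the compatible-release cost (3), and the gap widens with every additional dependent, which is the N versus N×M point above.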

@mindplay-dk

A key difference is this: every package that ensures backwards compatibility improves downstream reliability, because dependents can upgrade without thinking about compatibility.

💯

With semver, there is no such guarantee except for minor releases.

You mean minor and patch releases, right?

and what's your point? I mean, with SemVer, only major releases (are supposed to) contain breaking changes, that's the standard - we need some way to signal a breaking change, right? Otherwise semantic constraints in package manager requirements wouldn't be any use at all.

With backwards compatibility, software becomes more reliable in general, because upgrades to the "latest and greatest" are seamless and there is less friction so there is more time to maintain the dependent software.

of course backwards compatibility is always preferable, whenever it's practical and realistic - in some cases though, things can be simplified and made more reliable by removing code that exists to support backwards compatibility. Code like this doesn't typically make software more reliable - usually the opposite - so we want to remove it eventually.

I've been following this conversation for a long time, and I'm a bit confused. 😅

is this a discussion about SemVer or about change management in general?

if it's about SemVer, yeah, I agree, there are some problems with SemVer as described - but there aren't any major problems with SemVer as implemented in package managers, is there?

sure, sometimes you get a bad release, because people tagged it with the wrong version number - but I'd say that's maybe 10% of the time, absolute worst case, which means 90% of the time it's saving us a lot of work.

if I notice a package using romantic versioning, usually I just change my constraint to something like 1.2.3 and manually update that package as needed - it's something I've rarely needed though, as most packages (based on my experience with NPM and Composer) are generally versioned according to package manager recommendations.
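The pinning tactic described above can be sketched with a minimal Python model of version constraints. The caret rule here is a simplified assumption modeled on npm-style ranges (real caret ranges also treat 0.x versions specially, which this sketch ignores).

```python
def parse(version):
    """Split "1.2.3" into a comparable (major, minor, patch) tuple."""
    return tuple(int(part) for part in version.split("."))

def satisfies_caret(version, base):
    """Simplified npm-style caret range: ^base matches any version
    >= base with the same major number (majors >= 1 only)."""
    v, b = parse(version), parse(base)
    return v[0] == b[0] and v >= b

def satisfies_exact(version, base):
    """A pinned constraint matches exactly one version."""
    return parse(version) == parse(base)

# A caret constraint pulls in compatible-looking updates automatically...
# satisfies_caret("1.9.0", "1.2.3")  -> True
# satisfies_caret("2.0.0", "1.2.3")  -> False
# ...while a pin never updates without a manual bump.
# satisfies_exact("1.9.0", "1.2.3")  -> False
```

The trade-off is exactly the one in the comment: pinning trades automatic updates for immunity to mis-tagged releases, at the cost of updating that one package by hand.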

speaking of, I will mention this again, since no one ever commented:

https://simversion.github.io/

it's a subset of SemVer, which references how version numbers are interpreted by package managers - which is much simpler and easier to describe than actual SemVer, which just seems to create confusion and start endless debates.

my hope with this was that developers would be more interested in solving problems than debating the complex semantics of a specification that (let's face it) most developers don't even bother reading.

it was just a first draft/pitch, but no one ever showed any interest.

wouldn't it be more interesting to solve the problem than to debate the pros and cons of the SemVer spec?

extract "the good parts" and give people something they can actually understand and apply in a way that makes sense with existing package managers? 90% of the value with 10% of the complexity? 🙃

@bartlettroscoe

The cost of backwards incompatibility is immense, because it is N×M, where N is the number of packages and M is the number of dependents. The cost of backwards compatibility is linear: N — only the packages themselves need to be maintained. Not to mention that M is much larger than N, so the difference between N and N×M is more than quadratic.

@robnagler, the cost calculations are not that straightforward. Maintaining backward compatibility for long periods of time over a lot of development and many releases can become a huge drain on productivity of the package development team, especially for faster moving packages and those that are driven by research and large underlying technology changes. (I come from the area of computational science where changes in GPU and other accelerator technologies are massively disruptive and mandate breaks in backward compatibility in many cases.) One can argue that maintaining backward compatibility for old customers is a large tax on new customers that want to adopt and fund future development of the software.

If you push too hard for never breaking backward compatibility, then you force many teams to abandon packages and start from scratch with a new package for new customers. Then someone has to maintain the old package as well as the new package. This reminds me of:

(See my summary of these views 20 years later here).

We have to look at the total area under the curve of productivity of package developers, downstream package developers, package ecosystem management, and end customers. For some key, highly used packages, the minimum area under the curve will come from never breaking backward compatibility. But for many other packages with a smaller number of direct downstream customers, the area under the curve will be minimized if the package development team can dump the cost of backward compatibility at regular intervals and have the downstream customers absorb those costs incrementally.

In the organization where I work, they are trying to remove an old, highly used software package that has not been actively developed for 15 years, and to get customers to move to the new package that was ready at least 10 years ago. The old package was officially deprecated at least 8 years ago and is not slated to be removed until the end of next year! That process is hugely expensive for everyone involved.
One can argue that it would have been better, and cheaper overall, to just incrementally refactor that old package into the new package starting 20 years ago and slowly break backward compatibility in smaller, easier-to-absorb increments.

And that is why we need semver and we need to take it seriously.
