@stevekrouse I think it's worth thinking through the implications of only wanting to democratize high-level programs. Since we don't really have a good way to build hardware that can execute recursively nested expressions, the lowest level will always execute statements that are connected by named locations. So by saying you don't want people to have to think about statements, you're asking for the lowest levels to just be "handled" for you. That's basically just the same priesthood arrangement we're trying to get away from, and we end up with the same questions about who watches the watchers, and so on.
-Kartik Agaram on the Future of Coding Slack
This is a very interesting point: if we truly want to democratize programming, it needs to be democratic at every level, all the way down to the bits. Otherwise, it's just another limited app that won't support extensibility in all the ways we'd want.
Of course, I am with you that a stack that's fully explainable at each level is wonderful. This reminds me of the goals of Alan Kay's STEPS project. A system that rewards curiosity infinitely! Very beautiful.
My goal is to enable people to direct computers to their human ends. For me that's democratizing computation: not necessarily that they can understand each level, but that they can at some level (likely the highest level) describe what they want to a computer.
Forcing humans to interface with a lower-level machine language is very much antithetical to this vision. As Alan Perlis says, "A programming language is low level when its programs require attention to the irrelevant." In this sense, I want to democratize programming by providing the highest level language possible.
A common reply is some variation on:
But abstractions are always leaky. People will eventually need to understand lower levels when the higher level is randomly slow or doesn't do what they want it to, etc, etc.
This doesn't move me. Of course you can come up with examples of when abstractions are leaky, but in my experience, most of the time abstractions aren't leaky and I don't have to peek under the hood. And these occurrences don't make me give up on abstractions, but simply wish for that specific abstraction to have that one specific improvement.
However, I do hold out hope for a world where we can both get what we want. For example, I can imagine a world where a super high level language is eventually compiled to the lambda calculus or System F, which is then compiled and hyper optimized into machine code. That feels comprehensible most of the way down.
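To make the "super high level language compiled to the lambda calculus" idea concrete, here is a minimal sketch in Python of one rung of that ladder: a high-level `let` expression lowered into a lambda-calculus application, then evaluated by an interpreter standing in for the optimizing backend. The representation and the `desugar_let` helper are hypothetical illustrations, not any real compiler's pipeline.

```python
from dataclasses import dataclass

# Minimal untyped lambda calculus: a toy stand-in for the
# "high level -> lambda calculus -> machine code" pipeline.
@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fn: object
    arg: object

def desugar_let(name, value, body):
    """A high-level `let name = value in body` lowers to ((λname. body) value)."""
    return App(Lam(name, body), value)

def evaluate(term, env=None):
    """Environment-based interpreter (stand-in for the optimizing backend)."""
    env = env or {}
    if isinstance(term, Var):
        return env[term.name]
    if isinstance(term, Lam):
        # Capture the current environment in a closure.
        return lambda x, t=term, e=env: evaluate(t.body, {**e, t.param: x})
    if isinstance(term, App):
        return evaluate(term.fn, env)(evaluate(term.arg, env))
    return term  # literals pass through unchanged

# let x = 21 in double x  ==>  ((λx. double x) 21), with double as a primitive
prog = desugar_let("x", 21, App(Var("double"), Var("x")))
print(evaluate(prog, {"double": lambda n: n * 2}))  # 42
```

Each lowering step here is mechanical and inspectable, which is what makes the pipeline feel "comprehensible most of the way down."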
Steve, I've been thinking about your one-sentence mission:
Enable all people to modify the software they use in the course of using it. (https://futureofcoding.org/episodes/033)
How do you weight the two halves? Do you care about people being able to modify their OS? Their programming language? Their firmware?
I think the place where we diverge is the "in the course of using it" part. Or at least that seems like a much lower priority to me. If you try to attack both halves at once (allow modification "and" in the course of using it) you're liable to add features at the product/UI level to help with the latter that make certain kinds of modification harder. But then I (still) don't really understand Smalltalk yet..
I think we agree here more than we disagree. The "in the course of using it" part could be replaced with "in time proportional to the essential complexity of the modification" but that's a bit of a mouthful. I would be happy if we could shift things from where they are to your "a single afternoon" or replace quantum leaps of understanding after weeks of effort with an hour of achievement for an honest hour (or three) of effort. My "in the course of using it" is more of a far-off goal. Somewhere in the future, I want it to be as quick as using the Settings menu is today -- modulo how difficult the task is.
Ah, I think this may be where we disagree. I am working towards a set of abstractions for whole systems that will allow humans to direct computers without mentioning low-level concerns like network requests. Of course people will create abstractions within the tool, but there will be no need for ad-hoc extra abstractions, like Docker, on top of it. I can't fully defend why yet, but I do have conviction that these denotational abstractions will be different from those of the past...
Concurrency is a prime example of a problem that entirely disappears in a denotational system. If you stop over-specifying the order of instructions and instead specify what you want, the computer is free to maximize concurrency at compile time.
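A small Python sketch of this claim: because a pure function's result depends only on its input, the meaning of "the list of squares" never mentions an execution order, so the runtime is free to evaluate sequentially or in parallel without changing the answer. This is an illustrative analogy, not the denotational system itself.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # A pure function: the result depends only on the input, so
    # evaluation order is unobservable and may be parallelized freely.
    return n * n

inputs = range(8)

# Sequential and parallel evaluation give identical results, because the
# specification ("the list of squares") never fixed an instruction order.
sequential = list(map(square, inputs))
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(square, inputs))

assert sequential == parallel  # same meaning, different schedule
print(sequential)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The over-specification being dropped is exactly the imperative ordering of statements; once it's gone, concurrency becomes a scheduling decision the compiler or runtime can make for you.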
I guess democracy is a spectrum, not a binary. Some things are more democratic than others. But then there are also trade-offs. For example anarchy might be more "democratic" than representative democracy, but one might prefer representative democracy because they'd trade some democratic power for stability. In the same way, I think most people would be willing to not be able to fix every single bug on their own in exchange for some really powerful abstractions.
Additionally, one of the benefits of denotational semantics is that it lets us be precise about the ways in which our implementations don't meet our specification. It makes our bug reports precise.
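As a hypothetical illustration of what a "precise bug report" against a denotational specification might look like: with the spec stated as a mathematical function, a mismatch report can name exactly which inputs diverge and what the spec says they should be. The function names here (`spec_average`, `impl_average`, `precise_bug_report`) are invented for this sketch.

```python
def spec_average(xs):
    # Denotational specification: the mean, stated mathematically.
    return sum(xs) / len(xs)

def impl_average(xs):
    # A (deliberately buggy) implementation using integer division.
    return sum(xs) // len(xs)

def precise_bug_report(spec, impl, inputs):
    """List exactly the inputs where implementation and specification
    diverge, with both the wrong and the specified answer."""
    return [(x, impl(x), spec(x)) for x in inputs if impl(x) != spec(x)]

report = precise_bug_report(spec_average, impl_average, [[2, 4], [1, 2]])
print(report)  # [([1, 2], 1, 1.5)] -- only the divergent case is reported
```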
This would be the sort of thing that you'd be able to do in my imaginary system. Maybe it'd help to be concrete with an example about the ways in which my system won't be infinitely fixable:
Denotational programming is trying to make all objects in a language representable by some mathematical object. So for argument's sake, let's work with math as an analogy. Let's take the square root function. In order to actually compute this function, the computer has some imperative algorithm, but I simply call `sqrt` declaratively and it happens faster than I can blink. I can assume it's instantaneous. However, what if I ask for the `sqrt` of a number so large that it takes a really long time to compute? This breaks the idea that calculations such as `sqrt` are instantaneous. Yet I don't really care about the imperative algorithm as long as I have a debugging tool that can help me pinpoint the problem to this function, so I can route around it or think up some alternative. (Maybe I wrap the computation in such a way that the length of computation time is explicit.) So the places where my system "has a wall" would be when mathematical assumptions don't hold.
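The "wrap the computation so its cost is explicit" idea can be sketched in a few lines of Python. The `timed` wrapper here is a hypothetical debugging tool, not part of any real system: it turns a function that pretends to be instantaneous into one whose running time is part of its answer.

```python
import math
import time

def timed(f):
    """Hypothetical debugging wrapper: makes the (usually ignorable)
    cost of a 'mathematical' function explicit in its result."""
    def wrapped(*args):
        start = time.perf_counter()
        value = f(*args)
        elapsed = time.perf_counter() - start
        return value, elapsed
    return wrapped

# math.isqrt on a huge integer is exact but no longer "instantaneous";
# the wrapper surfaces exactly where the abstraction's cost leaks through.
value, elapsed = timed(math.isqrt)(10 ** 10000)
print(elapsed >= 0)  # timing is now an explicit part of the answer
```

This is the shape of the escape hatch: when the "instantaneous" assumption fails, the tool points at the offending function so you can route around it.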
This is a good point. I'd love to be able to understand/debug what's going on with my wifi driver. Yet at the same time, I wonder (without knowing almost anything about wifi) if we can replace wifi with a denotational protocol. In other words, wifi is a low level protocol/technology -- could we replace it with a higher level technology that would be debuggable at that higher level?
I'm a big fan of pushing all "low-level, implementation details" to a "compiler" and leaving the layer above that compiler for the interesting problems humans are concerned with.
I agree this is a worthwhile goal. In the same way you weight "allow modification" much higher than "in the course of using it", I would weight highest-level comprehensibility over full-stack comprehensibility. I'd worry that the choices we make to accomplish the latter would prevent the highest level from compiling down efficiently. But hopefully one day we can have both!