@stevekrouse
Created February 6, 2019 18:30
kartik reply

@stevekrouse I think it's worth thinking through the implications of only wanting to democratize high-level programs. Since we don't really have a good way to build hardware that can execute recursively nested expressions, the lowest level will always execute statements that are connected by named locations. So by saying you don't want people to have to think about statements, you're asking for the lowest levels to just be "handled" for you. That's basically just the same priesthood arrangement we're trying to get away from, and we end up with the same questions about who watches the watchers, and so on.

-Kartik Agram on the Future of Coding Slack

This is a very interesting point: if we truly want to democratize programming, it needs to be democratic at every level, all the way down to the bits. Otherwise, it's just another limited app that won't support extensibility in all the ways we'd want.

Of course, I am with you that a stack that's fully explainable at each level is wonderful. This reminds me of Alan Kay's STEPS project's goals. A system that rewards curiosity infinitely! Very beautiful.

My goal is to enable people to direct computers to their human ends. For me that's democratizing computation: not necessarily that they can understand each level, but that they can at some level (likely the highest level) describe what they want to a computer.

Forcing humans to interface with a lower-level machine language is very much antithetical to this vision. As Alan Perlis says, "A programming language is low level when its programs require attention to the irrelevant." In this sense, I want to democratize programming by providing the highest level language possible.

A common reply is some variation on:

But abstractions are always leaky. People will eventually need to understand lower levels when the higher level is randomly slow or doesn't do what they want it to, etc, etc.

This doesn't move me. Of course you can come up with examples of when abstractions are leaky, but in my experience, most of the time abstractions aren't leaky and I don't have to peek under the hood. And these occurrences don't make me give up on abstractions; they simply make me wish for that specific abstraction to have that one specific improvement.

However, I do hold out hope for a world where we can both get what we want. For example, I can imagine a world where a super high level language is eventually compiled to the lambda calculus or System F, which is then compiled and hyper optimized into machine code. That feels comprehensible most of the way down.
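To make "comprehensible most of the way down" concrete, here is a minimal sketch (in Python, purely illustrative — not a claim about any actual compilation pipeline) of an untyped lambda-calculus evaluator. The point is that the core a high-level language could compile to can itself be tiny and readable:

```python
# Minimal untyped lambda-calculus evaluator (illustrative sketch).
# Terms are tuples: ("var", name) | ("lam", name, body) | ("app", f, x)

def evaluate(term, env=None):
    """Evaluate a lambda term to a Python value (closures model functions)."""
    env = env or {}
    kind = term[0]
    if kind == "var":
        return env[term[1]]
    if kind == "lam":
        name, body = term[1], term[2]
        return lambda arg: evaluate(body, {**env, name: arg})
    if kind == "app":
        return evaluate(term[1], env)(evaluate(term[2], env))
    raise ValueError(f"unknown term: {term!r}")

# Church numerals: n = λf.λx. f applied n times to x
zero = ("lam", "f", ("lam", "x", ("var", "x")))
succ = ("lam", "n", ("lam", "f", ("lam", "x",
        ("app", ("var", "f"),
         ("app", ("app", ("var", "n"), ("var", "f")), ("var", "x"))))))

two = ("app", succ, ("app", succ, zero))
# Read a Church numeral back as a Python int by applying it to (+1) and 0.
as_int = evaluate(two)(lambda k: k + 1)(0)  # → 2
```

A real System F target would add types, but the shape of the interpreter would be similarly small.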


Steve, I've been thinking about your one-sentence mission:

Enable all people to modify the software they use in the course of using it. (https://futureofcoding.org/episodes/033)

How do you weight the two halves? Do you care about people being able to modify their OS? Their programming language? Their firmware?

I think the place where we diverge is the "in the course of using it" part. Or at least that seems like a much lower priority to me. If you try to attack both halves at once (allow modification and in the course of using it) you're liable to add features at the product/UI level to help with the latter that make certain kinds of modification harder. But then I (still) don't really understand Smalltalk yet...

-Kartik Agram on the Future of Coding Slack here and here

I think we agree here more than we disagree. The "in the course of using it" part could be replaced with "in time proportional to the essential complexity of the modification," but that's a bit of a mouthful. I would be happy if we could shift things from where they are to your "a single afternoon," or replace quantum leaps of understanding after weeks of effort with an hour of achievement for an honest hour (or three) of effort. My "in the course of using it" is more of a far-off goal. Somewhere in the future, I want it to be as quick as using the Settings menu is today -- modulo how difficult the task is.

akkartik commented Feb 6, 2019

I'm not in favor of forcing people to understand low level details either. Most of the time you shouldn't have to think about them. But when a) the low-level details get in the way, and b) you have the time/inclination to look into them, then the system should let you get into the guts more easily. It should reward curiosity. I think this is crucial to democratization, and it has nothing to do with how low- or high-level you are.

'Low' and 'high' are relative, and many things that seem low-level to us (how the OS handles network events) were considered high-level not very long ago. Over time, additional layers intermediated between us and them. There's no reason to think this won't continue in the future. If you're swinging Docker containers around, the Elm code in them may feel like a low-level detail to you. If you aren't thinking about solutions that apply at all layers of abstraction, your solution will be of limited use, because the new layers that others create will make it less relevant over time.

in my experience, most of the time abstractions aren't leaky and I don't have to peek under the hood.

I think the most common 'abstraction leakage' I run into is that something is slow, and I want to understand why it's slow so that I can find the simplest change to my behavior to improve my life. Similarly, my computer often hangs and throws up a beach ball, and I want to understand precisely what part was blocking so that I can alter my expectations to "not do that". Or I move my computer and suddenly the wifi refuses to reconnect[1]. Perhaps you have run into such issues as well? Performance, concurrency and networking are common issues caused by mismatch between multiple levels of abstraction.

And these occurrences don't make me give up on abstractions, but simply wish for that specific abstraction to have that one specific improvement.

But now you have a wish and no way to fulfill it. What do you do next? You run into a bug, and you have to wait for someone else to fix it. That's "undemocratic", right? You upgraded your browser and it added a new menu that you didn't ask for and want to remove. Are these desires not "human ends"?

You definitely shouldn't give up on abstractions. But let's remember the original meaning of the word. An abstraction allows you to choose to ignore certain details when they are irrelevant. But you should still be able to focus on the details when the situation calls for it. When you give up all ability to focus on the details it isn't an abstraction anymore. It's just a wall, and it's disenfranchising you.

[1] And I'm not thinking of cases like "having too many tabs open" or "the wifi signal is weak". Latency and connectivity issues often happen in spite of plenty of capacity. They're less common with Macs than Linux, but Macs are not perfect and they've been getting worse over time, which fits with my sense that software tends to get worse over time even if it's actively worked on.

@stevekrouse (Author)

There's no reason to think this [the layer of abstractions] won't continue in future.

Ah, I think this may be where we disagree. I am working towards a set of abstractions for whole systems that will allow humans to direct computers without mentioning low-level concerns like network requests. Of course people will create abstractions within the tool, but there will be no need for ad-hoc extra abstractions, like Docker, on top of it. I can't fully defend why yet, but I do have conviction that these denotational abstractions will be different from those of the past...

Performance, concurrency and networking are common issues caused by mismatch between multiple levels of abstraction.

Concurrency is a prime example of a problem that entirely disappears in a denotational system. If you stop over-specifying the order of instructions and instead specify what you want, the computer is free to maximize concurrency at compile time.
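One way to picture this (a hedged sketch, not a description of Steve's actual system): if a program says only "apply this pure function to each element," without dictating an execution order, a runtime is free to run the calls sequentially or in parallel, and the result — the denotation — is identical either way:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x  # pure: the result depends only on the input

data = list(range(10))

# Over-specified imperative order...
sequential = [square(x) for x in data]

# ...versus specifying "what", letting the runtime choose how and when
# to run each call. Executor.map still returns results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, data))

assert sequential == parallel  # same denotation, different schedules
```

The freedom to parallelize comes entirely from `square` being pure; an effectful function would re-introduce ordering constraints.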

But now you have a wish and no way to fulfill it. What do you do next? You run into a bug, and you have to wait for someone else to fix it. That's "undemocratic", right?

I guess democracy is a spectrum, not a binary. Some things are more democratic than others. But then there are also trade-offs. For example, anarchy might be more "democratic" than representative democracy, but one might prefer representative democracy, trading some democratic power for stability. In the same way, I think most people would be willing to give up the ability to fix every single bug on their own in exchange for some really powerful abstractions.

Additionally, one of the benefits of denotational semantics is that it lets us be precise about the ways in which our implementations don't meet our specification. It makes our bug reports precise.

You upgraded your browser and it added a new menu that you didn't ask for and want to remove.

This would be the sort of thing that you'd be able to do in my imaginary system. Maybe it'd help to be concrete with an example about the ways in which my system won't be infinitely fixable:

Denotational programming is trying to make all objects in a language representable by some mathematical object. So for argument's sake, let's work with math as an analogy. Let's take the square root function. In order to actually compute this function, the computer has some imperative algorithm, but I simply call sqrt declaratively and it happens faster than I can blink. I can assume it's instantaneous. However, what if I ask for the sqrt of a number so large that it takes a really long time to compute? This breaks the idea that calculations such as sqrt are instantaneous. Yet I don't really care about the imperative algorithm as long as I have a debugging tool that can help me pinpoint the problem to this function, so I can route around it or think up some alternative. (Maybe I wrap the computation in such a way that the length of computation time is explicit.)
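The parenthetical idea above — wrapping a computation so its running time becomes explicit — could look something like this sketch (the `timed` wrapper is hypothetical, invented here for illustration):

```python
import time
from math import isqrt

def timed(f, budget_s):
    """Hypothetical wrapper: make f's running time an explicit part of
    its result, instead of a hidden 'instantaneous' assumption."""
    def wrapped(*args):
        start = time.perf_counter()
        value = f(*args)
        elapsed = time.perf_counter() - start
        return {"value": value,
                "seconds": elapsed,
                "within_budget": elapsed <= budget_s}
    return wrapped

sqrt_with_cost = timed(isqrt, budget_s=0.1)
result = sqrt_with_cost(10**6)
# result["value"] == 1000; whether the call was "fast enough" is now
# data the caller can inspect, rather than an assumption that silently
# breaks on enormous inputs.
```

This keeps the declarative call site while surfacing the one operational fact (cost) that the mathematical reading hides.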

So the places where my system "has a wall" would be when mathematical assumptions don't hold.

Or I move my computer and suddenly the wifi refuses to reconnect

This is a good point. I'd love to be able to understand/debug what's going on with my wifi driver. Yet at the same time, I wonder (without knowing almost anything about wifi) if we can replace wifi with a denotational protocol. In other words, wifi is a low level protocol/technology -- could we replace it with a higher level technology that would be debuggable at that higher level?

I'm a big fan of pushing all "low-level, implementation details" to a "compiler" and leaving the layer above that compiler for the interesting problems humans are concerned with.

But you should still be able to focus on the details when the situation calls for it.

I agree this is a worthwhile goal. In the same way you weight "allow modification" much higher than "in the course of using it," I would weight highest-level comprehensibility over full-stack comprehensibility. I'd worry that the choices we make to accomplish the latter would prevent the highest level from compiling down efficiently. But hopefully one day we can have both!
