2024 reading list

Things I might read in 2024.

Now extended into 2025.



  • Antoine de Saint-Exupéry, Richard Howard (translator) - The Little Prince
  • (Translation by) Sam Hamill - Yellow River: Three Hundred Poems From the Chinese
  • Sayaka Murata, Ginny Tapley Takemori (translator) - Convenience Store Woman (via)
  • Jorge Luis Borges - Tlön, Uqbar, Orbis Tertius (in Labyrinths)/ printed (via)
  • Franz Kafka - The Metamorphosis (via)
  • William Olaf Stapledon - Star Maker/ audio, go to 12m35s to skip past the introduction spoilers

  • The Heart of Innovation: A Field Guide for Navigating to Authentic Demand/ audio (via)
  • Peter D. Kaufman - Poor Charlie's Almanack: The Wit and Wisdom of Charles T. Munger, Expanded Third Edition
  • Lia A. DiBello - Expertise in Business: Evolving with a Changing World (in The Oxford Handbook of Expertise) (via)
  • Joël Glenn Brenner - The Emperors of Chocolate: Inside the Secret World of Hershey and Mars
  • Elad Gil - High Growth Handbook/ audio
  • W. Edwards Deming - The New Economics for Industry, Government, Education/ audio
  • W. Edwards Deming - The New Economics for Industry, Government, Education/ the PDF or ebook
  • Henrik Karlsson - Escaping Flatland/ including the posts I SingleFile'd
  • the relevant-looking posts on benkuhn.net/posts
  • Commoncog Case Library Beta
  • Keith J. Cunningham - The Road Less Stupid: Advice from the Chairman of the Board/ audio
  • Keith J. Cunningham - The 4-Day MBA/ video
  • Cedric Chin's summary of 7 Powers
  • Akio Morita, Edwin M. Reingold, Mitsuko Shimomura - Made in Japan: Akio Morita and Sony
  • Nomad Investment Partnership Letters or redacted (via)
  • How to Lose Money in Derivatives: Examples From Hedge Funds and Bank Trading Departments
  • Brian Hayes - Infrastructure: A Guide to the Industrial Landscape
  • Accelerated Expertise (via)/ printed, "read Chapters 9-13 and skim everything else"
  • David J. Gerber - The Inventor's Dilemma (via Oxide and Friends)
  • Alex Komoroske - The Compendium / after I convert the Firebase export in code/websites/compendium-cards-data/db.json to a single HTML page
  • Rich Cohen - The Fish That Ate The Whale (via)
  • Bob Caspe - Entrepreneurial Action/ printed, skim for anything I don't know



Interactive fiction


unplanned notable things read


unplanned and abandoned

  • Ichiro Kishimi, Fumitake Koga - The Courage to Be Disliked/ audio
  • Matt Dinniman - Dungeon Crawler Carl/ audio
  • Charles Eisenstein - The More Beautiful World Our Hearts Know Is Possible/ audio
  • Geoff Smart - Who: The A Method for Hiring/ audio
  • Genki Kawamura - If Cats Disappeared from the World/ audio
  • Paul Stamets - Fantastic Fungi: How Mushrooms Can Heal, Shift Consciousness, and Save the Planet/ audio
  • Jefferson Fisher - The Next Conversation/ audio
ivan commented May 4, 2025

With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.

As one of those folks: no, it's pretty bad in that world as well. For menial crap it's a great time saver, but I'd never in a million years do the "vibe coding" thing, especially not with user-facing things, and especially not for tests. I don't mind it as a rubber duck, though.

I think the problem is that there are two groups of users: the technical ones like us, and then the managers and C-levels etc. They see it spit out a hundred lines of code in a second, and as far as they know (and care) it looks good, not realizing that someone now has to spend their time reviewing those 100 lines of code, plus bearing the burden of maintaining those 100 lines into the future. But all they see is a way to get the pesky, expensive devs replaced, or at least a chance to squeeze more out of them. The system is so flashy and impressive-looking, and you can't even blame them for falling for the marketing and hype; after all, that's what all the AIs are being sold as: omnipotent and omniscient worker replacers.

Watching my non-technical CEO "build" things with AI was enlightening. He prompts it for something fairly simple, like a TODO list application. What it spits out works for the most part, but the only real "testing" he does is clicking on things once or twice; then he's done and satisfied, now convinced that AI can solve literally everything you throw at it.

However, if he were testing the solution as a proper dev would, he'd see that the state updates break after a certain number of clicks, that the list glitches out sometimes, that adding things breaks on scroll and overflows the viewport, and so on. These are all real examples from an "app" he made by vibe coding, and after playing around with it myself for all of three minutes I noticed all these issues and more.

https://news.ycombinator.com/item?id=43878850

ivan commented May 4, 2025

For many people ChatGPT is already the smartest relationship they have in their lives; not sure how long we have until it’s the most fulfilling. On the upside, it is plausible that ChatGPT can get to a state where it can act as a good therapist and help the helpless who otherwise would not get help.

I am more regularly finding myself in discussions where the other person believes they’re right because they have ChatGPT in their corner.

I think most smart people overestimate the intelligence of others for a variety of reasons so they overestimate what it would take for a LLM to beat the output of an average person.

https://news.ycombinator.com/item?id=43872426

ivan commented May 5, 2025

Optimizing AI agents requires writing down the valuable institutional knowledge in your company. Big co’s will struggle with this: middle managers worry about becoming replaceable and leaders worry about leaks. If they don’t do it, they’ll be crushed by competitors who do.

https://x.com/harjtaggar/status/1918323067484553429

ivan commented May 5, 2025

Other known limitations:

  • Links that require login or are not publicly accessible cannot be saved
  • Some websites block automated bots; we can't save such webpages yet
  • The maximum size of an entire web page/file is limited to 70 MB
  • Video, audio, and iframes included in a web page cannot be saved

https://help.raindrop.io/permanent-copy

Other bookmarking things: https://news.ycombinator.com/item?id=43857196

ivan commented May 6, 2025

No offense to other posters here, but both the positive and negative opinions on effect expressed here are rather shallow and random. I've used effect-ts since it was just starting to take shape, so I have a lot of experience with it. But I'm not a maintainer or a contributor, just a user, so I don't think I'm blinded by some privileged knowledge or bias.

The best intuition pump for effect is to think of it as what nextjs/nuxt/sveltekit are to their respective frameworks, but applied to typescript itself. Just as those are meta-frameworks in their ecosystems, effect acts as a sort of meta-language for typescript. Calling it a DSL suggests a Domain it is Specific to, of which there is none; it does, however, allow developers to very easily construct their own actual DSLs using it. Saying it mirrors Rust style is also a misnomer. Effect at its core is a direct 1:1 port of ZIO from the Scala landscape. It just so happens that, like many other languages with type classes and unions, it has data types like Option, Either, and the like. Other than that, there is no resemblance to Rust whatsoever, and if you try to push that resemblance toward more advanced concepts like concurrency and batching, it falls apart really quickly. So while these are simplistic and easy-to-chew analogies, they are completely off the mark.

Effect eschews the abstract ideas often touted by functional libraries and focuses on actual usage and real-world scenarios, and it makes a real effort to let you write the best code you understand and want to write; that is to say, there is no dogma or fancy-pants language to writing effect code. A very simple but potent example is the many ways effect lets you construct its case classes (a data container that implements equality by value instead of by reference; very, very useful, and I honestly can't live without it anymore):

  1. Data.case - plain interface based construction
  2. Data.struct - ad-hoc construction
  3. Data.tagged - like .case, but with a predefined _tag member (used for discrimination)
  4. Data.Class - like .case, but for the class syntax
  5. Data.TaggedClass - take a guess
  6. Schema.data - ad-hoc transformation from raw to Case
  7. Schema.Class - all class-based schemas automatically produce cases

Never mind the terminology, which is still vague; I just wanted to show how many ways there are to produce the same result, depending on your own preferences, needs, and code style.
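For concreteness, a minimal sketch of a few of these constructors, based on the effect docs (exact shapes may drift between releases):

    import { Data, Equal } from "effect"

    // Data.struct: ad-hoc construction with equality by value
    const a = Data.struct({ name: "Alice", age: 30 })
    const b = Data.struct({ name: "Alice", age: 30 })
    console.log(Equal.equals(a, b)) // true, even though a and b are different references

    // Data.tagged: like .case, but with a predefined _tag member for discrimination
    interface Person {
      readonly _tag: "Person"
      readonly name: string
    }
    const Person = Data.tagged<Person>("Person")
    console.log(Person({ name: "Bob" })._tag) // "Person"

    // Data.Class: the same idea, for class syntax
    class Point extends Data.Class<{ readonly x: number; readonly y: number }> {}
    console.log(Equal.equals(new Point({ x: 1, y: 2 }), new Point({ x: 1, y: 2 }))) // true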

But it is not a free pickup. Like the aforementioned meta-frameworks, and tools like rxjs, react, or even typescript, this is not something you can just pick up and run with blind. Effect is extremely powerful, but it requires a great initial effort to chew through.

When I say it's extremely powerful, I mean EXTREMELY powerful. Here is a non-exhaustive list of tools I dropped entirely thanks to effect making them redundant, inadequate, or useless. (All are great tools, btw; not dissing them.)

  • lodash/ramda/remeda/fp-ts/similar
  • express/koa/h3/other servers
  • react-query
  • redux/xstate/jotai/zustand/other state management
  • rxjs
  • purify-ts
  • date-fns and similar
  • inversify
  • zod/typebox/yup/joi/and so on
  • all stream related libs
  • axios/got/ky/superagent/similar

This is off the top of my head, and who knows how many one-offs I've rid myself of. There's also a surprisingly vibrant, if small, ecosystem around effect for other, more specialized concerns like db access, or even a full-blown monolithic framework à la nextjs. Everything is powered by effect and completely interoperable with it and with everything else built on it.

Downplaying the error handling is really narrow-minded. Meaningless declarations like "I throw only when things break" are of course utter nonsense. Error handling is not the same as exception handling. All software is allowed to safely error; the question is how you paint these errors, and throwing them as exceptions is only one way. But that's a whole topic, and my comment is long enough.

I've only scratched the tip of this; there's more good, and more bad, to effect, though I think I've said plenty.

TL;DR: Effect fundamentally changed how I write code and I can't recommend it enough, but only for those who are willing and wanting to learn to use it. If you want a free win, effect isn't it. What it is, is a production-ready (I and many others have used it in prod for nearly two years in real companies without issues), very powerful replacement for most of the tools you use, and much, much more than that.

https://old.reddit.com/r/typescript/comments/16w3iwn/opinions_about_effectts_do_you_recommend_using_it/

How does it replace Express? Really curious, because I haven't found this yet.

You can find an example here: http-server; it looks like an Express replacement.

It's better to create an HttpApi though, because you split specification and implementation. The API becomes type-safe, the compiler checks that you don't forget to implement endpoints, and you get an OpenAPI specification for free.
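As a rough illustration, a minimal spec-plus-implementation sketch in the style of the @effect/platform docs (module and method names have shifted between releases, so treat this as an approximation rather than the definitive API):

    import { HttpApi, HttpApiBuilder, HttpApiEndpoint, HttpApiGroup } from "@effect/platform"
    import { Effect, Schema } from "effect"

    // Specification: one group, one GET endpoint returning a string
    const api = HttpApi.make("MyApi").add(
      HttpApiGroup.make("Greetings").add(
        HttpApiEndpoint.get("hello-world")`/`.addSuccess(Schema.String)
      )
    )

    // Implementation: the compiler flags any declared endpoint left unhandled
    const GreetingsLive = HttpApiBuilder.group(api, "Greetings", (handlers) =>
      handlers.handle("hello-world", () => Effect.succeed("Hello, World!"))
    )

The OpenAPI document is then derived from the same api specification.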

https://old.reddit.com/r/typescript/comments/16w3iwn/opinions_about_effectts_do_you_recommend_using_it/

ivan commented May 6, 2025

The lime-colored Green Treefrog is common in the southeastern states. The male calls and the female comes to him. BUT sometimes a silent interloper ("satellite" male) steals the mating as in this video.

https://www.youtube.com/watch?v=OYm2Mm6z850

ivan commented May 6, 2025

they really know when they're playing a game they're going to win and they don't go outside of that game

https://www.youtube.com/watch?v=ii8tDwkizJQ&t=2m

ivan commented May 6, 2025

Don’t just read it; fight it! Ask your own questions, look for your own examples, discover your own proofs. Is the hypothesis necessary? Is the converse true? What happens in the classical special case? What about the degenerate cases? Where does the proof use the hypothesis?

Paul Halmos

quoted in Euclid’s “Elements” Redux

ivan commented May 6, 2025

A major mistake I made in my undergrad is that I focused way too much on the mathematical lens of computing - computability, decidability, asymptotic complexity, etc. - and too little on the physical lens - energy/heat of state change, data locality, parallelism, computer architecture. The former is interesting; the latter bestows power.

https://x.com/karpathy/status/1919647115099451892

ivan commented May 7, 2025

I have not had the opportunity to own any other optical or ECG sensors, so I can't compare the usability. I can say this thing worked immediately out of the box and has been rock solid since I've owned it. I don't feel the need to even consider something else.

Having the Apple Watch, I've discovered that while the heart rate monitor works fine during a run, the watch must be on very tight and there is some serious lag in the readings, making running to heart rate or doing intervals to heart rate very dodgy. These are the reasons I bought this monitor. It has exceeded my expectations. The heart rate is constantly updating, and it never has me feeling like I'm waiting on it.

Here are some Apple Watch/Coros Monitor specifics you may want to know about:

  • You do not need to download the Coros app in order to use this with your Apple Watch. Thank you Coros for not making that necessary. It may give you stats on battery level or whatnot, but I just toss it on the charger once a week and I'm good. No need for the app.

  • It truly is "wear it to turn it on, take it off to turn it off" in the most nearly flawless way. There is a sort of "soft latch" it has on your watch's internal heart rate monitor, so there is a blackout period where manually reading your heart rate on your watch will not work. It's not a long time, but it is noticeable.

  • One thing I don't care for is that the monitor comes on when charging. I don't see why this is necessary. The impact that has is that while it's charging, it is connected to your watch if it's in range, and you will not be able to manually read your heart rate.

  • However, the good news is that it appears that the watch continues to do the random readings with your watch's internal heart rate monitor regardless of whether the Coros monitor is charging and hijacking your watch, if you are wearing the monitor, or if you've just taken the monitor off and are in the "blackout" period where your watch is waiting to release that connection. I believe the watch somehow separates the source of the readings based on workout data or health tracking. It looks like the automatic health tracking does not utilize the external monitor at all.

It took me a while to figure out what was going on, and I hope this is helpful to Apple Watch users considering this monitor. It doesn't appear to be well documented; I'd imagine that's because the watch is not their product.

https://www.amazon.com/gp/customer-reviews/R1JJ1L1O91GW61/ref=cm_cr_arp_d_rvw_ttl?ie=UTF8&ASIN=B0CH8LJL3Y
via https://www.amazon.com/product-reviews/B0CH8LJL3Y/ref=cm_cr_dp_d_show_all_btm?ie=UTF8&reviewerType=all_reviews

ivan commented May 8, 2025

I’m Leslie Lamport from Microsoft Research. I’ve been a researcher most of my adult life; that means I’ve primarily been a writer and also a performer. I’ve written papers and I’ve performed talks at conferences and other venues. I’ve been successful because I’m a pretty good writer.

Good writing will be crucial to your success. The most obvious reason is because people will judge you by your writing, not just by reports or papers that you write, but also by your emails and texts. What does it tell you about a person, if he sends you email with lots of errors, and with sentences that make no sense?

Learning to write well takes practice. You have to think before you write, and then you have to read what you wrote and think about it. And you have to keep rewriting, re-reading and thinking, until it’s as good as you can make it, even when writing an email or a text.

A less obvious reason to improve your writing, is to improve your thinking. You should think before you write. You should think before you do anything, because it will help you understand what you’re doing, which will help you to do it better. And as someone said, “Writing is nature’s way of showing you how fuzzy your thinking is.” If you think you understand something, and don’t write down your ideas, you only think you’re thinking. To think clearly, you need to be able to write down your ideas clearly, which requires being able to write well.

Learning to write well will improve your thinking. And learning to think better, will improve your writing. It’s a virtuous cycle. You have to write better to think better to write better. And you should start that cycle now, by trying to write better.

https://mentors.fm/2019/08/13/think-and-write-with-leslie-lamport/
https://www.youtube.com/watch?v=RnY5iJea5ww

ivan commented May 8, 2025

To understand the unknown type, it helps to think about any in terms of assignability. The power and danger of any come from two properties:

  • All types are assignable to the any type.
  • The any type is assignable to all other types. (With the exception of never.)

If we “think of types as sets of values”, the first property means that any is a supertype of all other types, while the second means that it is a subtype. This is strange! It means that any doesn’t fit into the type system, since a set can’t simultaneously be both a subset and a superset of all other sets. This is the source of any’s power but also the reason it’s problematic. Since the type checker is set based, the use of any effectively disables it.

The unknown type is an alternative to any that does fit into the type system. It has the first property (any type is assignable to unknown) but not the second (unknown is only assignable to unknown and, of course, any). It’s known as a “top” type since it’s at the top of the type hierarchy. The never type is the opposite: it has the second property (can be assigned to any other type) but not the first (no other type can be assigned to never). It’s known as a “bottom” type.
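To make the two assignability properties concrete, a small snippet one can paste into a TypeScript file (the declared values are hypothetical; the commented-out lines are the ones tsc rejects):

    declare const anyValue: any
    declare const unknownValue: unknown
    declare const neverValue: never

    // any has both properties, so checking is effectively disabled
    const a1: any = "hello"        // everything is assignable to any
    const s1: string = anyValue    // any is assignable to everything

    // unknown has only the first property
    const u1: unknown = "hello"    // everything is assignable to unknown
    // const s2: string = unknownValue  // error: 'unknown' is not assignable to 'string'

    // never has only the second property
    const s3: string = neverValue  // never is assignable to everything
    // const n1: never = "hello"   // error: 'string' is not assignable to 'never'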

Effective TypeScript

ivan commented May 8, 2025

I have to admit that I have a suspicion that people who can’t express themselves clearly in writing also aren’t going to be very good at structuring their ideas in code. To me, these abilities are closely correlated.

https://bsky.app/profile/janstette.bsky.social/post/3lon6tix6a22q

ivan commented May 8, 2025

I came to realize something which I should have realized earlier: what I realized is that what we make stands testament to who we are.

https://www.youtube.com/watch?v=wLb9g_8r-mE Jony Ive

ivan commented May 9, 2025

Experimental spending on even probabilistic improvements builds up so much value over time.

https://x.com/RomeoStevens76/status/1920937194614571475

ivan commented May 9, 2025

Importing types without type keyword

Due to the nature of type stripping, the type keyword is necessary to correctly strip type imports. Without the type keyword, Node.js will treat the import as a value import, which will result in a runtime error. The tsconfig option verbatimModuleSyntax can be used to match this behavior.
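A minimal illustration (the ./user.ts module and User type are made up):

    // Without `type`, type stripping keeps this as a runtime value import;
    // since User exists only as a type, Node fails to resolve it at runtime:
    // import { User } from "./user.ts"

    // With `type`, the entire import is erased along with the other types:
    import type { User } from "./user.ts"

    const u: User = { name: "Ada" }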

https://github.com/nodejs/node/blob/daced4ab98be82953ef2fa73e0f81e2b1967be8b/doc/api/typescript.md

ivan commented May 10, 2025

a simple truth: a lot of current RL research is to translate fuzzy, subjective real-world tasks into objective and unhackable rewards that you can reliably optimize during training

https://x.com/karinanguyen_/status/1921348292694167672

ivan commented May 11, 2025

Mentor (from Greek Μέντωρ) is cognate to Sanskrit mantṛ, a wise & trusted counselor or teacher.

Similar to iatrogenic harm through medical error & negligence, there is pedagogic & mystagogic harm through failing to teach the primacy & the art of asking questions.

https://x.com/hokaisobol/status/1293919065313026048

ivan commented May 13, 2025

Per our terms, we reserve the right to expire unused credits after one year of purchase.

https://openrouter.ai/docs/faq#do-credits-expire

(OpenAI does the same with credits you buy there.)

ivan commented May 13, 2025

After some research online (another reddit thread of antkeepers) I found: "Due to how surface tension works at their scale, they can get sucked into a drop of water and drown inside it unable to escape. When they find an open puddle of liquid they will cover it with sand or trash or whatever to reduce the danger."

https://old.reddit.com/r/Damnthatsinteresting/comments/1kl9ki0/i_tried_to_make_a_time_lapse_of_ants_eating_this/

ivan commented May 13, 2025

My ability now isn't due to any intrinsic talent or grand insight but instead hundreds of tiny changes in habits, processes and values. When I compare these changes to online writing and discussions about how to get better at programming, I see very little overlap.

https://www.scattered-thoughts.net/writing/reflections-on-a-decade-of-coding

This is what I think about whenever I see a blog post with a pithy wisdom drawn from a single experience in a single domain. 'Programming' covers an enormous range of activities with different problem domains, team sizes, management structures, project lifespans, deployment sizes, deployment frequencies, hardware, performance requirements, consequences for failure etc. We should expect it to be incredibly rare that any given practice is appropriate across all those contexts, let alone that we could discover general best practices from the outcome of a few projects.

[...]

Programming practices are mostly tacit knowledge. Tacit knowledge isn't easy to share. An expert will relate some simple-sounding rule of thumb, but then grilling them on specific cases will quickly uncover a huge collection of exceptions and caveats that vary depending on the specific details of the situation. These are generated from many many past experiences and don't generalize well outside of the context of that body of experience.

Trying to apply the rule of thumb without knowing all those details tends to result in failure. Phrases like "don't repeat yourself", "you aren't going to need it", "separation of concerns", "test-driven development" etc were originally produced from some body of valid experience, but then wildly over-generalized and over-applied without any of the original nuance.

The way to convey tacit knowledge, if at all, is via the body of experiences that generated the rule. For this reason I find much more value in specific experience reports or in watching people actually working, as opposed to writing about general principles.

[...]

Confusing means and ends

The goal is always to write a program that solves the problem at hand and that can be maintained over its useful lifetime.

Advice like "write short functions" is a technique that may help achieve that goal in many situations, but it's not a goal in itself. And yet some characteristic of human thinking makes it very easy for these sorts of pseudo-goals to take over from the actual goals. So you may hear people saying that some piece of software is bad because it has very long functions, even if the evidence suggests that it also happens to be easy to maintain.

(Applicable outside programming—e.g. "you should be nice".)

No gradient

Technology companies often make incredible profits by using simple technology to solve a business problem. This means that programmers at those companies can make a lot of bad decisions and adopt poor practices and still be successful. If money falls out of the sky whatever you do, there isn't much of a gradient to help discover better practices.

For example slack is an incredibly successful product. But it seems like every week I encounter a new bug that makes it completely unusable for me, from taking seconds per character when typing to being completely unable to render messages. (Discord on the other hand has always been reliable and snappy despite, judging by my highly scientific googling, having 1/3rd as many employees. So it's not like chat apps are just intrinsically hard.) And yet slack's technical advice is popular and if I ran across it without having experienced the results myself it would probably seem compelling.

https://www.scattered-thoughts.net/writing/on-bad-advice/

Finding the idea that actually works amidst the sea of very similar ideas that don't work requires staying curious long enough to encounter the fine-grained detail of reality and humble enough to recognize and learn from each failure.

[...]

The first language I learned was haskell and for several years I was devoted to proclaiming its innate superiority. Later on I wrote real production code in ocaml, erlang, clojure, julia and rust. I don't believe any of this improved my programming ability.

Despite spending many years writing haskell, when I write code today I don't use the ideas that are idiomatic in haskell. I write very imperative code, I use lots of mutable state, I avoid advanced type system features. These days I even try to avoid callbacks and recursion where possible (the latter after a nasty crash at materialize). If there was an alternate universe where I had only ever learned c and javascript and had never heard of any more exotic languages, I probably still would have converged to the same style.

That's not to say that languages don't matter. Languages are tools and tools can be better or worse, and there has certainly been substantial progress in language design over the history of computing. But I didn't find that any of the languages I learned had a special juice that rubbed off on my brain and made me smarter.

If anything, my progress was often hampered by the lack of libraries, unreliable tools and not spending enough time in any one ecosystem to develop real fluency. These got in the way of working on hard problems, and working on hard problems was the main thing that actually led to improvement.

By way of counter-example, check out this ICFP contest retrospective. Nikita is using clojure, a pretty niche language, but has built up incredible fluency with both the language and the ecosystem so that he can quickly throw out web scrapers and gui editors. Whereas I wouldn't be able to quickly solve those problems in any language after flitting around from ecosystem to ecosystem for 12 years.

[...]

For all of the above, the real kicker is the opportunity cost. The years that I spent messing around with haskell were not nearly as valuable to me as the week I spent learning to use rr. Seeking out jobs where I could write erlang meant not seeking out jobs where I could learn how cpus work or how to manage a long-lived database. I don't write erlang any more, but I still use cpus sometimes.

Life is short and you don't get to learn more than a tiny fraction of the knowledge and skills available, so if you want to make really cool stuff then you need to spend most of your time on the highest-leverage options and spend only a little time on the lottery tickets.

I expect people to object that you never know what will turn out to be useful. But you can make smart bets.

[...]

But a decade of mistakes later I find that I arrived at more or less the point that I could have started at if I was willing to believe that the accumulated wisdom of tens of thousands of programmers over half a century was worth paying attention to.

And the older I get, the more I notice that the people who actually make progress are the ones who are keenly aware of the bounds of their own knowledge, are intensely curious about the gaps and are willing to learn from others and from the past. One exemplar of this is Julia Evans, whose blog archives are a clear demonstration of how curiosity and lack of ego is a fast path to expertise.

https://www.scattered-thoughts.net/writing/things-unlearned/

Another huge time-suck is online entertainment, especially when it masquerades as work-related or educational. I was spending easily two hours a day on sites like hacker news and twitter. This is apparently below average.

The opportunity cost is huge - 2 hours per work day is 500 hours per year. Maybe I learned something from that aimless browsing, but with the same time I could have read 250 papers and watched 125 movies - better value for both education and entertainment!

This seems almost intrinsic - learning requires effort but anything that requires effort doesn't spread fast online. So fast online media optimizes for content that makes you feel like you're learning something but that doesn't actually require any effort.

Not to mention that the culture in any massive unpoliced community always seems to devolve towards the worst of its members rather than the average. The majority of comments I see are cynical, mean, thoughtless, and usually wrong to boot. Thoughtful comments take much longer to write so they get drowned out.

There's an idea that you internalize the voice of anyone that you spend a lot of time with. I don't want the voice of hacker news sitting on my shoulder telling me that they haven't actually looked at what I'm doing but they're pretty sure that I'm doing it wrong and that they could do it in a weekend.

I think it makes sense to treat much of the internet as fundamentally adversarial, exploiting unpatched bugs in the human mind. Don't get got.

especially:

learning requires effort but anything that requires effort doesn't spread fast online

[...]

I think about work practices in terms of positive and negative reinforcement. If my time working is regularly rewarding then I'm training myself to want to work hard. If it's mostly frustrating and unpleasant then I'm training myself to not want to work.

To get lots of positive reinforcement I try to break tasks down into small chunks, each of which does something that works. I also try to order tasks to get some kind of reward as soon as possible.

It's also nice to have some kind of scoreboard. Ticking tasks off a list or seeing passing tests or performance numbers go up over time helps make progress tangible. It also provides the sense that progress is a ratchet, rather than being one step forward and two steps backwards.

When I finish a task I also try to spend some time enjoying the results before moving on to a different task.

Example: When adding json support to materialize I started by adding end-to-end support for json literals, then casts, then scalar functions, then set-valued functions etc. I could have instead grouped it like this: add backend support, then planner, then type inference, then syntax. But if I did things in that order I wouldn't get to try things out in the repl or see the number of failing tests go down on our CI graph until I finished the entire task. After several weeks of work, I took a few hours to play around making silly json demos and just enjoying the fact that it worked now.

[...]

I decide what to work on each day based on how I'm feeling. Some days I feel ready to tackle really hard problems. Other days I'm totally scatterbrained and if I try anything hard I'll just mess it up and get discouraged. For those days I keep a list of easy bug fixes, maintenance, documentation, tests, tools to try out etc. These also work well as warmup tasks - sometimes after a couple of easy wins I'll feel more excited to tackle something hard.

In the past I've occasionally had hard problems that had some time pressure, real or imagined, and I've tried to push through. "Just one last push and it'll be finished and I'll take some time off." That one last push always ends up being some kind of Zeno's paradox situation where my capacity to make good decisions erodes at the same rate as the amount of remaining work. Whenever I've thought that I could temporarily ignore basic sanity maintenance to get something finished it's always been a total disaster.

[...]

I used to often get derailed by distractions. I think the process looks like this:

  • I'm working
  • Something feels effortful or makes me feel anxious or doubtful
  • I switch to my email or whatever
  • The anxiety is gone and I get some entertainment - double reinforcement!

[...]

There is always more work that could be done than I could possibly do in a lifetime. So I find it really important not to think of work as a todo list that I'm trying to get through - that's a crushing burden. Instead I just focus on the idea that my time is finite and that I want to spend it well each day.

https://www.scattered-thoughts.net/writing/emotional-management/

ivan commented May 13, 2025

Making good decisions tends to depend on the fine details of the situation.

This means that the goal needs to contain as much of this detail as possible. Things like:

  • the exact scope of the problem being solved
  • who will be using the code
  • where it will run, and on what kind of hardware
  • who will be maintaining/supporting the code and for how long
  • constraints on correct output
  • consequences of bugs
  • amount, distribution, rate of change of input data
  • requirements on throughput, latency, memory usage, storage, power usage

[...]

One way to break up complex goals is to work on vertical slices of the stack - going feature-by-feature instead of layer-by-layer.

https://www.scattered-thoughts.net/writing/setting-goals/

ivan commented May 13, 2025

Any process that you can make automatic, any decision or context switch that you can avoid, frees up mental resources that can be redeployed elsewhere. So even if the complex high-level work seems like the most crucial, you can still make gains by speeding up the low-level mechanical stuff.

[...]

For me the strongest argument is that being faster is more fun.

I like being able to make more things. I like being able to take on more ambitious projects. I like being good at what I do, and I like trying to get better.

https://www.scattered-thoughts.net/writing/speed-matters/

Care

The main thing that helped is actually wanting to be faster.

Early on I definitely cared more about writing 'elegant' code or using fashionable tools than I did about actually solving problems. Maybe not as an explicit belief, but those priorities were clear from my actions.

I probably also wasn't aware how much faster it was possible to be. I spent my early career working with people who were as slow and inexperienced as I was.

Over time I started to notice that some people are producing projects that are far beyond what I could do in a single lifetime. I wanted to figure out how to do that, which meant giving up my existing beliefs and trying to discover what actually works.

[...]

Now when I finish a chunk of work I look back and ask why it took me as long as it did and whether it could have been faster. This process is usually uncomfortable and I often manage to avoid thinking about the things I'm doing wrong so that I can stay in my comfort zone.

[...]

The most important class of decisions is 'what should I do next?'. There are always far more options than time. Having explicit goals makes it easier to prioritize the list.

For tools that I use myself, I prioritize by time saved or quality improved. For commercial projects, priorities come from customers. For research projects, the priority is whatever will give the most information about the research question/hypothesis.

It's often possible to raise quality on one axis by lowering it on another, like improving throughput by consuming more memory or vice versa. If I don't know what the requirements on throughput and memory are then there is no way to decide which tradeoff to make.

A more subtle decision is how long to work on something. There are usually declining returns on time invested so it might be more valuable to half-ass three tasks than to do one task perfectly.

[...]

Focus

I work in blocks of 2-3 hours during which I don't do anything else - no email, slack, twitter, hacker news, chatting to my neighbour etc.

  • 'Multi-tasking' is just rapid context switching. I would often reply to an email while something is compiling and then when I came back I forgot what I was compiling and why. This wastes time and causes mistakes.
  • Previous tasks continue to consume attention even after switching. This is especially true for anything that causes strong emotions. I find it hard to concentrate if I'm opening slack every 15 minutes and every time seeing that thread where someone is arguing with me and they're totally wrong and how can they even believe what they're saying and what was I doing again?
  • Exposing myself to addictive interactions trained me to self-interrupt - whenever I encountered a difficult decision or a tricky bug I would find myself switching to something easier and more immediately rewarding. Making progress on hard problems is only possible if I don't allow those habits to be reinforced.
  • Expecting outside interruptions (eg notifications, distracting background conversation) makes it harder for me to begin concentrating, since on some level I don't expect it to be worthwhile if I'm going to be distracted at random anyway.

I take small breaks within those work blocks but don't do anything that might take over my focus. So I'll walk around, stretch, make tea etc but not look at my phone or check my email.

Music definitely hurts my concentration, but it also improves my mood so I'll sometimes play a single album if I'm having trouble getting started in the morning. Usually by the time the album finishes I'm deep enough that I don't notice it's gone.

[...]

These changes may sound trivial but I can't overemphasize how much difference they made when applied consistently. Attention and short-term memory are the bottleneck that everything else has to flow through but they are incredibly fragile and, increasingly, exposed to adversarial input.

[...]

I do this kind of batching all over the place. Pretty much any complex task involves switching between multiple subtasks. Whenever I can figure out how to rearrange them to reduce the amount of switching I tend to find that it saves time and reduces mistakes.

[...]

Example: The final straw that made me stop using emacs was running into multiple bugs that each caused sporadic multi-second pauses during which my keystrokes would go to the wrong window or get queued up on a non-responsive window. Each time it would leave me confused as to what state things were now in and then after I fixed it I would have to take more time to remember what I was doing.

https://www.scattered-thoughts.net/writing/moving-faster/

ivan commented May 13, 2025

The foremost idea I keep in mind is that the goal of writing a program is to turn inputs into outputs, while making good use of limited human and machine resources.

[...]

I try to make the state tree-shaped, as much as possible.

As soon as you allow pointers to point between different components, it becomes harder to:

  • reason about what state a function depends on or can change, because it can always find a pointer to the rest of the jungle
  • test or reuse components, because they require the destinations of all those pointers to exist, so you have to generate a fake version of the entire state just to run one component
  • print, inspect or copy the state, because there might be cycles in the pointer graph

https://www.scattered-thoughts.net/writing/coding/

ivan commented May 13, 2025

Test Features, Not Code

The over-simplified binary search example can be stretched further. What if you replace the sorted array with a hash map inside your application? Or what if the calling code no longer needs to search at all, and wants to process all of the elements instead?

Good code is easy to delete. Tests represent an investment into existing code, and make it costlier to delete (or change).

The solution is to write tests for features in such a way that they are independent of the code. I like to use the neural network test for this:

Neural Network Test

Can you re-use the test suite if your entire software is replaced with an opaque neural network?

To give a real-life example this time, suppose that you are writing the part of a code-completion engine which sorts potential completions according to relevance (something I should probably be doing right now, instead of writing this article :-) ).

Internally, you have a bunch of functions that compute relevance facts, like:

  • Is there a direct type match (.foo has the desired type)?
  • Is there an indirect type match (.foo.bar has the right type)?
  • How frequently is this completion used in the current module?

Then, there’s the final ranking function that takes these facts and comes up with an overall rank.

The classical unit-test approach here would be to write a bunch of isolated tests for each of the relevance functions, and a separate bunch of tests which feeds the ranking function a list of relevance facts and checks the final score.

This approach obviously fails the neural network test.

An alternative approach is to write a test to check that at a given position a specific ordered list of entries is returned. That suite could work as a cross-validation for an ML-based implementation.

In practice, it’s unlikely (but not impossible) that we use actual ML here. But it’s highly probable that the naive independent-weights model isn’t the end of the story. At some point there will be special cases which would necessitate a change of the interface.

Key point: duh, test features, not code! Test at the boundaries.

If you build a library, the boundary is the public API. If you are building an application, you are not building the library. The boundary is what a human in front of a display sees.

Note that this advice goes directly against one common understanding of unit-testing. I am fairly confident that it results in better software over the long run.
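By way of illustration, here is what a boundary-level test for the completion-ranking example might look like (completionsAt and its signature are hypothetical, not the article's actual code):

    import { strict as assert } from "node:assert"
    import { test } from "node:test"
    import { completionsAt } from "./engine" // hypothetical public entry point

    // The test sees only what a user sees: an ordered list of completions at a
    // cursor position. Replace the ranker with a neural network and it still runs.
    test("direct type matches rank above frequency matches", () => {
      const source = `
        const user = { name: "x", id: 1 };
        const s: string = user./*cursor*/
      `
      const labels = completionsAt(source, "/*cursor*/").map((c) => c.label)
      assert.deepEqual(labels, ["name", "id"])
    })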

https://matklad.github.io/2021/05/31/how-to-test.html

ivan commented May 14, 2025

circulated meeting notes to a client after a call and she wrote back like "Wow these notes are really good! Which AI are you using?? We're trying to decide which one to use in our office!" its called paying attention lady. its called brainpower.

https://x.com/brianonhere/status/1921960543398625765

ivan commented May 14, 2025

tfw when you get below-average outcomes in an area of your life you‘ve been devoting next to no time and attention to

https://x.com/bschne/status/1921948621106294989

ivan commented May 14, 2025

I wonder if “gifted kid burnout” is just: You learn to navigate by the reward signal of achievement (easy, ppl tell you how to get it) so you never develop the more subtle ability to navigate by what interests you. Then when achievement gets sparse, you feel aimless & unmotivated

Even more nefarious is that schools actively cull the ability to navigate by interest. I think most learning gripes are either from being dragged too deep through something, or being cut short.

https://x.com/TheOisinMoran/status/1922653407665430897

ivan commented May 14, 2025

There is a lot of pooh-poohing in this thread about the idea of philosophy and economics even being compatible. I think you all are missing the point. There are really basic questions in economics that aren't clear and haven't been answered.

I'll give you a few examples: What is money? What is worth? Should we think of the wealth of any economy based on GDP or wages?

Economics, just like every single discipline in the social sciences, is working on answering basic questions. For fuck's sake even maths and physics are trying to do the same.

I'll give a couple examples:

What's a number? That question alone can get professors of maths running in circles.

What's an atom? These questions can keep running on and on, but I think a lot of people are missing the point of asking these sorts of questions (and the use of philosophy in general).

When you ask really basic questions you have to clarify yourself in your answers and the concept you are discussing becomes a bit more clear. When it becomes a bit clearer you can use it. It just requires a bit more thinking.

Reddit is filled with engineers, computer programmers, and a ton of folk who have relied on basic Aristotelian, Fregean, and a whole host of other folks who developed the laws of logic that you all use today.

I'm frankly baffled that there are so many of you who just shit on the idea of philosophy and thinking about things abstractly as a waste of time. It is literally one of the most important things any human being can do.

Re-conceptualizing ideas, questioning assumptions, or coming up with a new idea that no one else has thought up changes lives.

https://old.reddit.com/r/philosophy/comments/4q02u3/philosophy_of_economics_hanshermann_hoppe/

ivan commented May 14, 2025

If you or anyone wants to hear his reasoning on this check out Praxeology: The Austrian Method (by Hans-Hermann Hoppe) - Introduction to Austrian Economics, 6of11. He goes through the epistemology of rationalism and logical positivism. He does not deny that empirical questions have to be answered empirically. He is arguing that certain statements can be 'a priori' true, that say something about the nature of reality but which are not hypothetical. This is called 'synthetic a priori' knowledge, in Kantian terminology.

He is the best one to lay down the dualist approach to the rationalist-logical positivist dispute. His main point in refuting the pure logical positivist position is that they themselves have to assume a priori knowledge. If one takes the logical positivist position and applies it to itself, one can clearly see this: if you say there are only empirical statements and analytical statements, then you have to ask yourself what sort of statement that statement itself is. Is it an empirical or an analytical statement?

I really recommend the link above for anyone interested. It does not really explain praxeology, but rather rationalism and logical positivism as well as the epistemological foundation of praxeology.

https://old.reddit.com/r/philosophy/comments/4q02u3/philosophy_of_economics_hanshermann_hoppe/
