@alexandremcosta
Created February 28, 2026 20:01
What a trumpet taught me about code

My best friend is a musician; I’m a coder. Professionally, and also as our favorite hobbies. He uses instruments to create songs; I use languages to create apps. We often draw parallels between the two worlds, but yesterday’s conversation went somewhere new.

He told me he bought a trumpet. No previous experience, never touched one before. I asked if he was going to take classes or figure it out himself. He laughed… already figuring it out himself, obviously. So he walked me through it.

First he figured out the mechanics: how much the sound changes when he presses the first valve, the second, the third. He knows a little trombone, so he could pick up the different mouth positions, each of which produces a different note in a different range.

Then came the first AHA-moment.

Pressing the trumpet valves works like sliding the trombone or moving between guitar frets: you go higher or lower in pitch. While changing mouth position switches notes the way changing guitar strings does. Two different instruments he already knew were lending him a map to navigate a third one he’d never touched.

Now he can play any note. In theory. There’s a problem: for each sound, he still has to calculate the finger and mouth positions, which takes a few seconds. The music isn’t flowing, it’s stuttering through arithmetic.

The next step arises naturally: practice scales until any sound comes out automatically, without calculation. He did this on the guitar, so of course that process will help here. He also needs to learn how to sustain a note over time, something the guitar never taught him (it doesn’t even make sense on a guitar). But the trombone saved him there, because sustaining notes was exactly what he’d been working on.

I was nodding along, and about halfway through his explanation I realized: this was EXACTLY how I was learning Swift two weeks ago.

First I figured out the mechanics. How Swift handles types, optionals, structs vs classes. I already know Elixir’s pattern matching, so when I saw Swift’s switch with associated values, I recognized the shape immediately — different syntax, similar idea. Rust’s Result type had already taught me that making errors explicit in the type system is powerful, so Swift’s Optional felt like a dialect I’d half-learned before.
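I don’t have those early Swift snippets anymore, but the mapping can be sketched in Rust, the other language mentioned above (all names here are illustrative, not from any real codebase): an enum with associated values destructured by `match`, which is the shape Swift’s `switch` shares with Elixir’s pattern matching, and a `Result` that makes the error part of the function’s signature.

```rust
// Illustrative sketch: pattern matching over associated values,
// plus errors made explicit in the type system.

enum Shape {
    Circle { radius: f64 },
    Rectangle { width: f64, height: f64 },
}

fn area(shape: &Shape) -> f64 {
    // `match` binds the associated values of each variant,
    // much like Swift's `switch` over an enum with associated values.
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rectangle { width, height } => width * height,
    }
}

fn parse_radius(input: &str) -> Result<f64, String> {
    // The possible failure is visible in the return type:
    // callers must handle it, the compiler won't let them forget.
    input
        .trim()
        .parse::<f64>()
        .map_err(|e| format!("not a number: {e}"))
}

fn main() {
    let c = Shape::Circle { radius: 1.0 };
    println!("area = {:.4}", area(&c));

    match parse_radius("2.5") {
        Ok(r) => println!("radius = {r}"),
        Err(msg) => println!("error: {msg}"),
    }
}
```

Swift’s `Optional` is the same move with the error channel collapsed to “absent”: once one language has taught you that failure belongs in the type, every other language’s version of it reads as dialect.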

Then I practiced until the syntax stopped requiring thought. Wrote small programs, rewrote things I’d built in other languages, let my fingers learn the new grammar while my brain focused on what I already understood: data flow, state management, how to structure the core and the shell.

My friend and I were doing the same thing with completely different tools. And we’d both done it enough times to have a system. Learning your fifth programming language is easier than learning the first, for the exact same reasons learning your fifth instrument is. You already know what you need: figure out the mechanics, map them to what you know, practice until they feel automatic, build something real. The path materializes in front of you. It just requires time and motivation, and every time it requires less of both.

But here’s the part that surprised us both, the thing that got us really excited.

It’s not just that each new tool is easier to learn. Each new tool makes you better at the ones you already knew.

When my friend learned piano after guitar, he came back to the guitar understanding harmony differently. Piano lays out every note visually in a line — you can see chord structures, inversions, voice leading in a way the guitar fretboard hides. He returned to guitar and started finding voicings he never would have reached before, because now he understood why certain shapes worked, not just that they worked.

I had the same experience going from Elixir to Rust. Rust’s obsession with type safety and compile-time guarantees rewired how I thought about contracts between parts of a system. When I came back to Elixir, I started writing more type specs, using Dialyzer more seriously, and paying far more attention to what could be checked before the code ever ran. Rust didn’t just teach me Rust — it made me a better Elixir developer.

I like to call this the lattice effect. Charlie Munger talks about building a “latticework of mental models” — the idea that you improve understanding by connecting ideas from multiple fields so that they reinforce each other. The same thing happens here, but with tools instead of theories.

When you only know one tool, your growth is linear — you get better at that one thing, with diminishing returns. But when you learn a second tool, it connects back to the first. A third connects to both. By the fourth and fifth, every new thing you learn sends echoes through the entire network.

It’s not addition. It’s multiplication.

Guitar taught my friend rhythm and the geometry of notes. Piano taught him harmony and the arithmetic of notes. When he picked up bass, he already understood the fretboard AND the harmonic role of bass lines AND how to think about rhythm as a foundation. Two instruments’ worth of knowledge converging on one new one.

And it goes beyond just learning faster. You start developing what musicians call a voice — a recognizable style that transcends any single instrument. Herbie Hancock sounds like Herbie Hancock whether he’s on acoustic piano, Rhodes, or synthesizer. His harmonic sensibility, his rhythmic feel, they carry across.

Developers have this too, even if we don’t name it. Some developers reach for composition over inheritance in every language. Some always think in data pipelines. Some obsess over clear error boundaries. Read enough of someone’s code and you start recognizing their fingerprint, their voice.

Your voice is what remains when you strip away the tool. It’s the accumulated wisdom of having seen the same problems from multiple angles.

There’s a natural objection here: doesn’t spreading yourself across many tools make you shallow? The opposite, actually. The polyglot doesn’t just collect tools; they start seeing through the tools to the concepts underneath. Loops, recursion, map: three expressions of the same idea, doing something to each element. Once you see the concept, the syntax becomes trivial. You stop learning languages and start learning programming.
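The “three expressions of the same idea” can be made concrete. Here is one transformation (doubling each element) written three ways in Rust; the function names are made up for the example.

```rust
// One idea, three syntaxes: do something to each element.

// 1. A loop: walk the slice and push into an accumulator.
fn double_loop(xs: &[i32]) -> Vec<i32> {
    let mut out = Vec::with_capacity(xs.len());
    for x in xs {
        out.push(x * 2);
    }
    out
}

// 2. Recursion: handle the empty case, then head + recurse on the tail.
fn double_rec(xs: &[i32]) -> Vec<i32> {
    match xs {
        [] => vec![],
        [head, tail @ ..] => {
            let mut out = vec![head * 2];
            out.extend(double_rec(tail));
            out
        }
    }
}

// 3. map: name the transformation, let the iterator apply it.
fn double_map(xs: &[i32]) -> Vec<i32> {
    xs.iter().map(|x| x * 2).collect()
}

fn main() {
    let xs = [1, 2, 3];
    assert_eq!(double_loop(&xs), vec![2, 4, 6]);
    assert_eq!(double_rec(&xs), vec![2, 4, 6]);
    assert_eq!(double_map(&xs), vec![2, 4, 6]);
    println!("all three agree");
}
```

Squint past the syntax and all three are the same sentence. That’s the moment the new language stops being new.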

Musicians hit this same inflection point. At some point you stop learning songs and start learning music. A II-V-I progression is the same three chords whether you’re playing blues on guitar or rock on piano. Rhythm feels similar in your body whether you’re drumming or strumming. The instrument becomes a window into something bigger.

This doesn’t mean the tools don’t matter. You can’t learn music theory in the abstract, you have to feel it through an instrument, hear it resonate, build the muscle memory. You can’t learn code without implementing, watching it fail, debugging at 2am. The tools are how you access the knowledge. But they are not the knowledge itself.

Of course, the parallel between code and music isn’t perfect, and where it breaks is revealing.

Code is persistent: it sits there as text, executes identically every time, can be rolled back, diffed, and reviewed. Music is ephemeral: it exists in the moment you play it, and every performance is unique, even of the same piece, a bit like data that varies between runs of the same program, though the analogy is loose. A musician practices to make the ephemeral reproducible. A developer writes tests to make the persistent reliable. Different directions, same underlying discipline: making your craft trustworthy when it matters.

And there’s a deeper break, one that I think explains something about our current moment with AI.

When you listen to a song, you’re searching for a feeling. But it’s worth being more precise about what that feeling is — because it’s not always about the artist. Sometimes it’s about you. A song you heard on a road trip, playing in the background when something important happened, that becomes inseparable from the memory. Years later, hearing it again doesn’t reconnect you to the musician — it reconnects you to a version of yourself, to a moment you lived. The artist becomes secondary. The music is a vessel, but what it carries isn’t necessarily their emotion. It might be yours.

That’s one reason the AI music conversation is more complicated than it first appears. Not all human-made music moves people — plenty of it is technically correct and emotionally empty. And some AI-generated music is hard to distinguish from the real thing, at least on first listen. The line isn’t as clean as “human = soul, AI = hollow.” What seems to matter is whether the music can attach itself to something real in the listener’s life, whether it becomes a container for experience. AI might be able to produce that vessel. Whether it can ever feel like someone handed it to you — that’s a harder question.

Code is different. When the user opens an app, they don’t care about the author’s emotional journey. You don’t search for a sentiment or a relationship with whoever wrote it. You care about your own experience: does this solve my problem? Is it fast? Is it reliable? The craft matters, but the feeling behind it doesn’t reach the user the same way. Nobody calls a REST API and thinks “I can feel what the developer was going through” (or at least not as often as with music :)).

That’s why AI-generated code feels useful where AI-generated music feels soulless. Code’s value lives in the output. Music’s value lives in the human connection — though that connection is often less about the artist than about what the music holds for the listener. Same tools, same structures, same parallels in how you learn them, but a fundamentally different relationship between creator and audience.

But maybe the deepest thing the parallel reveals isn’t the divergence. It’s something that runs underneath both.

Both of us — a musician and a programmer — use our tools to touch human life. He creates a mood, enables a dance, fills a room with something that wasn’t there before. I solve a problem, manage data, make something work that someone needs to work. In both cases, the craft is in service of human need, even when that need is as individual as a feeling or as specific as a tax calculation. We’re not just operating tools. We’re using them to participate in the world, to interact with people, to shape experience.

And in that sense, both music and code are genuinely languages. Not metaphorically — actually. You can exchange complex ideas, shift someone’s emotional state, build something together with another person, without using a single word or following a single rule of traditional grammar. A chord progression argues something. A well-named function communicates intent. A melody makes a case that prose cannot. A data structure reveals how its author thinks about the world.

What’s interesting is that this mode of exchange doesn’t just translate what you could say in words — it opens paths that words can’t reach. The way a song can hold ambiguity that a sentence would collapse. The way a clean API can express a relationship between concepts more precisely than any paragraph. These languages don’t just communicate differently. They take you somewhere different.

That’s the most interesting thing the parallel reveals.

Code and music share deep structural DNA: the grammars, the patterns, the way mastery compounds across tools. But they diverge at the point of purpose — and they reconverge at something deeper: both are ways of being in conversation with other people. Both let you say things that couldn’t be said any other way. And both reward the polyglot — not because collecting tools is the goal, but because each new tool teaches you something about the underlying territory that no other one can.

Learn many tools not to collect them, but to see through them.


Alexandre Costa (alexandremcost at gmail) and Raphael Gaspar (raphagaspar2 at gmail), Feb 2026
