How we tricked sand to think

From decisions to electronic logic

1. Computation as rule-following

Computers do not think. They follow instructions step by step to change information.

computation = applying rules reliably at scale

Computation means following clear rules step by step. Even AI works this way, because it performs many small calculations one after another.
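
A minimal sketch of the idea: a couple of explicit rules applied to a number, step by step, with no understanding involved. The rules themselves are arbitrary; the point is that each step is mechanical.

```python
# A toy computation: apply clear rules step by step until the state reaches 1.

def step(n):
    """Apply one rule to the current state and return the new state."""
    if n % 2 == 0:        # rule 1: even numbers are halved
        return n // 2
    return 3 * n + 1      # rule 2: odd numbers become 3n + 1

n = 7
while n != 1:
    print(n)
    n = step(n)
print(n)  # the rules reach 1 for this input
```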

2. Decisions become something machines can follow

Simple decisions can be written as yes/no rules.

George Boole (1854) showed decisions could be described using clear rules that machines could later follow.

machines can execute decisions

Logic describes decisions as yes/no questions. Machines can follow these decisions.
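
As a sketch, an everyday decision can be written as a Boolean rule over yes/no inputs. The inputs and the rule below are invented for illustration, not taken from Boole.

```python
# A decision written as a yes/no rule, in the spirit of Boolean logic.
def should_go_outside(is_raining: bool, have_umbrella: bool, is_weekend: bool) -> bool:
    # Go outside if it is the weekend AND (it is not raining OR we have an umbrella).
    return is_weekend and (not is_raining or have_umbrella)

print(should_go_outside(is_raining=True, have_umbrella=True, is_weekend=True))   # True
print(should_go_outside(is_raining=True, have_umbrella=False, is_weekend=True))  # False
```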

3. Machines that follow rules

A machine only needs: a state → rules → a way to change state

Alan Turing (1936) imagined a very simple rule-following machine, showing that any rule-based process could be done mechanically.

A computing machine keeps track of a situation and applies rules to change it step by step.
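
A minimal sketch of such a machine: a state, a table of rules, and a loop that changes the state. The particular machine below (it appends one mark to a number written in unary on a tape) is only an illustration.

```python
# A tiny Turing-style machine: state + rules + a way to change state.
tape = {0: "1", 1: "1", 2: "1"}          # the number 3, written in unary
head, state = 0, "scan"

rules = {
    # (state, symbol read) -> (symbol to write, head move, next state)
    ("scan", "1"): ("1", +1, "scan"),    # keep moving right over the 1s
    ("scan", " "): ("1", 0, "halt"),     # first blank: write a 1 and halt
}

while state != "halt":
    symbol = tape.get(head, " ")
    write, move, state = rules[(state, symbol)]
    tape[head] = write
    head += move

print("".join(tape[i] for i in sorted(tape)))  # "1111", the number 4 in unary
```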

4. Logic becomes electrical

Switches can represent yes/no decisions.

Claude Shannon (1937) showed that electrical switches could implement logical rules.

many switches → decisions → machines

A switch can represent a choice: on or off. Many switches together can make decisions.
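
A sketch of Shannon's insight in code: model each switch as on/off, then combine switches to make decisions. The gate names are standard; the wiring below is illustrative.

```python
# Switches as on/off values, combined into logical decisions.
ON, OFF = True, False

def AND(a, b):  # two switches in series: current flows only if both are on
    return a and b

def OR(a, b):   # two switches in parallel: current flows if either is on
    return a or b

def NOT(a):     # an inverting switch
    return not a

# Many switches together can decide something, e.g. add two 1-bit numbers.
def half_adder(a, b):
    total = OR(AND(a, NOT(b)), AND(NOT(a), b))  # XOR built from AND, OR, NOT
    carry = AND(a, b)
    return total, carry

print(half_adder(ON, ON))   # (False, True)  -> 1 + 1 = 10 in binary
print(half_adder(ON, OFF))  # (True, False)  -> 1 + 0 = 01 in binary
```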

5. Programmable machines

Computers became flexible when instructions were stored inside the machine.

Ada Lovelace (≈1843) imagined that machines could follow instructions to create many kinds of outputs, not just perform calculations. John von Neumann (1945) formalised the stored-program design.

same hardware, different behaviour

A programmable computer changes behaviour by following different instructions.
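
A sketch of "same hardware, different behaviour": one tiny interpreter, two different instruction lists. The instruction set is invented for illustration.

```python
# One machine (the interpreter), two programs (the stored instructions).
def run(program, x):
    for op, arg in program:        # fetch and execute instructions in order
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
    return x

double_and_add_one = [("mul", 2), ("add", 1)]
add_ten            = [("add", 10)]

print(run(double_and_add_one, 5))  # 11: same hardware...
print(run(add_ten, 5))             # 15: ...different behaviour
```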

6. Making switches smaller

Early computers used large fragile switches.

The transistor replaced them with tiny reliable ones.

Bell Labs (1947) invented the transistor.

tiny switches enable scaling

A transistor is a tiny switch made from silicon, a material found in sand.

7. Density, energy and heat

When billions of switches exist, physics matters: density → power → heat

Brains show another strategy: very low energy per decision.

computing is limited by energy

Every switch uses energy and produces heat. Engineers design computers to stay efficient.
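
A back-of-envelope sketch of why density → power → heat matters: multiply an assumed energy per switching event by how many switches flip per second. All the numbers below are rough, assumed values for illustration, not measurements of any real chip.

```python
# Rough, illustrative numbers only.
energy_per_switch_joules = 1e-15   # assumed energy for one switching event (~a femtojoule)
switches = 1e9                     # a billion switches on the chip
switching_rate_hz = 1e9            # each can flip about a billion times per second
activity = 0.1                     # assume only ~10% of switches flip on a typical cycle

power_watts = energy_per_switch_joules * switches * switching_rate_hz * activity
print(f"~{power_watts:.0f} W of heat to remove")  # roughly 100 W, which is why chips need cooling
```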

8. Integrated circuits — many switches on one chip

Putting many switches on one chip changed everything.

The integrated circuit (≈1958) put many switches on a single chip; the Apollo programme bought them in large quantities, accelerating the industry.

reusable building blocks

Putting many components on one chip made electronics cheaper and easier to build.

9. Different kinds of chips

Different devices need different trade-offs: CPU, MCU, GPU, DSP, FPGA, SoC, ASIC, memory, etc.

modern industry is often fabless

ARM's fabless approach (1985) enabled companies like Apple Inc. to design chips such as the Apple M series without owning factories.

Chips are designed for different jobs depending on speed, energy and flexibility.

10. Software and abstraction

Hardware executes decisions. Software describes them. Layers hide complexity across scales.

Linus Torvalds (1991) created the Linux kernel, which runs from phones to data centres, showing how the same logic operates everywhere.

no magic — only layers of abstraction

Programming means writing instructions that are translated into simple operations. The operating system coordinates everything.
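
A sketch of "layers of abstraction": one readable high-level line, and roughly what it becomes as simpler operations underneath. The "lower layer" shown is simplified for illustration.

```python
# High-level layer: one readable instruction.
prices = [3, 5, 2]
total = sum(prices)

# A lower layer does the same thing as simpler steps the hardware can follow.
total_low_level = 0
index = 0
while index < len(prices):                             # load, compare, branch
    total_low_level = total_low_level + prices[index]  # load, add, store
    index = index + 1

print(total, total_low_level)  # 10 10: same result, different layer
```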

Computers are not thinking machines. They are systems that organise simple decisions at massive scale.
