HVM takes the ideas of Interaction Combinators and combines them with ideas from Type Systems, Functional Programming, and Compilers, turning Yves Lafont's work into a highly parallelized runtime.
Start with the HVM whitepaper and the HOW page of the repo, and if that makes sense to you, great! You're probably too smart for everything that follows. Otherwise, read through the next sections to build up the knowledge you're missing.
Much of the Interaction Combinator literature is written for an audience presumed to be familiar with the lambda calculus, so it is good to have some background here before jumping into the Interaction Combinators section.
- Programming Languages interactive book (good for lambda calculus, functional programming, and type systems!)
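If you just want a feel for what the lambda calculus boils down to before committing to a book: terms are variables, lambdas, and applications, and computation is beta reduction. Below is a minimal sketch of that core in Rust (de Bruijn indices, one reduction step); the representation is my own for illustration and has nothing to do with HVM's internals.

```rust
// Minimal untyped lambda calculus with de Bruijn indices and one beta-reduction step.
// A sketch for intuition only; none of these names come from HVM itself.
#[derive(Clone, Debug)]
enum Term {
    Var(usize),                // de Bruijn index: 0 = nearest binder
    Lam(Box<Term>),            // λ. body
    App(Box<Term>, Box<Term>), // application
}

// Add `by` to every free variable (those with index >= cutoff).
fn shift(t: &Term, by: isize, cutoff: usize) -> Term {
    match t {
        Term::Var(i) if *i >= cutoff => Term::Var((*i as isize + by) as usize),
        Term::Var(i) => Term::Var(*i),
        Term::Lam(b) => Term::Lam(Box::new(shift(b, by, cutoff + 1))),
        Term::App(f, a) => Term::App(Box::new(shift(f, by, cutoff)), Box::new(shift(a, by, cutoff))),
    }
}

// Replace variable `idx` with `arg` inside `t`.
fn subst(t: &Term, idx: usize, arg: &Term) -> Term {
    match t {
        Term::Var(i) if *i == idx => arg.clone(),
        Term::Var(i) => Term::Var(*i),
        Term::Lam(b) => Term::Lam(Box::new(subst(b, idx + 1, &shift(arg, 1, 0)))),
        Term::App(f, a) => Term::App(Box::new(subst(f, idx, arg)), Box::new(subst(a, idx, arg))),
    }
}

// One beta step: (λ. body) arg  →  body[0 := arg]
fn beta(t: &Term) -> Option<Term> {
    if let Term::App(f, a) = t {
        if let Term::Lam(body) = f.as_ref() {
            // substitute, then shift down by one since the binder is gone
            return Some(shift(&subst(body, 0, &shift(a, 1, 0)), -1, 0));
        }
    }
    None
}

fn main() {
    // (λx. x x) (λy. y)  reduces to  (λy. y) (λy. y)
    let id = Term::Lam(Box::new(Term::Var(0)));
    let self_app = Term::Lam(Box::new(Term::App(Box::new(Term::Var(0)), Box::new(Term::Var(0)))));
    println!("{:?}", beta(&Term::App(Box::new(self_app), Box::new(id))));
}
```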
Afterwards, to bridge the gap between the lambda calculus and Interaction Combinators, it's recommended to read (at least) the first three chapters of The Optimal Implementation of Functional Programming Languages.
HVM is heavily based on Yves Lafont's ideas of Interaction Nets and Interaction Combinators.
An easy-to-understand introduction to how interaction combinators can facilitate a compiler's inner workings: https://www.youtube.com/watch?v=GawiQQCn3bk
Advanced resources building upon the above concepts:
- A Denotational Semantics for the Symmetric Interaction Combinators
- Towards a GPU-based implementation of interaction nets
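To make the above a bit more concrete: Lafont's system has only three agents and two kinds of rewrite rule. When two agents meet on their principal ports, a pair of identical agents annihilates, and a pair of different agents commutes (erasure is the degenerate commutation involving the zero-ary eraser). Here is a rough sketch of just that rule table in Rust; a real implementation, HVM included, of course also has to rewire the auxiliary ports, which this omits.

```rust
// The three agents of Lafont's interaction combinators, and which rule fires
// when two of them meet on their principal ports. Rule table only; the
// auxiliary-port rewiring that each rule performs is not shown here.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Agent {
    Era, // ε: eraser, no auxiliary ports
    Con, // γ: constructor, two auxiliary ports
    Dup, // δ: duplicator, two auxiliary ports
}

#[derive(Debug)]
enum Rule {
    Annihilate, // same symbol: both agents vanish, their aux ports are wired together
    Commute,    // different symbols: the agents duplicate/erase each other and pass through
}

fn rule_for(a: Agent, b: Agent) -> Rule {
    if a == b { Rule::Annihilate } else { Rule::Commute }
}

fn main() {
    use Agent::*;
    for (a, b) in [(Con, Con), (Dup, Dup), (Con, Dup), (Era, Con)] {
        println!("{:?} vs {:?} -> {:?}", a, b, rule_for(a, b));
    }
}
```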
The Optimal Implementation of Functional Programming Languages is mostly useful for understanding how to translate a lambda calculus term into a sharing graph, which is basically an interaction net.
So I'd say it's the bridge between the lambda calculus and interaction combinators.
Note 1: You need the first three chapters at most to understand sharing graphs.
Note 2: HVM actually implements a simplified version of those sharing graphs.
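For intuition about what that translation produces: every lambda becomes a λ node, every application an @ node, and a variable used more than once gets explicit fan (dup) nodes that share it, which is what makes the graph a "sharing" graph. The sketch below only tallies those nodes for a concrete term; the port wiring is omitted, and the names and counting scheme are my own simplification rather than the book's or HVM's actual encoding.

```rust
// Rough node tally for the lambda-term-to-sharing-graph translation:
// one λ node per Lam, one @ node per App, and n - 1 binary fan (dup)
// nodes for a variable used n times. Illustrative only; assumes no
// variable shadowing, which holds for the example term.
use std::collections::HashMap;

#[derive(Clone, Debug)]
enum Term {
    Var(String),
    Lam(String, Box<Term>),
    App(Box<Term>, Box<Term>),
}

#[derive(Default, Debug)]
struct NodeCount {
    lams: usize,
    apps: usize,
    fans: usize, // one fan per extra occurrence of a shared variable
}

fn count_uses(t: &Term, uses: &mut HashMap<String, usize>) {
    match t {
        Term::Var(x) => *uses.entry(x.clone()).or_insert(0) += 1,
        Term::Lam(_, b) => count_uses(b, uses),
        Term::App(f, a) => { count_uses(f, uses); count_uses(a, uses); }
    }
}

fn count_nodes(t: &Term, out: &mut NodeCount) {
    match t {
        Term::Var(_) => {}
        Term::Lam(x, b) => {
            out.lams += 1;
            // a bound variable used n > 1 times needs n - 1 binary fans to be shared
            let mut uses = HashMap::new();
            count_uses(b, &mut uses);
            out.fans += uses.get(x).copied().unwrap_or(0).saturating_sub(1);
            count_nodes(b, out);
        }
        Term::App(f, a) => {
            out.apps += 1;
            count_nodes(f, out);
            count_nodes(a, out);
        }
    }
}

fn main() {
    // Church numeral 2: λf. λx. f (f x)  — `f` is used twice, so one fan node
    let two = Term::Lam("f".into(), Box::new(Term::Lam("x".into(), Box::new(
        Term::App(Box::new(Term::Var("f".into())), Box::new(
            Term::App(Box::new(Term::Var("f".into())), Box::new(Term::Var("x".into()))),
        )),
    ))));
    let mut count = NodeCount::default();
    count_nodes(&two, &mut count);
    println!("{:?}", count); // NodeCount { lams: 2, apps: 2, fans: 1 }
}
```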