VictorTaelin / dps_sup_nodes.md (last active November 17, 2024)
Accelerating Discrete Program Search with SUP Nodes

Fast Discrete Program Search 2

I am investigating how to use Bend (a parallel language) to accelerate Symbolic AI; in particular, Discrete Program Search. Basically, think of it as an alternative to LLMs, GPTs, and NNs that is also capable of generating code, but by entirely different means. This kind of approach was never scaled with mass compute before - it wasn't possible! - but Bend changes this. So, my idea was to do it, and see where it goes.

Now, while I was implementing some candidate algorithms on Bend, I realized that, rather than mass parallelism, I could use an entirely different mechanism to speed things up: SUP Nodes. Basically, it is a feature that Bend inherited from its underlying model ("Interaction Combinators") which, in simple terms, allows us to combine multiple functions into a single superposed one and apply them all to an argument "at the same time". In short, it allows us to call N functions at a fraction of the expected cost. Or, put another way: why parallelize when we can share?
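To make that concrete, here is a minimal sketch of "calling two functions at once", assuming HVM-style syntax in which `{a b}` denotes a superposition and `λx body` a lambda (Bend's own surface syntax may differ, and the two functions below are just illustrative):

```
// A superposition {f g} packs two functions into one term. Applying it
// once fires the APP-SUP interaction:
//   ({f g} x)  ~>  {(f x0) (g x1)}   where x0/x1 share x via a DUP node
// so both calls happen "at the same time", sharing the argument.
(Main) = ({λb (- 1 b)  λx x} 1)
// reduces to the superposed result {0 1}
```

Roughly speaking, the two superposed calls share whatever structure and work they have in common, which is where the "fraction of the expected cost" comes from.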

A