This is my take on how to get up and running with NGINX, PHP-FPM, MySQL and phpMyAdmin on OS X Yosemite.
This article is adapted from the original by Jonas Friedmann, who, I just discovered, is from Würzburg in Germany, a stone's throw from where I was born ;)
A collection of WebGL and WebGPU frameworks and libraries
A non-exhaustive list of WebGL and WebGPU frameworks and libraries. It is intended mostly for learning purposes, as some of the listed libraries are works in progress, outdated, or no longer maintained.
# Mac OS (64 bit): http://prdl-download.adobe.com/Acrobat%20Professional/7D1CAD83986848F092F39734E60DCFEA/1508139532629/Acrobat_DC_Web_WWMUI.dmg
Accelerating Discrete Program Search with SUP Nodes
Fast Discrete Program Search 2
I am investigating how to use Bend (a parallel language) to accelerate Symbolic AI; in particular, Discrete Program Search. Basically, think of it as an alternative to LLMs, GPTs and NNs that is also capable of generating code, but by entirely different means. This kind of approach was never scaled with mass compute before - it wasn't possible! - but Bend changes this. So, my idea was to do it, and see where it goes.
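To make the idea concrete, here is a minimal sketch of what brute-force discrete program search looks like: enumerate small programs over a toy DSL and keep the first one that reproduces a set of input/output examples. This is plain Python for illustration only; the DSL, function names, and examples are my own assumptions, not Bend/HVM code.

```python
# Hypothetical sketch of naive discrete program search over a toy DSL.
# A "program" is just a sequence of unary ops applied left to right to the input.
from itertools import product

OPS = {
    "inc": lambda v: v + 1,
    "dbl": lambda v: v * 2,
    "neg": lambda v: -v,
}

def run(program, x):
    # Apply each op in the program to x, in order.
    for op in program:
        x = OPS[op](x)
    return x

def search(examples, max_len=4):
    # Brute force: try every op sequence up to max_len and return the first
    # one that matches every (input, output) example.
    for length in range(1, max_len + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, i) == o for i, o in examples):
                return program
    return None

if __name__ == "__main__":
    # Looking for a program computing 2x + 2.
    print(search([(1, 4), (2, 6), (3, 8)]))  # -> ('inc', 'dbl')
```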
Now, while I was implementing some candidate algorithms in Bend, I realized that, rather than mass parallelism, I could use an entirely different mechanism to speed things up: SUP nodes. Basically, this is a feature that Bend inherited from its underlying model ("Interaction Combinators") which, in simple terms, lets us combine multiple functions into a single superposed one and apply them all to an argument "at the same time". In other words, it lets us call N functions at a fraction of the expected cost. Or, put differently: why parallelize when we can share?
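Here is a rough, conceptual analogue of that "share instead of re-evaluate" intuition, again in plain Python rather than Bend: when many candidate programs share a common prefix, that prefix can be evaluated once and its intermediate results reused by every extension. SUP nodes achieve something far more general at the interaction-combinator level; this sketch (all names hypothetical) only illustrates why sharing beats re-running each candidate from scratch.

```python
# Conceptual analogue of sharing across superposed candidates: grow all
# candidate programs breadth-first, carrying the values each prefix has
# already computed so an extension only pays for its new step.
OPS = {"inc": lambda v: v + 1, "dbl": lambda v: v * 2, "neg": lambda v: -v}

def search_shared(examples, max_len=4):
    inputs = [i for i, _ in examples]
    targets = [o for _, o in examples]
    # Each frontier entry is (program prefix, outputs of that prefix on all inputs).
    frontier = [((), inputs)]
    for _ in range(max_len):
        next_frontier = []
        for prefix, values in frontier:
            for name, op in OPS.items():
                new_values = [op(v) for v in values]  # one new step, prefix work reused
                program = prefix + (name,)
                if new_values == targets:
                    return program
                next_frontier.append((program, new_values))
        frontier = next_frontier
    return None

print(search_shared([(1, 4), (2, 6), (3, 8)]))  # -> ('inc', 'dbl')
```

On the toy problem above it finds the same program as the naive search, but each prefix is evaluated once and shared by all of its extensions instead of being recomputed per candidate.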