HOC - Thesis and Future Direction
At HOC, we believe Interaction Nets technology can be turned into valuable commercial products. Two years ago, we raised $4 million to explore and validate this idea. During that period, we developed HVM and Bend, tools that significantly speed up functional and symbolic programs.
As we wrap up this research phase, we've identified the Symbolic AI market as our best opportunity. It is growing rapidly, places a high value on computational efficiency, and aligns well with HVM's strength in optimizing symbolic algorithms.
The recent success of DeepSeek's V3 model, which matches GPT-4's capabilities but was trained for just $5 million instead of $100 million, highlights an important shift: AI development has become primarily an optimization challenge. This is exactly where HOC excels.
Our ambitious goal is to create a purely symbolic AI architecture that replaces computationally heavy operations, such as matrix multiplications and gradient descent, with efficient symbolic alternatives. If successful, this would let us train models as capable as GPT-4 at a fraction of today's cost, and to offer an alternative to services like ChatGPT with drastically lower operating expenses.
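To make "symbolic alternatives" concrete, here is a minimal, purely illustrative sketch of symbolic computation: a tiny term-rewriting simplifier that reduces expressions by structural rules rather than by numeric evaluation. This is not HOC's architecture and the types and rules below (`Var`, `Const`, `simplify`) are our own hypothetical example; it only shows the style of pattern-matching-heavy work that distinguishes symbolic methods from dense matrix arithmetic.

```python
from dataclasses import dataclass

# A tiny symbolic expression language: variables, integer constants, +, *.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Const:
    value: int

@dataclass(frozen=True)
class Add:
    left: object
    right: object

@dataclass(frozen=True)
class Mul:
    left: object
    right: object

def simplify(e):
    """Rewrite an expression bottom-up with algebraic rules.

    Rules: x*0 -> 0, x*1 -> x, x+0 -> x, plus constant folding.
    The work is purely structural pattern matching; no bulk
    floating-point arithmetic is involved.
    """
    if isinstance(e, (Var, Const)):
        return e
    l, r = simplify(e.left), simplify(e.right)
    if isinstance(e, Mul):
        if Const(0) in (l, r):
            return Const(0)
        if l == Const(1):
            return r
        if r == Const(1):
            return l
        if isinstance(l, Const) and isinstance(r, Const):
            return Const(l.value * r.value)
        return Mul(l, r)
    # Addition cases.
    if l == Const(0):
        return r
    if r == Const(0):
        return l
    if isinstance(l, Const) and isinstance(r, Const):
        return Const(l.value + r.value)
    return Add(l, r)

# (x*1 + 0*y) collapses to x without touching any numeric data.
expr = Add(Mul(Var("x"), Const(1)), Mul(Const(0), Var("y")))
print(simplify(expr))  # Var(name='x')
```

Each rewrite rule fires on the shape of a term, which is exactly the kind of workload (many small pattern matches and allocations) referenced elsewhere in this document as HVM's strength.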
We've already demonstrated the potential of this approach with SupGen, our program synthesizer that outperformed existing solutions by up to 1000x. This result confirms HVM's effectiveness in symbolic computation and gives us confidence to pursue a complete symbolic AI architecture.
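SupGen's internals are not described here, so as a purely illustrative sketch of what a program synthesizer does, the toy below enumerates arithmetic expressions over a tiny grammar until one matches all given input/output examples. Every name in it (`synthesize`, `grow`) is hypothetical and implies nothing about SupGen's actual algorithm; it only shows why synthesis is a search problem where raw symbolic throughput translates directly into speed.

```python
from itertools import product

def grow(programs):
    """Combine existing candidates pairwise with + and *."""
    new = []
    for (ea, fa), (eb, fb) in product(programs, repeat=2):
        new.append((f"({ea} + {eb})", lambda x, fa=fa, fb=fb: fa(x) + fb(x)))
        new.append((f"({ea} * {eb})", lambda x, fa=fa, fb=fb: fa(x) * fb(x)))
    return new

def synthesize(examples, max_rounds=2):
    """Breadth-first enumeration over the grammar {x, 1, 2, +, *}.

    Returns the first expression consistent with every (input, output)
    example, or None if the search depth is exhausted.
    """
    programs = [("x", lambda x: x), ("1", lambda x: 1), ("2", lambda x: 2)]
    for rnd in range(max_rounds + 1):
        for expr, f in programs:
            if all(f(i) == o for i, o in examples):
                return expr
        if rnd < max_rounds:
            programs = programs + grow(programs)
    return None

# Find a program mapping 1->3, 2->5, 3->7; the search discovers an
# expression equivalent to 2*x + 1.
print(synthesize([(1, 3), (2, 5), (3, 7)]))
```

The candidate pool grows combinatorially with each round, which is why the constant factor of the underlying symbolic evaluator dominates synthesis time at scale.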
With around $1 million remaining, we're dedicating the next two years, through the end of 2026, to this research. To support the effort, we've invested $300,000 in 256 Apple M4 processors. These CPUs are better suited than GPUs to HVM's workload, which is dominated by pattern matching and dynamic allocation, and they offer superior cost-performance for our needs. They'll be used both to run SupGen and to develop symbolic AI models.
It's important to recognize that this approach is experimental. Unlike DeepSeek, which optimized an existing architecture, we're venturing into new territory by building an entirely new AI framework from scratch. While the potential for major efficiency gains is promising—as shown by the cost differences between traditional matrix operations and symbolic methods, and our success with SupGen—the challenges are also significant.