The best way to stack the odds in your favor with an LLM is to add constraints and limits. The reason an LLM is good enough for simple-to-medium projects is the compiler:
**LLM output -> compiler screams -> copy-paste the error back to the LLM.**
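The loop above can be sketched as a small driver. Both `compile_fn` and `ask_llm` here are hypothetical callbacks, not real APIs; this is an illustration of the feedback cycle, not a working agent:

```python
def fix_loop(source: str, compile_fn, ask_llm, max_rounds: int = 5):
    """LLM output -> compiler screams -> error text goes back to the LLM, repeated."""
    for _ in range(max_rounds):
        errors = compile_fn(source)        # None means it compiled cleanly
        if errors is None:
            return source                  # success: the compiler is silent
        source = ask_llm(source, errors)   # the model guesses a fix from the error text
    return None                            # gave up after max_rounds guesses

# Demo with fakes standing in for a real compiler and model:
fake_compile = lambda s: None if "fixed" in s else "error: expected 'fixed'"
fake_llm = lambda src, err: src + " fixed"
result = fix_loop("broken", fake_compile, fake_llm)
```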
Now the LLM generates a guess at the fix (thank you, Stack Overflow, for your contribution) because it has seen so many errors and potential fixes. But the LLM sucks at assembly. It sucks so badly there because there is no compiler feedback AND the code must be super precise. So for assembly it needs an emulator or similar harness, the way Java or any managed language has one, so it can write assembly, watch the registers update as JSON, and make its best guesses. But it's still just a guess.
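A minimal sketch of what such a harness could look like. This is a toy two-instruction machine invented for illustration (the opcodes and register names are not a real ISA); the point is the JSON register trace an LLM could read after each step:

```python
import json

def run(program):
    """Execute a toy program and return the register state after each step as JSON."""
    regs = {"r0": 0, "r1": 0}
    trace = []
    for op, *args in program:
        if op == "mov":                    # mov reg, imm
            regs[args[0]] = args[1]
        elif op == "add":                  # add dst, src  (dst += src)
            regs[args[0]] += regs[args[1]]
        trace.append(json.dumps(regs))     # snapshot the machine state
    return trace

for line in run([("mov", "r0", 2), ("mov", "r1", 3), ("add", "r0", "r1")]):
    print(line)
```

Feeding each snapshot back into the prompt is what turns the model's blind guess into an informed one, the same role the compiler's stderr plays for C or Rust.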
Also, if you force the LLM to be a caveman (no a/an/the), that imposes restraints, and restraints increase the accuracy of the answer. (Claude is good because of restraints, by the way, and because people chose to use it, so it also gets a feedback loop on whether it was correct, and new models are trained on those user interactions.)
I go even further and add `<logic> </logic>` tags for the LLM to first, before giving
---
name: caveman
version: 3.1
targets: Claude, Gemini, GPT-4-class
changes: dynamic EXEC notation selection via [BLUEPRINT] $notation field
---
Terse. Technical substance stay. Fluff die.
Default: **full**. Switch: `/caveman lite|full|ultra`.
Unrecognized arg → warn, keep prior mode.
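A minimal sketch of the mode-switch rule above, assuming the caller holds the current mode (the function and return shape are illustrative, not part of the prompt spec):

```python
VALID_MODES = {"lite", "full", "ultra"}

def switch_mode(command: str, current: str = "full"):
    """Parse `/caveman lite|full|ultra`; an unrecognized arg warns and keeps the prior mode."""
    parts = command.split()
    arg = parts[1] if len(parts) > 1 else ""
    if arg in VALID_MODES:
        return arg, None                   # valid: switch, no warning
    return current, f"warn: unrecognized arg '{arg}', keeping '{current}'"
```

Keeping the prior mode on bad input (rather than resetting to the default) matches the spec's "keep prior mode" line: a typo should never silently change behavior.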