I didn't really follow the LanguageBreak instructions, since I didn't care about most of the features and I was curious to do it myself, but the LanguageBreak GitHub repo was invaluable for debugging.
Mara's statement on the retraction of ThePhd's RustConf keynote
I was one of the people who didn't vote for ThePhd's keynote: originally because I simply preferred another promising candidate, and later, after hearing technical concerns from an expert that I mostly agreed with, also because of the topic itself, although I must admit I am no expert on it.
The candidate I originally voted for got a few approving comments at first, but ended up being largely ignored later, mostly because of our lack of process and proper voting.
When it was brought up in the leadership chat that 'there are concerns', I focused on the talk itself rather than on the process failure. That was a mistake, for which I apologize. I am not an expert on this topic, and should not have rushed into talking about things outside my expertise, even under pressure.
In another situation this could have been a minor mistake with no consequences, just one of several opinions in a discussion, but in this situation it became part
This worked on 14 May 2023. The instructions will probably require updating in the future.
LLaMA is a text prediction model similar to GPT-2 and to the version of GPT-3 that has not yet been fine-tuned.
It is also possible to run fine-tuned versions with this (like Alpaca or Vicuna, I think; those versions are more focused on answering questions).
Note: I have been told that this does not support multiple GPUs; it can only use a single GPU.
It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp! The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of the transformer layers to run on the GPU. This is perfect for low VRAM; a sketch of the resulting run command is shown below.
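To make that concrete, here is a hedged sketch of the eventual run command. The `--n-gpu-layers` (`-ngl`) option is llama.cpp's cuBLAS offload knob; the model path and the layer count of 18 are assumptions for illustration, not values prescribed by this guide:

```sh
# Offload 18 of the transformer layers to the GPU; tune this number
# (an assumed value) up or down until the model fits in your card's VRAM.
./main -m ./models/13B/ggml-model-q4_0.bin \
       --n-gpu-layers 18 \
       -p "The meaning of life is"
```

Fewer offloaded layers use less VRAM but generate more slowly; more layers do the opposite.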
Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
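Something along these lines should reproduce that state (a sketch, assuming the `make LLAMA_CUBLAS=1` build flag that llama.cpp documented around that time; check the repo's README if the build system has since changed):

```sh
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# Pin to the commit this guide was tested against.
git checkout 08737ef720f0510c7ec2aa84d7f70c691073c35d
# Build with CUDA/cuBLAS support so transformer layers can be offloaded to the GPU.
make LLAMA_CUBLAS=1
```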
Optimized solution to APS #038 unique words with unique letters