This configuration is no longer updated
```js
// Cross browser, backward compatible solution
(function( window, Date ) {
    // feature testing
    var raf = window.mozRequestAnimationFrame ||
              window.webkitRequestAnimationFrame ||
              window.msRequestAnimationFrame ||
              window.oRequestAnimationFrame;

    window.animLoop = function( render, element ) {
        var running, lastFrame = +new Date;
```
```perl
#!/usr/bin/perl
######
# hueg.pl PRO MODE
# modded by ma0 and others
# respekts 2 jakk and others
######
use Irssi;
use vars qw($VERSION %IRSSI);
```
This guide is unmaintained and was created for a specific workshop in 2017. It remains as a legacy reference. Use at your own risk.
Workshop Instructor:
- Lilly Ryan @attacus_au
This workshop is distributed under a CC BY-SA 4.0 license.
- UEFI
- Systemd-boot (gummiboot)
- Encrypted root partition
- mkinitcpio
- Unified Kernel Image (UKI)
- Intel CPU
- NVIDIA GPU
- Wayland (sway)
- Secure Boot (optional)
This worked on 14/May/23. The instructions will probably require updating in the future.
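To tie the components above together, here is a minimal sketch of the kernel command line and mkinitcpio preset for an encrypted root booted as a UKI. The paths, the UUID placeholder, and the option names are illustrative, not taken from the original guide; they assume the traditional `encrypt` hook (the systemd-based `sd-encrypt` hook uses `rd.luks.name=` instead) and a recent mkinitcpio, so check the Arch wiki against your versions.

```sh
# /etc/kernel/cmdline -- embedded into the UKI by mkinitcpio (UUID is a placeholder)
cryptdevice=UUID=<uuid-of-luks-partition>:cryptroot root=/dev/mapper/cryptroot rw

# /etc/mkinitcpio.d/linux.preset -- build a UKI under ESP/EFI/Linux/,
# where systemd-boot discovers it automatically
ALL_kver="/boot/vmlinuz-linux"
PRESETS=('default')
default_uki="/efi/EFI/Linux/arch-linux.efi"
```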
LLaMA is a text prediction model similar to GPT-2 and to the version of GPT-3 that has not yet been fine-tuned. It should also be possible to run fine-tuned versions (such as Alpaca or Vicuna) with this, I think; those versions are more focused on answering questions.
Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.
It is now possible to run LLaMA 13B with a 6 GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change adds CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low-VRAM setups.
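As a rough illustration of what that looks like in practice, assuming llama.cpp has already been cloned and built with cuBLAS support (the clone and build steps follow below); the model path, layer count, and prompt are placeholders, and the flags are those of llama.cpp's `main` example around this commit:

```sh
# Offload ~18 transformer layers to the GPU; raise or lower -ngl to fit your VRAM
./main -m ./models/13B/ggml-model-q4_0.bin --n-gpu-layers 18 -p "Hello, my name is"
```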
- Clone llama.cpp from git; I am on commit `08737ef720f0510c7ec2aa84d7f70c691073c35d`.
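A sketch of that step plus the cuBLAS build; the `LLAMA_CUBLAS=1` make flag is what the project used for its CUDA/cuBLAS build around this commit, so check the repository's README if it has since changed:

```sh
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 08737ef720f0510c7ec2aa84d7f70c691073c35d
make LLAMA_CUBLAS=1   # build with CUDA/cuBLAS so layers can be offloaded to the GPU
```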