- Download all files.
- Build the Crystal part:
  crystal build --release --cross-compile --prelude ./no_main library.cr
  - `--release`: optional, you can leave this out.
  - `--cross-compile`: stop before linking and emit an object file; we want to link ourselves!
  - `--prelude ./no_main`: use our custom prelude!
  Crystal will print the `cc` linker command to adapt for the linking step.
ruby '2.7.1'
gem 'rails', github: 'rails/rails'
gem 'tzinfo-data', '>= 1.2016.7' # Don't rely on OSX/Linux timezone data
# Action Text
gem 'actiontext', github: 'basecamp/actiontext', ref: 'okra'
gem 'okra', github: 'basecamp/okra'
# Drivers
I am passionate about Ruby, but its execution time is very high compared to other languages, especially when we want to use more complex algorithms. In general, data structures in interpreted languages are far slower than in compiled languages. Benchmarks such as `n-body` and `fannkuch-redux` can run up to 30 times slower in Ruby than in Go. This is one of the reasons I became interested in embedding Go code in a Ruby environment.
For those who do not know how shared libraries operate: they work much like DLLs on Windows, except that they contain native code exposed through a C-compatible interface.
Note that Windows uses the DLL system, and in that case the library does not necessarily have to contain native code.
One example is DLLs written in C#, which run on a virtual machine. Because I do not use Windows, I ended up not testing whether this is possible.
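To make this concrete, here is a minimal sketch of calling into a shared library, shown with Python's ctypes for illustration; the library path and the exported add symbol are assumptions, not part of the original project:

import ctypes

# Load a shared library from disk (hypothetical path, for illustration).
lib = ctypes.CDLL("./library.so")

# Declare the C signature of an assumed exported function: int add(int, int).
lib.add.argtypes = [ctypes.c_int, ctypes.c_int]
lib.add.restype = ctypes.c_int

# The call jumps straight into native code, with no interpreter in between.
print(lib.add(2, 3))  # -> 5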
# tutorial video link : https://youtu.be/dYt9xJ7dnpU
# colab link : https://colab.research.google.com/drive/1xSbu-b-EwYd6GdaFPRVgvXBX_mciZ41e?usp=sharing
# repo link : https://github.com/ai-forever/Kandinsky-2
# used repo commit hash : a4354c04d5fbd48851866ef7d84ec444d3d50102
# if you are getting a CUDA error:
# pip uninstall torch
# pip3 install torch==1.13.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
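# To verify the reinstall worked, a quick check (an addition, not part of the original notebook):
#   import torch
#   print(torch.__version__)          # expect a +cu117 build
#   print(torch.cuda.is_available())  # should print True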
import os
This worked on 14 May 2023. The instructions will probably require updating in the future.
LLaMA is a text-prediction model similar to GPT-2 and to the version of GPT-3 that has not been fine-tuned yet. It should also be possible, I think, to run fine-tuned variants with this (such as Alpaca or Vicuna, which are more focused on answering questions).
Note: I have been told that this does not support multiple GPUs; it can only use a single GPU.
It is now possible to run LLaMA 13B with a 6 GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change adds CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low-VRAM setups.
08737ef720f0510c7ec2aa84d7f70c691073c35d
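A minimal sketch of what the layer offloading looks like in practice, using the llama-cpp-python bindings rather than the llama.cpp CLI; the model path and the choice of 20 layers are illustrative assumptions:

from llama_cpp import Llama

# Offload 20 transformer layers to the GPU; tune this number to fit your VRAM.
# The model path is hypothetical, for illustration only.
llm = Llama(model_path="./models/13B/ggml-model-q4_0.bin", n_gpu_layers=20)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])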
.""" | |
Classifies sequences of length 10 with 20 features into 2 classes | |
with a single LSTM layer with 32 neurons. | |
See also a more involved example: | |
https://gist.github.com/bzamecnik/dccc1c4fdcf1c7a31757168b19c827a7 | |
""" | |
from keras.layers import Input, LSTM, Dense |
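A minimal sketch of the model the docstring describes, assuming the Keras functional API; the shapes come from the docstring, while the optimizer and loss are assumptions:

from keras.layers import Input, LSTM, Dense
from keras.models import Model

inputs = Input(shape=(10, 20))                # sequences of length 10 with 20 features
x = LSTM(32)(inputs)                          # a single LSTM layer with 32 neurons
outputs = Dense(2, activation='softmax')(x)   # 2 classes
model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])  # assumed settings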
""" | |
When classifying upon a sequence usually we stack some LSTM returning sequences, | |
then one LSTM returning a point, then Dense with softmax activation. | |
Is it possible instead to give the last non-sequential LSTM a softmax activation? | |
The answer is yes. | |
In this example we have 3 sequential layers and one layer producing the final result. |
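A minimal sketch of such a stack, again assuming the functional API; the input shape, layer width, and class count are placeholder values:

from keras.layers import Input, LSTM
from keras.models import Model

timesteps, features, num_classes = 10, 20, 2   # assumed values, for illustration

inputs = Input(shape=(timesteps, features))
x = LSTM(32, return_sequences=True)(inputs)    # 3 LSTMs returning full sequences
x = LSTM(32, return_sequences=True)(x)
x = LSTM(32, return_sequences=True)(x)
# The last LSTM returns a single point and applies softmax itself,
# taking the place of the usual Dense softmax layer.
outputs = LSTM(num_classes, activation='softmax')(x)
model = Model(inputs, outputs)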
I saw that there's a `markdown-live-preview-mode` and the variable `markdown-live-preview-window-function`, which was set to the function `markdown-live-preview-window-eww`. That function looks like this:
(defun markdown-live-preview-window-eww (file)
  "Preview FILE with eww.
To be used with `markdown-live-preview-window-function'."
  (if (require 'eww nil t)
      (progn (eww-open-file file) (get-buffer "*eww*"))
    (error "EWW is not present or not loaded on this version of Emacs")))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Terminal notifier
;; requires 'sudo gem install terminal-notifier'
;; stolen from erc-notifier
(defvar terminal-notifier-command (executable-find "terminal-notifier") "The path to terminal-notifier.")
; (terminal-notifier-notify "Emacs notification" "Something amusing happened")
(defun terminal-notifier-notify (title message)
  "Show a TITLE and MESSAGE notification with `terminal-notifier-command'."
  ;; -activate focuses Emacs when the notification is clicked.
  (start-process "terminal-notifier" "*terminal-notifier*"
                 terminal-notifier-command
                 "-title" title "-message" message
                 "-activate" "org.gnu.Emacs"))