Bootstrap knowledge of LLMs as fast as possible, with a bias/focus toward GPT.
Avoid being a link dump; aim to provide only valuable, well-tuned information.
Neural-network links come first, before starting on transformers.
// You need this in your Cargo.toml:
// reqwest = { version = "0.11.3", features = ["stream"] }
// futures-util = "0.3.14"
// indicatif = "0.15.0"

use std::cmp::min;
use std::fs::File;
use std::io::Write;

use reqwest::Client;
use indicatif::{ProgressBar, ProgressStyle};
import tensorflow as tf  # requires TensorFlow 2.x
import numpy as np

# The hash length in bits
hashLength = 256

def buildModel():
    # We can set the seed to simulate the fact that this network is known
    # and doesn't change between runs:
    # tf.random.set_seed(42)
    model = tf.keras.Sequential()
    # The preview cuts off here; the layers below are an illustrative
    # assumption, not the original gist's architecture.
    model.add(tf.keras.layers.Dense(hashLength, activation="relu", input_shape=(hashLength,)))
    model.add(tf.keras.layers.Dense(hashLength, activation="sigmoid"))
    return model
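
A quick smoke test of the sketch above (this assumes the hedged completion of buildModel, not whatever the full gist actually builds):

model = buildModel()
model.summary()  # a Dense stack mapping hashLength inputs to hashLength outputs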
# We'll just use the official Rust image rather than build our own from scratch
FROM docker.io/library/rust:1.54.0-slim-bullseye

ENV KEYRINGS /usr/local/share/keyrings

RUN set -eux; \
    mkdir -p $KEYRINGS; \
    apt-get update && apt-get install -y gpg curl; \
    # clang/lld/llvm
    curl --fail https://apt.llvm.org/llvm-snapshot.gpg.key | gpg --dearmor > $KEYRINGS/llvm.gpg; \
Yoav Goldberg, February 2024.

Researchers at Google DeepMind released a paper about a learned system that is able to play blitz chess at a grandmaster level, without using search. This is interesting and imagination-capturing, because up to now computer-chess systems that play at this level, whether based on machine learning or not, all used a search component.[^1]

Indeed, my first reaction when reading the paper was to tweet "wow, crazy and interesting". I still find it crazy and interesting, but upon a closer read, it may not be as crazy and as interesting as I initially thought. Many reactions on Twitter, Reddit, etc., were super-impressed, going into implications about the projected learning abilities of AI systems, the ability of neural networks to learn semantics from observations, and so on, which are really over-the-top. The paper does not claim any of them, but they are still perceived as implied by it.
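
To make the distinction concrete, here is a toy Python sketch; every name and number in it is an illustrative assumption, not the paper's code or API. A classic engine expands a game tree and backs evaluations up toward the root; the "searchless" model instead scores each legal move with a single forward pass of a learned function and takes the argmax.

import random

random.seed(0)

# Stand-in "value network": in the paper this is a large transformer
# distilled from Stockfish evaluations; here it is just a fixed random table.
LEGAL_MOVES = ["e4", "d4", "Nf3", "c4"]
value_net = {m: random.random() for m in LEGAL_MOVES}

def searchless_pick(legal_moves):
    # One learned score per move, take the best -- no tree, no lookahead.
    return max(legal_moves, key=lambda m: value_net[m])

print(searchless_pick(LEGAL_MOVES))

All of the "chess knowledge" lives inside the learned scoring function; nothing about the game tree is ever explored at play time, which is exactly the property the paper's title highlights.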
This study focuses on the strategies used by the "xz backdoor", an extremely
complex piece of malware that contains its own x64 disassembler, which it
uses to find critical locations in your code and hijack them by swapping out
your code with its own as it runs. Because this is a machine-code-based
attack, code written in any programming language can be attacked and is
vulnerable. Instead of targeting sshd directly, the xz backdoor injects
itself into the parent systemd process, then hijacks the GNU Dynamic Linker
(ld.so) before sshd is even started or libcrypto.so is even loaded.
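
As a hedged illustration (none of this is the backdoor's actual code), the crudest version of "find critical locations in machine code" is scanning raw bytes for a known instruction pattern, such as the classic x64 function prologue. A toy sketch in Python:

# push rbp; mov rbp, rsp -- the classic x64 function prologue
PROLOGUE = bytes.fromhex("554889e5")

def find_prologues(code: bytes):
    """Return the offset of every prologue pattern in a blob of machine code."""
    hits, start = [], 0
    while (idx := code.find(PROLOGUE, start)) != -1:
        hits.append(idx)
        start = idx + 1
    return hits

# Fake "machine code": two prologues separated by NOP filler and ret bytes.
blob = b"\x90" * 4 + PROLOGUE + b"\xc3" + b"\x90" * 3 + PROLOGUE + b"\xc3"
print(find_prologues(blob))  # [4, 12]

A real disassembler is needed precisely because raw byte patterns are ambiguous (the same bytes can occur mid-instruction or inside data), which is why the backdoor carries its own; but the locate-then-patch shape of the attack is the same.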
#!/bin/sh

print_usage() {
    echo "usage: compress_video <input_file>"
    echo "supported formats: mp4, webm, mkv, mov, avi, flv"
}

get_extension() {
    f="${1##*/}"
    case "$f" in
Relax, I only have one Sunday to work on this idea; it is literally my weekend project. So I tried DeepSeek to see if it could help. Surprisingly, it worked, and it saved me another weekend...
Just chat.deepseek.com (cost = free) with prompts adapted from this gist.