- Web Wormhole https://webwormhole.io/ https://github.com/saljam/webwormhole
- ToffeeShare https://toffeeshare.com/
- FilePizza https://file.pizza/
- ShareDrop sharedrop.io https://github.com/szimek/sharedrop (SOLD, not recommended, use one of the forks)
  - A clone: SnapDrop snapdrop.net https://github.com/RobinLinus/snapdrop (SOLD, not recommended, use one of the forks)
    - A fork: PairDrop https://pairdrop.net/ https://github.com/schlagmichdoch/pairdrop
- Instant.io https://instant.io/
- FileTC https://file.tc/
This worked on 14/May/23. The instructions will probably require updating in the future.
LLaMA is a text prediction model similar to GPT-2, and to the version of GPT-3 that has not yet been fine-tuned. It should also be possible to run fine-tuned versions with this (such as Alpaca or Vicuna, which are more focused on answering questions).
Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.
It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM; a build-and-run sketch follows the clone step below.
- Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
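As a rough sketch of the build and run commands (the model path, layer count, and prompt are placeholders; adjust them to your own setup and VRAM):

# Build llama.cpp with cuBLAS support (requires the CUDA toolkit)
make clean
make LLAMA_CUBLAS=1
# Offload some transformer layers to the GPU with --n-gpu-layers (-ngl).
# 20 layers is just an example; raise or lower it until it fits in 6GB of VRAM.
./main -m ./models/13B/ggml-model-q4_0.bin --n-gpu-layers 20 -p "Building a website can be done in 10 simple steps:"

Watching VRAM usage with nvidia-smi while increasing the layer count is an easy way to find the limit for your card.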
" Indentation: 4 spaces per level, no hard tabs, wrap at 79 columns
set tabstop=4
set softtabstop=4
set shiftwidth=4
set textwidth=79
set expandtab
set autoindent
" Line numbers, unix line endings, and visible tab characters
set number
set fileformat=unix
set list
set listchars=tab:>-
# -*- coding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
create or replace function f_sk_date (ts timestamp)
returns integer
stable as $$
    if not ts:
        return None
    return int(str(ts)[0:10].replace('-',''))
$$ language plpythonu;

create or replace function f_date (ts timestamp)
returns date
stable as $$
    # Assumed body: return the date portion of the timestamp
    if not ts:
        return None
    return ts.date()
$$ language plpythonu;
import io
import sys

class IteratorFile(io.TextIOBase):
    """ given an iterator which yields strings,
    return a file like object for reading those strings """

    def __init__(self, it):
        self._it = it
        self._f = io.StringIO()
opkg install luci-lib-json luci rng-tools usbutils avrdude avahi-daemon
# If rng-tools is not installable, then install it by hand from e.g., http://download.linino.org/linino_distro/linino_dev/latest/packages/rng-tools_3-2_ar71xx.ipk
# Edit your /etc/opkg.conf, add
src/gz barrier_breaker http://download.linino.org/dogstick/all-in-one/latest/packages/
# Comment out your earlier src/gz
opkg update
opkg list|grep bridge
This simple script will take a picture of a whiteboard and use parts of the ImageMagick library with sane defaults to clean it up tremendously.
The script is here:
#!/bin/bash
# Difference-of-Gaussians convolution to pull out the pen strokes, invert,
# normalize contrast, blur slightly to smooth noise, then level the color
# channels to push the background to white.
convert "$1" -morphology Convolve DoG:15,100,0 -negate -normalize -blur 0x1 -channel RBG -level 60%,91%,0.1 "$2"
Others have recently developed packages for this same functionality, and done it better than anything I could do. Use the packages instead of this script:
- Gargoyle package by @lantis1008
- OpenWRT package by @dibdot
In its basic usage, this script will modify the router such that blocked addresses are null routed and unreachable. Since the address blocklist is full of advertising, malware, and tracking servers, this setup is generally a good thing. In addition, the router will update the blocklist weekly. However, the blocking is leaky, so do not expect everything to be blocked.
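To make "null routed" concrete, this is roughly the kind of route the blocklist produces (the address below is a documentation placeholder, and the exact mechanism the script uses may differ):

# Blackhole route: packets to the blocked address are silently dropped
ip route add blackhole 203.0.113.10
# The address now shows up as a blackhole entry in the routing table
ip route show | grep 203.0.113.10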