- How to Cross Compile LLVM: https://llvm.org/docs/HowToCrossCompileLLVM.html
- Building LLVM with CMake: https://llvm.org/docs/CMake.html
- Hints from wasi-sdk Makefile: https://github.com/CraneStation/wasi-sdk/blob/master/Makefile
- Try compiling natively (needed for llvm-tblgen and clang-tblgen)
- cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_TARGETS_TO_BUILD="X86;WebAssembly" -DLLVM_ENABLE_PROJECTS="lld;clang" ../llvm
- Try building LLVM with WASI:
- cmake -G Ninja -DCMAKE_AR="/usr/local/google/home/binji/dev/llvm-project/build/bin/llvm-ar" -DCMAKE_RANLIB="/usr/local/google/home/binji/dev/llvm-project/build/bin/llvm-ranlib" -DCMAKE_C_COMPILER="/usr/local/google/home/binji/dev/wasi-sdk-5.0/opt/wasi-sdk/bin/clang" -DCMAKE_CXX_COMPILER="/usr/local/google/home/binji/dev/wasi-sdk-5.0/opt/wasi-sdk/bin/clang++" -DCMAKE_CROSSCOMPILING=True -DCMAKE_INSTALL_PREFIX=/usr/local/google/home/binji/dev/wasi-clang -DLLVM_TABLEGEN=/usr/local/google/home/binji/dev/llvm-project/build/bin/llvm-tblgen -DCLANG_TABLEGEN=/
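- A minimal sketch of what the complete cross-compile configure step might look like, assuming the native build lives at $NATIVE_BUILD and the wasi-sdk install at $WASI_SDK (placeholder paths, not the real ones above); CLANG_TABLEGEN is assumed to point at the native clang-tblgen, mirroring LLVM_TABLEGEN:
  NATIVE_BUILD=/path/to/llvm-project/build   # placeholder: native build from the step above
  WASI_SDK=/path/to/wasi-sdk                 # placeholder: wasi-sdk install prefix
  cmake -G Ninja \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_AR="$NATIVE_BUILD/bin/llvm-ar" \
    -DCMAKE_RANLIB="$NATIVE_BUILD/bin/llvm-ranlib" \
    -DCMAKE_C_COMPILER="$WASI_SDK/bin/clang" \
    -DCMAKE_CXX_COMPILER="$WASI_SDK/bin/clang++" \
    -DCMAKE_CROSSCOMPILING=True \
    -DCMAKE_INSTALL_PREFIX=/path/to/wasi-clang \
    -DLLVM_TABLEGEN="$NATIVE_BUILD/bin/llvm-tblgen" \
    -DCLANG_TABLEGEN="$NATIVE_BUILD/bin/clang-tblgen" \
    -DLLVM_TARGETS_TO_BUILD=WebAssembly \
    -DLLVM_ENABLE_PROJECTS="lld;clang" \
    ../llvm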
type: picture-elements
view_layout:
  column: 1
elements:
  - type: custom:config-template-card
    entities:
      - sensor.{HA_PRINTER_DEVICE_NAME}_vt_tray
    element:
      type: state-icon
      entity: sensor.{HA_PRINTER_DEVICE_NAME}_vt_tray
#!/bin/bash
### steps ####
# verify the system has a cuda-capable gpu
# download and install the nvidia cuda toolkit and cudnn
# set up environment variables
# verify the installation
###
### to verify your gpu is cuda-enabled, check
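# (continuation sketch, not part of the original snippet) one common way to
# list NVIDIA GPUs before installing anything:
lspci | grep -i nvidia

# typical environment variables once the toolkit is installed (paths assume a
# default /usr/local/cuda install; adjust for your cuda version):
export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

# quick verification once everything is in place:
nvidia-smi
nvcc --version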
#!/usr/bin/env python
import argparse
import torch
from transformers import GPTJForCausalLM, GPTJConfig
# Note: these need the git version of Transformers as of 7/22/2022
from transformers import CodeGenTokenizer, CodeGenForCausalLM
from transformers import CODEGEN_PRETRAINED_MODEL_ARCHIVE_LIST

parser = argparse.ArgumentParser('Convert SalesForce CodeGen model to GPT-J')
Hyperbee is super useful for storing data and searching it with its ordered key search. When you set up your keyspace just right, you can query a small subset of a large database and quickly load what you need without having to download or traverse the entire dataset.
This module seeks to make it easier to set up these indexes for JSON data and to query large datasets.
We’ve been saying for a long time that we need “something like a CAP theorem for IPLD.” By this, I mean that we need a way to talk about the design tradeoffs that are made when designing data structures in IPLD.
This is going to end up looking a lot different from the CAP theorem, but the end goal is to have something that gives us a blueprint for discussing the performance tradeoffs of different data structure designs.
Anyway, here’s my attempt at a first draft. It’s incomplete but I need some feedback in order to iterate on it and turn it into a proper PR.
IPLD has a very all-embracing approach to compatibility: many systems have some sort of bridge to IPLD. As a result, it's very important to understand which of those bridges are "complete", and which contain limitations; and for those that have limitations, what those limitations are. Most of this appears in the Codec layer: because Codecs are responsible for how data is serialized, they encompass almost all of the compatibility efforts in bridging IPLD systems to each other and to other systems.
We define "completeness" in terms of the [[Data Model]] (and split it into two concepts):
The general plan is to build an sd-image-aarch64 from nixpkgs/nixos/modules/installer/cd-dvd/sd-image-aarch64.nix, flash it to the eMMC, and have the system come up, similar to how this “just works” for Raspberry Pis.
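A rough sketch (not verified on this board) of building and flashing such an image from the module above; /dev/sdX is a placeholder for the eMMC/SD device, this assumes aarch64 derivations can be built (natively or via emulation), and depending on the nixpkgs revision the resulting image may be compressed and need decompressing before flashing:
  nix-build '<nixpkgs/nixos>' -A config.system.build.sdImage \
    -I nixos-config='<nixpkgs/nixos/modules/installer/cd-dvd/sd-image-aarch64.nix>' \
    --argstr system aarch64-linux
  # the built image ends up under result/sd-image/
  sudo dd if=result/sd-image/*.img of=/dev/sdX bs=4M conv=fsync status=progress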
The Rock Pi 4 is a Rockchip RK3399-based board, built by Radxa with the same form factor as a Raspberry Pi. One noticeable difference is that the Rock Pi's CPU is at the bottom to better allow for the