This TPU VM cheatsheet was written and tested with the following library versions:
| Library | Version |
|---|---|
| JAX | 0.3.25 |
| FLAX | 0.6.4 |
| Datasets | 2.10.1 |
| Transformers | 4.27.1 |
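
Before running anything, it can help to confirm that the TPU VM actually has these pinned versions installed. The snippet below is only a sanity-check sketch (not part of the cheatsheet itself); it assumes the standard import names for each package.

```python
# Sanity-check sketch: compare installed versions against the table above.
import importlib

expected = {
    "jax": "0.3.25",
    "flax": "0.6.4",
    "datasets": "2.10.1",
    "transformers": "4.27.1",
}
for module_name, tested_version in expected.items():
    installed = importlib.import_module(module_name).__version__
    status = "OK" if installed == tested_version else "differs"
    print(f"{module_name}: installed {installed}, tested with {tested_version} ({status})")
```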
| """ To use: install LLM studio (or Ollama), clone OpenVoice, run this script in the OpenVoice directory | |
| git clone https://github.com/myshell-ai/OpenVoice | |
| cd OpenVoice | |
| git clone https://huggingface.co/myshell-ai/OpenVoice | |
| cp -r OpenVoice/* . | |
| pip install whisper pynput pyaudio | |
| """ | |
| from openai import OpenAI | |
| import time |
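
Since the setup notes mention LM Studio or Ollama rather than the hosted OpenAI API, the `OpenAI` client presumably needs to be pointed at the local server. The sketch below is an assumption, not part of the original script: it uses LM Studio's default OpenAI-compatible endpoint (`http://localhost:1234/v1`; Ollama's is typically `http://localhost:11434/v1`), and `"local-model"` is a placeholder for whatever model is loaded locally.

```python
from openai import OpenAI

# Sketch: point the OpenAI client at a local OpenAI-compatible server
# instead of api.openai.com. Local servers generally ignore the API key,
# but the client still requires a non-empty string.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",  # placeholder; use the identifier of the locally loaded model
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```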
```python
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack
from stable_baselines3 import A2C

# There already exists an environment generator
# that will make and wrap Atari environments correctly.
# Here we are also doing multi-worker training (n_envs=16 => 16 environments).
env = make_atari_env('BreakoutNoFrameskip-v4', n_envs=16)
# Frame-stacking with 4 frames
env = VecFrameStack(env, n_stack=4)
```
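
The snippet above only builds the vectorized, frame-stacked environment. A natural continuation, sketched below from the standard Stable-Baselines3 API (not part of the original snippet), is to train an A2C agent with a CNN policy on it; the timestep count is an arbitrary small value for illustration.

```python
from stable_baselines3 import A2C

# Train an A2C agent with a CNN policy on the vectorized Atari env built above.
model = A2C("CnnPolicy", env, verbose=1)
model.learn(total_timesteps=25_000)

# Quick rollout with the trained policy (VecEnv API: batched obs/rewards/dones).
obs = env.reset()
for _ in range(1_000):
    action, _states = model.predict(obs)
    obs, rewards, dones, infos = env.step(action)
```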
```sh
#!/bin/sh
set -x
# == Swarm training (alpha release) ==
# Setup:
#
#   git clone https://github.com/shawwn/gpt-2
#   cd gpt-2
#   git checkout dev-shard
```
```bash
#!/bin/bash
function clearCaches {
  # Clear npm cache
  npm cache clean
  # Clear npm5 cache
  npm5 cache clear --force
}
```