On every machine in the cluster install openmpi and mlx-lm:

conda install conda-forge::openmpi
pip install -U mlx-lm
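Before going further, it can help to confirm that Open MPI can actually reach every node over SSH. A minimal sanity check, assuming two machines with the hypothetical names node1 and node2:

# Each node should print its own hostname exactly once.
mpirun --host node1,node2 -np 2 hostname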
Next, download the pipeline parallel run script and place it at the same path on every machine.
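As a rough sketch of that step, the lines below copy the script to each node and then start pipeline parallel generation with mlx.launch over MPI. The node names, the script name (pipeline_generate.py), the hosts.json schema, the model, and the launch flags are illustrative assumptions; check the mlx-lm repository and the MLX distributed documentation for the current script location and options.

# Copy the run script to the same path on every machine (hypothetical node names).
for h in node1 node2 node3; do
    scp pipeline_generate.py "$h":~/pipeline_generate.py
done

# Describe the cluster for mlx.launch (assumed schema; see the MLX distributed docs).
cat > hosts.json <<'EOF'
[
  {"ssh": "node1"},
  {"ssh": "node2"},
  {"ssh": "node3"}
]
EOF

# Start pipeline parallel generation over MPI from any one machine.
mlx.launch --hostfile hosts.json --backend mpi pipeline_generate.py \
    --model mlx-community/Mistral-7B-Instruct-v0.3-4bit \
    --prompt "Hello" --max-tokens 64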
Modern versions of Windows support GPU paravirtualization in Hyper-V with normal consumer graphics cards. This is used e.g. for graphics acceleration in Windows Sandbox, as well as WSLg. In some cases, it may be useful to create a normal VM with GPU acceleration using this feature, but this is not officially supported. People already figured out how to do it with Windows guests though, so why not do the same with Linux? It should be easy given that WSLg is open source and reasonably well documented, right?
Well... not quite. I managed to get it to run... but not well.