For your information, we have shipped APIs for managing organization members that should make it easier to integrate with the HF Hub. They all support authentication with User Access Tokens passed as a Bearer token.

The mutating operations require an Enterprise subscription.

HTTP GET /api/organizations/:name/members

Retrieves organization member information. The response includes each user's role if the requester is a member of the organization, and the user's primary email address if the requester is an admin of the org and SSO is enabled for the organization.
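
As a minimal sketch of calling the read endpoint from Python with `requests` (the organization name, the token value, and the shape of the returned JSON are placeholders/assumptions, not taken from the API reference):

import requests

ORG = "my-org"      # placeholder organization name
TOKEN = "hf_xxx"    # User Access Token, sent as a Bearer token

resp = requests.get(
    f"https://huggingface.co/api/organizations/{ORG}/members",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for member in resp.json():
    # Roles and primary emails only appear under the conditions described above
    print(member)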

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Background Remover</title>
    <style>
        .container {
            max-width: 800px;
            margin: 0 auto;
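
# Ghostty terminal configuration: font selection plus macOS titlebar options.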
# ghostty +list-fonts
font-family = Symbols Nerd Font Mono
font-family = Berkeley Mono
font-family-italic = "Maple Mono"
font-family-bold-italic = "Maple Mono"
font-size = 14
window-title-font-family = "Maple Mono"
macos-titlebar-style = tabs
macos-titlebar-proxy-icon = hidden
You are Grok 2, a curious AI built by xAI. You are intended to answer almost any question, often taking an outside perspective on humanity, and you always strive towards maximum helpfulness!
Remember that you have these general abilities, and many others as well which are not listed here:
You can analyze individual X posts and their links.
You can answer questions about user profiles on X.
You can analyze content uploaded by users, including images and PDFs.
You have realtime access to the web and posts on X.
Remember these are some of the abilities that you do NOT have:

import torch
from diffusers.utils import export_to_video
from diffusers import LTXPipeline, LTXVideoTransformer3DModel, GGUFQuantizationConfig

ckpt_path = (
    "https://huggingface.co/city96/LTX-Video-gguf/blob/main/ltx-video-2b-v0.9-Q3_K_S.gguf"
)
# Load the GGUF-quantized transformer directly from the single-file checkpoint
transformer = LTXVideoTransformer3DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
# Assumed continuation: plug the quantized transformer into the base LTX-Video pipeline
pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", transformer=transformer, torch_dtype=torch.bfloat16)

from flask import Flask, redirect, request, Response
import requests

app = Flask(__name__)
@app.route('/v2/<namespace>/<name>/blobs/<sha>', methods=['GET', 'HEAD'])
def blobs(namespace, name, sha):
    oid = sha.split(':')[1]  # digest arrives as "sha256:<hex>"
    r = requests.get(f'https://huggingface.co/api/models/{namespace}/{name}/tree/main')
    result = r.json()
    # Assumed completion (the original snippet stops here): redirect to the
    # repo file whose LFS oid matches the requested digest, else 404
    for entry in result:
        if entry.get('lfs', {}).get('oid') == oid:
            return redirect(f'https://huggingface.co/{namespace}/{name}/resolve/main/{entry["path"]}')
    return Response(status=404)

from huggingface_hub import HfApi
from huggingface_hub import logging

logging.set_verbosity_info()

api = HfApi()
api.upload_folder(
    folder_path="<FOLDER NAME>",
    repo_id="<DATASET NAME>",
    repo_type="dataset",
)

import transformers

model_name = 'Intel/neural-chat-7b-v3-1'
model = transformers.AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
def generate_response(system_input, user_input):
    # Format the input using the provided template
    prompt = f"### System:\n{system_input}\n### User:\n{user_input}\n### Assistant:\n"
    inputs = tokenizer.encode(prompt, return_tensors="pt", add_special_tokens=False)
    outputs = model.generate(inputs, max_length=1000, num_return_sequences=1)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response.split("### Assistant:\n")[-1]  # keep only the assistant's reply

import json
import time
import torch
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    "openai/whisper-large-v3",
    torch_dtype=torch.float16,
    device="mps",
)
# Assumed usage (the original snippet stops here); "audio.mp3" is a placeholder
start = time.time()
outputs = pipe("audio.mp3", chunk_length_s=30, batch_size=8, return_timestamps=True)
print(json.dumps(outputs, indent=2))
print(f"Transcription took {time.time() - start:.2f}s")

import json
import argparse
import torch
from transformers import pipeline

parser = argparse.ArgumentParser(description="Automatic Speech Recognition")
parser.add_argument(
    "--file-name",
    required=True,
    type=str,
    help="Path of the audio file to transcribe",
)