FEI Hao suxue

  • CSIT of ANU
  • ACT, Australia
name: "DeepFace_set003_net"
input: "data"
input_dim: 1
input_dim: 1
input_dim: 128
input_dim: 128
layer {
  name: "conv1"
  type: "Convolution"
name: "LNet"
input: "data"
input_dim: 1
input_dim: 15
input_dim: 24
input_dim: 24
layer {
  name: "slicer_data"
  type: "Slice"
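The Slice layer preview ends before its parameters. A minimal sketch of how such a layer is typically completed in Caffe prototxt — the bottom/top blob names, axis, and slice point here are assumptions for illustration, not taken from the gist:

```
layer {
  name: "slicer_data"
  type: "Slice"
  bottom: "data"           # hypothetical input blob
  top: "data_part1"
  top: "data_part2"
  slice_param {
    axis: 1                # slice along the channel axis
    slice_point: 5         # channels [0,5) -> data_part1, [5,15) -> data_part2
  }
}
```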
@suxue
suxue / multi_index_example.cpp
Created October 22, 2018 11:10
Boost multiple-index example
#include <boost/flyweight.hpp>
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/member.hpp>
#include <string>
#include <cstdint>
#include <vector>
#include <iostream>
#include <tuple>
typedef std::tuple<short, std::uint8_t, std::uint8_t> Date;
@suxue
suxue / multi_index.cpp
Created October 22, 2018 11:11
multi_index
#include <boost/flyweight.hpp>
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/member.hpp>
#include <string>
#include <cstdint>
#include <vector>
#include <iostream>
#include <tuple>
typedef std::tuple<short, std::uint8_t, std::uint8_t> Date;
@suxue
suxue / 1.cpp
Created October 22, 2018 11:12
multi_index
#include <boost/flyweight.hpp>
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/member.hpp>
#include <string>
#include <cstdint>
#include <vector>
#include <iostream>
#include <tuple>
typedef std::tuple<short, std::uint8_t, std::uint8_t> Date;
#!/usr/bin/env python
# coding: utf-8
# module for http benchmarking using wrk
# example command like
# ./wrk.py --connections '[1,2,3,4]' --filename ./benchmark_qa/query.json --url http://x.y.z.k/hello.php --output 1.csv --duration 600
import subprocess
import tempfile
try:
    import urlparse  # Python 2
except ImportError:
    from urllib import parse as urlparse  # Python 3
from contextlib import contextmanager
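The gist drives the wrk load generator through subprocess; the fiddly part of such a wrapper is parsing wrk's plain-text summary back into numbers. A hedged sketch — `parse_wrk_output`, `run_wrk`, and the result keys are illustrative names, not the gist's actual functions or CSV schema:

```python
import re
import subprocess


def parse_wrk_output(text):
    """Extract requests/sec and transfer rate from wrk's summary text."""
    stats = {}
    m = re.search(r"Requests/sec:\s+([\d.]+)", text)
    if m:
        stats["requests_per_sec"] = float(m.group(1))
    m = re.search(r"Transfer/sec:\s+([\d.]+\w+)", text)
    if m:
        stats["transfer_per_sec"] = m.group(1)
    return stats


def run_wrk(url, connections, duration):
    """Invoke wrk (must be on PATH) and parse its summary."""
    out = subprocess.run(
        ["wrk", "-c", str(connections), "-d", str(duration), url],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_wrk_output(out)
```

For example, `parse_wrk_output("Requests/sec:  12345.67\nTransfer/sec:  1.23MB\n")` returns `{'requests_per_sec': 12345.67, 'transfer_per_sec': '1.23MB'}`. Keeping the parser separate from the subprocess call makes it easy to test against canned wrk output without running a benchmark.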
@suxue
suxue / llama_torchao_compile.py
Created August 29, 2024 09:01 — forked from SunMarc/llama_torchao_compile.py
`transformers` + `torchao` quantization + `torch.compile` on Llama3.1 8B
# REQUIRES torchao, torch nightly (or torch 2.5) and transformers
from transformers import AutoTokenizer, AutoModelForCausalLM, TorchAoConfig
from transformers import TextStreamer
import torch
from tqdm import tqdm
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false" # To prevent long warnings :)
torch.set_float32_matmul_precision('high')