
haotian-liu / karabiner.json
Created June 24, 2020 02:49
karabiner.json
{
    "global": {
        "check_for_updates_on_startup": true,
        "show_in_menu_bar": true,
        "show_profile_name_in_menu_bar": false
    },
    "profiles": [
        {
            "complex_modifications": {
                "parameters": {
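The snippet above is cut off inside `complex_modifications`, which in Karabiner-Elements holds tuning `parameters` plus a `rules` array. As a hedged sketch (a typical rule shape from the Karabiner-Elements config format, not content recovered from this gist), a rule mapping caps lock to escape looks like:

```json
{
    "rules": [
        {
            "description": "Change caps_lock to escape",
            "manipulators": [
                {
                    "type": "basic",
                    "from": {
                        "key_code": "caps_lock",
                        "modifiers": { "optional": ["any"] }
                    },
                    "to": [{ "key_code": "escape" }]
                }
            ]
        }
    ]
}
```

Each manipulator of `"type": "basic"` rewrites the `from` key event into the list of `to` events while a profile is active.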
haotian-liu / gh-pages-deploy.md
Created June 19, 2022 07:02 — forked from cobyism/gh-pages-deploy.md
Deploy to `gh-pages` from a `dist` folder on the master branch. Useful for use with [yeoman](http://yeoman.io).

Deploying a subfolder to GitHub Pages

Sometimes you want to have a subdirectory on the master branch be the root directory of a repository’s gh-pages branch. This is useful for things like sites developed with Yeoman, or if you have a Jekyll site contained in the master branch alongside the rest of your code.

For the sake of this example, let’s pretend the subfolder containing your site is named dist.

Step 1

Remove the dist directory from the project’s .gitignore file (it’s ignored by default by Yeoman).
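Step 1 can be sketched as follows. This is a hedged illustration in a throwaway sandbox (the sandbox setup, file contents, and `grep` one-liner are my own, not part of the gist): Yeoman's generated `.gitignore` contains a `dist` line, and `git check-ignore` shows the effect of removing it.

```shell
# Sketch of Step 1 in a disposable sandbox repo.
set -e
sandbox=$(mktemp -d)
cd "$sandbox"
git init -q .

# A typical generated ignore file: dist is ignored by default.
printf 'node_modules\ndist\n' > .gitignore
mkdir -p dist
echo '<h1>hello</h1>' > dist/index.html

# At this point git refuses to track the built site:
git check-ignore dist/index.html   # prints the path, exit status 0

# Drop the "dist" rule; grep -v keeps every other line intact.
grep -v '^dist$' .gitignore > .gitignore.tmp && mv .gitignore.tmp .gitignore

# Now dist/ can be committed alongside the rest of the project:
git check-ignore dist/index.html || echo "dist is no longer ignored"
```

With `dist` tracked on master, the subfolder's contents can then be published to the `gh-pages` branch.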

haotian-liu / quantize.py
Created July 31, 2023 16:32
AutoGPTQ quantization for LLaVA
from transformers import AutoTokenizer, TextGenerationPipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import logging

logging.basicConfig(
    format="%(asctime)s %(levelname)s [%(name)s] %(message)s",
    level=logging.INFO,
    datefmt="%Y-%m-%d %H:%M:%S",
)
"""
Download https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-lightning-preview to local
haotian-liu / 1_eval.py
Created November 4, 2023 19:11
GQA evaluation
# Evaluation code for GQA.
# Computes a suite of metrics such as accuracy, consistency, plausibility and scores per question type and length.
# Visit https://gqadataset.org/ for all information about the dataset, including examples, visualizations, paper and slides.
#
# Metrics:
# - Accuracy: Standard accuracy, computed over the balanced version of the dataset, which is more robust against
#       cheating by making educated guesses. For each question-answer pair (q,a), we give 1 point if the
#       predicted answer p matches a and 0 otherwise, and average over all questions in the dataset.
#
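The accuracy metric described above reduces to exact-match scoring averaged over all questions. A minimal sketch of that computation (my own illustration with hypothetical question ids and answers, not the official GQA script):

```python
def gqa_accuracy(predictions, answers):
    """Exact-match accuracy: predictions and answers map question id -> answer string.

    Each question scores 1 if the predicted answer p matches the gold
    answer a, else 0; the result is the mean over all questions.
    """
    scores = [1 if predictions.get(qid) == gold else 0
              for qid, gold in answers.items()]
    return sum(scores) / len(scores) if scores else 0.0


# Toy usage with hypothetical ids and answers: 2 of 3 predictions match.
gold = {"q1": "yes", "q2": "dog", "q3": "left"}
pred = {"q1": "yes", "q2": "cat", "q3": "left"}
print(gqa_accuracy(pred, gold))
```

The real evaluation additionally reports consistency, plausibility, and per-question-type breakdowns, but they all build on this same per-question match.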