vuiseng9 (Vui Seng Chua)


Remove files in wandb cloud programmatically via python API

import wandb
api = wandb.Api()
runs = api.runs("<entity>/<project>") # e.g. vchua/huggingface

for run in runs:
    print(run.entity, run.project, run.id, run.name)
    for f in run.files():
        if f.name == 'output.log':
            f.delete()  # remove the matching file from wandb cloud storage

Goal:

Get examples of Intel Neural Compressor (INC) up and running with an existing trained model. We will use HuggingFace's Optimum as the frontend, with INC as its backend, and aim to reproduce the static quantization example provided by Optimum out of the box.

  1. Create a conda environment
conda create -n optimum-inc python=3.8
  2. Set up Intel Neural Compressor per its landing page, though we do it slightly differently for dev (see the sketch below).
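
For orientation, here is a minimal sketch of the quantization flow we are targeting. It is an assumption-laden outline, not the exact Optimum example: it presumes an optimum-intel release that exposes INCQuantizer and neural-compressor's PostTrainingQuantConfig, and the model and dataset names are illustrative.

# Minimal sketch (assumptions: optimum-intel INCQuantizer API and
# neural-compressor 2.x PostTrainingQuantConfig; names differ across releases).
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

def preprocess_fn(examples):
    # Tokenize the calibration split; static quantization needs real
    # activations to calibrate ranges.
    return tokenizer(examples["sentence"], padding="max_length",
                     max_length=128, truncation=True)

quantizer = INCQuantizer.from_pretrained(model)
calibration_dataset = quantizer.get_calibration_dataset(
    "glue", dataset_config_name="sst2",
    preprocess_function=preprocess_fn, num_samples=100)
quantizer.quantize(
    quantization_config=PostTrainingQuantConfig(approach="static"),
    calibration_dataset=calibration_dataset,
    save_directory="./quantized-sst2")  # illustrative output dir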
@vuiseng9
vuiseng9 / timm.ipynb
Created March 4, 2022 20:11 — forked from Chris-hughes10/timm.ipynb
@vuiseng9
vuiseng9 / nncf_wrap_bert.py
Last active April 13, 2023 22:41
transformer_block_tracing via NNCF
import functools
from typing import Dict, Callable, Any, Union, List, Tuple
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
from nncf.torch.nncf_network import NNCFNetwork
from nncf.torch.dynamic_graph.graph_tracer import create_input_infos, create_dummy_forward_fn
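
The imports above reach into NNCF's internal tracing helpers. For context, here is a minimal sketch of the public-API route to a similar outcome, wrapping an HF BERT so NNCF traces it; it assumes NNCF's documented create_compressed_model entry point and an illustrative input config, not this gist's exact approach.

# Minimal sketch (assumption: NNCF ~2.x public API, not this gist's internals)
# of wrapping a HuggingFace BERT so NNCF traces its transformer blocks.
from nncf import NNCFConfig
from nncf.torch import create_compressed_model
from transformers import BertModel

nncf_config = NNCFConfig.from_dict({
    # Shapes for the dummy forward pass; "long" marks integer token inputs.
    "input_info": [
        {"sample_size": [1, 128], "type": "long"},  # input_ids
        {"sample_size": [1, 128], "type": "long"},  # attention_mask
    ],
})
model = BertModel.from_pretrained("bert-base-uncased")
# With no compression section in the config, this only wraps and traces.
compression_ctrl, nncf_model = create_compressed_model(model, nncf_config)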
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import logging as log
import sys
import time
@vuiseng9
vuiseng9 / bert-squad-eval.md
Last active July 6, 2022 20:29
Evaluation of different BERT models on SQuADv1.1

https://github.com/huggingface/transformers

# the following has been validated with transformers v4.18

# 24 Layers
# https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad
model=bert-large-uncased-whole-word-masking-finetuned-squad
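
The model variables above presumably feed transformers' question-answering example script. As a self-contained alternative, here is a minimal hedged sketch that scores one of these checkpoints on the SQuADv1.1 validation set using the datasets and evaluate libraries (assumed installed; the 100-example slice is just for illustration).

# Minimal sketch (assumptions: datasets + evaluate installed alongside
# transformers) of scoring a QA checkpoint on SQuADv1.1 validation data.
import evaluate
from datasets import load_dataset
from transformers import pipeline

model_id = "bert-large-uncased-whole-word-masking-finetuned-squad"
qa = pipeline("question-answering", model=model_id)
squad = load_dataset("squad", split="validation[:100]")  # small slice for illustration
metric = evaluate.load("squad")

predictions, references = [], []
for ex in squad:
    out = qa(question=ex["question"], context=ex["context"])
    predictions.append({"id": ex["id"], "prediction_text": out["answer"]})
    references.append({"id": ex["id"], "answers": ex["answers"]})

print(metric.compute(predictions=predictions, references=references))  # EM / F1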
import time
import numpy as np
import logging as log
from openvino.runtime import AsyncInferQueue, Core, PartialShape
from openvino.tools.benchmark.utils.constants import CPU_DEVICE_NAME
log.info = print
model_path="/data1/vchua/jpqd-bert/r0.010-squad-bert-b-mvmt-8bit/ir/squad-BertForQuestionAnswering.cropped.8bit.onnx"

Objective: train a ResNet-18 on the CIFAR-10 dataset

The ResNet-18/CIFAR-10 recipe in this repo reaches 93% top-1 accuracy. We are not using it because it defines and implements its own ResNet; we would rather use the out-of-the-box torchvision resnet18 definition. NNCF provides an image classification example that uses the torchvision ResNet definitions.
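
Torchvision's resnet18 is defined around 224x224 ImageNet inputs, so CIFAR-10 (32x32) training commonly tweaks the stem. Here is a minimal sketch of that adaptation; the stem changes are a common convention (an assumption here), not something this note or the NNCF example prescribes.

# Minimal sketch: out-of-the-box torchvision resnet18 adapted for CIFAR-10.
# The stem changes are a common CIFAR convention (assumption), not mandated
# by the NNCF example.
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=10)  # 10 CIFAR-10 classes
# Swap the 7x7/stride-2 ImageNet stem for a 3x3/stride-1 conv and drop the
# initial max-pool so 32x32 inputs are not downsampled too aggressively.
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
model.maxpool = nn.Identity()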

# Step 1: Create a new virtualenv or conda environment, and make sure the env is activated

# Step 2: Install VS's fork of NNCF
git clone https://github.com/vuiseng9/nncf
cd nncf