GitHub gists by abhishek thakur (abhishekkrthakur)
from torchvision import transforms

# IMAGE_SIZE, IMG_MEAN, IMG_STD and CollectionsDatasetTest are defined elsewhere in the original gist
test_transform = transforms.Compose([
    transforms.Resize(IMAGE_SIZE),
    transforms.CenterCrop(IMAGE_SIZE),
    transforms.ToTensor(),
    transforms.Normalize(IMG_MEAN, IMG_STD),
])
test_dataset = CollectionsDatasetTest(csv_file='../input/sample_submission.csv',
                                      root_dir='../input/test/',
                                      transform=test_transform)  # transform argument assumed from context
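To make the snippet's intent concrete, here is a minimal sketch of running batched inference over this test set with a DataLoader. The model variable, batch size, and the dict-style batch layout are assumptions for illustration, not part of the original gist.

import torch
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False, num_workers=4)

model.eval()  # `model` is assumed to be the trained classifier from the rest of the notebook
predictions = []
with torch.no_grad():
    for batch in test_loader:
        images = batch["image"].to(device)  # assumes the Dataset yields dicts with an "image" tensor
        outputs = model(images)
        predictions.append(outputs.softmax(dim=1).cpu())
predictions = torch.cat(predictions)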
abhishekkrthakur / slack_notifier.py
Created December 6, 2019 07:53
Slack notifications from Python
import os
import json
import requests

SLACK_WEBHOOK = os.environ.get("SLACK_WEBHOOK")

def send_message(messages, channel="abhishek", username="beast"):
    """
    Post a list of messages to Slack via an incoming webhook.
    :param messages: list of texts
    """
    payload = {"channel": channel, "username": username, "text": "\n".join(messages)}
    requests.post(SLACK_WEBHOOK, data=json.dumps(payload))
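A quick usage example (the message text is just a placeholder):

send_message(["model training finished", "check the experiment dashboard for metrics"])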
abhishekkrthakur / falcon_pdf_bot.py
Created July 16, 2023 09:03
Reference code for the YouTube tutorial: https://youtu.be/hSQY4N1u3v0
import argparse
from pdfminer.high_level import extract_text
from sentence_transformers import SentenceTransformer, CrossEncoder, util
from text_generation import Client
PREPROMPT = "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n"
PROMPT = """"Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to
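The gist preview cuts off here. Below is a minimal sketch of how these imports could be wired into a PDF question-answering loop: extract text, retrieve candidate chunks with a bi-encoder, re-rank with a cross-encoder, and generate with a text-generation-inference server. The model names, chunking scheme, and the local endpoint URL are assumptions for illustration, not necessarily what the tutorial uses.

# Sketch only: models and endpoint are assumed, not taken from the tutorial
def answer_question(pdf_path, question):
    # 1. Extract and chunk the PDF text
    text = extract_text(pdf_path)
    chunks = [c.strip() for c in text.split("\n\n") if c.strip()]

    # 2. Embed chunks and retrieve candidates with a bi-encoder
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    corpus_emb = embedder.encode(chunks, convert_to_tensor=True)
    query_emb = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, corpus_emb, top_k=10)[0]

    # 3. Re-rank the candidates with a cross-encoder
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    pairs = [[question, chunks[h["corpus_id"]]] for h in hits]
    scores = reranker.predict(pairs)
    ranked = sorted(zip(scores, pairs), key=lambda x: x[0], reverse=True)
    top_chunks = [pair[1] for _, pair in ranked[:3]]

    # 4. Build the prompt and query a local text-generation-inference server
    context = "\n".join(top_chunks)
    prompt = PREPROMPT + PROMPT + f"\n\n{context}\n\nQuestion: {question}\nAnswer:"
    client = Client("http://127.0.0.1:8080")  # assumed local TGI endpoint
    return client.generate(prompt, max_new_tokens=256).generated_text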
abhishekkrthakur / llm_training_sft.py
Created July 16, 2023 09:09
Train LLMs in 50 lines of code. Reference code for the YouTube tutorial: https://www.youtube.com/watch?v=JNMVulH7fCo&ab_channel=AbhishekThakur
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer


def train():
    train_dataset = load_dataset("tatsu-lab/alpaca", split="train")
    tokenizer = AutoTokenizer.from_pretrained("Salesforce/xgen-7b-8k-base", trust_remote_code=True)
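    # --- The gist listing is truncated above this point. What follows is a hedged
    # --- sketch of how train() could continue: load the base model in int8, attach
    # --- LoRA adapters via peft, and fine-tune with trl's SFTTrainer. Hyperparameters
    # --- and the output directory are illustrative assumptions, not the tutorial's values.
    tokenizer.pad_token = tokenizer.eos_token

    model = AutoModelForCausalLM.from_pretrained(
        "Salesforce/xgen-7b-8k-base",
        load_in_8bit=True,
        torch_dtype=torch.float16,
        device_map="auto",
        trust_remote_code=True,
    )
    model = prepare_model_for_int8_training(model)

    peft_config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, peft_config)

    training_args = TrainingArguments(
        output_dir="xgen-7b-alpaca",  # assumed output path
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        logging_steps=10,
    )

    trainer = SFTTrainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        dataset_text_field="text",  # the alpaca dataset provides a ready-made "text" column
        max_seq_length=1024,
        tokenizer=tokenizer,
    )
    trainer.train()


if __name__ == "__main__":
    train()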
import os

import gradio as gr
from text_generation import Client

PROMPT = """<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
"""
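This listing also stops after the system prompt. A minimal sketch of how it could be hooked up to a text-generation-inference server and a gradio UI follows; the endpoint URL, environment variable name, generation parameters, and UI layout are assumptions for illustration.

# Sketch only: assumed TGI endpoint and generation parameters
client = Client(os.environ.get("TGI_URL", "http://127.0.0.1:8080"))

def generate(message):
    # Close the [INST] block with the user's message, Llama-2 chat style
    full_prompt = PROMPT + f"\n{message} [/INST]"
    response = client.generate(full_prompt, max_new_tokens=512, temperature=0.7)
    return response.generated_text

demo = gr.Interface(
    fn=generate,
    inputs=gr.Textbox(label="Your question"),
    outputs=gr.Textbox(label="Model response"),
)
demo.launch()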
from diffusers import DiffusionPipeline, StableDiffusionXLImg2ImgPipeline
import torch

model = "stabilityai/stable-diffusion-xl-base-1.0"
pipe = DiffusionPipeline.from_pretrained(
    model,
    torch_dtype=torch.float16,
)
pipe.to("cuda")
pipe.load_lora_weights("model/", weight_name="pytorch_lora_weights.safetensors")
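A short usage sketch follows, showing how the LoRA-loaded base pipeline and the imported img2img refiner class could be used to produce an image. The prompt, step counts, and refiner checkpoint are illustrative assumptions, not values from the original gist.

# Sketch: generate with the LoRA-loaded base model, then refine (all values illustrative)
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16,
)
refiner.to("cuda")

prompt = "a photo of a sks dog in a bucket"  # example DreamBooth-style prompt
image = pipe(prompt=prompt, num_inference_steps=30).images[0]
image = refiner(prompt=prompt, image=image, num_inference_steps=30).images[0]
image.save("output.png")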