Saqib Ali Khan Free-Radical

@ruvnet
ruvnet / *DeepSeek-uncensored.md
Last active January 19, 2026 10:03
Deploying and Fine-Tuning an Uncensored DeepSeek R1 Distill Model on Google Cloud

DeepSeek R1 Distill: Complete Tutorial for Deployment & Fine-Tuning

This guide shows how to deploy an uncensored DeepSeek R1 Distill model to Google Cloud Run with GPU support and how to perform a basic, functional fine-tuning process. The tutorial is split into:

  1. Environment Setup
  2. FastAPI Inference Server
  3. Docker Configuration
  4. Google Cloud Run Deployment
  5. Fine-Tuning Pipeline (Cold Start, Reasoning RL, Data Collection, Final RL Phase)
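
The gist's full code isn't reproduced in this preview. As a rough illustration of part 2, here is a minimal sketch of what such a FastAPI inference server might look like; the model id, route name, and generation settings are assumptions, not taken from the gist.

# Minimal sketch of a FastAPI inference endpoint for a DeepSeek R1 Distill checkpoint.
# MODEL_ID and the /generate route are placeholders, not the gist's actual code.
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumption: any R1 distill checkpoint

app = FastAPI()
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 256

@app.post("/generate")
def generate(req: GenerateRequest):
    inputs = tokenizer(req.prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=req.max_new_tokens)
    return {"text": tokenizer.decode(output[0], skip_special_tokens=True)}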
@mberman84
mberman84 / gist:ea207e7d9e5f8c5f6a3252883ef16df3
Created November 29, 2023 15:31
AutoGen + Ollama Instructions
1. # create new .py file with code found below
2. # install ollama
3. # install model you want "ollama run mistral"
4. conda create -n autogen python=3.11
5. conda activate autogen
6. which python
7. python -m pip install pyautogen
8. ollama run mistral
9. ollama run codellama
10. # open new terminal
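
The ".py file with code found below" isn't included in this preview. A minimal sketch of what such a script might look like follows; it assumes pyautogen's AssistantAgent/UserProxyAgent API and a local OpenAI-compatible endpoint in front of Ollama (for example a litellm proxy), so the base_url, api_key placeholder, and task message are all assumptions.

# Sketch only: pyautogen agents pointed at a locally served Ollama model.
# base_url assumes an OpenAI-compatible endpoint; older pyautogen versions
# use "api_base" instead of "base_url" in the config dict.
import autogen

config_list = [
    {
        "model": "mistral",                    # model pulled via `ollama run mistral`
        "base_url": "http://localhost:8000",   # assumption: local proxy endpoint
        "api_key": "NULL",                     # placeholder; no real key is needed locally
    }
]

assistant = autogen.AssistantAgent(
    "assistant",
    llm_config={"config_list": config_list},
)
user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)
user_proxy.initiate_chat(
    assistant,
    message="Write a Python script that prints the first 10 Fibonacci numbers.",
)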
@Free-Radical
Free-Radical / GPT4all-langchain-demo.ipynb
Created April 19, 2023 23:45 — forked from psychemedia/GPT4all-langchain-demo.ipynb
Example of running GPT4all local LLM via langchain in a Jupyter notebook (Python)
@peterw
peterw / embed.py
Created April 17, 2023 16:30
embedding the pdf
import os

import openai
import streamlit as st
from dotenv import load_dotenv
from streamlit_chat import message

from langchain.chains.question_answering import load_qa_chain
from langchain.document_loaders import UnstructuredMarkdownLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
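
Only the import block survives in this preview. As a hedged sketch of how these pieces are typically wired together (the file name, question, and chain settings below are placeholders, not the gist's actual code):

# Sketch only: load a document, embed it into Chroma, and answer a question.
# "document.md" and the query string are placeholders.
from langchain.llms import OpenAI

load_dotenv()                                   # expects OPENAI_API_KEY in a .env file
openai.api_key = os.getenv("OPENAI_API_KEY")

docs = UnstructuredMarkdownLoader("document.md").load()
docsearch = Chroma.from_documents(docs, OpenAIEmbeddings())

chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What is this document about?"
relevant = docsearch.similarity_search(query)
st.write(chain.run(input_documents=relevant, question=query))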
@psychemedia
psychemedia / GPT4all-langchain-demo.ipynb
Last active December 21, 2023 17:30
Example of running GPT4all local LLM via langchain in a Jupyter notebook (Python)
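
The notebook itself can't be rendered in this preview. As a rough sketch of the general pattern it describes (using langchain's GPT4All wrapper, which may differ from the notebook's exact approach; the model path is a placeholder):

# Sketch only: a local GPT4All model driven through langchain's LLMChain.
# The model file path is an assumption, not taken from the notebook.
from langchain import LLMChain, PromptTemplate
from langchain.llms import GPT4All

template = PromptTemplate(
    input_variables=["question"],
    template="Question: {question}\nAnswer:",
)
llm = GPT4All(model="./models/ggml-gpt4all-j-v1.3-groovy.bin")  # placeholder path
chain = LLMChain(prompt=template, llm=llm)
print(chain.run("What is a large language model?"))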
@camullen
camullen / installation.md
Created December 10, 2022 23:52
KDE Install on WSL2
@insightsbees
insightsbees / plotly_network.py
Created September 1, 2022 20:33
Use Plotly to visualize the network graph
#Use plotly to visualize the network graph created using NetworkX
#Add the edges to a plotly scatter plot and specify mode='lines'
edge_trace = go.Scatter(
    x=[],
    y=[],
    line=dict(width=1, color='#888'),
    hoverinfo='none',
    mode='lines')
for edge in G.edges():
    # Completing the truncated preview: append each edge's endpoint coordinates,
    # assuming node positions were stored under a 'pos' attribute (e.g. via nx.spring_layout)
    x0, y0 = G.nodes[edge[0]]['pos']
    x1, y1 = G.nodes[edge[1]]['pos']
    edge_trace['x'] += (x0, x1, None)
    edge_trace['y'] += (y0, y1, None)
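
The preview stops at the edge trace; a complete figure also needs a node trace and a call to render it. A hedged sketch of those usual follow-up steps (marker styling is an assumption), again assuming positions stored under 'pos':

# Sketch of the usual companion steps: node trace plus figure assembly.
node_trace = go.Scatter(
    x=[G.nodes[n]['pos'][0] for n in G.nodes()],
    y=[G.nodes[n]['pos'][1] for n in G.nodes()],
    mode='markers',
    text=list(G.nodes()),
    hoverinfo='text',
    marker=dict(size=10, color='#1f77b4'))
fig = go.Figure(data=[edge_trace, node_trace],
                layout=go.Layout(showlegend=False, hovermode='closest'))
st.plotly_chart(fig, use_container_width=True)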
@insightsbees
insightsbees / networkx.py
Created September 1, 2022 20:21
Create network graph using networkx
#Create the network graph using networkx
if uploaded_file is not None:
    df = pd.read_csv(uploaded_file)
    A = list(df["Source"].unique())
    B = list(df["Target"].unique())
    node_list = set(A + B)
    G = nx.Graph()  # Use the Graph API to create an empty network graph object
    # Add nodes and edges to the graph object (completing the truncated preview)
    for i in node_list:
        G.add_node(i)
    for _, row in df.iterrows():
        G.add_edge(row["Source"], row["Target"])
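
To feed the Plotly snippet above, node coordinates are normally computed with a layout and stored on each node; a short sketch of that step (the layout parameters are assumptions):

# Sketch: compute a spring layout and store coordinates under 'pos'
# so the Plotly traces can read G.nodes[n]['pos'].
pos = nx.spring_layout(G, k=0.5, iterations=50)
for node, coords in pos.items():
    G.nodes[node]['pos'] = coords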
@insightsbees
insightsbees / sidebar.py
Last active January 27, 2023 05:04
Import libraries and add widgets to sidebar
import streamlit as st
import pandas as pd
import numpy as np
import networkx as nx
import plotly.graph_objs as go
from PIL import Image
#Add a logo (optional) in the sidebar
logo = Image.open(r'C:\Users\13525\Desktop\Insights_Bees_logo.png')
st.sidebar.image(logo, width=120)
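
The description mentions sidebar widgets, but only the logo is shown in this preview. A sketch of the uploader widget that would produce the uploaded_file consumed in the networkx snippet above (the label text is an assumption):

# Sketch: a sidebar file uploader feeding the `uploaded_file` used earlier.
st.sidebar.title("Network Graph Visualization")
uploaded_file = st.sidebar.file_uploader("Upload an edge-list CSV with Source and Target columns", type="csv")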