Adeel Ahmad adeelahmad

which pyminify || pip install python-minifier; find . -path '*/.git*' -prune -o -path '*/venv*' -prune -o -path '*/.env*' -prune -o -path '*/__pycache__*' -prune -o -path '*/.*' -prune -o -path '*/*env*' -prune -o -type f -regex '.*\.py$' -exec sh -c 'FF="$1"; AST_OUTPUT=$(echo "" 2>/dev/null); if [ "$(echo "$AST_OUTPUT" | wc -l)" -gt 1 ]; then COMMENTED_AST=$(echo "$AST_OUTPUT" | sed "s/^/# /"); AST_BLOCK="\n#-----------#\n# AST Output (Multi-line):\n$COMMENTED_AST\n#-----------#"; else AST_BLOCK=""; fi; printf "#-----------#\n# <FILE name=\"$FF\">\n# Start of Minified File (pyminify with flags) at Path: $FF\n"; pyminify "$FF" --remove-asserts --remove-debug --remove-literal-statements; printf "%s\n# End of File at Path: $FF\n# </FILE name=\"$FF\">\n#-----------#\n" "$AST_BLOCK"' sh {} \; | grep -v '^\s*$'
Instead of using a separate Python file for the AST step, execute the code directly inside the one-liner, installing the dependencies first only if they are not already available:
import ast
import click
from rich.console import Console
from rich.tr
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
ultra_qna_server_dataset_concurrent_rl.py
UltraQnA+++++ (Concurrent Dataset Edition, RL-ready) — Upgraded
This script forms the first part of the LLM training pipeline. It takes raw source
material (code, YAML, markdown) and synthetically generates a high-quality,
grounded dataset. This dataset is specifically formatted with `prompt`, `completion`,
import autogen
from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager
config_list = [
{
"model": "gpt-4",
# "api_type": "open_ai",
[TASK(s)]
1.
[INPUT]
Design a chatbot interface where, instead of responding with plain text, the AI provides action buttons based on the user’s message. The chatbot should analyze the user input and dynamically generate relevant options as clickable buttons.
Key Features:
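A minimal sketch of the idea described above, in plain Python. All names, keyword rules, and button labels here are illustrative assumptions, not part of the original gist: the function inspects the user's message and returns a list of button descriptors instead of plain text.

```python
# Hypothetical sketch: derive action buttons from a user message.
# The keyword-to-button rules are assumptions for illustration only.
def suggest_buttons(message: str) -> list:
    rules = {
        "refund": ["Request refund", "Check refund status"],
        "order": ["Track order", "Cancel order"],
        "help": ["Contact support", "Browse FAQ"],
    }
    text = message.lower()
    buttons = []
    for keyword, labels in rules.items():
        if keyword in text:
            buttons.extend({"type": "button", "label": l} for l in labels)
    # Fall back to a generic option when nothing matched.
    return buttons or [{"type": "button", "label": "Talk to an agent"}]

print(suggest_buttons("Where is my order?"))
```

In a real chatbot the keyword table would be replaced by the model's own analysis of the message; the point is only that the response payload is structured (buttons) rather than free text.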
#!/bin/bash
# Ultimate ZFS Proxmox Tuner Script
# Function to detect existing ZFS pools
detect_zfs_pools() {
zpool list -H -o name
}
# Function to detect SCSI disks dynamically
@adeelahmad
adeelahmad / 01-intro.md
Created April 14, 2024 16:02 — forked from AaradhyaSaxena/01-intro.md
qdrant-vector-db

# Qdrant

Vector databases are a relatively new way of interacting with abstract data representations derived from opaque machine learning models such as deep learning architectures. These representations are often called vectors or embeddings, and they are a compressed version of the data used to train a machine learning model to accomplish a task like sentiment analysis, speech recognition, object detection, and many others.

These new databases shine in many applications like semantic search and recommendation systems, and here, we’ll learn about one of the most popular and fastest growing vector databases in the market, Qdrant.
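The core operation behind these databases can be illustrated with a tiny brute-force sketch in plain Python (this is not Qdrant's API, just the underlying idea): stored embeddings are ranked against a query vector by cosine similarity, which an engine like Qdrant does at scale with approximate indexes.

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def search(query, points, top_k=1):
    # points: list of (id, vector) pairs; returns ids ranked by similarity.
    ranked = sorted(points, key=lambda p: cosine(query, p[1]), reverse=True)
    return [pid for pid, _ in ranked[:top_k]]

points = [("doc-a", [1.0, 0.0]), ("doc-b", [0.0, 1.0]), ("doc-c", [0.7, 0.7])]
print(search([0.9, 0.1], points))  # "doc-a" is the nearest neighbor
```

A real vector database adds persistence, filtering on payloads, and approximate nearest-neighbor indexes so this lookup stays fast at millions of points.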


Concepts

  1. Collection
@adeelahmad
adeelahmad / README.md
Last active April 8, 2024 11:54
Description of the process for generating an instruct fine-tuning dataset.

An instruct fine-tune that can enhance LLM capabilities and safety.

flowchart
	StartScriptExecution["Start Script Execution"]
	ReadTextFile["Read Text File"]
	ChunkTextIntoSegments["Chunk Text Into Segments"]
	ForEachTextChunk["For Each Text Chunk"]
	PerformNamedEntityRecognition["Perform Named Entity Recognition (NER)"]
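The "Chunk Text Into Segments" step in the flowchart above can be sketched as follows; the chunk size and overlap values are assumptions, not taken from the gist. The overlap reduces the chance that an entity needed by the NER step is split across two chunks.

```python
# Sketch of a fixed-size text chunker with overlap (sizes are assumptions).
def chunk_text(text: str, max_chars: int = 200, overlap: int = 20) -> list:
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Step back by `overlap` so adjacent chunks share a margin of text.
        start = end - overlap
    return chunks
```

Each resulting chunk would then be passed through NER and the downstream steps of the flowchart independently.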
from autogen import AssistantAgent, GroupChatManager, UserProxyAgent
from autogen.agentchat import GroupChat
config_list = [
{
"model": "ollama/mistralorca",
"api_base": "http://localhost:8000", # litellm compatible endpoint
"api_type": "open_ai",
"api_key": "NULL", # just a placeholder
}
]
@adeelahmad
adeelahmad / self_execute_function_agent.py
Created April 7, 2024 00:10 — forked from bonadio/self_execute_function_agent.py
Autogen Agent that can auto execute a function_call
# %%
import os
import openai
# import autogen
from autogen import Agent, ConversableAgent, oai, UserProxyAgent, AssistantAgent
import types
from dotenv import load_dotenv, find_dotenv
from typing import Any, Callable, Dict, List, Optional, Tuple, Type, Union
_ = load_dotenv(find_dotenv()) # read local .env file
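The core of an agent that auto-executes a function_call can be sketched independently of autogen: map the function name the model requested to a registered Python callable and invoke it with the JSON-decoded arguments. The registry contents and function names below are hypothetical, not from the gist.

```python
import json

# Hypothetical registry of callables the agent is allowed to execute.
REGISTRY = {
    "add": lambda a, b: a + b,
}

def execute_function_call(call: dict):
    # `call` mimics an OpenAI-style function_call:
    # {"name": "<fn name>", "arguments": "<json-encoded kwargs>"}
    fn = REGISTRY.get(call["name"])
    if fn is None:
        raise ValueError(f"unknown function: {call['name']}")
    kwargs = json.loads(call["arguments"])
    return fn(**kwargs)

print(execute_function_call({"name": "add", "arguments": '{"a": 2, "b": 3}'}))  # 5
```

Restricting execution to an explicit registry (rather than eval-ing arbitrary names) is the safety property that makes auto-execution tolerable.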
Step 1: Craft a Summary with Expert Insights
Your role is to take turns acting as an expert better than the last turn, connecting your content to the previous dialog and maintaining a flow of conversation so easy that the most complex ideas become simple, with expert analogies and insights for the given content topic/domain.
Distill the main ideas and critical information from the content below into a concise summary. Emphasize the primary themes and essential takeaways.
You will add effects with tags where needed for a perfect audio experience:
[[slnc 5000]] : silence for 5 s (value is in milliseconds).
[[volm 0.9]] : changes the volume to the indicated level.
[[volm +0.1]] : increases the volume by the indicated level.
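A small helper (hypothetical, not part of the gist) that assembles text with the embedded speech tags listed above, in the form consumed by the macOS `say` command:

```python
def with_pause(text: str, ms: int) -> str:
    # Prefix text with an [[slnc N]] tag (N in milliseconds).
    return f"[[slnc {ms}]] {text}"

def with_volume(text: str, level: float) -> str:
    # Prefix text with a [[volm L]] tag setting the absolute volume level.
    return f"[[volm {level}]] {text}"

line = with_pause(with_volume("Welcome to the show.", 0.9), 5000)
print(line)  # [[slnc 5000]] [[volm 0.9]] Welcome to the show.
```

On macOS the result could then be spoken with something like `say "<line>"`; the tags are interpreted by the speech synthesizer, not printed.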