
Dev Aggarwal devxpy

@devxpy
devxpy / gist6_prm.md
Last active May 8, 2026 12:17
PRM (Process Reward Model) training data — per-step {correct, neutral, wrong} labels for agricultural advisory reasoning traces. Adapted from OpenAI's PRM800K (Lightman et al. 2023).

PRM (Process Reward Model) training data — agricultural advisory

The training row is one (problem, reasoning_trace) pair where each step of the trace carries a label: positive, negative, or neutral. This data is used to train the PRM — not the policy directly. At RL time, the PRM scores each step the policy emits, giving a dense reward signal instead of one scalar at the end of the response.

Why this matters for an ag advisor: most agricultural advice has no clean outcome checker (no unit test, no assert yield == 4.2). But the reasoning steps inside a diagnosis are individually checkable by an agronomist — "is this the right differential?", "is this dose in the registered range?", "does this plan ignore the soil report?". Per-step labels are easier to collect than full gold demonstrations and give a much denser RL signal than outcome-only rewards.
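A minimal sketch of what one such training row might look like, and how its per-step labels become a dense reward sequence. The field names and the example trace are illustrative, not the PRM800K schema:

```python
# One hypothetical training row: a farmer's problem plus a reasoning
# trace whose steps each carry a human label in {-1, 0, +1}
# (wrong, neutral, correct). Schema and content are illustrative only.
row = {
    "problem": "Yellowing between leaf veins on 3-week-old maize; soil pH 7.8.",
    "steps": [
        {"text": "Interveinal chlorosis on young leaves suggests iron or zinc deficiency.",
         "label": 1},
        {"text": "High soil pH (7.8) reduces micronutrient availability, consistent with this.",
         "label": 1},
        {"text": "Recommend doubling nitrogen application immediately.",
         "label": -1},  # ignores the actual differential; an agronomist marks it wrong
    ],
}

def step_rewards(row: dict) -> list[int]:
    """Map per-step labels to the dense reward sequence the PRM learns to
    reproduce -- one reward per reasoning step, not one per response."""
    return [step["label"] for step in row["steps"]]

print(step_rewards(row))  # → [1, 1, -1]
```

At RL time the trained PRM plays the role of this label list: it scores each step the policy emits, so a wrong recommendation is penalized at the step where it appears rather than diluted into a single end-of-response scalar.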

Method follows OpenAI's Let's Verify Step by Step (Lightman et al., 2023) (PRM800K), with one delib

import json
from threading import Thread
from time import sleep
import gooey_gui as gui
import requests
from decouple import config
from fastapi import FastAPI
from starlette.requests import Request
app = FastAPI()
"""
README: Android UI Automator Script for Hinge App
This script automates the process of liking profiles in the Hinge Android app using the uiautomator2 library. It connects to an Android device, scrolls through the app, finds the like button, and presses it, then attempts to send the like. The process repeats in a loop with random delays and scrolls to mimic human behavior.
**Requirements:**
- Python 3.x
- [uiautomator2](https://github.com/openatx/uiautomator2) (`pip install uiautomator2`)
- An Android device with USB debugging enabled
- Hinge app installed on the device
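The loop described above might be sketched like this. The Hinge view selectors below are guesses, not verified resource IDs; inspect the real view hierarchy (e.g. with weditor) before relying on them:

```python
import random
import time

def human_delay(lo: float = 1.5, hi: float = 4.0) -> float:
    """Random pause length so taps don't fire at machine-regular intervals."""
    return random.uniform(lo, hi)

def like_loop(d):
    """d is a connected uiautomator2 device, e.g. d = uiautomator2.connect().
    Selector values below are placeholders, not Hinge's actual view IDs."""
    while True:
        d.swipe_ext("up")                      # scroll the profile card
        like = d(descriptionContains="Like")   # placeholder selector
        if like.exists:
            like.click()
            time.sleep(human_delay())
            send = d(textContains="Send")      # placeholder selector
            if send.exists:
                send.click()
        time.sleep(human_delay())

# Usage (requires a device with USB debugging enabled):
#   import uiautomator2 as u2
#   like_loop(u2.connect())
```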
@devxpy
devxpy / main.dart
Created December 24, 2024 04:19
Gooey.AI Lipsync Flutter Wav2Lip
import 'package:http/http.dart' as http;
import 'dart:convert';
main() async {
var request = http.MultipartRequest(
'POST',
Uri.parse('https://api.gooey.ai/v3/Lipsync/async/form/'),
);
request.headers['Authorization'] = 'bearer sk-hhm...POJ';
request.files.add(http.MultipartFile.fromBytes('input_face', []));
from babel.numbers import format_currency
# from gooeysite import wsgi
#
# assert wsgi
# import streamlit as gui
import plotly.graph_objects as go
import pandas as pd
@devxpy
devxpy / Dockerfile
Last active September 7, 2024 17:20
FROM python
RUN pip install --no-cache-dir furl requests loguru
RUN curl -so main.py 'https://gist.githubusercontent.com/devxpy/2abbe49fa2acb6958eb4d828f50c4467/raw/db60e263a7e345bae31869f5e037f1f87bc54b6b/main.py'
CMD ["python", "main.py"]
let response = await fetch("https://api.gooey.ai/v3/integrations/stream/", {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
// your integration's ID as shown in the Gooey.AI Integrations tab
"integration_id": "DEy",
// the input text for the bot
"input_prompt": "Hello, world!",
|MODEL |TEM4A0C37S31SB |TEM4A0C42S41SB |TEM4A0C48S41SB |TEM4A0C60S51SB|
|--|--|--|--|--|
|RATED VOLTS/PH/HZ |208-230/1/60 |208-230/1/60 |208-230/1/60 |208-230/1/60|
|RATINGS(a) |See O.D. Specifications |See O.D. Specifications |See O.D. Specifications |See O.D. Specifications|
|INDOOR COIL - Type |Plate Fin |Plate Fin |Plate Fin |Plate Fin|
|Rows - F.P.I. |3 - 14 |3 - 14 |3 - 14 |4 - 14|
|Face Area (sq. ft.) |5.50 |5.50 |5.50 |5.91|
|Tube Size (in.) |3/8 |3/8 |3/8 |3/8|
|Refrigerant Control |TXV |TXV |TXV |TXV|
|Drain Conn. Size (in.)(b) |3/4 NPT |3/4 NPT |3/4 NPT |3/4 NPT|
def reposition_object_img_bytes(
    *,
    img_bytes: bytes,
    mask_bytes: bytes,
    out_size: tuple[int, int] = (512, 512),
    out_obj_scale: float = 0.2,
    out_pos_x: float = 4 / 9,
    out_pos_y: float = 3 / 9,
) -> tuple[bytes, bytes]:
image_cv2 = bytes_to_cv2_img(img_bytes)
@devxpy
devxpy / out.txt
Last active August 22, 2023 16:54
from llama_index import (
VectorStoreIndex,
SimpleWebPageReader,
set_global_service_context,
ServiceContext,
)
from llama_index.callbacks import CallbackManager, TokenCountingHandler
from llama_index.chat_engine.types import ChatMode