Paranjay Singh parnexcodes

- blur
blur: true
blur amount: 1.3
blur output fps: 60
blur weighting: gaussian_sym
- interpolation
interpolate: true
interpolated fps: 720
@parnexcodes
parnexcodes / create-commit.md
Last active October 9, 2024 11:56
To be used with Cursor for generating commit messages

IDENTITY and PURPOSE

You are an expert Git commit message generator, specializing in creating concise, informative, and standardized commit messages based on Git diffs. Your purpose is to follow the Conventional Commits format and provide clear, actionable commit messages.

GUIDELINES

  • Adhere strictly to the Conventional Commits format.
  • Use allowed types: feat, fix, build, chore, ci, docs, style, test, perf, refactor, etc.
  • Write commit messages entirely in lowercase.
  • Keep the commit message title under 60 characters.
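A minimal example of a message that satisfies these rules, describing a hypothetical change (not one taken from the gist):

feat: add retry logic for failed upload requests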
import os
import requests

def get_kraken_server():
    # Ask KrakenFiles for an available upload server
    url = "https://krakenfiles.com/api/server/available"
    response = requests.get(url)
    if response.status_code == 200:
        data = response.json()["data"]
        url = data["url"]
        server_access_token = data["serverAccessToken"]
        return url, server_access_token
import os
import requests

def get_server():
    url = "https://api.gofile.io/servers"
    response = requests.get(url)
    if response.status_code == 200:
        servers = response.json()["data"]["servers"]
        if servers:
            # Select the first server; adjust this logic if needed
            return servers[0]["name"]
@parnexcodes
parnexcodes / index.js
Created June 3, 2022 13:51
imdb_top_250 scrape
const axios = require('axios')
const cheerio = require('cheerio')
const express = require('express')

const app = express()
const port = 3000

app.get('/', async (req, res) => {
  const getMovies = async () => {
    const siteUrl = 'https://www.imdb.com/chart/top/'
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

void linkedListTraversal(struct Node *ptr)
{
    // Walk the list, printing each node's data
    while (ptr != NULL) {
        printf("%d\n", ptr->data);
        ptr = ptr->next;
    }
}
import os
import glob

# Collect every .sup subtitle file in the current directory
list1 = []
for file in glob.glob("*.sup"):
    print(file)
    list1.append(file)

list2 = []
for j in list1:
@echo off
mkdir SCREEN
set /p name="Enter name : "
ffmpeg -ss 00:05:00 -i "%name%.mkv" -c:v png -frames:v 1 "1.%name%.png"
ffmpeg -ss 00:10:00 -i "%name%.mkv" -c:v png -frames:v 1 "2.%name%.png"
ffmpeg -ss 00:15:00 -i "%name%.mkv" -c:v png -frames:v 1 "3.%name%.png"
ffmpeg -ss 00:20:00 -i "%name%.mkv" -c:v png -frames:v 1 "4.%name%.png"
ffmpeg -ss 00:25:00 -i "%name%.mkv" -c:v png -frames:v 1 "5.%name%.png"
ffmpeg -ss 00:30:00 -i "%name%.mkv" -c:v png -frames:v 1 "6.%name%.png"
ffmpeg -ss 00:35:00 -i "%name%.mkv" -c:v png -frames:v 1 "7.%name%.png"
@parnexcodes
parnexcodes / imdb_scrap.py
Created October 31, 2021 13:37
imdb.com/top scraping
from typing import final
import requests
import pprint
from bs4 import BeautifulSoup

def get_recent():
    # Fetch the IMDb Top 250 chart page and parse it
    URL = "https://www.imdb.com/chart/top/"
    r = requests.get(URL)
    soup = BeautifulSoup(r.content, 'lxml')
@parnexcodes
parnexcodes / scrap.py
Created October 31, 2021 08:24
asianembed recent episodes scraping
import requests
import pprint
from bs4 import BeautifulSoup

def get_recent(number):
    # Fetch the given page of recent episodes and locate the listing block
    URL = f"https://asianembed.com/?page={number}"
    r = requests.get(URL)
    soup = BeautifulSoup(r.content, 'lxml')
    items = soup.find_all('ul', {'class': 'listing items'})