Fashad Ahmed Fashad-Ahmed

🧠
Thinking
View GitHub Profile
%-------------------------
% Resume in Latex
% Author : Jake Gutierrez
% Based off of: https://GitHub.com/sb2nov/resume
% License : MIT
%------------------------
\documentclass[letterpaper,11pt]{article}
\usepackage{latexsym}
# ============================================================================
# Phase 6: Generate Final LaTeX Report
# ============================================================================
def escape_latex(text):
    """Escape special LaTeX characters in text."""
    if not isinstance(text, str):
        text = str(text)
    # Map each special character to its LaTeX-safe form; substituting
    # character by character avoids re-escaping earlier replacements.
    replacements = {
        '\\': '\\textbackslash{}',
        '&': '\\&', '%': '\\%', '$': '\\$', '#': '\\#',
        '_': '\\_', '{': '\\{', '}': '\\}',
        '~': '\\textasciitilde{}', '^': '\\textasciicircum{}',
    }
    return ''.join(replacements.get(ch, ch) for ch in text)
# ============================================================================
# Phase 5.5: Generate Visualization Images for LaTeX Report
# ============================================================================
print("=== Phase 5.5: Generating Visualization Images ===")
import os
# Create images directory if it doesn't exist
os.makedirs('images', exist_ok=True)
print("✓ Created/verified 'images' directory")
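Figures written to `images/` can then be pulled into the generated LaTeX report. A minimal sketch, assuming the `graphicx` package is loaded in the preamble and using a hypothetical file name (`phase5_summary.png` is illustrative, not a file this script produces):

```latex
% Requires \usepackage{graphicx} in the report preamble
\begin{figure}[htbp]
  \centering
  \includegraphics[width=0.8\textwidth]{images/phase5_summary.png}
  \caption{Visualization generated in Phase 5.5 (hypothetical example).}
\end{figure}
```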

PWA Implementation Technical Documentation

Overview

This document describes the technical implementation of converting the MyAbhyasa web application into a Progressive Web App (PWA). The PWA implementation enables offline functionality, app-like installation, and improved performance through service worker caching.

Project: MyAbhyasa - Self-Study Made Easy
Technology Stack: Next.js 15.0.3, React 19, TypeScript
PWA Library: next-pwa v5.6.0
Date: November 2024
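As a sketch of the wiring involved, `next-pwa` v5 is typically enabled by wrapping the Next.js config. The option names below come from the next-pwa README; disabling the service worker in development is a common convention, not necessarily this project's actual configuration:

```javascript
// next.config.js — minimal PWA wiring sketch (not the project's actual config)
const withPWA = require('next-pwa')({
  dest: 'public',                                  // emit the service worker into public/
  disable: process.env.NODE_ENV === 'development', // skip SW caching during dev
});

module.exports = withPWA({
  reactStrictMode: true,
});
```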

export const STUDENT_TEST_TYPES: TestType[] = [
  {
    id: "previous-year",
    title: "Previous Year Papers",
    description: "Solve actual past exam papers for realistic practice",
    icon: "/teachers/test/previous-paper-icon.svg",
    route: "/student/test/pyp",
    dbID: 1,
  },
  // {
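For illustration, an array like this pairs naturally with a small route-lookup helper. A self-contained sketch, assuming the `TestType` fields shown in the literal above (`findTestType` is a hypothetical helper, not part of the codebase):

```typescript
// Hypothetical TestType shape, inferred from the array literal above
interface TestType {
  id: string;
  title: string;
  description: string;
  icon: string;
  route: string;
  dbID: number;
}

const STUDENT_TEST_TYPES: TestType[] = [
  {
    id: "previous-year",
    title: "Previous Year Papers",
    description: "Solve actual past exam papers for realistic practice",
    icon: "/teachers/test/previous-paper-icon.svg",
    route: "/student/test/pyp",
    dbID: 1,
  },
];

// Resolve the test type that owns a given route (hypothetical helper)
function findTestType(route: string): TestType | undefined {
  return STUDENT_TEST_TYPES.find((t) => t.route === route);
}
```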

Quick Start Guide - Local Testing

NEW: One-command setup available! Run ./scripts/setup-local.sh for automated setup.

🚀 Quick Setup (5 minutes)

1. Start Supabase

supabase start

Step-by-Step Setup Guide

This guide will walk you through setting up the local development environment from scratch.

Prerequisites

Before starting, ensure you have:

  1. Docker Desktop installed and running

Tokenization Implementation Guide

Current Status

The codebase uses cl100k_base encoding as an approximation for Llama 4 and Gemini Pro models, which is accurate enough for most use cases (within 10-20% of actual token counts).
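When no encoder is available at all, the implementation below degrades to a characters-per-token heuristic (the `FALLBACK_CHARS_PER_TOKEN = 4` constant in the `TokenCounter` code). A minimal sketch of that fallback path:

```typescript
// ~4 characters per token is a common rough heuristic for English text
const FALLBACK_CHARS_PER_TOKEN = 4;

// Estimate a token count from raw character length (fallback path only)
function estimateTokens(text: string): number {
  return Math.ceil(text.length / FALLBACK_CHARS_PER_TOKEN);
}
```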

Option 4: Full Model-Specific Tokenization

For Llama 4 Models

To implement proper Llama tokenization, you have two options:

/**
 * Example implementation of proper tokenization for Llama and Gemini
 * This file shows how to implement Option 4 with full model-specific tokenization
 *
 * To use this:
 * 1. Install required dependencies: npm install @huggingface/tokenizers
 * 2. Replace the current token-counter.util.ts with this implementation
 * 3. Update all countTokens calls to handle async properly
 */
import { encoding_for_model, get_encoding, type Tiktoken } from 'tiktoken';
import { Model } from '../../debate/enums/model.enum';

export class TokenCounter {
  private static readonly FALLBACK_CHARS_PER_TOKEN = 4;
  private static encodingCache = new Map<string, Tiktoken>();

  private static getEncodingForModel(model: Model): Tiktoken | null {
    const cacheKey = model;
    // Serve a cached encoder if one was already created for this model
    const cached = this.encodingCache.get(cacheKey);
    if (cached) return cached;
    try {
      // cl100k_base serves as an approximation for non-OpenAI models
      const encoding = get_encoding('cl100k_base');
      this.encodingCache.set(cacheKey, encoding);
      return encoding;
    } catch {
      return null; // caller falls back to character-count estimation
    }
  }
}