Lovable:
Encourage External Tool Usage
Nudge GPT to use external tools for specific tasks.
Particularly useful for frontend tasks (e.g., progress bars, UI components).
Explicitly Include "Connect to Database"
Mention "Connect to database" in either the user prompt or system message to ensure proper handling of database-related tasks.
Fix Row-Level Security (RLS) Issues
Main Components of LLM training:
architecture, training algo/loss, data, evaluation, systems
AI systems:
Majority completion strategy: for a hard reasoning task, the model can follow different reasoning paths, and the final answer is the majority output across those paths.
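A minimal sketch of the majority-vote idea, assuming a caller-supplied sampling function that runs one reasoning path and returns a final answer (the function and question here are illustrative):

```python
import random
from collections import Counter
from typing import Callable

def majority_vote(question: str, sample_answer: Callable[[str], str], n_paths: int = 8) -> str:
    # Run several independent reasoning paths; keep only each path's final answer.
    answers = [sample_answer(question) for _ in range(n_paths)]
    # The answer most paths agree on is returned.
    return Counter(answers).most_common(1)[0][0]

# Illustrative stand-in for an LLM sampled at non-zero temperature.
def noisy_llm(question: str) -> str:
    return random.choice(["42", "42", "42", "41"])

print(majority_vote("What is 6 * 7?", noisy_llm))
```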
DSPy is a declarative programming library designed for building and optimizing AI-powered applications, especially those leveraging large language models (LLMs). It provides a structured and modular approach to defining tasks, training models, and integrating AI capabilities.
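A minimal DSPy sketch, assuming the dspy.LM / dspy.ChainOfThought interface and an OpenAI-hosted model; the model name and signature are illustrative:

```python
import dspy

# Point DSPy at a language model (model name is illustrative).
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# Declare the task as a signature and wrap it in a module that also
# produces intermediate reasoning before the final answer field.
qa = dspy.ChainOfThought("question -> answer")

result = qa(question="What is the capital of France?")
print(result.answer)
```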
AI Agent evaluation - Galileo
AI technical cofounder - Woz (YC W25)
LLM model and LLM system evaluation - Arize AI
Stateful AI Agents - Letta
LLM Fine-tuning - unsloth https://unsloth.ai/blog/r1-reasoning
Prompt Engineering: https://www.kaggle.com/whitepaper-prompt-engineering?utm_source=alphasignal
Firebase Studio, a web-based IDE to build and deploy full-stack AI applications
AI Agents Unleashed, a playbook for AI agents in production
NVIDIA provides a cutting-edge, modular workflow for synthetic data generation (SDG), leveraging platforms and models like Omniverse Replicator, Isaac Sim, NeMo, and advanced LLMs (such as Nemotron-4 and Llama 3.1).
Tools (model-controlled): functions invoked by the model, e.g. retrieve / search, send a message, update DB records.
Resources (application-controlled): data exposed to the application.
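A minimal sketch of the tool side of this split: the application registers functions with descriptions and parameter schemas, the model picks a tool and arguments, and the application executes the call. Names and schemas here are illustrative, not tied to any specific framework:

```python
# Illustrative tool registry: the model sees the descriptions and parameter
# schemas; the application owns and executes the actual functions.
def search_documents(query: str) -> list[str]:
    return [f"result for {query!r}"]   # stand-in for a real retrieval call

def send_message(recipient: str, body: str) -> str:
    return f"sent to {recipient}"      # stand-in for a real messaging call

TOOLS = {
    "search_documents": {
        "description": "Retrieve / search documents relevant to a query.",
        "parameters": {"query": "string"},
        "fn": search_documents,
    },
    "send_message": {
        "description": "Send a message to a recipient.",
        "parameters": {"recipient": "string", "body": "string"},
        "fn": send_message,
    },
}

def dispatch(tool_call: dict):
    """Run a tool call of the form {'name': ..., 'arguments': {...}} chosen by the model."""
    tool = TOOLS[tool_call["name"]]
    return tool["fn"](**tool_call["arguments"])

# Example: the model decided to search; the application executes it.
print(dispatch({"name": "search_documents", "arguments": {"query": "quarterly report"}}))
```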
Time Series Forecasting Techniques
Smoothing-Based Techniques:
Simple Moving Average
Simple Exponential Smoothing
Holt’s Linear Trend
Holt-Winters’ Exponential Smoothing
ARIMA-Based Techniques:
AR (Autoregressive)
MA (Moving Average)
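A minimal sketch of two of these techniques with statsmodels (assuming it is installed); the synthetic series, smoothing level, and ARIMA order are illustrative:

```python
import numpy as np
from statsmodels.tsa.holtwinters import SimpleExpSmoothing
from statsmodels.tsa.arima.model import ARIMA

# Illustrative series: a noisy upward drift.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.5, 1.0, size=100))

# Simple exponential smoothing: level-only forecast with a fixed alpha.
ses_fit = SimpleExpSmoothing(y).fit(smoothing_level=0.3, optimized=False)
print(ses_fit.forecast(5))

# ARIMA(p, d, q): one AR term, first differencing, one MA term.
arima_fit = ARIMA(y, order=(1, 1, 1)).fit()
print(arima_fit.forecast(5))
```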
Table Stakes
Better Parsers
Chunk Sizes
Hybrid Search
Metadata Filters
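A minimal hybrid-search sketch using reciprocal rank fusion to merge a keyword ranking with a vector ranking; the document ids and the constant k are illustrative:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked lists of doc ids; each list contributes 1 / (k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative results from a keyword (BM25-style) search and a vector search.
keyword_hits = ["doc3", "doc1", "doc7"]
vector_hits = ["doc1", "doc4", "doc3"]
print(reciprocal_rank_fusion([keyword_hits, vector_hits]))  # docs found by both rank highest
```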
Advanced Retrieval
Reranking
Recursive Retrieval
Embedded Tables
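A minimal reranking sketch with a sentence-transformers cross-encoder (assuming the package and the named model are available); the query and candidate passages are illustrative:

```python
from sentence_transformers import CrossEncoder

# A cross-encoder scores each (query, passage) pair jointly, which is typically
# more accurate than the first-stage retriever's scores.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "how do I rotate an API key?"
candidates = [
    "Rotating API keys is done from the security settings page.",
    "Our API supports JSON and XML responses.",
    "Keys can be revoked and reissued at any time.",
]

scores = reranker.predict([(query, passage) for passage in candidates])
reranked = [passage for _, passage in sorted(zip(scores, candidates), reverse=True)]
print(reranked[0])
```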
Suppose we make requests in Comet regarding our banking mails (after connecting our mail to Comet). All AI processing happens locally (using heuristics like sender and subject keywords). If we want advanced features that are not possible locally, only the minimal data needed is sent to the cloud and processed there; this "minimal" data is determined algorithmically by AI plus heuristics. We can control settings to manage the data that is transmitted, and we can also view the data collected by Comet.
Pre-tokenization masking of restricted content: named entity recognition (NER) can be used to mask sensitive information before text is tokenized.
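A minimal sketch of NER-based masking with spaCy (assuming spaCy and the en_core_web_sm model are installed); the set of labels treated as restricted is illustrative:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline; swap for a larger/custom model as needed

# Entity labels treated as restricted (illustrative choice).
RESTRICTED = {"PERSON", "ORG", "GPE", "MONEY", "DATE"}

def mask_entities(text: str) -> str:
    doc = nlp(text)
    masked = text
    # Replace spans from the end so earlier character offsets stay valid.
    for ent in reversed(doc.ents):
        if ent.label_ in RESTRICTED:
            masked = masked[:ent.start_char] + f"[{ent.label_}]" + masked[ent.end_char:]
    return masked

print(mask_entities("John Smith wired $2,400 to Acme Bank on March 3rd."))
```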
Think in first principles. Aim for a 10X better solution. Combine solutions from different industries.
MoSCoW, RICE, JTBD, Kano.
Remove AI from the business and check if it still works.
The only way to win in the hypercompetitive AI space is to be overly obsessed with the problem we are trying to solve. Even if the odds are extremely low, I'll still work on it.
Hiring generalists might be better because they understand the context of different tasks.
Masterpieces come when we try a lot.
Creativity is combinatorial, combining different existing solutions in novel ways.
Follow this notebook for insights on key features of Weaviate, using Weaviate with k8s, and more:
https://notebooklm.google.com/notebook/bec57f15-3091-43dd-b93a-70a09de31274
Follow this notebook to gain insights on different techniques for making small language models, along with some results of my experiments:
https://notebooklm.google.com/notebook/eee2c93a-12a8-4dba-9311-a76b464c58ac