
@canyon289
Last active March 3, 2025 01:49
Building AI Webapp

🚀 Getting Started

To run this workshop locally, you'll need to set up Ollama and a Python environment using UV.

1. Setting Up Ollama (Most Critical Step)

We’ll be running Gemma 2 2B locally with Ollama, so set this up first. It involves a large download (~10GB total) and has some hardware requirements.

Install Ollama

Download and install Ollama from https://ollama.com/.

Pull the required models

Once installed, run the following commands in your terminal to download the models:

ollama pull gemma2:2b
ollama pull gemma2:2b-instruct-fp16
ollama pull gemma2:2b-instruct-q2_K

⚠️ Note: Some models may not run depending on your hardware. AI models, while getting easier to use, still come with real-world constraints—this is part of the learning process!

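Once the models are pulled and the Ollama server is running, you can sanity-check a model from Python. This is a minimal sketch using the `ollama` package (installed later with the other dependencies); the prompt text and helper name here are just for illustration.

```python
# Sanity check: ask a pulled model for a one-line reply.
# Assumes the Ollama server is running locally and gemma2:2b is pulled.

def build_prompt(user_text: str) -> list[dict]:
    """Build the messages list expected by ollama.chat()."""
    return [{"role": "user", "content": user_text}]

if __name__ == "__main__":
    import ollama  # requires `pip install ollama` and a running Ollama server

    response = ollama.chat(
        model="gemma2:2b",
        messages=build_prompt("Say hello in one sentence."),
    )
    print(response["message"]["content"])
```

If this prints a greeting, the model is downloaded and the server is reachable; if it errors, revisit the Ollama install step before moving on.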

2. Setting Up Your Python Environment

We’ll be using UV to manage dependencies. This ensures a lightweight, reproducible Python environment.

Install UV

If you don’t have UV installed, first install it with:

pip install uv

Create and activate your environment

uv venv gemma-app

Activate the environment:

  • On macOS/Linux:
    source gemma-app/bin/activate
  • On Windows:
    gemma-app\Scripts\activate
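Activation can silently fail (e.g. running the command in a different shell than the one you continue working in). One quick way to confirm the environment is active is to check the interpreter's prefix; this small stdlib-only sketch does exactly that.

```python
# Verify the current Python interpreter is running inside a virtual env.
import sys


def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the environment while
    # sys.base_prefix points at the system Python; they differ
    # only when an environment is active.
    return sys.prefix != sys.base_prefix


if __name__ == "__main__":
    print("Virtual environment active:", in_virtualenv())
```

Run it with `python` after activating; it should report `True` before you install anything.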

Install dependencies

Once the environment is active, install all required Python packages:

uv pip install -r requirements.txt

where requirements.txt contains:

numpy
ollama
gradio
jupyter
datasette
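After installing, you can confirm everything landed in the environment with a short stdlib-only smoke test. The package list below mirrors requirements.txt; `missing_packages` is a hypothetical helper name for this sketch.

```python
# Smoke test: report which workshop dependencies are importable.
import importlib.util

REQUIRED = ["numpy", "ollama", "gradio", "jupyter", "datasette"]


def missing_packages(names: list[str]) -> list[str]:
    """Return the subset of top-level package names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]


if __name__ == "__main__":
    missing = missing_packages(REQUIRED)
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All workshop dependencies are installed.")
```

An empty "missing" list means you're ready for the workshop notebooks.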