Notebook is a minimal, trust-driven platform for students in grades 8–12 to share study materials—organized by board, grade, and subject. It enables uploads of notes (PDFs, PPTs), question papers, and worksheets. Students can vote, comment, report, and build a trust score similar to Reddit karma. A clean black-and-white interface, realtime updates, and offline access through PWA make it lightweight but powerful.
A peer-driven learning space where credibility rises naturally through trust, not algorithms. Every upload, vote, and report shapes the reputation ecosystem.
Stack:
- Frontend: Vite + Tailwind + PWA
- Backend: Bun.js (TypeScript)
- Storage: JSON files (state + users) and filesystem for uploaded content
notebook/
├── server/
│   ├── index.ts          # Bun server entry
│   ├── routes/
│   │   ├── upload.ts     # file uploads
│   │   ├── feed.ts       # fetch feeds
│   │   ├── vote.ts       # upvote/downvote
│   │   ├── report.ts     # moderation logic
│   │   └── user.ts       # profiles and leaderboard
│   ├── db/
│   │   ├── recent.json   # ranked list of recent posts
│   │   ├── posts/
│   │   │   └── uuid.json # stores information for each post
│   │   └── users/
│   │       └── uuid.json # stores information for each user
│   ├── uploads/
│   │   └── uuid.pdf      # stored PDFs/PPTs by board/grade/subject
│   └── utils/
│       ├── trust.ts      # trust logic
│       └── id.ts         # UUID generator
│
├── client/
│ ├── index.html
│ ├── main.tsx
│ ├── App.tsx
│ ├── components/ (Feed, UploadForm, NoteCard, Profile, Leaderboard, CommentSection)
│ ├── pages/ (Home, FeedPage, UploadPage)
│ ├── service-worker.js # caching logic
│ └── manifest.json # PWA manifest
posts/uuid.json:
{
"title": "Electromagnetism Notes",
"board": "CBSE",
"grade": "11",
"bloomData": bloomData #blooms helps store liked users data
"subject": "Physics",
"filename": "uuid.pdf",
"uploader": "user123",
"trust": 35,
"votes": { "up": 12, "down": 2 },
"reports": 1,
"timestamp": 1697000000,
"comments": []
}
users/uuid.json:
{
"username": "user123",
"trust": 85,
"uploads": 10,
"reports": 0
}

| Endpoint | Method | Purpose |
|---|---|---|
| /upload | POST | Upload file + metadata |
| /feed/:board/:grade/:subject | GET | Fetch feed sorted by trust/time |
| /vote/:id | POST | Upvote/downvote note |
| /report/:id | POST | Report content (auto-deletes after N reports) |
| /user/:username | GET | Get profile |
| /leaderboard | GET | Fetch top trusted users |
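The `/feed` route is described only as "sorted by trust/time"; below is a minimal ranking sketch under that reading — the exact weighting is an assumption, and the `Note` fields mirror the `posts/uuid.json` example.

```typescript
interface Note {
  title: string;
  trust: number;
  timestamp: number; // unix seconds, as in posts/uuid.json
}

// Sort by trust descending; break ties by recency (newest first).
function rankFeed(notes: Note[]): Note[] {
  return [...notes].sort(
    (a, b) => b.trust - a.trust || b.timestamp - a.timestamp
  );
}
```

The spread copies the array first, so the on-disk order in `recent.json` is left untouched.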
if (note.reports >= Math.min(5, 0.1 * note.views)) {
  removeFile(note.filename);
  reportingUser.reported += 1;
  user.trust -= 10;
}

Pages: Feed, Profile, Leaderboard, Documents
- Feed: lazily loads N posts at a time, ranked by trust score and likes/views.
- Documents: your downloaded and starred notes, with a modal to upload new ones.
- Profile: view your own profile and other users'.
- Leaderboard: the top 10 users, plus your own rank and the three users above/below you.
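`utils/trust.ts` is named in the tree, but the only trust rule this plan pins down is the -10 report penalty; a sketch of how the other events might be weighted (the upvote/downvote deltas and the zero floor are assumptions):

```typescript
// Assumed weights; only the -10 report penalty is fixed by this plan.
const TRUST_DELTA = {
  upvoteReceived: 2,
  downvoteReceived: -1,
  reportUpheld: -10,
} as const;

type TrustEvent = keyof typeof TRUST_DELTA;

// Apply one event to a user's trust score, clamping at zero.
function applyTrust(trust: number, event: TrustEvent): number {
  return Math.max(0, trust + TRUST_DELTA[event]);
}
```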
Core Components:
- NoteCard: shows note title, votes, trust
- UploadForm: handles metadata + file upload
- CommentSection: threaded discussion
- Leaderboard: global trust ranking
- Profile: user uploads and stats
Design:
- Theme: black-and-white monochrome, grayscale shadows
- Typography: Sans (Inter) + Mono (JetBrains Mono)
- Icons: thin-line Lucide or Tabler icons
- Layout: minimal, 3–4 pages; feed-focused UI
Core: upload, download, trust score, vote, report, profile, feed
Extras: comments, leaderboard, realtime feed, search bar
Advanced: offline mode, export, auto-trust decay
import { WebSocketServer } from "ws";
const wss = new WebSocketServer({ port: 8081 });
function broadcastUpdate(note) {
  wss.clients.forEach(c => c.send(JSON.stringify({ type: "new", note })));
}

The frontend listens for new uploads and injects them into the live feed.
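On the client, keeping the merge logic as a pure function keeps the socket handler trivial; a sketch, where the `{ type: "new", note }` shape matches `broadcastUpdate` and the port matches the server above:

```typescript
type FeedMsg<N> = { type: string; note: N };

// Prepend a newly broadcast note; ignore message types we don't know.
function applyFeedMsg<N>(feed: N[], msg: FeedMsg<N>): N[] {
  return msg.type === "new" ? [msg.note, ...feed] : feed;
}

// Wiring sketch:
// const ws = new WebSocket("ws://localhost:8081");
// ws.onmessage = (e) => { feed = applyFeedMsg(feed, JSON.parse(e.data)); };
```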
- Cache static assets and last-viewed feeds.
- IndexedDB/localStorage for offline reading.
- Manifest for installable app experience.
- Banner indicator for offline state.
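A minimal `manifest.json` sketch for the installable-app requirement — the names, colors, and icon path are placeholder assumptions matching the monochrome theme:

```json
{
  "name": "Notebook",
  "short_name": "Notebook",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#000000",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```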
Day 1:
- Setup Bun server and routes (upload/feed/vote/report)
- Create JSON storage system
- Build core UI (Feed + Upload)
- Implement trust/voting logic
Day 2:
- Add comments, profiles, leaderboard
- Integrate WebSocket realtime feed
- Add offline caching + final polish
- Prepare final demo pitch
Notebook is a peer-to-peer academic sharing space where trust, not algorithms, decides visibility. Students upload, vote, and collaborate on notes across boards, grades, and subjects. It’s fast, minimal, works offline, and rewards credible contribution.
RAG
The best approach is to build the AI as a separate Python microservice. Your main Bun.js server will remain the lightweight "frontend" server, which will then make internal HTTP requests (acting as a proxy) to this new Python AI service.
This keeps your stacks separate: Bun.js handles fast I/O and web logic, while Python handles the heavy compute of AI/ML.
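A sketch of that proxy hop — the AI service's port 8000 and the `/ask` path are assumptions, not fixed anywhere in this plan:

```typescript
// Assumed address of the local Python AI service.
const AI_BASE = "http://localhost:8000";

// Build the internal request the Bun server forwards to the AI service.
function buildAIRequest(path: string, payload: unknown): Request {
  return new Request(`${AI_BASE}${path}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

// Inside a Bun route handler, the proxy is then a single hop:
// const res = await fetch(buildAIRequest("/ask", { question, board, grade, subject }));
```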
Here are the two main AI features we'll design:
AI Q&A Assistant (The RAG Pipeline):
- `all-mpnet-base-v2` will be used to embed (vectorize) all uploaded documents (PDFs, PPTs) into a vector database (like ChromaDB, which is file-system-based and fits your "minimal" ethos).
- `Gemini 12B` (via Ollama) will generate a helpful, context-aware answer.

AI Question Paper Predictor (Hybrid ML + LLM Model):
- `Gemini 12B` (via Ollama) will author a new paper based on the ML model's analysis.

📁 AI Service: Filesystem Structure
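A sketch of the layout, assembled from the modules this section names; `main.py` as the FastAPI entry point is an assumption:

```
ai_service/
├── main.py                  # FastAPI entry point (assumed name)
├── requirements.txt
├── processing/
│   └── indexer.py           # offline indexing pipeline
├── rag_pipeline/
│   ├── text_extractor.py    # PDF/PPTX text extraction
│   ├── embedder.py          # all-mpnet-base-v2 embeddings
│   ├── vector_db.py         # ChromaDB wrapper
│   └── generator.py         # Ollama (Gemini 12B) calls
├── ml_models/
│   ├── question_db.py
│   ├── question_db.json     # built by indexer.py
│   └── topic_modeler.py
└── data/
    ├── vector_store/        # ChromaDB persistence
    └── ml_models/           # tfidf.pkl, kmeans.pkl
```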
You will create a new folder next to your `client` and `server` folders, called `ai_service`.

📚 Incredibly Detailed Documentation: `ai_service`

1. Overview & Purpose
The `ai_service` is a self-contained Python microservice built with FastAPI. Its sole purpose is to handle all computationally expensive AI and ML tasks. It communicates only with the main `notebook/server` (Bun.js) via a local HTTP API. It never talks directly to the end-user's client.

2. Setup & Installation
`ai_service/requirements.txt`:
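A sketch of the dependency list, inferred from the libraries this section names; `uvicorn` (to serve FastAPI) is an assumption, and versions are left unpinned:

```
fastapi
uvicorn
chromadb
sentence-transformers
PyPDF2
python-pptx
ollama
scikit-learn
joblib
```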
Installation Steps:

cd notebook/ai_service
pip install -r requirements.txt

3. Core Component Deep-Dive
`processing/indexer.py` (The "Data Pipeline")

This is the most critical offline script. It must be run to build the AI's knowledge.
Step by step, the indexer:

1. Scans `../server/db/posts/`: reads all `uuid.json` files to get a list of all posts.
2. Extracts `board`, `grade`, `subject`, and `filename` from each post's JSON, then locates the file on disk (`../server/uploads/[filename]`).
3. Extracts text (`rag_pipeline/text_extractor.py`):
   - For `.pdf`, use `PyPDF2` to read text page by page.
   - For `.pptx`, use `python-pptx` to read text from slides.
4. Indexes vectors (`rag_pipeline/vector_db.py`):
   - Embeds each chunk with `all-mpnet-base-v2`.
   - Stores each chunk in ChromaDB (`data/vector_store/`) along with its metadata: `{ "text": chunk_text, "source_file": filename, "board": board, "grade": grade }`.
5. Builds the question bank (`ml_models/question_db.py`): collects questions into a database (`ml_models/question_db.json`) for the topic modeler.
6. Trains the topic models (`ml_models/topic_modeler.py`):
   - Fits a `TfidfVectorizer` on the text and saves the vectorizer to `data/ml_models/tfidf.pkl`.
   - Fits a `KMeans` model (e.g., `n_clusters=50`) on the TF-IDF vectors and saves the model to `data/ml_models/kmeans.pkl`.

How to run it:
You must run this script the first time you set up the service.
cd notebook/ai_service
python processing/indexer.py

`rag_pipeline/` (The Q&A System)

`embedder.py`:
- Loads the `all-mpnet-base-v2` model using `sentence_transformers`.
- `def get_embedding(text: str) -> List[float]: ...`

`vector_db.py`:
- `client = chromadb.PersistentClient(path="./data/vector_store")`
- `collection = client.get_or_create_collection("notebook_docs")`
- `def add_documents(chunks: List[str], metadatas: List[dict], ids: List[str]): ...` (used by `indexer.py`).
- `def query_db(query_text: str, n_results=5, filter_dict={}) -> List[str]: ...`
  - Embeds `query_text` via `embedder.py`.
  - `results = collection.query(query_embeddings=[embedding], n_results=n_results, where=filter_dict)`
  - Returns the `text` from the `documents` in the results.

`generator.py`:
- `def get_ai_response(question: str, context: List[str]) -> str:`
  - `client = ollama.Client()`
  - `response = client.chat(model='gemini:12b-qat', messages=[...])`
  - Returns `response['message']['content']`.
- `def generate_paper_from_context(analysis_data: dict, num_questions: int) -> str:`
  - `response = client.chat(model='gemini:12b-qat', messages=[...])`
  - Returns `response['message']['content']` (the full paper as a single string).

`ml_models/` (The Paper Predictor's Analyzer)

`topic_modeler.py`:
- `def load_models(): ...` — loads `tfidf.pkl` and `kmeans.pkl` from `data/ml_models/` using `joblib` or `pickle`.
- `def get_topic_analysis(board: str, grade: str, subject: str) -> dict:`
  - Filters questions by `board`, `grade`, and `subject` from `question_db.py`.
  - Uses the models to `predict` the topic (cluster label) for each question.
  - Returns (`analysis_data` for the LLM):

    {
      "board": "CBSE",
      "grade": "12",
      "subject": "Physics",
      "top_topics": [
        { "topic_id": 3, "keywords": "kinematics, velocity", "examples": ["Q: ...", "Q: ..."] },
        { "topic_id": 7, "keywords": "optics, lens, mirror", "examples": ["Q: ..."] }
      ]
    }