# Enable native memory tracking when starting the JVM:
-XX:NativeMemoryTracking=detail
# List the process ids of running JVMs:
jps
# Inspect OS-level CPU and memory usage for a given process:
ps -p <PID> -o pcpu,rss,size,vsize
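These pieces fit together into a JVM native-memory diagnosis workflow. A hedged sketch of the full sequence follows; the jar name is a placeholder, and `jcmd` with the `VM.native_memory` subcommand is the standard JDK way to read the data that the tracking flag collects:

```shell
# 1. Start the JVM with native memory tracking enabled (app.jar is a placeholder).
java -XX:NativeMemoryTracking=detail -jar app.jar &

# 2. Find the process id of the running JVM.
jps -l

# 3. Ask the JVM for a native-memory breakdown (only works with the flag above).
jcmd <PID> VM.native_memory summary

# 4. Compare against what the OS reports for the same process.
ps -p <PID> -o pcpu,rss,size,vsize
```

Comparing step 3 (JVM's own accounting) with step 4 (OS resident set size) is what usually reveals off-heap growth.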
/**
 * Island universe
 */
import eu.ace_design.island.{PointGenerator, RandomGrid, SquaredGrid}

val MAP_SIZE = 800

val generators: Map[String, PointGenerator] = Map(
  "RANDOM" -> new RandomGrid(MAP_SIZE),
  "SQUARE" -> new SquaredGrid(MAP_SIZE)
)
# Quick intro to accessing Stubhub API with Python
# Ozzie Liu ([email protected])
# Related blog post: http://ozzieliu.com/2016/06/21/scraping-ticket-data-with-stubhub-api/
# Updated 3/5/2017 for Python 3 and Stubhub's InventorySearchAPI - v2
import requests
import base64
import json
import pprint
import pandas as pd
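The `requests` and `base64` imports above point at the OAuth step the linked post walks through: StubHub's token endpoint expected HTTP Basic auth built from a consumer key and secret. A hedged sketch of that step follows; the endpoint URL and grant parameters reflect the 2017-era API described in the post and should be treated as assumptions that may have changed since:

```python
import base64


def basic_auth_header(consumer_key, consumer_secret):
    """Build the Basic auth header the token endpoint expects.
    The key/secret values come from your StubHub developer account."""
    token = base64.b64encode(
        f"{consumer_key}:{consumer_secret}".encode("utf-8")
    ).decode("utf-8")
    return {
        "Content-Type": "application/x-www-form-urlencoded",
        "Authorization": f"Basic {token}",
    }


def get_access_token(consumer_key, consumer_secret, username, password):
    # Imported lazily so the header helper above stays dependency-free.
    import requests

    # Endpoint and grant type as described in the blog post; an assumption today.
    url = "https://api.stubhub.com/login"
    body = {"grant_type": "password", "username": username, "password": password}
    resp = requests.post(
        url, headers=basic_auth_header(consumer_key, consumer_secret), data=body
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```

The returned access token would then go into a `Bearer` header for the InventorySearch calls.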
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from webdriver_manager.chrome import ChromeDriverManager


class Autotrader:
    def __init__(self, url):
        self.url = url
        self.driver = None
        self.page_num = None
        self.xpath_dict = self.XPathDict()
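The `__init__` above references a `self.XPathDict()` helper that is not shown in this excerpt. A plausible sketch of such a helper follows; every selector string below is a hypothetical placeholder, not Autotrader's real markup:

```python
from dataclasses import dataclass


@dataclass
class XPathDict:
    """Container for the XPath selectors the scraper uses.
    All selectors here are illustrative placeholders."""
    listing: str = "//div[@data-cmp='inventoryListing']"
    price: str = ".//span[@data-cmp='firstPrice']"
    title: str = ".//h2[@data-cmp='subheading']"
    next_page: str = "//a[@aria-label='Next page']"

    def as_dict(self):
        # Convenient dict view for iterating over all selectors.
        return {
            "listing": self.listing,
            "price": self.price,
            "title": self.title,
            "next_page": self.next_page,
        }
```

Keeping selectors in one dataclass rather than scattered string literals makes it easy to update them when the site's markup changes.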
TL;DR: JWTs should not be used for keeping your user logged in. They were not designed for that purpose, they are not secure in that role, and there is a much better tool that is designed for it: regular cookie sessions.
If you've got a bit of time to watch a presentation on it, I highly recommend this talk: https://www.youtube.com/watch?v=pYeekwv3vC4 (Note that related topics such as CSRF protection are largely skimmed over; you should learn about those from other sources. Also note that the "valid" use cases for JWTs at the end of the video can be handled just as easily by other, better, and more secure tools, specifically PASETO.)
A related topic: Don't use localStorage (or sessionStorage) for authentication credentials, including JWT tokens: https://www.rdegges.com/2018/please-stop-using-local-storage/
The reason to avoid JWTs comes down to a couple different points:
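To make the comparison concrete, here is a minimal, stdlib-only sketch of the recommended alternative: server-side sessions referenced by an opaque cookie. The in-memory store and cookie attributes are illustrative; a real application would use its web framework's session support backed by Redis or a database:

```python
import secrets
import time

SESSIONS = {}  # session_id -> {"user_id": ..., "expires": ...}; use Redis/DB in production


def create_session(user_id, ttl_seconds=3600):
    # The ID is random and opaque: it carries no claims, so there is
    # nothing for the client to decode or tamper with, unlike a JWT payload.
    session_id = secrets.token_urlsafe(32)
    SESSIONS[session_id] = {"user_id": user_id, "expires": time.time() + ttl_seconds}
    return session_id


def session_cookie(session_id):
    # HttpOnly keeps the cookie away from JavaScript (and localStorage-style
    # theft); Secure restricts it to HTTPS; SameSite mitigates CSRF.
    return f"session={session_id}; HttpOnly; Secure; SameSite=Lax; Path=/"


def get_user(session_id):
    entry = SESSIONS.get(session_id)
    if entry is None or entry["expires"] < time.time():
        SESSIONS.pop(session_id, None)  # expired entries are simply dropped
        return None
    return entry["user_id"]


def revoke(session_id):
    # Instant logout: delete the server-side row. Stateless JWTs cannot
    # do this without extra machinery like a revocation list.
    SESSIONS.pop(session_id, None)
```

The revocation point is the key contrast: a session dies the moment the server deletes it, while a signed JWT remains valid until it expires.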
A pattern for building personal knowledge bases using LLMs. Extended with lessons from building agentmemory, a persistent memory engine for AI coding agents.
This builds on Andrej Karpathy's original LLM Wiki idea file. Everything in the original still applies. This document adds what we learned running the pattern in production: what breaks at scale, what's missing, and what separates a wiki that stays useful from one that rots.
The core insight is correct: stop re-deriving, start compiling. RAG retrieves and forgets. A wiki accumulates and compounds. The three-layer architecture (raw sources, wiki, schema) works. The operations (ingest, query, lint) cover the basics. If you haven't read the original, start there.
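The three layers and the ingest/query/lint operations described above can be sketched minimally. This is an illustrative toy, not agentmemory's actual implementation; the directory layout and schema rules are assumptions:

```python
from pathlib import Path


class Wiki:
    """Toy three-layer knowledge base: raw sources, compiled wiki pages,
    and a schema that lint() checks pages against."""

    def __init__(self, root):
        self.root = Path(root)
        self.sources = self.root / "sources"  # layer 1: raw material, append-only
        self.pages = self.root / "wiki"       # layer 2: compiled, durable notes
        for d in (self.sources, self.pages):
            d.mkdir(parents=True, exist_ok=True)
        # Layer 3: a minimal schema -- every page needs a title and a Sources section.
        self.required_sections = ["# ", "## Sources"]

    def ingest(self, name, raw_text, compiled_page):
        # Keep the raw source AND the compiled page: the wiki accumulates
        # and compounds, it does not retrieve-and-forget like RAG.
        (self.sources / f"{name}.txt").write_text(raw_text)
        (self.pages / f"{name}.md").write_text(compiled_page)

    def query(self, term):
        # Naive full-text scan over compiled pages.
        return [p.name for p in sorted(self.pages.glob("*.md"))
                if term.lower() in p.read_text().lower()]

    def lint(self):
        # Flag pages that drift from the schema -- the main defense against rot.
        return [p.name for p in sorted(self.pages.glob("*.md"))
                if not all(m in p.read_text() for m in self.required_sections)]
```

Even at toy scale the division of labor is visible: ingest compiles, query reads only the compiled layer, and lint is what keeps the compiled layer from rotting.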