{
"cells": [
{
"cell_type": "markdown",
"id": "4dbc79ed-244f-495a-a1c6-4b4bccd4dd2a",
"metadata": {},
"source": [
"# 5 patterns for using LLMs that isn't a Chatbot\n", | |
"\n", | |
"This is a companion notebook to this post in our tech blog: https://bakkenbaeck.com/tech/5-patterns-for-llms \n", | |
"\n", | |
"This has more code - go there to read if you prefer more chat! " | |
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "6efd7a34-5c1f-4746-990a-8c4472ab3cb1",
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"import json\n",
"from IPython.display import display, Markdown\n"
]
},
{
"cell_type": "markdown",
"id": "0ad5056b-9f09-4c0f-b9f5-ae8693bfd038",
"metadata": {},
"source": [
"# Talk to the LLM\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2d0a529b-3ffc-453e-b930-b606ad2bb751",
"metadata": {},
"outputs": [],
"source": [
"# Setup some utilty methods to talk to an OpenAI compatible API. \n", | |
"\n", | |
"def llm(messages):\n", | |
" data = dict(messages=messages, mode=\"instruct\")\n", | |
" res = requests.post('http://127.0.0.1:5000/v1/chat/completions', json=data)\n", | |
" return res.json()['choices'][0]['message']['content']\n", | |
"\n", | |
"def llm_msg(msg): \n", | |
" return llm([dict(role=\"user\", content=msg)])" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"id": "ce5db671-de6d-4f31-ae8c-0adb17b25436", | |
"metadata": {}, | |
"source": [ | |
"I ran all of this locally - using [Text Generation UI](https://github.com/oobabooga/text-generation-webui) and the [Vicuna 13B model, quantized to 5 bits](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-AWQ).\n", | |
"\n", | |
"Install `text generation ui` and start with `--api`\n", | |
"\n", | |
"Load the model - on my RTX 3080 with 12G of RAM I can load 35 layers on the GPU with a context length of 4096. \n", | |
"\n", | |
"(or just use your OpenAI API access - you need to add the key somehow) " | |
] | |
}, | |
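{
"cell_type": "markdown",
"id": "1a2b3c4d-0001-4a6e-9c1d-0f1e2d3c4b5a",
"metadata": {},
"source": [
"If you'd rather not run a model locally, a drop-in variant of `llm` that talks to the hosted OpenAI chat completions endpoint could look roughly like this sketch - it assumes an `OPENAI_API_KEY` environment variable, and the model name is just an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1a2b3c4d-0002-4a6e-9c1d-0f1e2d3c4b5a",
"metadata": {},
"outputs": [],
"source": [
"# A sketch of the same helper pointed at the hosted OpenAI API instead of a local server.\n",
"# Assumes OPENAI_API_KEY is set in your environment; the model name is just an example.\n",
"import os\n",
"import requests\n",
"\n",
"def llm_openai(messages, model=\"gpt-4o-mini\"):\n",
"    headers = {\"Authorization\": f\"Bearer {os.environ['OPENAI_API_KEY']}\"}\n",
"    data = dict(model=model, messages=messages)\n",
"    res = requests.post(\"https://api.openai.com/v1/chat/completions\", json=data, headers=headers)\n",
"    res.raise_for_status()\n",
"    return res.json()[\"choices\"][0][\"message\"][\"content\"]\n",
"\n",
"# llm_openai([dict(role=\"user\", content=\"Hi there - what's happening?\")])"
]
},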
{
"cell_type": "code",
"execution_count": 6,
"id": "06610072-a3d2-4e13-8d79-9dcc8d3e9cdb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hi there! As an AI language model, I do not have the ability to perceive or experience events in the same way that humans do. However, I am here and ready to assist you with any questions or tasks you may have. How can I help you today?'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_msg(\"Hi there - what's happening?\")"
]
},
{
"cell_type": "markdown",
"id": "7b96035b-53c2-4003-b31e-2e4fd86f49f5",
"metadata": {},
"source": [
"# Lets stick to JSON\n", | |
"\n", | |
"If you want to embed your LLM in an existing program or process, it's very convenient if it was just a JSON API. Luckily, you can get this by just asking and providing an example or two. " | |
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "bb25c08b-2f20-4eec-90f8-a304d432bcb2",
"metadata": {},
"outputs": [],
"source": [
"def json_llm(msg): \n",
"    tries = 0\n",
"    messages = [dict(role=\"user\", content=msg)]\n",
"    while tries < 3: \n",
"        res = llm(messages)\n",
"        try:\n",
"            return json.loads(res)\n",
"        except json.JSONDecodeError: \n",
"            print(res)\n",
"            import traceback ; traceback.print_exc()\n",
" messages += [ dict(role=\"system\", content=res), dict(role=\"user\", content=\"your json isn't valid, please try again\") ]\n", | |
" tries += 1\n", | |
" raise Exception(\"3 attempts to get valid JSON all failed\")\n" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 9, | |
"id": "d65c2078-8fd7-4799-b1b6-4498a7601e04", | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
"{'reasoning': 'Subtracting 8 from 12 will result in a difference of 4.', 'result': 4}\n" | |
] | |
} | |
], | |
"source": [ | |
"base_prompt = \"\"\"please reply only with valid json documents.\n", | |
"\n", | |
"You are a calculator - please give responses like this:\n", | |
"\n", | |
"Question: what is 2+4 ?\n", | |
"\n", | |
"{ \"reasoning\": \"any text here\", \"result\": 6 }\n", | |
"\n", | |
"\"\"\"\n", | |
"print(json_llm(base_prompt + \"What is 12-8 ?\"))\n" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 10, | |
"id": "dec7d5c4-7a27-4e6e-9a2e-1d65a81f8228", | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/plain": [ | |
"{'reasoning': '2 raised to the power of 8 is 256', 'result': 256}" | |
] | |
}, | |
"execution_count": 10, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"json_llm(base_prompt + \"What is 2**8 ?\")" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"id": "7ec9c4c6-ca4a-455d-8c4e-0f2e0210b854", | |
"metadata": {}, | |
"source": [ | |
"# Information Extraction\n", | |
"\n", | |
"You can use LLMs to _lift_ your unstructured information into structured documents in any way you want." | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": null, | |
"id": "532d33fc-6a82-4e7e-bf9f-59084b80fa30", | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"import feedparser\n", | |
"\n", | |
"feed = feedparser.parse('https://www.timboucher.ca/feed/') # load some unstructured data" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 43, | |
"id": "f8053f2d-921e-4b69-87da-10ea2ef53691", | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"base_prompt = \"\"\"please reply only with valid json documents.\n", | |
"\n", | |
"Please extract any mention of companies and people from the text. For example:\n", | |
"\n", | |
"> Jensen Huang shocks the world with Nvidia Quantum Day surprise. \n", | |
"\n", | |
"{ \"people\": ['Jensen Huang'], \"companies\": [\"Nvidia\"] }\n", | |
"\n", | |
"The text:\n", | |
"\n", | |
"\"\"\"\n" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 46, | |
"id": "cf2f97df-d90f-432d-bdd6-b15593829b4b", | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
"Title Perplexity.ai on the Akron Smash Group\n", | |
"{'people': [], 'companies': ['Perplexity.ai', 'Akron Smash Group']}\n", | |
"Title Trevor Paglen’s ‘Psyop Realism’ (Videos)\n", | |
"{'people': ['Trevor Paglen'], 'companies': []}\n", | |
"Title Good Marshall Mcluhan Interview 1967\n", | |
"{'people': ['Marshall McLuhan'], 'companies': []}\n" | |
] | |
} | |
], | |
"source": [ | |
"for entry in feed.entries[:3]: \n", | |
" print('Title: ', entry.title)\n", | |
" print('Entities: ', json_llm(base_prompt + entry.title + '\\n\\n' + entry.description))\n", | |
" print('--------')" | |
] | |
}, | |
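{
"cell_type": "markdown",
"id": "2b3c4d5e-0003-4b7f-8d2e-1a2b3c4d5e6f",
"metadata": {},
"source": [
"Once the output is structured, the rest is plain Python - for example, a quick sketch (reusing `json_llm`, the extraction `base_prompt` and `feed` from the cells above) that tallies which people and companies show up across the feed."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2b3c4d5e-0004-4b7f-8d2e-1a2b3c4d5e6f",
"metadata": {},
"outputs": [],
"source": [
"# A sketch: aggregate the extracted entities over a few feed entries.\n",
"# Reuses json_llm, the extraction base_prompt and feed from the cells above.\n",
"from collections import Counter\n",
"\n",
"people, companies = Counter(), Counter()\n",
"for entry in feed.entries[:3]:  # keep it small for a local model\n",
"    entities = json_llm(base_prompt + entry.title + '\\n\\n' + entry.description)\n",
"    people.update(entities.get('people', []))\n",
"    companies.update(entities.get('companies', []))\n",
"\n",
"print(people.most_common(5))\n",
"print(companies.most_common(5))"
]
},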
{
"cell_type": "markdown",
"id": "726e19c5-d216-4065-9bc3-179712fd0a3e",
"metadata": {},
"source": [
"## Entity Disambiguation\n",
"\n",
"Mapping well known things to well known databases should just work. For your own database, include it in the prompt (if small), or use some sort of shortlist! " | |
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "600bdbd3-331d-46a5-b813-6cb693227856",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'Lady Gaga', 'link': 'https://en.wikipedia.org/wiki/Lady_Gaga'}]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"base_prompt = '''\n",
"Extract the list of people in this text, reply only with a valid JSON document with a list of objects name and wikipedia link, i.e.: \n", | |
"\n", | |
"Input: Jeff Dean is an American computer scientist and software engineer. Since 2018, he has been the lead of Google AI.\n", | |
"\n", | |
"Output: [ { \"explaination\": \"blah blah\", name\": \"Jeff Dean\", \"link\": \"https://en.wikipedia.org/wiki/Jeff_Dean\" } ]\n", | |
"\n", | |
"text: '''\n", | |
"\n", | |
"json_llm(base_prompt + ' Lady Gaga is an American singer, songwriter and actress. Known for her image reinventions and versatility across the entertainment industry.')" | |
] | |
}, | |
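{
"cell_type": "markdown",
"id": "3c4d5e6f-0005-4c80-9e3f-2b3c4d5e6f70",
"metadata": {},
"source": [
"For your own database the model can't know the right identifiers, so one option is to put a small candidate shortlist straight into the prompt and ask it to pick. The records and the text below are made up for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3c4d5e6f-0006-4c80-9e3f-2b3c4d5e6f70",
"metadata": {},
"outputs": [],
"source": [
"# A sketch of disambiguation against your own (tiny, made-up) database:\n",
"# put the candidate records in the prompt and ask the model to pick an id.\n",
"import json\n",
"\n",
"candidates = [\n",
"    {\"id\": 1, \"name\": \"Acme Staffing GmbH\", \"city\": \"Berlin\"},\n",
"    {\"id\": 2, \"name\": \"Acme Analytics AS\", \"city\": \"Oslo\"},\n",
"    {\"id\": 3, \"name\": \"Acme Robotics Inc\", \"city\": \"Boston\"},\n",
"]\n",
"\n",
"shortlist_prompt = f\"\"\"please reply only with a valid json document.\n",
"\n",
"Which of these database records does the text refer to? Reply like {{ \"id\": 2, \"explanation\": \"...\" }}\n",
"\n",
"Records: {json.dumps(candidates)}\n",
"\n",
"The text:\n",
"\n",
"\"\"\"\n",
"\n",
"json_llm(shortlist_prompt + \"We signed a contract with Acme's analytics team in Oslo last week.\")"
]
},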
{
"cell_type": "markdown",
"id": "b5615fa1-77d1-4945-b68e-1b6e4719b1e3",
"metadata": {},
"source": [
"# Classification\n",
"\n",
"The GOAT NLP task. Just describe the classes you want and give some examples!"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "7f7d12ba-3c86-4744-b749-8ed9b20af68f",
"metadata": {},
"outputs": [],
"source": [
"# some comments from a reddit story on deekseek\n", | |
"comments = [ \n", | |
" \"Making it open source is the right way to go. The models were trained n public data for free. The Linux model of paying for support like Red Hat is better.\",\n", | |
" \"No one wants this AI junk anyway.\",\n", | |
" \"Quick, claim national security and ban all open source AI.\",\n", | |
" \"DeepSeek isn't opensource. This whole narrative is BS.\",\n", | |
" \"That's the most interesting thing about DeepSeek, is that its open source and you can run it on a Laptop. Ive heard it described as what the PC did for home-computing.\",\n", | |
" \"We are pleased to announce that your EMAIL ADDRESS has been selected to receive unclaimed contract inheritance funds recovered from corrupt treasury security unit\"\n", | |
"]" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 13, | |
"id": "381e4f77-cb4c-42a2-9f1f-c9a382cbff11", | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"base_prompt = \"\"\"please reply only with a valid json document.\n", | |
"\n", | |
"We are classifying comments from an online discussion forum. \n", | |
"Please classify the comment in one of three classes: USEFUL, NOT USEFUL or SPAM . For example:\n", | |
"\n", | |
"> Free BTC here\n", | |
"\n", | |
"{ reasoning: \"any text here\", class: \"SPAM\" }\n", | |
"\n", | |
"The text:\n", | |
"\"\"\"\n" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 15, | |
"id": "a7ab17d5-0944-4fab-baa2-2d39b5366d2c", | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
"# comment Making it open source is the right way to go. The models were trained n public data for free. The Linux model of paying for support like Red Hat is better.\n", | |
"{'reasoning': 'The comment is discussing the open-source model for training AI models and its benefits.', 'class': 'USEFUL'}\n", | |
"--------\n", | |
"# comment No one wants this AI junk anyway.\n", | |
"{'reasoning': 'The text contains a negative opinion about AI which may not be relevant or helpful to the discussion.', 'class': 'NOT USEFUL'}\n", | |
"--------\n", | |
"# comment Quick, claim national security and ban all open source AI.\n", | |
"{'reasoning': 'The comment suggests banning open source AI, which may be seen as harmful to the community and not useful.', 'class': 'NOT USEFUL'}\n", | |
"--------\n", | |
"# comment DeepSeek isn't opensource. This whole narrative is BS.\n", | |
"{'reasoning': \"The statement 'DeepSeek isn't opensource' is likely untrue and misleading, and the claim that the whole narrative is 'BS' is a strong indication that this comment is not useful or constructive.\", 'class': 'NOT USEFUL'}\n", | |
"--------\n", | |
"# comment That's the most interesting thing about DeepSeek, is that its open source and you can run it on a Laptop. Ive heard it described as what the PC did for home-computing.\n", | |
"{'reasoning': 'The text mentions DeepSeek as an open-source tool that can be run on a laptop and compares it to the impact of the PC on home-computing.', 'class': 'USEFUL'}\n", | |
"--------\n", | |
"# comment We are pleased to announce that your EMAIL ADDRESS has been selected to receive unclaimed contract inheritance funds recovered from corrupt treasury security unit\n", | |
"{'text': 'We are pleased to announce that your EMAIL ADDRESS has been selected to receive unclaimed contract inheritance funds recovered from corrupt treasury security unit', 'reasoning': \"this message is potentially a scam trying to get personal information from the recipient. The mention of 'unclaimed contract inheritance funds' and 'corrupt treasury security unit' are red flags. The usage of 'pleased to announce' also sounds like a formal tone trying to trick the recipient. Therefore, this message is likely a SPAM.\", 'class': 'SPAM'}\n", | |
"--------\n" | |
] | |
} | |
], | |
"source": [ | |
"for c in comments:\n", | |
" print('# comment',c)\n", | |
" print(json_llm(base_prompt+c))\n", | |
" print('--------')" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"id": "b4a44123-5f92-46be-a031-9aa25666fac4", | |
"metadata": {}, | |
"source": [ | |
"### Classification by known taxonomies\n", | |
"\n", | |
"If you classes are well known - your model can probably do this off the shelf!" | |
]
},
{
"cell_type": "code",
"execution_count": 42,
"id": "002e8034-efe9-413b-9d2a-f29d528f8e30",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'code': '2530',\n",
" 'title': 'Manufacture of computer, electronic, and optical products',\n",
" 'explanation': 'Nvidia is involved in the design and sale of GPUs, chip systems, and other related products, which fall under this ISIC code.'}"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"base_prompt = \"\"\"\n",
"Respond only with a valid JSON document with an International Standard Industrial Classification (ISIC) classification code for a company described by the input text. For example: \n",
"\n",
"Input: Nestlé S.A. is a Swiss multinational food and drink processing conglomerate corporation headquartered in Vevey, Switzerland.\n",
"\n",
"Output: \n",
"\n",
"{ \"code\": \"1104\", title: \"Manufacture of soft drinks\", \"explanation\": \"blah blah\" }\n", | |
"\n", | |
"The text:\n", | |
"\"\"\"\n", | |
"json_llm(base_prompt + \"Nvidia designs and sells GPUs for gaming, cryptocurrency mining, and professional applications; the company also sells chip systems for use in vehicles, robotics, and more.\")" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"id": "db5b355e-6b36-4b2e-bdf5-95e1557b46c2", | |
"metadata": {}, | |
"source": [ | |
"(my small model gets this wrong :D 2530 doesn't exist - a real model gets it right though!) " | |
]
},
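{
"cell_type": "markdown",
"id": "4d5e6f70-0007-4d91-af40-3c4d5e6f7081",
"metadata": {},
"source": [
"One cheap guard against invented codes is to check the model's answer against the official ISIC code list before trusting it. A sketch, assuming you have downloaded the ISIC Rev.4 structure as a CSV - the `isic_rev4.csv` filename and its `code`/`description` columns are hypothetical, adjust them to whatever file you actually have."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d5e6f70-0008-4d91-af40-3c4d5e6f7081",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: validate the model's answer against a list of real ISIC codes.\n",
"# Assumes a local CSV of ISIC Rev.4 codes with 'code' and 'description' columns -\n",
"# the filename and column names here are hypothetical.\n",
"# Reuses the ISIC base_prompt defined in the cell above.\n",
"import csv\n",
"\n",
"with open('isic_rev4.csv', newline='') as f:\n",
"    valid_codes = {row['code']: row['description'] for row in csv.DictReader(f)}\n",
"\n",
"answer = json_llm(base_prompt + \"Nvidia designs and sells GPUs for gaming, cryptocurrency mining, and professional applications.\")\n",
"\n",
"if answer['code'] in valid_codes:\n",
"    print('OK:', answer['code'], '=', valid_codes[answer['code']])\n",
"else:\n",
"    print(\"Model invented a code that isn't in the ISIC list:\", answer['code'])"
]
},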
{
"cell_type": "markdown",
"id": "baa7d02f-0a04-41bb-abb5-89d9fd82f33d",
"metadata": {},
"source": [
"# Learning Rules\n",
"\n",
"The big brother of classification - if you need more insight into your systems decisions, and a clear trail for each decision - rule based systems are for you!" | |
]
},
{
"cell_type": "code",
"execution_count": 44,
"id": "f894122b-581f-480d-9a47-d0bcf30c902b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1. Look for employees in high-paying locations with high-ranking job titles.\n",
"```sql\n",
"SELECT *\n",
"FROM dataset\n",
"WHERE location IN ('San Francisco', 'New York', 'Boston', 'Seattle', 'Washington DC')\n",
" AND job_title IN ('Product Manager', 'Senior Developer', 'Engineering Manager', 'Project Manager', 'Solutions Architect');\n",
"```\n",
"1. Identify job titles with \"Senior\" or \"Manager\" in the title, and prioritize higher-paying cities.\n",
"```sql\n",
"SELECT *\n",
"FROM dataset\n",
"WHERE job_title LIKE '%Senior%' OR job_title LIKE '%Manager%'\n",
" AND location IN ('New York', 'San Francisco', 'Boston', 'Seattle', 'Washington DC');\n",
"```\n",
"1. Look for employees in high-paying job roles with high salaries in any location.\n",
"```sql\n",
"SELECT *\n",
"FROM dataset\n",
"WHERE job_title IN ('Data Scientist', 'UX Designer', 'DevOps Engineer', 'Digital Marketing Manager', 'Product Designer', 'Full Stack Developer')\n",
" AND salary >= 80000;\n",
"```\n"
]
}
],
"source": [
"print(llm_msg('''This is a tabular dataset - the first line is headers.\n",
"\n",
"Generate some simple rules for finding people with a likely high salary. Predict this by looking at the other columns, and without looking at the salary column.\n",
"\n",
"Express the rules using SQL syntax - just select rows with a likely high salary.\n",
"\n",
"Reply only with the SQL queries.\n",
"\n",
"gender,salary,location,job_title\n",
"Female,75000,New York,Software Engineer\n",
"Male,82000,San Francisco,Product Manager\n",
"Female,65000,Chicago,Marketing Specialist\n",
"Male,95000,Seattle,Data Scientist\n",
"Female,120000,Boston,Senior Developer\n",
"Male,55000,Austin,Customer Service Rep\n",
"Female,88000,Los Angeles,UX Designer\n",
"Male,105000,Denver,Solutions Architect\n",
"Female,72000,Miami,HR Manager\n",
"Male,68000,Portland,Sales Representative\n",
"Female,92000,Atlanta,Business Analyst\n",
"Male,115000,Washington DC,Project Manager\n",
"Female,78000,Houston,Content Strategist\n",
"Male,98000,San Diego,DevOps Engineer\n",
"Female,83000,Philadelphia,Financial Analyst\n",
"Male,67000,Phoenix,Technical Support\n",
"Female,125000,Boston,Engineering Manager\n",
"Male,71000,Chicago,Digital Marketing Manager\n",
"Female,89000,Seattle,Product Designer\n",
"Male,93000,Austin,Full Stack Developer\n",
"'''))"
]
},
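{
"cell_type": "markdown",
"id": "5e6f7081-0009-4ea2-b051-4d5e6f708192",
"metadata": {},
"source": [
"The nice thing about rules expressed as SQL is that you can actually execute them and see what they catch. A sketch using the standard-library `sqlite3` on a subset of the toy table - the rule is copied by hand from the model output above; in a real pipeline you'd parse the generated queries out programmatically."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e6f7081-0010-4ea2-b051-4d5e6f708192",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: load a subset of the toy dataset into an in-memory SQLite table and\n",
"# run one of the generated rules to see which rows it actually selects.\n",
"import sqlite3\n",
"\n",
"rows = [\n",
"    ('Female', 75000, 'New York', 'Software Engineer'),\n",
"    ('Male', 82000, 'San Francisco', 'Product Manager'),\n",
"    ('Female', 120000, 'Boston', 'Senior Developer'),\n",
"    ('Male', 55000, 'Austin', 'Customer Service Rep'),\n",
"    ('Female', 125000, 'Boston', 'Engineering Manager'),\n",
"    ('Male', 67000, 'Phoenix', 'Technical Support'),\n",
"]\n",
"\n",
"con = sqlite3.connect(':memory:')\n",
"con.execute('CREATE TABLE dataset (gender TEXT, salary INTEGER, location TEXT, job_title TEXT)')\n",
"con.executemany('INSERT INTO dataset VALUES (?, ?, ?, ?)', rows)\n",
"\n",
"# The first rule the model produced, pasted in by hand for this sketch:\n",
"rule = \"\"\"\n",
"SELECT *\n",
"FROM dataset\n",
"WHERE location IN ('San Francisco', 'New York', 'Boston', 'Seattle', 'Washington DC')\n",
"  AND job_title IN ('Product Manager', 'Senior Developer', 'Engineering Manager', 'Project Manager', 'Solutions Architect')\n",
"\"\"\"\n",
"\n",
"for row in con.execute(rule):\n",
"    print(row)"
]
},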
{
"cell_type": "markdown",
"id": "589782c6-0fb4-463b-8dc0-e75f016b73ab",
"metadata": {},
"source": [
"# Smarter Auto-Correction\n",
"\n",
"More UX than LLM pattern really - but imagine any complicated form giving intelligent feedback as you go!" | |
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "e90f03fc-491b-4126-80b6-c948443e2533",
"metadata": {},
"outputs": [],
"source": [
"import ipywidgets as widgets\n"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "a3280873-632f-4734-b723-fba731c80cab",
"metadata": {},
"outputs": [],
"source": [
"base_prompt = \"\"\"please reply only with valid json documents.\n",
"\n",
"We are providing feedback on text input in a form for getting a piece of electronics approved in the EU. \n",
"Please make sure the text is concrete and contains all details. Your feedback should be short and concise.\n",
"\n",
"For example:\n",
"\n",
"> I don't know how many volts this even needs.\n",
"\n",
"{ \"ok\": false, \"feedback\": \"please be specific about the voltage\" }\n",
"\n",
"> This battery is rated for 2A at 5V. \n",
"\n",
"{ \"ok\": true, \"feedback\": \"\" }\n",
"\n",
"The text:\n",
"\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "bfc14467-3188-478c-868c-68fabed73ce3",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "4187c5c723bb4b638174547220c11d60",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Textarea(value='', continuous_update=False)"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "c107f1d4f36544a3bae6738c1a72d598",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Label(value='')"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"label = widgets.Label(style=dict(text_color=\"red\", font_style='italic'), value=\"\")\n",
"txt = widgets.Textarea(continuous_update=False)\n",
"\n",
"def on_change(event): \n",
"    print(base_prompt + event.new)\n",
" label.value = json_llm(base_prompt + event.new).get('feedback')\n", | |
" \n", | |
"txt.observe(on_change, names='value')\n", | |
"\n", | |
"display(txt, label)" | |
] | |
}, | |
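{
"cell_type": "markdown",
"id": "6f708192-0011-4fb3-c162-5e6f708192a3",
"metadata": {},
"source": [
"The widget above only does anything in a live notebook, so here is the same check as a plain call for reference - the two example inputs are made up."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f708192-0012-4fb3-c162-5e6f708192a3",
"metadata": {},
"outputs": [],
"source": [
"# The same feedback check without the widget - just call json_llm directly.\n",
"# Reuses the form-feedback base_prompt from the cell above; the inputs are made up.\n",
"for text in [\"It plugs into the wall somehow.\",\n",
"             \"The device draws 0.5A from a 230V AC mains supply and is double insulated.\"]:\n",
"    print(text)\n",
"    print(json_llm(base_prompt + text))\n",
"    print('--------')"
]
},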
{
"cell_type": "code",
"execution_count": null,
"id": "fb7e3433-3b1d-4d9a-a22b-10bac5448f34",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.20"
}
},
"nbformat": 4,
"nbformat_minor": 5
} |