# tutorial video link : https://youtu.be/dYt9xJ7dnpU
# colab link : https://colab.research.google.com/drive/1xSbu-b-EwYd6GdaFPRVgvXBX_mciZ41e?usp=sharing
# repo link : https://github.com/ai-forever/Kandinsky-2
# used repo commit hash : a4354c04d5fbd48851866ef7d84ec444d3d50102
# for those who are getting a CUDA error:
# pip uninstall torch
# pip3 install torch==1.13.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
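The links above point at the ai-forever/Kandinsky-2 repo. As a rough sketch of what using it looks like (the function names follow that repo's README; the prompt and parameter values here are illustrative, so check the README at the pinned commit for the exact signature):

```python
from kandinsky2 import get_kandinsky2

# load the 2.1 text-to-image pipeline on the GPU (relies on the CUDA-enabled torch install above)
model = get_kandinsky2('cuda', task_type='text2img', model_version='2.1', use_flash_attention=False)

images = model.generate_text2img(
    "red cat, 4k photo",   # prompt
    num_steps=100,         # diffusion steps
    batch_size=1,
    guidance_scale=4,
    h=768,
    w=768,
    sampler='p_sampler',
    prior_cf_scale=4,
    prior_steps="5",
)
images[0].save("out.png")  # generate_text2img returns a list of PIL images
```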
import os

os.environ["OPENAI_API_KEY"] = ""  # fill in your OpenAI API key

from flask import Flask, Response, request
import threading
import queue

from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import AIMessage, HumanMessage, SystemMessage

from typing import TypeVar, Generic, Callable
from dataclasses import dataclass
from argparse import Namespace

T = TypeVar('T')
S = TypeVar('S')


@dataclass
class ListMap(Generic[S, T]):
    # wraps a function that maps each element of type T to type S
    f: Callable[[T], S]
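The fragment above pulls in Flask, a queue, threading, and LangChain's streaming callback machinery, but stops before wiring them together. Below is a minimal sketch (reusing the imports above) of how those pieces are commonly combined into a streaming endpoint; the `QueueCallbackHandler` class, the `/chat` route, and all parameter choices are hypothetical and not part of the original code.

```python
class QueueCallbackHandler(StreamingStdOutCallbackHandler):
    """Push each newly generated token onto a queue instead of writing it to stdout."""

    def __init__(self, token_queue: queue.Queue):
        super().__init__()
        self.token_queue = token_queue

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.token_queue.put(token)

    def on_llm_end(self, *args, **kwargs) -> None:
        self.token_queue.put(None)  # sentinel: generation is finished


app = Flask(__name__)


@app.route("/chat", methods=["POST"])
def chat():
    user_text = request.json["message"]
    token_queue: queue.Queue = queue.Queue()
    llm = ChatOpenAI(streaming=True, callbacks=[QueueCallbackHandler(token_queue)])

    # run the model call in a background thread so the request can stream tokens as they arrive
    def run_llm():
        llm([SystemMessage(content="You are a helpful assistant."),
             HumanMessage(content=user_text)])

    threading.Thread(target=run_llm, daemon=True).start()

    def stream():
        while True:
            token = token_queue.get()
            if token is None:
                break
            yield token

    return Response(stream(), mimetype="text/plain")
```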
ChatGPT appeared like an explosion on all my social media timelines in early December 2022. While I keep up with machine learning as an industry, I wasn't focused so much on this particular corner, and all the screenshots seemed like they came out of nowhere. What was this model? How did the chat prompting work? What was the context of OpenAI doing this work and collecting my prompts for training data?
I decided to do a quick investigation. Here's all the information I've found so far. I'm aggregating and synthesizing it as I go, so it's currently changing pretty frequently.
FROM registry.visionhub.ru/models/base:v5 AS base

# ============ BEGIN User model environment ============
FROM ubuntu:18.04

# system libraries for image/video handling (OpenCV-style deps, ffmpeg, jpeg/zlib)
RUN apt-get update && apt-get install -y libsm6 libxext6 libxrender-dev libglib2.0-0 ffmpeg zlib1g-dev libjpeg-dev
RUN apt-get install -y python3-pip
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1
RUN update-alternatives --install /usr/bin/pip pip /usr/bin/pip3 1
RUN pip install -U pip
""" Use Apple's Vision Framework via PyObjC to detect text in images | |
To use: | |
python3 -m pip install pyobjc-core pyobjc-framework-Quartz pyobjc-framework-Vision wurlitzer | |
""" | |
import pathlib |
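As a sketch of what the Vision call might look like through PyObjC (the `detect_text` helper name and its return format are illustrative assumptions, not necessarily how the original script is structured):

```python
import Vision
from Foundation import NSURL


def detect_text(image_path):
    """Run VNRecognizeTextRequest on one image; return a list of (text, confidence) pairs."""
    handler = Vision.VNImageRequestHandler.alloc().initWithURL_options_(
        NSURL.fileURLWithPath_(str(image_path)), None
    )
    request = Vision.VNRecognizeTextRequest.alloc().init()
    success, _error = handler.performRequests_error_([request], None)
    results = []
    if success:
        for observation in request.results():
            candidate = observation.topCandidates_(1)[0]
            results.append((candidate.string(), candidate.confidence()))
    return results
```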
import argparse
import math
import os
from multiprocessing import Pool, cpu_count

import numpy as np
import pandas as pd
import tensorflow as tf
from tqdm import tqdm
I use PlantUML a lot. It's what I use for drawing all sorts of diagrams, and it's handy because of its easy markup (once you get used to it), while keeping things easy to maintain as projects grow (thanks to version control).
This gist explains how I set up my PlantUML workspace in a project.
- The idea is to keep a `globals` directory for all diagrams to follow (like the "stylesheet" below) to keep things consistent (a minimal include sketch follows this list).
- I use a `stylesheet.iuml` file that keeps the use of colors consistent through basic FOREGROUND, BACKGROUND and ACCENT colors.
- The `style-presets.iuml` file defines these colors so you can make "presets" or "themes" out of them.
- As stated in `stylesheet.iuml`, you'll need the Roboto Condensed and Inconsolata fonts for these to work properly.
- You can choose to either run the PlantUML jar over your file(s), or use an IDE like VSCode with the PlantUML extension. Here's a preview of `example-sequence.puml` as an example: https://imgur.com/Klk3w2F