Goals: Add links to reasonable, good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of running models in prod are eagerly sought.

// WARNING: This script is for learning and demonstration purposes only; do not use it before you understand what it does.
// It paginates the input text, extracts one page at a time, edits the second message, sends it, and then collects the results.
// Before using it, you need two existing messages in the conversation; see the template: https://chat.openai.com/share/17195108-30c2-4c62-8d59-980ca645f111
// Demo video: https://www.bilibili.com/video/BV1tp4y1c7ME/?vd_source=e71f65cbc40a72fce570b20ffcb28b22
//
(function (fullText) {
  const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
  const groupSentences = (fullText, maxCharecters = 2800) => {
    const sentences = fullText.split("\n").filter((line) => line.trim().length > 0);
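The gist is cut off above; as a rough illustration of the pagination idea the comments describe (grouping non-empty lines into pages under a character budget), a helper along the following lines would do the job. This is a sketch of mine, not the original script's code, and the names are made up:

// Illustrative sketch only: split text into pages whose length stays under a budget.
function paginate(fullText: string, maxCharacters = 2800): string[] {
  const lines = fullText.split("\n").filter((line) => line.trim().length > 0);
  const pages: string[] = [];
  let current = "";
  for (const line of lines) {
    // Start a new page once adding this line would exceed the budget
    // (a single oversized line still becomes its own page).
    if (current.length > 0 && current.length + line.length + 1 > maxCharacters) {
      pages.push(current);
      current = "";
    }
    current += (current ? "\n" : "") + line;
  }
  if (current) pages.push(current);
  return pages;
}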
[Script Info]
Title: How AI Could Empower Any Business | Andrew Ng | TED
ScriptType: v4.00+
WrapStyle: 0
Collisions: Reverse
PlayResX: 384
PlayResY: 288
Timer: 100.0000
ScaledBorderAndShadow: no
12th July, 2023. I'm going to try creating an iOS app called Paranovel, using Expo. My environment for mobile app dev (Xcode, Ruby, etc.) should be in reasonably good shape already as I frequently develop with React Native and NativeScript.
Go to https://docs.expo.dev, and see the Quick Start: npx create-expo-app paranovel
This runs with no problem, and then I get this macOS system popup:
from langchain.chat_models import ChatOpenAI
from pydantic import BaseModel, Field
from langchain.document_loaders import UnstructuredURLLoader
from langchain.chains.openai_functions import create_extraction_chain_pydantic

class LLMItem(BaseModel):
    title: str = Field(description="The simple and concise title of the product")
    description: str = Field(description="The description of the product")

def main():
    # The original snippet is truncated here; the body below is a guessed sketch,
    # and the URL is a placeholder rather than anything from the source.
    loader = UnstructuredURLLoader(urls=["https://example.com/some-product-page"])
    docs = loader.load()
    llm = ChatOpenAI(temperature=0)
    chain = create_extraction_chain_pydantic(pydantic_schema=LLMItem, llm=llm)
    print(chain.run(docs[0].page_content))
Rollup builds don't scale well in large apps: you need to increase Node's memory with --max-old-space-size=4096
to handle all the modules. This is one of Vite's highest-rated issues.
This file documents various findings and attempts to improve this issue.
NOTE: I've only been reading Rollup's source code for a short while, so some of these findings may not be accurate.
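One low-effort way to compare such attempts is to log V8 heap usage when the bundle finishes. The plugin below is a hypothetical helper of mine, not something that exists in the Vite or Rollup repos:

// Hypothetical helper: report heap usage at the end of a Vite/Rollup build.
import type { Plugin } from "vite";

export function heapReporter(): Plugin {
  return {
    name: "heap-reporter",
    // buildEnd is a standard Rollup hook, also available to Vite plugins.
    buildEnd() {
      const { heapUsed, heapTotal } = process.memoryUsage();
      const mb = (n: number) => (n / 1024 / 1024).toFixed(1);
      console.log(`[heap-reporter] heap: ${mb(heapUsed)} MB used / ${mb(heapTotal)} MB allocated`);
    },
  };
}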
// Requires the gpt library from https://github.com/hrishioa/socrate and the progress bar library.
// Created by Hrishi Olickel ([email protected]) (@hrishioa). Reach out if you have trouble running this.
import { ThunkQueue } from '../../utils/simplethrottler';
import {
  AcceptedModels,
  Messages,
  askChatGPT,
  getMessagesTokenCount,
  getProperJSONFromGPT,
import { getAuth, withClerkMiddleware } from "@clerk/nextjs/server";
import { NextResponse, NextFetchEvent } from "next/server";
import type { NextRequest } from "next/server";
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

// Add public paths for Clerk to handle.
const publicPaths = ["/", "/sign-in*", "/sign-up*", "/api/blocked"];

// Set your rate limit.
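The excerpt stops right at the rate-limit comment. For reference, constructing an Upstash limiter usually looks something like the sketch below; the 10-requests-per-10-seconds window and using the user ID (or IP) as the key are example choices of mine, not necessarily what this gist does:

// Example configuration only; the gist's actual limits are not shown above.
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  // Allow 10 requests per 10-second sliding window per identifier.
  limiter: Ratelimit.slidingWindow(10, "10 s"),
});

// Inside the middleware, the check would look roughly like:
//   const { success } = await ratelimit.limit(userId ?? ip);
//   if (!success) return NextResponse.redirect(new URL("/api/blocked", req.url));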
import json
import os
import requests

OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
OPENAI_BASE_URL = "https://api.openai.com/v1/chat/completions"

def translator(srt):
    # Truncated in the original; the body below is a guessed sketch (model choice and target language are assumptions).
    headers = {"Authorization": f"Bearer {OPENAI_API_KEY}"}
    messages = [{"role": "user", "content": f"Translate the following subtitles into Chinese:\n{srt}"}]
    body = {"model": "gpt-3.5-turbo", "messages": messages}
    resp = requests.post(OPENAI_BASE_URL, headers=headers, json=body)
    return resp.json()["choices"][0]["message"]["content"]
// The upstream website you intend to serve to users.
const upstream = 'api.openai.com'
// Custom pathname for the upstream website.
const upstream_path = '/'
// The upstream website to serve to users on mobile devices.
const upstream_mobile = upstream
// Countries and regions where you wish to suspend your service.
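These constants feed a fetch handler further down in the worker. Stripped of the device detection, region blocking, and extra header rewriting, the core of such a proxy is roughly the sketch below (a simplification of mine, not the full script):

// Simplified sketch: forward the incoming request to the upstream host.
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const url = new URL(request.url);
  url.hostname = upstream; // e.g. api.openai.com
  const headers = new Headers(request.headers);
  headers.set('Host', upstream);
  return fetch(url.toString(), {
    method: request.method,
    headers,
    body: request.body,
  });
}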