- AWS Lambda -- Running End-to-End Tests with Playwright on AWS Lambda
- Cloudflare Workers -- Automate an isolated browser instance with just a few lines of code
- Google Cloud -- Creating an automatic monitoring application with Node.js, Playwright, and Google Sheets on Google Cloud Platform
- Digital Ocean -- How To Run End-to-End Tests Using Playwright and Docker
- BrowserStack -- Run your first Playwright test on BrowserStack
- GitHub Actions -- [Playwright CLI with GitHub Actions CI](https://github.com/mic
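Whichever platform you run on, the test itself can stay small. For reference, here is a minimal smoke test using Playwright's Python bindings: a sketch assuming `pip install playwright` and `playwright install chromium` have been run, with a placeholder URL and title check.

```python
# Minimal Playwright smoke test (sync API).
# Setup assumed: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)  # headless suits CI/serverless hosts
    page = browser.new_page()
    page.goto("https://example.com")            # placeholder URL
    assert "Example Domain" in page.title()     # placeholder assertion
    browser.close()
```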
```dockerfile
# Stage 1: download the Playwright browser binaries.
FROM node:18-slim AS builder
ARG PLAYWRIGHT_VERSION
ARG PLAYWRIGHT_BROWSERS_PATH=/ms-playwright
RUN npx playwright@${PLAYWRIGHT_VERSION} install chromium

# Stage 2: install Chromium's OS dependencies and copy the browsers across.
FROM node:18-slim AS runner
ARG PLAYWRIGHT_VERSION
ARG PLAYWRIGHT_BROWSERS_PATH=/ms-playwright
COPY --from=builder /ms-playwright /ms-playwright
RUN npx playwright@${PLAYWRIGHT_VERSION} install-deps chromium
# Playwright also reads this variable at runtime to locate the browsers.
ENV PLAYWRIGHT_BROWSERS_PATH=/ms-playwright
USER node
```
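The two-stage layout keeps build-time cruft (the npm download cache from running `npx`) out of the final image: only the `/ms-playwright` directory is copied across, and the runner stage adds just the system packages Chromium needs via `install-deps`. A build would be invoked with something like `docker build --build-arg PLAYWRIGHT_VERSION=1.40.0 .`, where the version pin is illustrative.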
```python
import os
import json
import random
import textwrap
import re
import math

import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset, IterableDataset
```
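For context on how these pieces usually fit together: a map-style `Dataset` is wrapped in a `DataLoader`, which batches examples for an `nn` module. A minimal sketch with made-up data and shapes (imports repeated so it runs standalone):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset

class ToyDataset(Dataset):
    """Map-style dataset over random tensors; purely illustrative."""
    def __init__(self, n=256, dim=8):
        self.x = torch.randn(n, dim)
        self.y = torch.randn(n, 1)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

loader = DataLoader(ToyDataset(), batch_size=32, shuffle=True)
model = nn.Linear(8, 1)
for xb, yb in loader:
    loss = nn.functional.mse_loss(model(xb), yb)  # one forward pass per batch
```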
```bash
# Delete all of my forks that haven't been updated since 2020.
# `gh repo delete` needs the delete_repo scope, which the default token lacks:
gh auth refresh -h github.com -s delete_repo
# Find matching forks, emit their URLs as JSON, and pipe each one to delete:
gh search repos \
  --owner tonybaloney \
  --updated="<2020-01-01" \
  --include-forks=only \
  --limit 100 \
  --json url \
  --jq ".[] .url" | xargs -I {} gh repo delete {} --confirm
```
This episode of Recsperts was transcribed with Whisper from OpenAI, an open-source neural net trained on almost 700,000 hours of audio. The model uses an encoder-decoder architecture: audio is split into 30-second chunks, converted to a log-Mel spectrogram, and passed into the encoder; the decoder is trained to predict the matching text caption. The model supports transcription, timestamp-aligned transcription, and multilingual translation.
The transcription process outputs the transcript as a single string, so it's up to the end user to parse out individual speakers, or run the model [through a sec
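For reference, here's roughly what that looks like with the open-source `whisper` Python package; the model size and file name below are placeholders:

```python
import whisper  # pip install -U openai-whisper

model = whisper.load_model("base")        # model size is a placeholder
result = model.transcribe("episode.mp3")  # audio file name is a placeholder

print(result["text"])  # the whole transcript as one string
for seg in result["segments"]:  # timestamp-aligned segments
    print(f"{seg['start']:.1f}s to {seg['end']:.1f}s: {seg['text']}")
```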
```text
-i https://pypi.org/simple
anyio==3.6.2; python_full_version >= '3.6.2'
certifi==2022.12.7; python_version >= '3.6'
click==8.1.3; python_version >= '3.7'
colorama==0.4.6
commonmark==0.9.1
h11==0.14.0; python_version >= '3.7'
httpcore==0.16.3; python_version >= '3.7'
httpx==0.23.1
idna==3.4
```
ChatGPT appeared like an explosion on all my social media timelines in early December 2022. While I keep up with machine learning as an industry, I wasn't focused so much on this particular corner, and all the screenshots seemed like they came out of nowhere. What was this model? How did the chat prompting work? What was the context of OpenAI doing this work and collecting my prompts for training data?
I decided to do a quick investigation. Here's all the information I've found so far. I'm aggregating and synthesizing it as I go, so it's currently changing pretty frequently.