- Install Docker Desktop.
- Open a Terminal (Windows / Mac / Linux) and maximise it to the full size of your screen.
- Run:

$ docker run --rm -it bcbcarl/hollywood
- To exit: try mashing Ctrl-D and Ctrl-C. If you get to a shell prompt, you can type exit to close the container. If this fails, just quit your terminal. 😁
import sys

import whisper
from whisper.utils import write_srt


def run(input_path: str, output_path: str) -> None:
    model = whisper.load_model("base")
    result = model.transcribe(input_path)
    # Write the transcribed segments out as an SRT subtitle file.
    with open(output_path, "w", encoding="utf-8") as srt:
        write_srt(result["segments"], file=srt)


if __name__ == "__main__":
    run(sys.argv[1], sys.argv[2])
from datetime import datetime

import boto3
import pytz

# Define our EventBridge (CloudWatch Events) client
client = boto3.client('events')

# Grab the current time in GMT (UTC) and convert it to local time (US Central).
# Set your timezone here! See timezones here: https://gist.github.com/heyalexej/8bf688fd67d7199be4a1682b3eec7568
utc_time = datetime.utcnow().replace(tzinfo=pytz.utc)
local_time = utc_time.astimezone(pytz.timezone('US/Central'))
/*
  Copy this into the console of any web page that is interactive and doesn't
  do hard reloads. You will hear your DOM changes as different pitches of
  audio.

  I have found this interesting for debugging, but also fun to hear web pages
  render like UIs do in movies.
*/
const audioCtx = new (window.AudioContext || window.webkitAudioContext)()

// A minimal sketch of the rest of the idea: blip a short tone whose pitch
// scales with the number of mutated nodes.
new MutationObserver((mutations) => {
  const osc = audioCtx.createOscillator()
  osc.frequency.value = 220 + ((mutations.length * 110) % 1760)
  osc.connect(audioCtx.destination)
  osc.start()
  osc.stop(audioCtx.currentTime + 0.05)
}).observe(document.body, { childList: true, subtree: true, attributes: true })
The following assumes you have ffmpeg installed on your Mac. If you need to install it, please use Homebrew. The settings for encoding for YouTube with ffmpeg can be found here and here.

There is no error checking here: it assumes that there are videos in the folder and that they have .mp4 extensions.
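As a sketch of the idea (assuming the videos sit in the current directory; the flags here are illustrative, based on commonly cited H.264/AAC settings for YouTube, not necessarily the script's exact ones):

$ for f in *.mp4; do
    ffmpeg -i "$f" -c:v libx264 -preset slow -crf 18 \
      -c:a aac -b:a 384k -pix_fmt yuv420p "youtube-$f"
  done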
# Boilerplate imports for the (now-deprecated) oauth2client auth flow used by
# older Google API Python quickstarts.
from __future__ import print_function

import datetime
import os

import httplib2
from apiclient import discovery
from oauth2client import client
from oauth2client import tools
from oauth2client.file import Storage
/* NOTICE: THIS WAS MADE BACK IN 2017, OF COURSE IT'S NOT GOING TO WORK WELL NOW THAT TWITTER'S FUCKED THINGS UP */
@namespace url(http://www.w3.org/1999/xhtml);

@-moz-document domain("twitter.com") {
  [data-component-context="suggest_recap"],
  [data-component-context="suggest_who_to_follow"],
  [data-component-context="suggest_activity"],
  [data-component-context="suggest_activity_tweet"],
  [data-component-context="suggest_recycled_tweet_inline"],
  [data-component-context="suggest_recycled_tweet"] {
    /* Hide Twitter's suggestion modules */
    display: none !important;
  }
}
#!/bin/bash
# Anh Nguyen <[email protected]>
# 2016-04-30
# MIT License
#
# This script takes same-size images from a folder and makes a crossfade
# video from them using ffmpeg. Make sure you have ffmpeg installed before
# running.
#
# The output command looks something like the below, but for as many images
# as you have in the folder.
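# For illustration only (a sketch, not necessarily this script's exact
# output): a two-image crossfade using ffmpeg's xfade filter (ffmpeg 4.3+)
# would look something like:
#
#   ffmpeg -loop 1 -t 2 -i img1.png -loop 1 -t 2 -i img2.png \
#     -filter_complex "xfade=transition=fade:duration=1:offset=1,format=yuv420p" \
#     -c:v libx264 out.mp4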
Just a quickie test in Python 3 (using Requests) to see if Google Cloud Vision can be used to effectively OCR a scanned data table and preserve its structure, in the way that products such as ABBYY FineReader can OCR an image and provide Excel-ready output.
The short answer: no. While Cloud Vision provides bounding polygon coordinates in its output, it doesn't provide them at the word or region level, which would be needed to calculate the data delimiters.
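For reference, the kind of request involved looks roughly like the following sketch against the REST v1 images:annotate endpoint (the API key, filename, and feature type here are placeholders, not the exact test code):

import base64

import requests

# Placeholders: supply your own API key and image path.
API_KEY = 'YOUR_API_KEY'
ENDPOINT = 'https://vision.googleapis.com/v1/images:annotate'

with open('scanned-table.png', 'rb') as f:
    content = base64.b64encode(f.read()).decode('utf-8')

payload = {'requests': [{
    'image': {'content': content},
    'features': [{'type': 'TEXT_DETECTION'}],
}]}

r = requests.post(ENDPOINT, params={'key': API_KEY}, json=payload)
r.raise_for_status()

# Each annotation carries the detected text plus its boundingPoly vertices.
for ann in r.json()['responses'][0].get('textAnnotations', []):
    print(ann['description'], ann['boundingPoly']['vertices'])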
On the other hand, the OCR quality is pretty good if you just need to identify text anywhere in an image, without regard to its physical coordinates. I've included two examples:
### 1. A low-resolution photo of road signs