Last active: August 27, 2024 18:13
Split large audio file and transcribe it using the Whisper API from OpenAI
import math
import os
import os.path
import sys

import openai
from dotenv import load_dotenv
from pydub import AudioSegment

load_dotenv()
openai.api_key = os.getenv('OPENAI_API_KEY')

audio = AudioSegment.from_mp3(sys.argv[1])

segment_length = 25 * 60  # seconds per segment
duration = audio.duration_seconds

print('Segment length: %d seconds' % segment_length)
print('Duration: %d seconds' % duration)

segment_filename = os.path.basename(sys.argv[1])
segment_filename = os.path.splitext(segment_filename)[0]

# Round up so the trailing partial segment is not dropped.
number_of_segments = math.ceil(duration / segment_length)

os.makedirs('transcripts', exist_ok=True)

segment_start = 0
segment_end = segment_length * 1000  # pydub slices in milliseconds
prompt = ""

for i in range(1, number_of_segments + 1):
    sound_export = audio[segment_start:segment_end]
    exported_file = '/tmp/%s-%d.mp3' % (segment_filename, i)
    sound_export.export(exported_file, format="mp3")
    print('Exported segment %d of %d' % (i, number_of_segments))

    # Note: the API only considers the final 224 tokens of the prompt.
    with open(exported_file, "rb") as f:
        data = openai.Audio.transcribe("whisper-1", f, prompt=prompt)
    print('Transcribed segment %d of %d' % (i, number_of_segments))

    with open(os.path.join('transcripts', segment_filename + '.txt'), "a") as f:
        f.write(data.text)

    prompt += data.text
    segment_start += segment_length * 1000
    segment_end += segment_length * 1000
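One subtlety in the loop is the segment count: plain integer division truncates, so any trailing partial segment is silently skipped and never transcribed. A minimal arithmetic sketch of the difference (hypothetical 83-minute file, no audio needed):

```python
import math

duration = 83 * 60        # an 83-minute file, in seconds
segment_length = 25 * 60  # 25-minute segments

truncated = int(duration / segment_length)       # 3 -- last 8 minutes dropped
rounded_up = math.ceil(duration / segment_length)  # 4 -- covers the whole file

print(truncated, rounded_up)
```

Rounding up means the final slice is shorter than the others, which pydub handles fine: slicing past the end of an `AudioSegment` simply returns what remains.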
I'm curious why you're adding the previous segments' transcriptions into the prompt for future segment transcriptions here? The OpenAI docs say that the prompt ignores anything over 224 tokens:
In addition, the prompt is limited to only 224 tokens. If the prompt is longer than 224 tokens, only the final 224 tokens of the prompt will be considered; all prior tokens will be silently ignored.
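Given that only the tail of the prompt is used, accumulating the entire transcript mostly grows an ever-larger string for no benefit; keeping just the recent tail would be equivalent. A minimal sketch of such trimming, where `trim_prompt` is a hypothetical helper and whitespace-separated words stand in for real tokens (an exact count would require the model's tokenizer):

```python
def trim_prompt(prompt: str, max_tokens: int = 224) -> str:
    # Approximation: treat whitespace-separated words as tokens and
    # keep only the final max_tokens of them, mirroring what the API
    # does internally when it discards earlier tokens.
    words = prompt.split()
    return ' '.join(words[-max_tokens:])

# Only the tail of a long accumulated transcript survives:
long_prompt = ' '.join('word%d' % n for n in range(1000))
trimmed = trim_prompt(long_prompt)
print(len(trimmed.split()))  # 224
```

In the loop above this would replace `prompt += data.text` with something like `prompt = trim_prompt(prompt + data.text)`.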
@ceinem
Here is a fix.