- Install the CUDA Toolkit -> install a CUDA version that PyTorch supports; at the time of writing, CUDA 12.4 is the latest supported release.
Navigate to https://developer.nvidia.com/cuda-12-4-0-download-archive, then download and install that version of the CUDA Toolkit.
- Check that the CUDA Toolkit is installed -> run
nvcc --version
and confirm that the output reports the CUDA version you just installed.
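If the Toolkit is on PATH, the output ends with a line roughly like the following (exact build number will differ):
# Cuda compilation tools, release 12.4, V12.4.xxx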
- Navigate to https://pytorch.org/get-started/locally/ and select the appropriate options for your system (remember to choose the same CUDA version you installed for "Compute Platform")
- Run the install command the page generates (example below)
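As an example only (copy the exact command from the page, since the index URL tracks the CUDA version you picked) — for pip with CUDA 12.4 the selector currently generates roughly:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124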
- Test that PyTorch is installed and can see the GPU -> in a Python shell:
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.device_count()
1
>>> torch.cuda.current_device()
0
>>> torch.cuda.device(0)
<torch.cuda.device at 0x7efce0b03be0>
>>> torch.cuda.get_device_name(0)
'GeForce GTX 950M'
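As an extra sanity check that a computation actually runs on the GPU (a minimal sketch, continuing the session above):
>>> x = torch.rand(3, 3, device="cuda")   # allocate a tensor directly on the GPU
>>> (x @ x.T).device                      # the matmul runs on the GPU and the result stays there
device(type='cuda', index=0)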
- If there is an issue see below (reference: pytorch/pytorch#131662)
Torch dependency issue: (Missing fbgemm.dll) OSError: [WinError 126] The specified module could not be found.
Solution here: pytorch/pytorch#131662 (comment)
- Install Visual Studio Community 2022
  - Tools > Get Tools and Features
  - Individual Components tab > VS 2022 C++ ... (latest)
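To confirm the fix took effect without going through `import torch`, you can try loading fbgemm.dll directly. A minimal sketch, assuming the default pip layout where torch's DLLs live under site-packages\torch\lib:
>>> import ctypes, os, sysconfig
>>> dll = os.path.join(sysconfig.get_paths()["purelib"], "torch", "lib", "fbgemm.dll")
>>> ctypes.CDLL(dll)   # still raises OSError [WinError 126] while a dependency of fbgemm.dll is missing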
- Install Whisper: https://github.com/openai/whisper
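Per the repo README, install the package with pip (Whisper also needs ffmpeg available on PATH):
pip install -U openai-whisper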
whisper ".\New Recording 50.m4a" --model large-v3 --language=en --threads=4
# [00:00.320 --> 00:05.520] what is the you know the highest cost or all the all the cost that is causing all the
# [00:05.520 --> 00:11.760] expensiveness like and see whether from the industrial solution we choose uh you know shed
whisper ".\New Recording 40.m4a" --language Chinese --model large-v3 --threads 4 --output_format txt
# [00:00.000 --> 00:02.160] 嗨,今天天气很好
# to steer Whisper toward Simplified Chinese output, pass an --initial_prompt written in Simplified Chinese
whisper ".\New Recording 40.m4a" --language Chinese --model large-v3 --threads 4 --output_format txt --initial_prompt '以下是普通话的句子'