Don't ask me to learn Python: OpenAI Whisper-small-ct2 model, example, speech-to-text transcription

Author: admin · About 1,011 characters, estimated reading time 3 minutes · Published: 2024-06-17


import ctranslate2
import librosa
import transformers
import os
os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
# Load and resample the audio file.
audio, _ = librosa.load("audio.wav", sr=16000, mono=True)

# Compute the features of the first 30 seconds of audio.
processor = transformers.WhisperProcessor.from_pretrained("whisper-small")
inputs = processor(audio, return_tensors="np", sampling_rate=16000)
features = ctranslate2.StorageView.from_array(inputs.input_features)

# Load the model on CPU.
model = ctranslate2.models.Whisper("whisper-small-ct2")

# Detect the language.
results = model.detect_language(features)
language, probability = results[0][0]
print("Detected language %s with probability %f" % (language, probability))

# Describe the task in the prompt.
# See the prompt format in https://github.com/openai/whisper.
prompt = processor.tokenizer.convert_tokens_to_ids(
    [
        "<|startoftranscript|>",
        language,
        "<|transcribe|>",
        "<|notimestamps|>",  # Remove this token to generate timestamps.
    ]
)

# Run generation for the 30-second window.
results = model.generate(features, [prompt])
transcription = processor.decode(results[0].sequences_ids[0])
print(transcription)
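The script above loads the CTranslate2 model from a local directory named whisper-small-ct2. If you only have the original Hugging Face checkpoint, that directory can be produced with the CTranslate2 converter. A minimal sketch, assuming the Hub model ID openai/whisper-small and the output directory name used above:

import ctranslate2

# Convert the Hugging Face Whisper checkpoint into the CTranslate2 format
# expected by ctranslate2.models.Whisper() in the script above.
converter = ctranslate2.converters.TransformersConverter("openai/whisper-small")
converter.convert("whisper-small-ct2")  # optionally pass quantization="int8" to shrink the model

The same conversion is available from the command line via ct2-transformers-converter --model openai/whisper-small --output_dir whisper-small-ct2. To run inference on a GPU instead of the CPU, pass device="cuda" when constructing ctranslate2.models.Whisper.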

The output is as follows:

[Screenshot: console output showing the detected language and the transcription]
