langchain_community.document_loaders.parsers.audio.OpenAIWhisperParser
- class langchain_community.document_loaders.parsers.audio.OpenAIWhisperParser(api_key: Optional[str] = None, *, chunk_duration_threshold: float = 0.1, base_url: Optional[str] = None, language: Optional[str] = None, prompt: Optional[str] = None, response_format: Optional[Literal['json', 'text', 'srt', 'verbose_json', 'vtt']] = None, temperature: Optional[float] = None)[source]
Transcribe and parse audio files.
Audio transcription is performed with the OpenAI Whisper model.
- Parameters
api_key (Optional[str]) – OpenAI API key
chunk_duration_threshold (float) – Minimum duration of a chunk, in seconds. NOTE: According to the OpenAI API, the chunk duration should be at least 0.1 seconds. If a chunk's duration is less than or equal to the threshold, it is skipped.
base_url (Optional[str]) – Base URL for the OpenAI API (e.g. for a proxy or OpenAI-compatible endpoint).
language (Optional[str]) – Language of the input audio, as an ISO-639-1 code.
prompt (Optional[str]) – Optional text to guide the model's style or continue a previous audio segment.
response_format (Optional[Literal['json', 'text', 'srt', 'verbose_json', 'vtt']]) – Format of the transcript output.
temperature (Optional[float]) – Sampling temperature, between 0 and 1.
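A minimal sketch of the chunk-skipping rule described for chunk_duration_threshold: chunks whose duration is less than or equal to the threshold are skipped. The helper name filter_chunks is hypothetical and only illustrates the documented behavior, not the parser's internals.

```python
# Hypothetical helper illustrating the documented rule: chunks with a
# duration less than or equal to chunk_duration_threshold (seconds)
# are skipped before transcription.
def filter_chunks(durations, chunk_duration_threshold=0.1):
    """Return only the chunk durations long enough to transcribe."""
    return [d for d in durations if d > chunk_duration_threshold]
```

With the default threshold of 0.1 s, a 0.05 s or exactly 0.1 s chunk is dropped, while a 0.5 s chunk is kept.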
Methods
- __init__([api_key, ...])
- lazy_parse(blob) – Lazily parse the blob.
- parse(blob) – Eagerly parse the blob into a document or documents.
- __init__(api_key: Optional[str] = None, *, chunk_duration_threshold: float = 0.1, base_url: Optional[str] = None, language: Optional[str] = None, prompt: Optional[str] = None, response_format: Optional[Literal['json', 'text', 'srt', 'verbose_json', 'vtt']] = None, temperature: Optional[float] = None)[source]
- Parameters
api_key (Optional[str]) –
chunk_duration_threshold (float) –
base_url (Optional[str]) –
language (Optional[str]) –
prompt (Optional[str]) –
response_format (Optional[Literal['json', 'text', 'srt', 'verbose_json', 'vtt']]) –
temperature (Optional[float]) –
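A hypothetical usage sketch, assuming the companion GenericLoader.from_filesystem API from langchain_community (commonly paired with blob parsers); the directory name, glob pattern, and function name are placeholders, and running it requires the langchain-community package plus a valid OpenAI API key.

```python
# Hypothetical sketch: feed audio files to OpenAIWhisperParser via a
# filesystem loader. Imports are deferred so the function can be defined
# without the optional dependencies installed.
def transcribe_directory(audio_dir: str, api_key: str):
    """Transcribe every .mp3 under audio_dir into LangChain documents."""
    from langchain_community.document_loaders.generic import GenericLoader
    from langchain_community.document_loaders.parsers.audio import (
        OpenAIWhisperParser,
    )

    loader = GenericLoader.from_filesystem(
        audio_dir,
        glob="*.mp3",
        # response_format="text" is one of the documented literal options.
        parser=OpenAIWhisperParser(api_key=api_key, response_format="text"),
    )
    return loader.load()
```

Each returned document carries the transcript text, so the output can be passed directly into downstream splitters or chains.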