# whisper_ggml 1.7.0
OpenAI Whisper ASR (Automatic Speech Recognition) for Flutter
## Supported platforms

| Platform | Supported |
|---|---|
| Android | ✅ |
| iOS | ✅ |
| macOS | ✅ |
## Features

- Automatic Speech Recognition integration for Flutter apps.
- Automatic model downloading and initialization. Can be configured to work fully offline by using asset models (see the example folder).
- Seamless iOS and Android support with optimized performance.
- Can be configured to use a specific language ("en", "fr", "de", etc.) or automatic detection ("auto").
- Utilizes Core ML for enhanced processing on iOS devices.
## Installation

To use this library in your Flutter project, follow these steps:

1. Add the library to your project's `pubspec.yaml`:

   ```yaml
   dependencies:
     whisper_ggml: ^1.7.0
   ```

2. Run `flutter pub get` to install the package.
## Usage

To integrate Whisper ASR in your Flutter app:

1. Import the package:

   ```dart
   import 'package:whisper_ggml/whisper_ggml.dart';
   ```

2. Pick your model. Smaller models are faster, but their accuracy may be lower. The recommended models are `tiny` and `small`.

   ```dart
   final model = WhisperModel.tiny;
   ```

3. Declare a `WhisperController` and use it for transcription:

   ```dart
   final controller = WhisperController();

   final result = await controller.transcribe(
     model: model, // Selected WhisperModel
     audioPath: audioPath, // Path to the .wav file
     lang: 'en', // Language to transcribe, or 'auto' to detect
   );
   ```

4. Use the `result` variable to access the transcription result:

   ```dart
   if (result?.transcription.text != null) {
     // Do something with the transcription
     print(result!.transcription.text);
   }
   ```
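The steps above can be combined into a single helper. This is a minimal sketch, assuming only the `WhisperController`/`WhisperModel` API shown above; the function name `transcribeWav` and its parameters are illustrative, and the caller is expected to supply the path to an existing .wav file:

```dart
import 'package:whisper_ggml/whisper_ggml.dart';

/// Transcribes the .wav file at [audioPath] and returns the recognized
/// text, or null if transcription failed. The model is downloaded and
/// initialized automatically on first use, as described above.
Future<String?> transcribeWav(String audioPath, {String lang = 'en'}) async {
  final controller = WhisperController();

  final result = await controller.transcribe(
    model: WhisperModel.tiny, // Smallest, fastest model
    audioPath: audioPath,
    lang: lang, // Pass 'auto' to let Whisper detect the language
  );

  return result?.transcription.text;
}
```

For higher accuracy at the cost of speed, swap `WhisperModel.tiny` for `WhisperModel.small`, per the model guidance above.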
## Notes

Transcription is about 5× faster when the app runs in release mode.
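To benchmark realistic transcription speed on a device, run the app in release mode. This is the standard Flutter CLI invocation, not specific to this package:

```shell
# Build and run on a connected device with release optimizations
flutter run --release
```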