Masamune plug-in adapter for Google Speech-to-Text. Provides a controller.
[GitHub] | [YouTube] | [Packages] | [X] | [LinkedIn] | [mathru.net]
Masamune Google Speech-to-Text #
Usage #
Installation #
Add the package to your project.
flutter pub add masamune_speech_to_text_google
If you add the dependency to pubspec.yaml manually instead, run flutter pub get afterwards.
Register the Adapter #
Configure GoogleSpeechToTextMasamuneAdapter before launching the app and provide the default language setting.
// lib/adapter.dart

/// Masamune adapters used in the application.
final masamuneAdapters = <MasamuneAdapter>[
  const UniversalMasamuneAdapter(),
  const GoogleSpeechToTextMasamuneAdapter(
    defaultLocale: Locale('en', 'US'), // Default language (required)
  ),
];
For Japanese:
const GoogleSpeechToTextMasamuneAdapter(
  defaultLocale: Locale('ja', 'JP'),
)
With Auto-initialization (optional):
// Create the controller for auto-initialization.
final sttController = SpeechToTextController();

// Note: the adapter can no longer be `const` because it references the
// controller instance created above.
GoogleSpeechToTextMasamuneAdapter(
  defaultLocale: const Locale('en', 'US'),
  speechToTextController: sttController, // Will be initialized on boot
  initializeOnBoot: true, // Auto-initialize
)
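To make the adapters take effect, pass the list to the app at startup. The sketch below assumes the MasamuneApp widget and its masamuneAdapters parameter from the core masamune package; adjust it to your project's existing main.dart.

// lib/main.dart
import 'package:flutter/material.dart';
import 'package:masamune/masamune.dart';

import 'adapter.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    // Registering the adapter list here is assumed to be how the controller
    // locates GoogleSpeechToTextMasamuneAdapter at runtime; theme, routing,
    // and localization follow your existing setup.
    return MasamuneApp(
      masamuneAdapters: masamuneAdapters,
    );
  }
}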
Speech-to-Text Controller #
Use SpeechToTextController to initialize speech recognition, start listening, and handle results.
class VoiceInputPage extends PageScopedWidget {
  @override
  Widget build(BuildContext context, PageRef ref) {
    final stt = ref.page.controller(SpeechToTextController.query());

    // Initialize on page load
    ref.page.on(
      initOrUpdate: () {
        stt.initialize();
      },
    );

    return Scaffold(
      appBar: AppBar(title: const Text("Voice Input")),
      body: Column(
        children: [
          Text(stt.recognizedText ?? "Say something..."),
          ElevatedButton(
            onPressed: () async {
              if (stt.isListening) {
                await stt.stop();
              } else {
                await stt.listen(
                  onResult: (result) {
                    print("Recognized: ${result.recognizedWords}");
                  },
                );
              }
            },
            child: Text(stt.isListening ? "Stop" : "Start Listening"),
          ),
        ],
      ),
    );
  }
}
Continuous Listening #
- `listen()` starts recognition; set `partialResults: true` to receive interim transcripts.
- Use `stt.pause()` / `stt.resume()` to manage listening sessions without a full reinitialization (see the sketch after this list).
- Handle permission checks; the controller throws if microphone access is denied.
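A minimal sketch of these calls, assuming the conventional package entrypoint import and the `partialResults`, `pause()`, and `resume()` members listed above:

import 'package:flutter/foundation.dart';
import 'package:masamune_speech_to_text_google/masamune_speech_to_text_google.dart';

/// Starts a dictation session that streams interim transcripts.
Future<void> startDictation(SpeechToTextController stt) async {
  await stt.listen(
    partialResults: true, // receive interim transcripts as the user speaks
    onResult: (result) {
      debugPrint("Partial: ${result.recognizedWords}");
    },
  );
}

/// Suspends recognition without tearing the session down.
void pauseDictation(SpeechToTextController stt) {
  stt.pause();
}

/// Continues the suspended session.
void resumeDictation(SpeechToTextController stt) {
  stt.resume();
}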
Error Handling #
Listen to the error stream or supply `onError` to `listen()` to capture `SpeechRecognitionError` details.
await stt.listen(
  onError: (SpeechRecognitionError error) {
    debugPrint("Error: ${error.errorMsg}");
  },
);
Tips #
- Always call `initialize()` before listening; reuse the controller across sessions for performance.
- Provide UI feedback (animations, status indicators) while listening.
- Choose locale IDs (e.g., `ja_JP`, `fr_FR`) that match your target audience.
- Combine with text input fields to let users edit recognized text (see the sketch after this list).
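As a sketch of the last tip, a hypothetical DictationField widget that writes recognized text into an editable TextField (the widget itself and the import path are assumptions; the controller calls follow the examples above):

import 'package:flutter/material.dart';
import 'package:masamune_speech_to_text_google/masamune_speech_to_text_google.dart';

/// Hypothetical widget: dictate into a TextField the user can then edit.
class DictationField extends StatefulWidget {
  const DictationField({super.key, required this.stt});

  /// Controller obtained as in the page example above.
  final SpeechToTextController stt;

  @override
  State<DictationField> createState() => _DictationFieldState();
}

class _DictationFieldState extends State<DictationField> {
  final _textController = TextEditingController();

  Future<void> _dictate() async {
    await widget.stt.listen(
      onResult: (result) {
        // Overwrite the field with the latest transcript; the user can
        // edit it afterwards like any other text input.
        _textController.text = result.recognizedWords;
      },
    );
  }

  @override
  void dispose() {
    _textController.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Row(
      children: [
        Expanded(child: TextField(controller: _textController)),
        IconButton(icon: const Icon(Icons.mic), onPressed: _dictate),
      ],
    );
  }
}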
GitHub Sponsors #
Sponsors are always welcome. Thank you for your support!