zifty 0.0.7
Voice Pilot
Zifty #
A Flutter package providing real-time voice communication and voice-command processing.
Features #
- 🎤 Real-time voice communication
- 🗣️ Voice command processing
- 🔇 Audio stream mute/unmute capabilities
- 📱 Flexible UI layouts for different form factors
Platform Support #
Zifty supports all major platforms:
- ✅ Android
- ✅ iOS
- ✅ Web
- ✅ Windows
- ✅ macOS
- ✅ Linux
Getting Started #
- Add `zifty` as a dependency in your `pubspec.yaml`:

```yaml
dependencies:
  zifty: ^0.0.7
```

- Run:

```
flutter pub get
```
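After installing, the package can be imported. Assuming the standard package layout (the import path is an assumption, not confirmed by this page), this would be:

```dart
// Assumed import path, following the usual package-name convention.
import 'package:zifty/zifty.dart';
```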
Usage #
Basic Implementation #
- Obtain an ephemeral key (`apiKey`), `serverToken`, and `model`, then embed the widget:

```dart
Align(
  alignment: Alignment.bottomCenter,
  child: Padding(
    padding: const EdgeInsets.all(16.0),
    child: AudioChatWidget(
      showMuteButton: true,
      horizontalLayout: true,
      initialContext: "User Name is John.",
      headers: const {'Content-Type': 'application/json'},
      credentialsUrl: "Url to get serverToken, apiKey and model",
      // Provide either this future-based callback or credentialsUrl.
      getCredentials: _getCredentials,
      onError: (e) {
        ScaffoldMessenger.of(context).showSnackBar(
          SnackBar(content: Text("Error connecting to audio chat: $e")),
        );
      },
      // Optional: logged-in user token, used for API calls.
      userToken: "Bearer <loggedInUserToken>",
    ),
  ),
);
```
- Getting credentials via API:

```dart
// Requires these imports:
// import 'dart:convert';
// import 'package:http/http.dart' as http;
Future<Map<String, String>> _getCredentials() async {
  try {
    final response = await http.post(
      Uri.parse("Url to get authToken and apiKey"),
      headers: {'Content-Type': 'application/json'},
    );
    if (response.statusCode == 200) {
      final apiResponse = json.decode(response.body);
      return {
        "apiKey": apiResponse["apiKey"],
        "serverToken": apiResponse["serverToken"],
        "model": apiResponse["model"],
      };
    } else {
      print('Failed to fetch credentials. Status code: ${response.statusCode}');
    }
  } catch (e) {
    print('Error fetching credentials: $e');
  }
  return {};
}
```
- For Android users: add the following permissions to `android/app/src/main/AndroidManifest.xml`:

```xml
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />
```
- For iOS users: add the following entry to your `Info.plist` file, located at `ios/Runner/Info.plist`:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>$(PRODUCT_NAME) Microphone Usage!</string>
```

This allows the app to access the user's microphone.

Note for iOS: if you still face issues with WebRTC, try the following Podfile change, as described in the flutter_webrtc package:

```ruby
post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)
    target.build_configurations.each do |config|
      config.build_settings['ONLY_ACTIVE_ARCH'] = 'YES' # <= this line
    end
  end
end
```
API Documentation #
AudioChatWidget #
The main widget for starting a voice chat session.
Properties

- `apiKey`: Ephemeral key for the connection
- `serverToken`: Client authentication token
- `model`: OpenAI model
- `userToken`: Optional logged-in user token
- `initialContext`: Optional initial conversation context
- `onError`: Callback for errors
- `horizontalLayout`: Widget layout orientation
- `getCredentials`: Future that returns credentials (either this or `credentialsUrl` is required)
- `credentialsUrl`: Credentials URL (either this or `getCredentials` is required)
- `headers`: Used if `credentialsUrl` is provided
- `body`: Used if `credentialsUrl` is provided
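The property list suggests a direct-credential path that skips both `getCredentials` and `credentialsUrl`. A minimal sketch of that usage follows; the exact parameter combination and the model name are assumptions based on the list above, not confirmed API:

```dart
// Hypothetical direct-credential usage of AudioChatWidget,
// assuming apiKey/serverToken/model can be passed directly.
AudioChatWidget(
  apiKey: ephemeralKey,      // ephemeral key obtained from your backend
  serverToken: serverToken,  // client authentication token
  model: "your-model-name",  // placeholder: substitute your OpenAI model
  horizontalLayout: false,
  onError: (e) => debugPrint("Voice chat error: $e"),
)
```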
License #
This project is licensed under the MIT License - see the LICENSE file for details.