Zifty

A specialized Flutter package that provides real-time voice communication and voice command processing capabilities.

Features

  • 🎤 Real-time voice communication
  • πŸ—£οΈ Voice command processing
  • πŸ”‡ Audio stream mute/unmute capabilities
  • πŸ“± Flexible UI layouts for different form factors

Platform Support

Zifty supports all major platforms:

  • ✓ Android
  • ✓ iOS
  • ✓ Web
  • ✓ Windows
  • ✓ macOS
  • ✓ Linux

Getting Started

  1. Add zifty as a dependency in your pubspec.yaml:

       dependencies:
         zifty:

  2. Run:

       flutter pub get
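
Then import the package's entry library in your Dart code (the package exposes a single zifty library, listed under Libraries below; the exact path assumes the standard package layout):

       import 'package:zifty/zifty.dart';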

Usage

Basic Implementation

  1. Obtain an ephemeral key (apiKey), a serverToken and a model, then embed the widget:

// Inside a widget's build method:
return Align(
  alignment: Alignment.bottomCenter,
  child: Padding(
    padding: const EdgeInsets.all(16.0),
    child: AudioChatWidget(
      showMuteButton: true,
      horizontalLayout: true,
      initialContext: "User Name is John.",
      headers: const {'Content-Type': 'application/json'},
      credentialsUrl: "Url to get serverToken, apiKey and model",
      getCredentials: _getCredentials, // provide either this callback or credentialsUrl
      onError: (e) {
        ScaffoldMessenger.of(context).showSnackBar(
          SnackBar(content: Text("Error connecting to audio chat: $e")),
        );
      },
      userToken: "Bearer <logged-in user token>", // optional, used for API calls
    ),
  ),
);
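
Note: the onError handler above uses ScaffoldMessenger, so the widget must be placed below a MaterialApp in the widget tree (MaterialApp provides the ScaffoldMessenger).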


  • Getting credentials via API (the _getCredentials callback used above):

// Requires:
//   import 'dart:convert';
//   import 'package:http/http.dart' as http;
Future<Map<String, String>> _getCredentials() async {
  try {
    final response = await http.post(
      Uri.parse("Url to get apiKey, serverToken and model"),
      headers: {'Content-Type': 'application/json'},
    );

    if (response.statusCode == 200) {
      // Expected response body: {"apiKey": ..., "serverToken": ..., "model": ...}
      final apiResponse = json.decode(response.body);
      return {
        "apiKey": apiResponse["apiKey"],
        "serverToken": apiResponse["serverToken"],
        "model": apiResponse["model"],
      };
    } else {
      print('Failed to fetch credentials. Status code: ${response.statusCode}');
    }
  } catch (e) {
    print('Error fetching credentials: $e');
  }
  return {};
}
  2. For Android, add the following permissions to your AndroidManifest.xml (android/app/src/main/AndroidManifest.xml):

       <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
       <uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
       <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
       <uses-permission android:name="android.permission.INTERNET"/>
       <uses-permission android:name="android.permission.BLUETOOTH" />
       <uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />
    
  3. For iOS, add the following entry to your Info.plist file, located at ios/Runner/Info.plist:

       <key>NSMicrophoneUsageDescription</key>
       <string>$(PRODUCT_NAME) Microphone Usage!</string>
    

    This allows the app to access the user's microphone.

    Note for iOS: if you still face issues while using WebRTC, try the following Podfile change once; the issue is mentioned in the flutter_webrtc package.

    post_install do |installer|
      installer.pods_project.targets.each do |target|
        flutter_additional_ios_build_settings(target)
        target.build_configurations.each do |config|
          config.build_settings['ONLY_ACTIVE_ARCH'] = 'YES' # <= this line
        end
      end
    end

API Documentation

AudioChatWidget

The main widget for starting a real-time voice chat session.

Properties

  • apiKey: Ephemeral key used to establish the connection
  • serverToken: Client authentication token
  • model: OpenAI model to use
  • userToken: Optional logged-in user token (for API calls)
  • initialContext: Optional initial conversation context
  • showMuteButton: Whether to show the mute/unmute button
  • onError: Callback invoked when an error occurs
  • horizontalLayout: Whether the widget lays out its controls horizontally
  • getCredentials: Async callback that returns the credentials
  • credentialsUrl: URL to fetch credentials from (either this or getCredentials is required)
  • headers: Optional HTTP headers, used when credentialsUrl is provided
  • body: Optional request body, used when credentialsUrl is provided
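
If you use the credentialsUrl flow instead of getCredentials, a minimal sketch looks like the following; the endpoint URL and body values are placeholders, and the endpoint is assumed to return apiKey, serverToken and model just like _getCredentials above:

    AudioChatWidget(
      credentialsUrl: "https://example.com/zifty/credentials", // placeholder endpoint
      headers: const {'Content-Type': 'application/json'},
      body: const {"session": "voice"}, // placeholder; the exact shape depends on your endpoint
      onError: (e) => debugPrint('Audio chat error: $e'),
    )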

License

This project is licensed under the MIT License - see the LICENSE file for details.

Libraries

zifty