tflite_plus 1.0.3  tflite_plus: ^1.0.3 copied to clipboard
tflite_plus: ^1.0.3 copied to clipboard
A comprehensive Flutter plugin for Google AI's LiteRT (TensorFlow Lite) with advanced machine learning capabilities for both Android and iOS platforms.
π₯ TensorFlow Lite Plus #
A comprehensive Flutter plugin for TensorFlow Lite with advanced ML capabilities
Bring the power of AI to your Flutter apps with ease π
π Table of Contents #
- β¨ Features
- π Quick Start
- π¦ Installation
- βοΈ Platform Setup
- π Public API
- π― Usage Examples
- π§ Advanced Configuration
- β‘ Performance Tips
- π οΈ Troubleshooting
- π§ͺ Complete Examples
- π€ Contributing
- π¬ Support
- π License
β¨ Features #
| π₯ | Image Classification Classify images using pre-trained or custom models | 
| π― | Object Detection Detect and locate objects with bounding boxes | 
| π | Pose Estimation Detect human poses and keypoints using PoseNet | 
| π¨ | Semantic Segmentation Pixel-level image segmentation | 
| β‘ | Hardware Acceleration GPU, NNAPI, Metal, and CoreML delegate support | 
| π± | Cross-Platform Works seamlessly on Android and iOS | 
| π§ | Flexible Input Support for file paths and binary data | 
| π | Asynchronous Non-blocking inference with async/await | 
π Quick Start (FFI Interpreter API) #
This package now exposes a low-level, FFI-backed Interpreter API. Use the Interpreter class to load models (from assets, files, or buffers), run inference and manage resources.
import 'package:tflite_plus/tflite_plus.dart';
// 1. Load your model from assets
final interpreter = await Interpreter.fromAsset('assets/models/mobilenet.tflite');
// 2. Prepare your input (must match model input shape and type)
// Example: a Float32 input buffer for a 1x224x224x3 model
final input = Float32List(1 * 224 * 224 * 3);
// Fill `input` with normalized image data...
// 3. Prepare output container (shape depends on model)
final output = List.filled(1 * 1001, 0.0); // adjust to your model's output size
// 4. Run inference
interpreter.run(input, output);
// 5. Use results
print('Top score: ${output[0]}');
// 6. Close when done
interpreter.close();
π¦ Installation #
1. Add Dependency #
dependencies:
  tflite_plus: ^1.0.3
2. Install #
flutter pub get
3. Import #
import 'package:tflite_plus/tflite_plus.dart';
βοΈ Platform Setup #
Android Configuration (Not Mandatory) #
Add to android/app/build.gradle: (Not Mandatory)
android {
    defaultConfig {
        minSdkVersion 21 
    }
}
iOS Configuration #
Add to ios/Runner/Info.plist:
<key>NSCameraUsageDescription</key>
<string>This app needs camera access for ML inference.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>This app needs photo library access for ML inference.</string>
Update ios/Podfile:
platform :ios, '12.0'
π Public API (high level) #
This repository now exports a set of low-level, FFI-backed primitives. The most commonly used APIs are:
| Symbol | Description | 
|---|---|
| Interpreter | Core class to load a TensorFlow Lite model (from asset/file/buffer) and run inference. See Interpreter.fromAsset,Interpreter.fromBuffer,Interpreter.fromFile,run,runForMultipleInputs,invoke,close. | 
| InterpreterOptions | Options used when creating an Interpreter(delegates, threads, etc.). | 
| Delegateand delegate implementations | Hardware delegates and helpers: GpuDelegate,MetalDelegate,XNNPackDelegate,CoreMLDelegate. | 
| Tensor | Accessor for input/output tensor metadata and data helpers. | 
| Model | Low-level model helpers (used internally). | 
For advanced uses you can also work directly with the exported utilities in src/util/ such as byte conversion helpers.
π― Usage Examples (Interpreter) #
Below are three small recipes using the FFI Interpreter API. These are intentionally low-level β for higher-level helpers (pre/post-processing, label mapping) check the example/ folder for complete apps.
1. Simple Image Classification (synchronous run) #
import 'dart:typed_data';
import 'package:tflite_plus/tflite_plus.dart';
final interpreter = await Interpreter.fromAsset('assets/models/mobilenet.tflite');
// Example input for 1x224x224x3 float model
final input = Float32List(1 * 224 * 224 * 3);
// TODO: fill input with normalized image bytes
final output = List.filled(1 * 1001, 0.0);
interpreter.run(input, output);
// Process output (find top results)
// ...
interpreter.close();
2. Object Detection (multiple outputs) #
import 'dart:typed_data';
import 'package:tflite_plus/tflite_plus.dart';
final interpreter = await Interpreter.fromAsset('assets/models/ssd_mobilenet.tflite');
final input = Float32List(1 * 300 * 300 * 3);
// Output map: index -> buffer for each output tensor
final outputs = <int, Object>{
  0: List.filled(1 * 10 * 4, 0.0), // boxes
  1: List.filled(1 * 10, 0.0), // classes
  2: List.filled(1 * 10, 0.0), // scores
};
interpreter.runForMultipleInputs([input], outputs);
// Parse outputs from `outputs`
interpreter.close();
3. Pose Estimation (invoke + tensor helpers) #
import 'dart:typed_data';
import 'package:tflite_plus/tflite_plus.dart';
final interpreter = await Interpreter.fromAsset('assets/models/posenet.tflite');
final input = Float32List(1 * 257 * 257 * 3);
final output = List.filled(1 * 17 * 3, 0.0);
interpreter.run(input, output);
// Output post-processing to get keypoints
interpreter.close();
Notes on parameters #
The new API is lower-level and works directly with typed buffers (Float32List, Uint8List, etc.). Use Tensor helpers and InterpreterOptions to configure delegates and threads. See lib/src/interpreter.dart for the full API and examples in example/ for end-to-end usage.
π§ Advanced Configuration #
GPU Acceleration #
import 'dart:io' show Platform;
import 'package:tflite_plus/tflite_plus.dart';
// Create interpreter with GPU delegate
final options = InterpreterOptions();
if (Platform.isAndroid) {
  options.addDelegate(GpuDelegate());
} else if (Platform.isIOS) {
  options.addDelegate(MetalDelegate());
}
final interpreter = await Interpreter.fromAsset(
  'assets/models/model.tflite', 
  options: options,
);
NNAPI/CoreML Acceleration #
// Enable NNAPI (Android) / CoreML (iOS)
final options = InterpreterOptions();
if (Platform.isAndroid) {
  // NNAPI delegate (Android)
  options.addDelegate(XNNPackDelegate());
} else if (Platform.isIOS) {
  // CoreML delegate (iOS) 
  options.addDelegate(CoreMLDelegate());
}
final interpreter = await Interpreter.fromAsset(
  'assets/models/model.tflite',
  options: options,
);
Thread Configuration #
import 'dart:io' show Platform;
import 'dart:math' as math;
// Optimize for different devices
final numCores = Platform.numberOfProcessors;
final options = InterpreterOptions()
  ..threads = math.min(numCores, 4); // Use up to 4 threads
final interpreter = await Interpreter.fromAsset(
  'assets/models/model.tflite',
  options: options,
);
Working with Raw Tensor Data #
// Access input/output tensors directly  
final interpreter = await Interpreter.fromAsset('assets/models/model.tflite');
// Get input tensor info
final inputTensor = interpreter.getInputTensor(0);
print('Input shape: ${inputTensor.shape}');
print('Input type: ${inputTensor.type}');
// Get output tensor info
final outputTensor = interpreter.getOutputTensor(0);
print('Output shape: ${outputTensor.shape}');
interpreter.close();
β‘ Performance Tips #
π― Model Optimization #
# Optimize your TensorFlow Lite model
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model('model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
π± Best Practices #
- Use Hardware Delegates: GPU/Metal delegates provide 2-4x faster inference
- Quantize Models: INT8 quantized models are smaller and faster
- Optimize Thread Usage: Use multiple threads but don't exceed CPU cores
- Proper Tensor Management: Reuse tensors when possible, call close()when done
- Preprocess Efficiently: Resize images to exact model input dimensions
βοΈ Performance Benchmarks #
| Device | Model | CPU (ms) | GPU (ms) | Speedup | 
|---|---|---|---|---|
| Pixel 6 | MobileNet | 45 | 12 | 3.75x | 
| iPhone 13 | MobileNet | 38 | 8 | 4.75x | 
| Galaxy S21 | EfficientNet | 120 | 28 | 4.28x | 
π οΈ Troubleshooting #
Common Issues & Solutions #
Model Loading Fails
// β Problem: Model not found
// β
 Solution: Check assets configuration
flutter:
  assets:
    - assets/models/
GPU Delegate Issues
// β Problem: GPU acceleration fails
// β
 Solution: Handle delegate errors gracefully
try {
  final options = InterpreterOptions();
  if (Platform.isAndroid) {
    options.addDelegate(GpuDelegate());
  }
  final interpreter = await Interpreter.fromAsset('model.tflite', options: options);
} catch (e) {
  // Fallback to CPU-only interpreter
  final interpreter = await Interpreter.fromAsset('model.tflite');
}
Memory Issues
// β Problem: Out of memory
// β
 Solution: Resource management
interpreter.close(); // Always clean up when done
// Process smaller batches
// Use quantized models  
// Reduce input tensor sizes
Inference Too Slow
// β Problem: Slow inference
// β
 Solution: Optimization strategies
final options = InterpreterOptions()
  ..threads = 4;                    // Use multiple threads
  
if (Platform.isAndroid) {
  options.addDelegate(GpuDelegate()); // Enable GPU
}
final interpreter = await Interpreter.fromAsset(
  'assets/models/model_quantized.tflite', // Use quantized model
  options: options,
);
Error Codes #
| Error | Cause | Solution | 
|---|---|---|
| ArgumentError | Invalid model file or corrupt data | Check model file path and integrity | 
| StateError | Interpreter not allocated | Call allocateTensors()or ensure model is loaded | 
| RangeError | Invalid tensor index | Check tensor indices with getInputTensors().length | 
| Out of memory | Insufficient RAM | Use smaller models/reduce batch size | 
π§ͺ Complete Examples #
Basic Image Classification with File Input #
import 'dart:typed_data';
import 'dart:io';
import 'package:flutter/services.dart';
import 'package:tflite_plus/tflite_plus.dart';
import 'package:image/image.dart' as img;
class ImageClassifier {
  Interpreter? _interpreter;
  List<String>? _labels;
  Future<void> loadModel() async {
    // Load the interpreter
    _interpreter = await Interpreter.fromAsset('assets/models/mobilenet.tflite');
    
    // Load labels
    final labelData = await rootBundle.loadString('assets/models/labels.txt');
    _labels = labelData.split('\n');
  }
  Future<List<Map<String, dynamic>>> classifyImage(String imagePath) async {
    if (_interpreter == null) throw StateError('Model not loaded');
    // Load and preprocess image
    final imageFile = File(imagePath);
    final imageBytes = await imageFile.readAsBytes();
    final image = img.decodeImage(imageBytes)!;
    
    // Resize to model input size (224x224 for MobileNet)
    final resized = img.copyResize(image, width: 224, height: 224);
    
    // Convert to Float32List and normalize
    final input = Float32List(1 * 224 * 224 * 3);
    var index = 0;
    for (int y = 0; y < 224; y++) {
      for (int x = 0; x < 224; x++) {
        final pixel = resized.getPixel(x, y);
        input[index++] = (img.getRed(pixel) - 127.5) / 127.5;
        input[index++] = (img.getGreen(pixel) - 127.5) / 127.5; 
        input[index++] = (img.getBlue(pixel) - 127.5) / 127.5;
      }
    }
    
    // Run inference
    final output = List.filled(1001, 0.0);
    _interpreter!.run(input, output);
    
    // Convert to results
    final results = <Map<String, dynamic>>[];
    for (int i = 0; i < output.length; i++) {
      results.add({
        'index': i,
        'label': i < _labels!.length ? _labels![i] : 'Unknown',
        'confidence': output[i],
      });
    }
    
    // Sort by confidence and return top 5
    results.sort((a, b) => b['confidence'].compareTo(a['confidence']));
    return results.take(5).toList();
  }
  void dispose() {
    _interpreter?.close();
  }
}
Batch Processing with Progress Tracking #
class BatchImageProcessor {
  static Future<List<Map<String, dynamic>>> processImages(
    List<String> imagePaths,
    {Function(int, int)? onProgress}
  ) async {
    final classifier = ImageClassifier();
    await classifier.loadModel();
    final results = <Map<String, dynamic>>[];
    
    for (int i = 0; i < imagePaths.length; i++) {
      try {
        final predictions = await classifier.classifyImage(imagePaths[i]);
        
        results.add({
          'path': imagePaths[i],
          'predictions': predictions,
          'status': 'success',
        });
        
        onProgress?.call(i + 1, imagePaths.length);
        
      } catch (e) {
        results.add({
          'path': imagePaths[i],
          'error': e.toString(),
          'status': 'error',
        });
      }
    }
    
    classifier.dispose();
    return results;
  }
}
π€ Contributing #
We welcome contributions from the community! π
Contributors #
|   Shakil Ahmed π Creator & Maintainer | 
Want to see your profile here? Contribute to the project!
How to Contribute #
π Quick Start
- 
Fork & Clone git clone https://github.com/yourusername/tflite_plus.git cd tflite_plus
- 
Create Branch git checkout -b feature/amazing-feature
- 
Make Changes - Add your awesome code
- Write tests
- Update documentation
 
- 
Test Your Changes flutter test flutter analyze
- 
Submit PR git push origin feature/amazing-feature
π― Contribution Types
| Type | Description | Label | 
|---|---|---|
| π Bug Fix | Fix existing issues | bug | 
| β¨ Feature | Add new functionality | enhancement | 
| π Documentation | Improve docs | documentation | 
| π¨ UI/UX | Design improvements | design | 
| β‘ Performance | Speed optimizations | performance | 
| π§ͺ Tests | Add or improve tests | tests | 
π Contribution Guidelines
- Code Style: Follow Dart Style Guide
- Testing: Add tests for new features
- Documentation: Update README and code comments
- Commits: Use Conventional Commits
π Recognition
Contributors get:
- π Profile picture in README
- ποΈ Contributor badge on GitHub
- π’ Mention in release notes
- π Special Discord role (coming soon)
π¬ Support #
Get Help & Connect #
π Support Channels #
| Channel | Purpose | Response Time | 
|---|---|---|
| π GitHub Issues | Bug reports, feature requests | 24-48 hours | 
| π¬ GitHub Discussions | Questions, community help | 1-3 days | 
| π§ Email | Private support, partnerships | 2-5 days | 
| π Website | Documentation, tutorials | Always available | 
π Before Asking for Help #
- Check Documentation: Read this README thoroughly
- Search Issues: Look for existing solutions
- Provide Details: Include code, error messages, device info
- Minimal Example: Create a minimal reproducible example
π License #
MIT License
Copyright (c) 2024 CodeBumble
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
π Acknowledgments #
Special Thanks To:
- π€ Google AI Team for TensorFlow Lite
- π¦ Flutter Team for the amazing framework
- π Open Source Community for continuous support
- π» All contributors who make this project better
Made with β€οΈ by CodeBumble
If this project helped you, please consider giving it a β on GitHub!