
A comprehensive Flutter plugin for Google AI's LiteRT (TensorFlow Lite) with advanced machine learning capabilities for both Android and iOS platforms.

πŸ”₯ TensorFlow Lite Plus #

A comprehensive Flutter plugin for TensorFlow Lite with advanced ML capabilities


Bring the power of AI to your Flutter apps with ease πŸš€


✨ Features #

β€’ πŸ”₯ Image Classification: Classify images using pre-trained or custom models
β€’ 🎯 Object Detection: Detect and locate objects with bounding boxes
β€’ πŸƒ Pose Estimation: Detect human poses and keypoints using PoseNet
β€’ 🎨 Semantic Segmentation: Pixel-level image segmentation
β€’ ⚑ Hardware Acceleration: GPU, NNAPI, Metal, and CoreML delegate support
β€’ πŸ“± Cross-Platform: Works seamlessly on Android and iOS
β€’ πŸ”§ Flexible Input: Support for file paths and binary data
β€’ πŸš€ Asynchronous: Non-blocking inference with async/await

πŸš€ Quick Start (FFI Interpreter API) #

This package now exposes a low-level, FFI-backed Interpreter API. Use the Interpreter class to load models (from assets, files, or buffers), run inference, and manage resources.

import 'dart:typed_data';
import 'package:tflite_plus/tflite_plus.dart';

// 1. Load your model from assets
final interpreter = await Interpreter.fromAsset('assets/models/mobilenet.tflite');

// 2. Prepare your input (must match model input shape and type)
// Example: a Float32 input buffer for a 1x224x224x3 model
final input = Float32List(1 * 224 * 224 * 3);
// Fill `input` with normalized image data...

// 3. Prepare output container (shape depends on model)
final output = List.filled(1 * 1001, 0.0); // adjust to your model's output size

// 4. Run inference
interpreter.run(input, output);

// 5. Use results
print('Top score: ${output[0]}');

// 6. Close when done
interpreter.close();
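Interpreters can also be created from a file on disk or from an in-memory buffer, via the Interpreter.fromFile and Interpreter.fromBuffer constructors listed in the Public API section. A sketch (the file path is hypothetical, and exact constructor signatures may differ between versions):

import 'dart:io';
import 'dart:typed_data';
import 'package:tflite_plus/tflite_plus.dart';

// From a file on disk, e.g. a model downloaded at runtime
final fromFile = Interpreter.fromFile(File('/data/models/mobilenet.tflite'));

// From raw bytes already held in memory
final Uint8List bytes = File('/data/models/mobilenet.tflite').readAsBytesSync();
final fromBuffer = Interpreter.fromBuffer(bytes);

fromFile.close();
fromBuffer.close();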

πŸ“¦ Installation #

1. Add Dependency #

dependencies:
  tflite_plus: ^1.0.3

2. Install #

flutter pub get

3. Import #

import 'package:tflite_plus/tflite_plus.dart';

βš™οΈ Platform Setup #

Android Configuration (Optional) #

If needed, set the minimum SDK in android/app/build.gradle:

android {
    defaultConfig {
        minSdkVersion 21 
    }
}

iOS Configuration #

Add to ios/Runner/Info.plist:

<key>NSCameraUsageDescription</key>
<string>This app needs camera access for ML inference.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>This app needs photo library access for ML inference.</string>

Update ios/Podfile:

platform :ios, '12.0'

πŸ“š Public API (high level) #

This repository now exports a set of low-level, FFI-backed primitives. The most commonly used APIs are:

| Symbol | Description |
| --- | --- |
| Interpreter | Core class to load a TensorFlow Lite model (from asset/file/buffer) and run inference. See Interpreter.fromAsset, Interpreter.fromBuffer, Interpreter.fromFile, run, runForMultipleInputs, invoke, close. |
| InterpreterOptions | Options used when creating an Interpreter (delegates, threads, etc.). |
| Delegate and delegate implementations | Hardware delegates and helpers: GpuDelegate, MetalDelegate, XNNPackDelegate, CoreMLDelegate. |
| Tensor | Accessor for input/output tensor metadata and data helpers. |
| Model | Low-level model helpers (used internally). |

For advanced uses you can also work directly with the exported utilities in src/util/ such as byte conversion helpers.


🎯 Usage Examples (Interpreter) #

Below are three small recipes using the FFI Interpreter API. These are intentionally low-level; for higher-level helpers (pre/post-processing, label mapping), check the example/ folder for complete apps.

1. Simple Image Classification (synchronous run) #

import 'dart:typed_data';
import 'package:tflite_plus/tflite_plus.dart';

final interpreter = await Interpreter.fromAsset('assets/models/mobilenet.tflite');

// Example input for 1x224x224x3 float model
final input = Float32List(1 * 224 * 224 * 3);
// TODO: fill input with normalized image bytes

final output = List.filled(1 * 1001, 0.0);
interpreter.run(input, output);

// Process output (find top results)
// ...

interpreter.close();

2. Object Detection (multiple outputs) #

import 'dart:typed_data';
import 'package:tflite_plus/tflite_plus.dart';

final interpreter = await Interpreter.fromAsset('assets/models/ssd_mobilenet.tflite');

final input = Float32List(1 * 300 * 300 * 3);
// Output map: index -> buffer for each output tensor
final outputs = <int, Object>{
  0: List.filled(1 * 10 * 4, 0.0), // boxes
  1: List.filled(1 * 10, 0.0), // classes
  2: List.filled(1 * 10, 0.0), // scores
};

interpreter.runForMultipleInputs([input], outputs);

// Parse outputs from `outputs`

interpreter.close();

3. Pose Estimation (invoke + tensor helpers) #

import 'dart:typed_data';
import 'package:tflite_plus/tflite_plus.dart';

final interpreter = await Interpreter.fromAsset('assets/models/posenet.tflite');

final input = Float32List(1 * 257 * 257 * 3);
final output = List.filled(1 * 17 * 3, 0.0);

interpreter.run(input, output);

// Output post-processing to get keypoints

interpreter.close();

Notes on parameters #

The new API is lower-level and works directly with typed buffers (Float32List, Uint8List, etc.). Use Tensor helpers and InterpreterOptions to configure delegates and threads. See lib/src/interpreter.dart for the full API and examples in example/ for end-to-end usage.
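As a concrete illustration of that note, buffer sizes can be derived from tensor metadata instead of being hard-coded (a sketch; the shapes in the comments are examples, not guarantees about your model):

import 'dart:typed_data';
import 'package:tflite_plus/tflite_plus.dart';

final interpreter = await Interpreter.fromAsset('assets/models/model.tflite');

// Total element counts from the tensor shapes, e.g. [1, 224, 224, 3] -> 150528
final inputLen = interpreter.getInputTensor(0).shape.reduce((a, b) => a * b);
final outputLen = interpreter.getOutputTensor(0).shape.reduce((a, b) => a * b);

final input = Float32List(inputLen);        // fill with preprocessed data
final output = List.filled(outputLen, 0.0);

interpreter.run(input, output);
interpreter.close();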


πŸ”§ Advanced Configuration #

GPU Acceleration #

import 'dart:io' show Platform;
import 'package:tflite_plus/tflite_plus.dart';

// Create interpreter with GPU delegate
final options = InterpreterOptions();
if (Platform.isAndroid) {
  options.addDelegate(GpuDelegate());
} else if (Platform.isIOS) {
  options.addDelegate(MetalDelegate());
}

final interpreter = await Interpreter.fromAsset(
  'assets/models/model.tflite', 
  options: options,
);

XNNPack/CoreML Acceleration #

import 'dart:io' show Platform;
import 'package:tflite_plus/tflite_plus.dart';

// Enable XNNPack (Android) / CoreML (iOS)
final options = InterpreterOptions();
if (Platform.isAndroid) {
  // XNNPack delegate (optimized CPU kernels on Android)
  options.addDelegate(XNNPackDelegate());
} else if (Platform.isIOS) {
  // CoreML delegate (iOS)
  options.addDelegate(CoreMLDelegate());
}

final interpreter = await Interpreter.fromAsset(
  'assets/models/model.tflite',
  options: options,
);

Thread Configuration #

import 'dart:io' show Platform;
import 'dart:math' as math;
import 'package:tflite_plus/tflite_plus.dart';

// Optimize for different devices
final numCores = Platform.numberOfProcessors;
final options = InterpreterOptions()
  ..threads = math.min(numCores, 4); // Use up to 4 threads

final interpreter = await Interpreter.fromAsset(
  'assets/models/model.tflite',
  options: options,
);

Working with Raw Tensor Data #

// Access input/output tensors directly  
final interpreter = await Interpreter.fromAsset('assets/models/model.tflite');

// Get input tensor info
final inputTensor = interpreter.getInputTensor(0);
print('Input shape: ${inputTensor.shape}');
print('Input type: ${inputTensor.type}');

// Get output tensor info
final outputTensor = interpreter.getOutputTensor(0);
print('Output shape: ${outputTensor.shape}');

interpreter.close();

⚑ Performance Tips #

🎯 Model Optimization #

# Optimize your TensorFlow Lite model
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

πŸ“± Best Practices #

  1. Use Hardware Delegates: GPU/Metal delegates provide 2-4x faster inference
  2. Quantize Models: INT8 quantized models are smaller and faster
  3. Optimize Thread Usage: Use multiple threads but don't exceed CPU cores
  4. Proper Tensor Management: Reuse tensors when possible, call close() when done
  5. Preprocess Efficiently: Resize images to exact model input dimensions
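Practice 4 in code: a minimal lifecycle pattern that guarantees close() runs even when inference throws (a sketch):

import 'dart:typed_data';
import 'package:tflite_plus/tflite_plus.dart';

final interpreter = await Interpreter.fromAsset('assets/models/model.tflite');
try {
  final input = Float32List(1 * 224 * 224 * 3);
  final output = List.filled(1001, 0.0);
  interpreter.run(input, output);
} finally {
  interpreter.close(); // always release native resources
}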

βš™οΈ Performance Benchmarks #

| Device | Model | CPU (ms) | GPU (ms) | Speedup |
| --- | --- | --- | --- | --- |
| Pixel 6 | MobileNet | 45 | 12 | 3.75x |
| iPhone 13 | MobileNet | 38 | 8 | 4.75x |
| Galaxy S21 | EfficientNet | 120 | 28 | 4.28x |

πŸ› οΈ Troubleshooting #

Common Issues & Solutions #

Model Loading Fails

# ❌ Problem: Model not found
# βœ… Solution: declare the model directory in pubspec.yaml
flutter:
  assets:
    - assets/models/

GPU Delegate Issues

// ❌ Problem: GPU acceleration fails
// βœ… Solution: Handle delegate errors gracefully
try {
  final options = InterpreterOptions();
  if (Platform.isAndroid) {
    options.addDelegate(GpuDelegate());
  }
  final interpreter = await Interpreter.fromAsset('model.tflite', options: options);
} catch (e) {
  // Fallback to CPU-only interpreter
  final interpreter = await Interpreter.fromAsset('model.tflite');
}

Memory Issues

// ❌ Problem: Out of memory
// βœ… Solution: Resource management
interpreter.close(); // Always clean up when done

// Process smaller batches
// Use quantized models  
// Reduce input tensor sizes

Inference Too Slow

// ❌ Problem: Slow inference
// βœ… Solution: Optimization strategies
final options = InterpreterOptions()
  ..threads = 4;                    // Use multiple threads
  
if (Platform.isAndroid) {
  options.addDelegate(GpuDelegate()); // Enable GPU
}

final interpreter = await Interpreter.fromAsset(
  'assets/models/model_quantized.tflite', // Use quantized model
  options: options,
);

Error Codes #

| Error | Cause | Solution |
| --- | --- | --- |
| ArgumentError | Invalid model file or corrupt data | Check model file path and integrity |
| StateError | Interpreter not allocated | Call allocateTensors() or ensure the model is loaded |
| RangeError | Invalid tensor index | Check tensor indices with getInputTensors().length |
| Out of memory | Insufficient RAM | Use smaller models or reduce batch size |
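The RangeError row can be guarded against up front (a sketch; the requested index is a hypothetical value):

import 'package:tflite_plus/tflite_plus.dart';

final interpreter = await Interpreter.fromAsset('assets/models/model.tflite');

const requested = 3; // hypothetical tensor index
final count = interpreter.getInputTensors().length;
if (requested < count) {
  final tensor = interpreter.getInputTensor(requested);
  print('shape: ${tensor.shape}');
} else {
  print('only $count input tensor(s); index $requested is out of range');
}
interpreter.close();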

πŸ§ͺ Complete Examples #

Basic Image Classification with File Input #

import 'dart:typed_data';
import 'dart:io';
import 'package:flutter/services.dart';
import 'package:tflite_plus/tflite_plus.dart';
import 'package:image/image.dart' as img;

class ImageClassifier {
  Interpreter? _interpreter;
  List<String>? _labels;

  Future<void> loadModel() async {
    // Load the interpreter
    _interpreter = await Interpreter.fromAsset('assets/models/mobilenet.tflite');
    
    // Load labels
    final labelData = await rootBundle.loadString('assets/models/labels.txt');
    _labels = labelData
        .split('\n')
        .where((line) => line.trim().isNotEmpty) // drop empty trailing lines
        .toList();
  }

  Future<List<Map<String, dynamic>>> classifyImage(String imagePath) async {
    if (_interpreter == null) throw StateError('Model not loaded');

    // Load and preprocess image
    final imageFile = File(imagePath);
    final imageBytes = await imageFile.readAsBytes();
    final image = img.decodeImage(imageBytes)!;
    
    // Resize to model input size (224x224 for MobileNet)
    final resized = img.copyResize(image, width: 224, height: 224);
    
    // Convert to Float32List and normalize to [-1, 1]
    // (pixel accessors below follow the image ^3.x API;
    //  image ^4 uses pixel.r / pixel.g / pixel.b instead)
    final input = Float32List(1 * 224 * 224 * 3);
    var index = 0;
    for (int y = 0; y < 224; y++) {
      for (int x = 0; x < 224; x++) {
        final pixel = resized.getPixel(x, y);
        input[index++] = (img.getRed(pixel) - 127.5) / 127.5;
        input[index++] = (img.getGreen(pixel) - 127.5) / 127.5; 
        input[index++] = (img.getBlue(pixel) - 127.5) / 127.5;
      }
    }
    
    // Run inference
    final output = List.filled(1001, 0.0);
    _interpreter!.run(input, output);
    
    // Convert to results
    final results = <Map<String, dynamic>>[];
    for (int i = 0; i < output.length; i++) {
      results.add({
        'index': i,
        'label': i < _labels!.length ? _labels![i] : 'Unknown',
        'confidence': output[i],
      });
    }
    
    // Sort by confidence and return top 5
    results.sort((a, b) => b['confidence'].compareTo(a['confidence']));
    return results.take(5).toList();
  }

  void dispose() {
    _interpreter?.close();
  }
}

Batch Processing with Progress Tracking #

class BatchImageProcessor {
  static Future<List<Map<String, dynamic>>> processImages(
    List<String> imagePaths,
    {Function(int, int)? onProgress}
  ) async {
    final classifier = ImageClassifier();
    await classifier.loadModel();

    final results = <Map<String, dynamic>>[];
    
    for (int i = 0; i < imagePaths.length; i++) {
      try {
        final predictions = await classifier.classifyImage(imagePaths[i]);
        
        results.add({
          'path': imagePaths[i],
          'predictions': predictions,
          'status': 'success',
        });
        
        onProgress?.call(i + 1, imagePaths.length);
        
      } catch (e) {
        results.add({
          'path': imagePaths[i],
          'error': e.toString(),
          'status': 'error',
        });
      }
    }
    
    classifier.dispose();
    return results;
  }
}
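Calling the batch processor with a progress callback might look like this (the image paths are placeholders):

final results = await BatchImageProcessor.processImages(
  ['/photos/cat.jpg', '/photos/dog.jpg'],
  onProgress: (done, total) => print('processed $done of $total'),
);

for (final r in results) {
  if (r['status'] == 'success') {
    final top = (r['predictions'] as List).first;
    print('${r['path']}: ${top['label']} (${top['confidence']})');
  } else {
    print('${r['path']}: failed (${r['error']})');
  }
}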

🀝 Contributing #

We welcome contributions from the community! πŸŽ‰

Contributors #

Shakil Ahmed

πŸš€ Creator & Maintainer

Want to see your profile here? Contribute to the project!

How to Contribute #

πŸš€ Quick Start

  1. Fork & Clone

    git clone https://github.com/yourusername/tflite_plus.git
    cd tflite_plus
    
  2. Create Branch

    git checkout -b feature/amazing-feature
    
  3. Make Changes

    • Add your awesome code
    • Write tests
    • Update documentation
  4. Test Your Changes

    flutter test
    flutter analyze
    
  5. Submit PR

    git push origin feature/amazing-feature
    

🎯 Contribution Types

| Type | Description | Label |
| --- | --- | --- |
| πŸ› Bug Fix | Fix existing issues | bug |
| ✨ Feature | Add new functionality | enhancement |
| πŸ“š Documentation | Improve docs | documentation |
| 🎨 UI/UX | Design improvements | design |
| ⚑ Performance | Speed optimizations | performance |
| πŸ§ͺ Tests | Add or improve tests | tests |


πŸ† Recognition

Contributors get:

  • 🌟 Profile picture in README
  • πŸŽ–οΈ Contributor badge on GitHub
  • πŸ“’ Mention in release notes
  • 🎁 Special Discord role (coming soon)

πŸ’¬ Support #

Get Help & Connect #


πŸ“ž Support Channels #

| Channel | Purpose | Response Time |
| --- | --- | --- |
| πŸ› GitHub Issues | Bug reports, feature requests | 24-48 hours |
| πŸ’¬ GitHub Discussions | Questions, community help | 1-3 days |
| πŸ“§ Email | Private support, partnerships | 2-5 days |
| 🌐 Website | Documentation, tutorials | Always available |

πŸ†˜ Before Asking for Help #

  1. Check Documentation: Read this README thoroughly
  2. Search Issues: Look for existing solutions
  3. Provide Details: Include code, error messages, device info
  4. Minimal Example: Create a minimal reproducible example

πŸ“„ License #

MIT License

Copyright (c) 2024 CodeBumble

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

πŸŽ‰ Acknowledgments #

Special Thanks To:

  • πŸ€– Google AI Team for TensorFlow Lite
  • 🐦 Flutter Team for the amazing framework
  • 🌟 Open Source Community for continuous support
  • πŸ’» All contributors who make this project better

Made with ❀️ by CodeBumble

If this project helped you, please consider giving it a ⭐ on GitHub!
