executorch_flutter 0.0.2
ExecuTorch Flutter #
A Flutter plugin package using ExecuTorch to allow model inference on Android, iOS, and macOS platforms.
Overview #
ExecuTorch Flutter provides a simple Dart API for loading and running ExecuTorch models (`.pte` files) in your Flutter applications. The package handles all native platform integration, providing a straightforward interface for on-device machine learning inference.
Features #
- ✅ Cross-Platform Support: Android (API 23+), iOS (17.0+), and macOS (12.0+ Apple Silicon)
- ✅ Type-Safe API: Generated with Pigeon for reliable cross-platform communication
- ✅ Async Operations: Non-blocking model loading and inference execution
- ✅ Multiple Models: Support for concurrent model instances
- ✅ Error Handling: Structured exception handling with clear error messages
- ✅ Backend Support: XNNPACK, CoreML, MPS backends
- ✅ Live Camera: Real-time inference with camera stream support
Installation #
Add to your `pubspec.yaml`:

```yaml
dependencies:
  executorch_flutter: ^0.0.2
```
Basic Usage #
The package provides a simple, intuitive API that matches native ExecuTorch patterns:
1. Load a Model #
```dart
import 'package:executorch_flutter/executorch_flutter.dart';

// Load a model from a file path
final model = await ExecuTorchModel.load('/path/to/model.pte');
```
2. Run Inference #
```dart
// Prepare the input tensor
final inputTensor = TensorData(
  shape: [1, 3, 224, 224],
  dataType: TensorType.float32,
  data: yourImageBytes,
);

// Run inference
final outputs = await model.forward([inputTensor]);

// Process the outputs
for (var output in outputs) {
  print('Output shape: ${output.shape}');
  print('Output type: ${output.dataType}');
}

// Clean up when done
await model.dispose();
```
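The `data` field carries raw bytes, so for a float32 tensor you typically build a `Float32List` and view its underlying buffer as bytes. A minimal sketch (the `packFloat32` helper is hypothetical, not part of the package API):

```dart
import 'dart:typed_data';

// Hypothetical helper: pack float pixel values into the raw
// byte buffer that a float32 TensorData expects.
Uint8List packFloat32(List<double> values) {
  final floats = Float32List.fromList(values);
  // View the same underlying buffer as bytes (no copy).
  return floats.buffer.asUint8List();
}
```

For a `[1, 3, 224, 224]` float32 input, `values` should contain 1 × 3 × 224 × 224 elements; the resulting byte buffer is four times that length.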
3. Loading Models from Assets #
```dart
import 'dart:io';

import 'package:flutter/services.dart' show rootBundle;
import 'package:path_provider/path_provider.dart';

// Copy the model out of the asset bundle to a real file path
final byteData = await rootBundle.load('assets/models/model.pte');
final tempDir = await getTemporaryDirectory();
final file = File('${tempDir.path}/model.pte');
await file.writeAsBytes(byteData.buffer.asUint8List());

// Load and run inference
final model = await ExecuTorchModel.load(file.path);
final outputs = await model.forward([inputTensor]);

// Dispose when done
await model.dispose();
```
Complete Examples #
See the `example/` directory for a full working application:
- Unified Model Playground - Complete app with MobileNet classification and YOLO object detection, supporting both static images and live camera
Supported Model Formats #
- ExecuTorch (.pte): Optimized PyTorch models converted to ExecuTorch format
- Input Types: float32, int8, int32, uint8 tensors
- Model Size: Tested with models up to 500MB
📖 Need to export your PyTorch models? See the Official ExecuTorch Export Guide for converting PyTorch models to ExecuTorch format with platform-specific optimizations.
Platform Requirements #
Android #
- Minimum SDK: API 23 (Android 6.0)
- Architecture: arm64-v8a
- Supported Backends: XNNPACK
iOS #
- Minimum Version: iOS 17.0+
- Architecture: arm64 (device only)
- ⚠️ iOS Simulator (x86_64) is NOT supported
- Supported Backends: XNNPACK, CoreML, MPS
macOS #
- Minimum Version: macOS 12.0+ (Monterey)
- Architecture: arm64 only (Apple Silicon)
- ⚠️ Intel Macs (x86_64) are NOT supported
- Supported Backends: XNNPACK, CoreML, MPS
macOS Build Limitations

- Debug builds: ✅ work by default on Apple Silicon Macs
- Release builds: ⚠️ currently NOT working

Flutter's build system forces universal binaries (arm64 + x86_64) for macOS release builds, but ExecuTorch ships only arm64 libraries, so release builds fail.

🔗 Tracking: Flutter Issue #176605
Platform Configuration #
When adding `executorch_flutter` to an existing Flutter project, you may need to update the minimum deployment targets. If you see build errors mentioning platform versions, follow these steps:
iOS Deployment Target (iOS 17.0+) #
If you get an error like: The package product 'executorch-flutter' requires minimum platform version 17.0 for the iOS platform
Update using Xcode (Recommended):
- Open your Flutter project in Xcode:
  - Navigate to your project folder
  - Open `ios/Runner.xcworkspace` (NOT the `.xcodeproj` file)
- In Xcode's left sidebar, click on Runner (the blue project icon at the top)
- Make sure Runner is selected under "TARGETS" (not under "PROJECT")
- Click the Build Settings tab at the top
- In the search bar, type `iOS Deployment Target`
- You'll see "iOS Deployment Target" with a version number (like 13.0)
- Click on the version number and change it to 17.0
- Close Xcode
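If you prefer editing files directly, the same setting can usually be applied at the top of `ios/Podfile` (a standard CocoaPods directive, not specific to this package; uncomment the line if it is already present but commented out):

```ruby
# ios/Podfile — minimum deployment target for the app and all pods
platform :ios, '17.0'
```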
macOS Deployment Target (macOS 12.0+) #
If you get an error like: The package product 'executorch-flutter' requires minimum platform version 12.0 for the macOS platform
Update using Xcode (Recommended):
- Open your Flutter project in Xcode:
  - Navigate to your project folder
  - Open `macos/Runner.xcworkspace` (NOT the `.xcodeproj` file)
- In Xcode's left sidebar, click on Runner (the blue project icon at the top)
- Make sure Runner is selected under "TARGETS" (not under "PROJECT")
- Click the Build Settings tab at the top
- In the search bar, type `macOS Deployment Target`
- You'll see "macOS Deployment Target" with a version number (like 10.15)
- Click on the version number and change it to 12.0
- Close Xcode
Verification #
After updating deployment targets, clean and rebuild:
```sh
# Clean build artifacts
flutter clean

# Get dependencies
flutter pub get

# Build for your target platform
flutter build ios --debug --no-codesign   # For iOS
flutter build macos --debug               # For macOS
flutter build apk --debug                 # For Android
```
Advanced Usage #
Processor Interfaces #
The example app demonstrates processor strategies for common model types:
- Image Classification - ImageNet preprocessing and postprocessing for MobileNet
- Object Detection - YOLO preprocessing, NMS, and bounding box extraction
- OpenCV Processors - High-performance OpenCV-based preprocessing
See the example app for complete processor implementations using the strategy pattern.
Example Application #
The `example/` directory contains a comprehensive demo app showcasing:
- Unified Model Playground - Main playground supporting multiple model types
- MobileNet V3 image classification
- YOLO object detection (v5, v8, v11)
- Static image and live camera modes
- Reactive settings (thresholds, top-K, preprocessing providers)
- Performance monitoring and metrics
Converting PyTorch Models to ExecuTorch #
To use your PyTorch models with this package, convert them to the ExecuTorch format (`.pte` files).
📖 Official ExecuTorch Export Guide: PyTorch ExecuTorch Documentation
Example App Models:
The example app includes scripts for exporting reference models (MobileNet, YOLO):
```sh
# One-command setup: installs dependencies and exports all models
cd python
python3 setup_models.py
```
This will:
- ✅ Install all required dependencies (torch, ultralytics, executorch)
- ✅ Export MobileNet V3 for image classification
- ✅ Export YOLO11n for object detection
- ✅ Generate COCO labels file
- ✅ Verify all models are ready
Development Status #
This project is actively developed following these principles:
- Test-First Development: Comprehensive testing before implementation
- Platform Parity: Consistent behavior across Android, iOS, and macOS
- Performance-First: Optimized for mobile device constraints
- Documentation-Driven: Clear examples and API documentation
API Reference #
Core Classes #
ExecuTorchModel
The primary class for model management and inference.
```dart
// Static factory method to load a model
static Future<ExecuTorchModel> load(String filePath)

// Execute inference (matches native module.forward())
Future<List<TensorData>> forward(List<TensorData> inputs)

// Release model resources
Future<void> dispose()

// Check whether the model has been disposed
bool get isDisposed
```
Native API Mapping:
- Android (Kotlin): `Module.load()` → `module.forward()`
- iOS/macOS (Swift): `Module()` + `load("forward")` → `module.forward()`
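Putting the `ExecuTorchModel` members together, a typical lifecycle looks like this sketch (`inputs` is assumed to be an already-prepared `List<TensorData>`):

```dart
final model = await ExecuTorchModel.load('/path/to/model.pte');
try {
  final outputs = await model.forward(inputs);
  // ... consume outputs ...
} finally {
  // isDisposed guards against double-dispose when ownership is shared.
  if (!model.isDisposed) {
    await model.dispose();
  }
}
```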
TensorData
Input/output tensor representation:
```dart
final tensor = TensorData(
  shape: [1, 3, 224, 224],      // Tensor dimensions
  dataType: TensorType.float32, // Data type (float32, int32, int8, uint8)
  data: Uint8List(...),         // Raw bytes
  name: 'input_0',              // Optional tensor name
);
```
Exception Hierarchy #
```
ExecuTorchException                  // Base exception
├── ExecuTorchModelException         // Model loading/lifecycle errors
├── ExecuTorchInferenceException     // Inference execution errors
├── ExecuTorchValidationException    // Tensor validation errors
├── ExecuTorchMemoryException        // Memory/resource errors
├── ExecuTorchIOException            // File I/O errors
└── ExecuTorchPlatformException      // Platform communication errors
```
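One way to consume this hierarchy is to catch the specific subtypes you can act on and fall back to the base type. A sketch, reusing the `model` and `inputTensor` from the usage examples above:

```dart
try {
  final outputs = await model.forward([inputTensor]);
} on ExecuTorchValidationException catch (e) {
  // Bad tensor shape or dtype — fix the input rather than retrying.
  print('Invalid input tensor: $e');
} on ExecuTorchInferenceException catch (e) {
  print('Inference failed: $e');
} on ExecuTorchException catch (e) {
  // Any other package-level error.
  print('ExecuTorch error: $e');
}
```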
Contributing #
Contributions are welcome! Please see our Contributing Guide for:
- Development setup and prerequisites
- Automated Pigeon code generation script
- Integration testing workflow
- Code standards and PR process
- Platform-specific guidelines
License #
MIT License - see LICENSE file for details.
Support #
For issues and questions:
- 📖 Check the Official ExecuTorch Documentation
- 🐛 Report issues on GitHub
- 💬 Discussions for questions and feature requests
Roadmap #
See our Roadmap for planned features and improvements, including:
- Additional model type examples (segmentation, pose estimation)
- Windows and Linux platform support
- Performance optimizations and more
Built with ❤️ for the Flutter and PyTorch communities.