Flutter PyTorch Lite #
PyTorch Lite plugin for Flutter.
End-to-end workflow from Training to Deployment for iOS and Android mobile devices.
PyTorch Mobile #
There is a growing need to execute ML models on edge devices to reduce latency, preserve privacy, and enable new interactive use cases.
The PyTorch Mobile runtime beta release allows you to seamlessly go from training a model to deploying it, while staying entirely within the PyTorch ecosystem. It provides an end-to-end workflow that simplifies the research to production environment for mobile devices. In addition, it paves the way for privacy-preserving features via federated learning techniques.
PyTorch Mobile is currently in beta and is already in wide-scale production use. It will become available as a stable release once the APIs are locked down.
Requirements #
Usage instructions #
Install #
In the dependencies section of the `pubspec.yaml` file, add `flutter_pytorch_lite` (adjust the version according to the latest release):

```yaml
dependencies:
  flutter_pytorch_lite: ^0.0.1+1
```
or

```yaml
dependencies:
  flutter_pytorch_lite:
    git:
      url: https://github.com/winfordguo/flutter_pytorch_lite.git
```
Import #
```dart
import 'package:flutter_pytorch_lite/flutter_pytorch_lite.dart';
```
Loading the model #
- From path

  ```dart
  await FlutterPytorchLite.load('/path/to/your_model.ptl');
  ```
- From asset

  Place `your_model.ptl` in the `assets` directory. Make sure to include assets in `pubspec.yaml`.

  ```dart
  final filePath = '${Directory.systemTemp.path}/your_model.ptl';
  File(filePath).writeAsBytesSync(await _getBuffer('assets/your_model.ptl'));
  await FlutterPytorchLite.load(filePath);

  /// Get byte buffer
  static Future<Uint8List> _getBuffer(String assetFileName) async {
    ByteData rawAssetFile = await rootBundle.load(assetFileName);
    final rawBytes = rawAssetFile.buffer.asUint8List();
    return rawBytes;
  }
  ```
Refer to the documentation for info on creating an interpreter from a buffer or a file.
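For convenience, the asset-copy-and-load steps above can be wrapped in a single helper. A minimal sketch, assuming the model is bundled under `assets/`; the helper name `loadModelFromAsset` is illustrative and not part of the plugin API:

```dart
import 'dart:io';
import 'dart:typed_data';

import 'package:flutter/services.dart' show rootBundle;
import 'package:flutter_pytorch_lite/flutter_pytorch_lite.dart';

/// Hypothetical helper: copies a bundled model asset to a temporary file
/// and loads it with FlutterPytorchLite.
Future<void> loadModelFromAsset(String assetFileName) async {
  final ByteData rawAssetFile = await rootBundle.load(assetFileName);
  final Uint8List rawBytes = rawAssetFile.buffer.asUint8List();

  // Write the bytes to a temp file, since load() expects a file path.
  final filePath =
      '${Directory.systemTemp.path}/${assetFileName.split('/').last}';
  await File(filePath).writeAsBytes(rawBytes);

  await FlutterPytorchLite.load(filePath);
}
```

Loading then becomes a single call, e.g. `await loadModelFromAsset('assets/your_model.ptl');`.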
Forwarding #
- For single input and output

  Use `static Tensor forward(Tensor input)`. (A sketch of interpreting the output follows the example below.)

  ```dart
  // For example: if the input tensor shape is [1, 5] and its type is float32
  final inputShape = Int64List.fromList([1, 5]);
  var input = [1.23, 6.54, 7.81, 3.21, 2.22];
  Tensor inputTensor = Tensor.fromBlobFloat32(input, inputShape);

  // Forward
  Tensor outputTensor = await FlutterPytorchLite.forward(inputTensor);

  // Get the output: if the output tensor type is float32
  final outputShape = outputTensor.shape;
  var output = outputTensor.dataAsFloat32List;

  // Print the output
  print(output);
  ```
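If the model outputs a vector of class scores, a common next step is picking the index with the highest score. A minimal sketch; the `argmax` helper and any label mapping are illustrative and not part of the plugin:

```dart
/// Illustrative helper: returns the index of the largest value in [scores].
int argmax(List<double> scores) {
  var best = 0;
  for (var i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) {
      best = i;
    }
  }
  return best;
}

// e.g. after forwarding:
// final predictedIndex = argmax(outputTensor.dataAsFloat32List);
```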
Destroying the model #
```dart
await FlutterPytorchLite.destroy();
```
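In a typical app, load and destroy are paired with a widget's lifecycle: load once when the screen is created and release the native resources when it is disposed. A minimal sketch, assuming a hypothetical `ModelPage` widget and the file path from the loading section:

```dart
import 'package:flutter/widgets.dart';
import 'package:flutter_pytorch_lite/flutter_pytorch_lite.dart';

/// Hypothetical page that owns the model for its lifetime.
class ModelPage extends StatefulWidget {
  const ModelPage({super.key});

  @override
  State<ModelPage> createState() => _ModelPageState();
}

class _ModelPageState extends State<ModelPage> {
  @override
  void initState() {
    super.initState();
    // Load once when the page is created
    // (not awaited here; a real app would track readiness).
    FlutterPytorchLite.load('/path/to/your_model.ptl');
  }

  @override
  void dispose() {
    // Release the native model when the page goes away.
    FlutterPytorchLite.destroy();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) => const SizedBox.shrink();
}
```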
Q&A #
Android #
- Q: Execution failed for task ':app:mergeDebugNativeLibs'

  ```
  * What went wrong:
  Execution failed for task ':app:mergeDebugNativeLibs'.
  > A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade
     > More than one file was found with OS independent path 'lib/x86/libc++_shared.so'
  ```

  A: Add this to your `app/build.gradle`:

  ```groovy
  android {
      // your existing code
      packagingOptions {
          pickFirst '**/libc++_shared.so'
      }
  }
  ```
- Q: What is the version of PyTorch Lite?

  A: `org.pytorch:pytorch_android_lite:1.10.0` and `org.pytorch:pytorch_android_torchvision_lite:1.10.0`
iOS #
- Q: What is the version of PyTorch Lite?

  A: `'LibTorch-Lite', '~> 1.10.0'`