onnxruntime_v2 library
Classes
- OrtAllocator
- OrtEnv
- The ONNX Runtime environment; holds global state such as logging configuration.
- OrtFlags
- OrtRunOptions
- OrtSession
- OrtSessionOptions
- OrtStatus
- The status of an ONNX Runtime operation, including any error code and message.
- OrtTensorTypeAndShapeInfo
- OrtThreadingOptions
- Threading options for ONNX Runtime, such as global thread pool configuration.
- OrtValue
- OrtValueMap
- OrtValueSequence
- OrtValueSparseTensor
- OrtValueTensor
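The classes above combine into a typical inference flow: create an environment, configure session options, load a model into a session, and run it with tensor inputs. A minimal sketch of that flow — the constructor and method names used here (`OrtSession.fromFile`, `OrtValueTensor.fromList`, `run`) are assumptions for illustration; consult each class page for the real signatures:

```dart
import 'package:onnxruntime_v2/onnxruntime_v2.dart';

void main() {
  // Assumed API shapes, shown only to illustrate how the classes relate.
  final env = OrtEnv();                // global ONNX Runtime state (logging, etc.)
  final options = OrtSessionOptions(); // per-session configuration
  final session = OrtSession.fromFile(env, 'model.onnx', options);

  // Wrap input data in an OrtValueTensor and run inference.
  final input = OrtValueTensor.fromList([1.0, 2.0, 3.0], shape: [1, 3]);
  final outputs = session.run({'input': input});
  print(outputs);
}
```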
Enums
- CANNFlags
- A flag for OrtProvider.cann. CANN provides acceleration on Huawei Ascend AI processors.
- CoreMLFlags
- A flag for OrtProvider.coreml. Core ML provides acceleration on Apple devices.
- CPUFlags
- A flag for OrtProvider.cpu, the default CPU execution provider.
- CUDAFlags
- A flag for OrtProvider.cuda. CUDA provides GPU acceleration using NVIDIA GPUs.
- DirectMLFlags
- A flag for OrtProvider.directml. DirectML provides GPU acceleration on Windows using DirectX 12. Works with AMD, Intel, and NVIDIA GPUs.
- DNNLFlags
- A flag for OrtProvider.dnnl. DNNL (Deep Neural Network Library) provides optimized CPU operations for Intel processors. Formerly known as MKL-DNN.
- GraphOptimizationLevel
- MIGraphXFlags
- A flag for OrtProvider.migraphx. MIGraphX provides AMD GPU acceleration with graph optimizations.
- NnapiFlags
- A flag for OrtProvider.nnapi. NNAPI provides acceleration on Android devices via the Android Neural Networks API.
- ONNXTensorElementDataType
- ONNXType
- OpenVINOFlags
- A flag for OrtProvider.openvino. OpenVINO provides optimized inference on Intel hardware.
- OrtApiVersion
- The version of the ONNX Runtime API.
- OrtLoggingLevel
- Logging severity levels for ONNX Runtime messages.
- OrtProvider
- The execution providers that ONNX Runtime can use.
- OrtSessionExecutionMode
- Session execution modes: sequential or parallel operator execution.
- OrtSparseFormat
- ROCmFlags
- A flag for OrtProvider.rocm. ROCm provides GPU acceleration for AMD GPUs on Linux.
- TensorRTFlags
- A flag for OrtProvider.tensorrt. TensorRT provides optimized inference on NVIDIA GPUs with additional optimizations.
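Execution providers are selected per session, and the flag enums above tune each provider. A hedged sketch of how that selection might look — the `appendProvider` and `setGraphOptimizationLevel` methods and the `CUDAFlags.defaults` member are hypothetical names for illustration, not confirmed API:

```dart
import 'package:onnxruntime_v2/onnxruntime_v2.dart';

void main() {
  final options = OrtSessionOptions();

  // Hypothetical registration calls: prefer CUDA, fall back to CPU.
  // In ONNX Runtime, providers are tried in the order they are appended.
  options.appendProvider(OrtProvider.cuda, flags: CUDAFlags.defaults);
  options.appendProvider(OrtProvider.cpu);

  // Graph optimizations apply regardless of the chosen provider.
  options.setGraphOptimizationLevel(GraphOptimizationLevel.all);
}
```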
Typedefs
- OrtSessionGraphOptimizationLevel = GraphOptimizationLevel
- An alias for GraphOptimizationLevel, provided for compatibility with the naming used in some examples.