/**
* Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
* SPDX-License-Identifier: Apache-2.0.
*/
#pragma once
/**
 * Contains information about the output location for the compiled model and the
 * target device that the model runs on. TargetDevice and
 * TargetPlatform are mutually exclusive, so you need to choose one
 * of the two to specify your target device or platform. If you cannot find
 * the device you want to use in the TargetDevice list, use
 * TargetPlatform to describe the platform of your edge device, and
 * CompilerOptions if there are specific settings that are required or
 * recommended for a particular TargetPlatform.
 *
 * See Also: AWS API Reference
 */
/**
 * Identifies the S3 bucket where you want Amazon SageMaker to store the model
 * artifacts. For example, s3://bucket-name/key-name-prefix.
 */
/**
 * Identifies the target device or the machine learning instance that you want
 * to run your model on after the compilation has completed. Alternatively, you
 * can specify the OS, architecture, and accelerator using the TargetPlatform
 * fields; TargetDevice can be used instead of TargetPlatform.
 */
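/*
 * Usage sketch for the TargetDevice path (assumptions, not taken from this
 * header: the Set accessors that the AWS SDK for C++ generates for this model
 * and the TargetDevice::jetson_tx2 enum spelling; verify both against the
 * generated headers). It stores the compiled artifacts under an S3 prefix and
 * targets a known device instead of describing a TargetPlatform:
 *
 *   #include <aws/sagemaker/model/OutputConfig.h>
 *   #include <aws/sagemaker/model/TargetDevice.h>
 *
 *   using namespace Aws::SageMaker::Model;
 *
 *   OutputConfig MakeDeviceOutputConfig()
 *   {
 *     OutputConfig config;
 *     // Compiled model artifacts are stored under this S3 prefix.
 *     config.SetS3OutputLocation("s3://bucket-name/key-name-prefix");
 *     // Target a device from the TargetDevice list; TargetDevice and
 *     // TargetPlatform are mutually exclusive, so TargetPlatform stays unset.
 *     config.SetTargetDevice(TargetDevice::jetson_tx2);
 *     return config;
 *   }
 */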
/**
 * Contains information about a target platform that you want your model to run
 * on, such as OS, architecture, and accelerators. It is an alternative to
 * TargetDevice.
 *
 * The following examples show how to configure the TargetPlatform and
 * CompilerOptions JSON strings for popular target platforms:
 *
 * - Raspberry Pi 3 Model B+
 *   "TargetPlatform": {"Os": "LINUX", "Arch": "ARM_EABIHF"},
 *   "CompilerOptions": {'mattr': ['+neon']}
 *
 * - Jetson TX2
 *   "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "NVIDIA"},
 *   "CompilerOptions": {'gpu-code': 'sm_62', 'trt-ver': '6.0.1', 'cuda-ver': '10.0'}
 *
 * - EC2 m5.2xlarge instance OS
 *   "TargetPlatform": {"Os": "LINUX", "Arch": "X86_64", "Accelerator": "NVIDIA"},
 *   "CompilerOptions": {'mcpu': 'skylake-avx512'}
 *
 * - RK3399
 *   "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "MALI"}
 *
 * - ARMv7 phone (CPU)
 *   "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM_EABI"},
 *   "CompilerOptions": {'ANDROID_PLATFORM': 25, 'mattr': ['+neon']}
 *
 * - ARMv8 phone (CPU)
 *   "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM64"},
 *   "CompilerOptions": {'ANDROID_PLATFORM': 29}
 */
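/*
 * Usage sketch for the TargetPlatform path, mirroring the Jetson TX2 example
 * above (assumptions, not taken from this header: the Set accessors and the
 * TargetPlatformOs / TargetPlatformArch / TargetPlatformAccelerator enum
 * spellings generated by the AWS SDK for C++; verify against the generated
 * headers). CompilerOptions is passed as a JSON string:
 *
 *   #include <aws/sagemaker/model/OutputConfig.h>
 *   #include <aws/sagemaker/model/TargetPlatform.h>
 *
 *   using namespace Aws::SageMaker::Model;
 *
 *   OutputConfig MakeJetsonTx2OutputConfig()
 *   {
 *     // Describe the platform: Linux on ARM64 with an NVIDIA accelerator.
 *     TargetPlatform platform;
 *     platform.SetOs(TargetPlatformOs::LINUX);
 *     platform.SetArch(TargetPlatformArch::ARM64);
 *     platform.SetAccelerator(TargetPlatformAccelerator::NVIDIA);
 *
 *     OutputConfig config;
 *     config.SetS3OutputLocation("s3://bucket-name/key-name-prefix");
 *     config.SetTargetPlatform(platform);
 *     // Compiler options recommended for this platform, as a JSON string.
 *     config.SetCompilerOptions(
 *         R"({"gpu-code": "sm_62", "trt-ver": "6.0.1", "cuda-ver": "10.0"})");
 *     return config;
 *   }
 */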
/**
 * Specifies additional parameters for compiler options in JSON format. The
 * compiler options are TargetPlatform specific. It is required for NVIDIA
 * accelerators and highly recommended for CPU compilations. For any other
 * cases, it is optional to specify CompilerOptions.
 *
 * - CPU: Compilation for CPU supports the following compiler options.
 *   - mcpu: CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}
 *   - mattr: CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}
 *
 * - ARM: Details of ARM CPU compilations.
 *   - NEON: NEON is an implementation of the Advanced SIMD extension used in
 *     ARMv7 processors. For example, add {'mattr': ['+neon']} to the compiler
 *     options if compiling for an ARM 32-bit platform with NEON support.
 *
 * - NVIDIA: Compilation for NVIDIA GPU supports the following compiler options.
 *   - gpu-code: Specifies the targeted architecture.
 *   - trt-ver: Specifies the TensorRT version in x.y.z format.
 *   - cuda-ver: Specifies the CUDA version in x.y format.
 *   For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}
 *
 * - ANDROID: Compilation for the Android OS supports the following compiler
 *   options:
 *   - ANDROID_PLATFORM: Specifies the Android API level. Available levels range
 *     from 21 to 29. For example, {'ANDROID_PLATFORM': 28}.
 *   - mattr: Add {'mattr': ['+neon']} to the compiler options if compiling for
 *     an ARM 32-bit platform with NEON support.
 */
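/*
 * Usage sketch for building the CompilerOptions JSON for a Linux / X86_64 CPU
 * target with the mcpu option recommended above (assumptions, not taken from
 * this header: the Set accessors generated for these models and the
 * Aws::Utils::Json::JsonValue helper from the SDK core; verify against your
 * SDK version). Building the string through JsonValue avoids hand-escaping:
 *
 *   #include <aws/core/utils/json/JsonSerializer.h>
 *   #include <aws/sagemaker/model/OutputConfig.h>
 *   #include <aws/sagemaker/model/TargetPlatform.h>
 *
 *   using namespace Aws::SageMaker::Model;
 *
 *   OutputConfig MakeCpuOutputConfig()
 *   {
 *     TargetPlatform platform;
 *     platform.SetOs(TargetPlatformOs::LINUX);
 *     platform.SetArch(TargetPlatformArch::X86_64);
 *
 *     // Serializes to {"mcpu":"skylake-avx512"}.
 *     Aws::Utils::Json::JsonValue options;
 *     options.WithString("mcpu", "skylake-avx512");
 *
 *     OutputConfig config;
 *     config.SetS3OutputLocation("s3://bucket-name/key-name-prefix");
 *     config.SetTargetPlatform(platform);
 *     config.SetCompilerOptions(options.View().WriteCompact());
 *     return config;
 *   }
 */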