Simd Library Documentation.

Quantized addition framework

A framework to accelerate quantized addition in the Synet Framework. More...

Functions

SIMD_API void * SimdSynetQuantizedAddInit (const size_t *aShape, size_t aCount, SimdTensorDataType aType, const float *aScale, int32_t aZero, const size_t *bShape, size_t bCount, SimdTensorDataType bType, const float *bScale, int32_t bZero, SimdConvolutionActivationType actType, const float *actParams, SimdTensorDataType dstType, const float *dstScale, int32_t dstZero)
 Initializes quantized addition algorithm. More...
 
SIMD_API void SimdSynetQuantizedAddForward (void *context, const uint8_t *a, const uint8_t *b, uint8_t *dst)
 Performs forward propagation of quantized addition algorithm. More...
 

Detailed Description

A framework to accelerate quantized addition in the Synet Framework.

Function Documentation

◆ SimdSynetQuantizedAddInit()

void * SimdSynetQuantizedAddInit ( const size_t *  aShape,
size_t  aCount,
SimdTensorDataType  aType,
const float *  aScale,
int32_t  aZero,
const size_t *  bShape,
size_t  bCount,
SimdTensorDataType  bType,
const float *  bScale,
int32_t  bZero,
SimdConvolutionActivationType  actType,
const float *  actParams,
SimdTensorDataType  dstType,
const float *  dstScale,
int32_t  dstZero 
)

Initializes quantized addition algorithm.

Parameters
[in] aShape - a pointer to the shape of input A tensor.
[in] aCount - a number of dimensions of input A tensor.
[in] aType - a type of input A tensor. Can be FP32 or UINT8.
[in] aScale - a quantization scale parameter of A tensor.
[in] aZero - a quantization zero parameter of A tensor.
[in] bShape - a pointer to the shape of input B tensor.
[in] bCount - a number of dimensions of input B tensor.
[in] bType - a type of input B tensor. Can be FP32 or UINT8.
[in] bScale - a quantization scale parameter of B tensor.
[in] bZero - a quantization zero parameter of B tensor.
[in] actType - an activation function type (if an activation is merged into the quantized addition).
[in] actParams - a pointer to activation function parameters. Can be NULL.
[in] dstType - a type of the output tensor. Can be FP32 or UINT8.
[in] dstScale - an output quantization scale.
[in] dstZero - an output quantization zero.
Returns
a pointer to the quantized addition context. On error it returns NULL. The context must be released with the SimdRelease function. This pointer is used by the function SimdSynetQuantizedAddForward.

◆ SimdSynetQuantizedAddForward()

void SimdSynetQuantizedAddForward ( void *  context,
const uint8_t *  a,
const uint8_t *  b,
uint8_t *  dst 
)

Performs forward propagation of quantized addition algorithm.

Parameters
[in] context - a pointer to the quantized addition context. It must be created by the function SimdSynetQuantizedAddInit and released by the function SimdRelease.
[in] a - a pointer to input A tensor.
[in] b - a pointer to input B tensor.
[out] dst - a pointer to the output tensor.
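A typical call sequence can be sketched as below, under the assumption that both inputs and the output are UINT8 tensors of the same shape; the shape, zero points, and scales are hypothetical, and the enum values used (SimdTensorData8u, SimdConvolutionActivationIdentity) are assumed from the Simd Library public API:

```c
#include <stdint.h>
#include <stdlib.h>
#include "Simd/SimdLib.h" /* Simd Library public C API header */

int main()
{
    /* Hypothetical 4D shape and quantization parameters. */
    size_t shape[4] = { 1, 64, 32, 32 };
    size_t size = 1 * 64 * 32 * 32;
    float aScale = 0.5f, bScale = 0.25f, dstScale = 0.5f;

    /* Initialize the context: both inputs and the output are UINT8,
       with no merged activation (identity). */
    void* context = SimdSynetQuantizedAddInit(
        shape, 4, SimdTensorData8u, &aScale, 128,
        shape, 4, SimdTensorData8u, &bScale, 100,
        SimdConvolutionActivationIdentity, NULL,
        SimdTensorData8u, &dstScale, 128);
    if (context == NULL)
        return 1;

    uint8_t* a = (uint8_t*)malloc(size);
    uint8_t* b = (uint8_t*)malloc(size);
    uint8_t* dst = (uint8_t*)malloc(size);
    /* ... fill a and b with quantized input data ... */

    SimdSynetQuantizedAddForward(context, a, b, dst);

    SimdRelease(context); /* the context must be released by the caller */
    free(a);
    free(b);
    free(dst);
    return 0;
}
```

Note that the context created by SimdSynetQuantizedAddInit can be reused across many SimdSynetQuantizedAddForward calls on tensors of the same shape and quantization parameters.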