Simd Library Documentation.

Quantized addition framework

A framework to accelerate quantized addition in Synet Framework. More...

Functions

SIMD_API void * SimdSynetQuantizedAddInit (const size_t *aShape, size_t aCount, SimdTensorDataType aType, int32_t aBias, const float *aNorm, const size_t *bShape, size_t bCount, SimdTensorDataType bType, int32_t bBias, const float *bNorm, SimdConvolutionActivationType actType, const float *actParams, SimdTensorDataType dstType, const float *dstNorm, int32_t dstZero)
 Initializes the quantized addition algorithm. More...
 
SIMD_API void SimdSynetQuantizedAddForward (void *context, const uint8_t *a, const uint8_t *b, uint8_t *dst)
 Performs forward propagation of the quantized addition algorithm. More...
 

Detailed Description

A framework to accelerate quantized addition in Synet Framework.

Function Documentation

◆ SimdSynetQuantizedAddInit()

void * SimdSynetQuantizedAddInit ( const size_t *  aShape,
size_t  aCount,
SimdTensorDataType  aType,
int32_t  aBias,
const float *  aNorm,
const size_t *  bShape,
size_t  bCount,
SimdTensorDataType  bType,
int32_t  bBias,
const float *  bNorm,
SimdConvolutionActivationType  actType,
const float *  actParams,
SimdTensorDataType  dstType,
const float *  dstNorm,
int32_t  dstZero 
)

Initializes the quantized addition algorithm.

Parameters
[in] aShape - a pointer to the shape of the input A tensor.
[in] aCount - a count of dimensions of the input A tensor.
[in] aType - a type of the input A tensor. Can be FP32 or UINT8.
[in] aBias - a dequantization bias parameter of the A tensor (-zero point).
[in] aNorm - a dequantization norm parameter of the A tensor (scale).
[in] bShape - a pointer to the shape of the input B tensor.
[in] bCount - a count of dimensions of the input B tensor.
[in] bType - a type of the input B tensor. Can be FP32 or UINT8.
[in] bBias - a dequantization bias parameter of the B tensor (-zero point).
[in] bNorm - a dequantization norm parameter of the B tensor (scale).
[in] actType - an activation function type (if one is fused into the quantized addition).
[in] actParams - a pointer to activation function parameters. Can be NULL.
[in] dstType - a type of the output tensor. Can be FP32 or UINT8.
[in] dstNorm - an output quantization norm (1/scale).
[in] dstZero - an output quantization zero point.
Returns
a pointer to the quantized addition context. On error it returns NULL. The context must be released with the function SimdRelease. This pointer is passed to the function SimdSynetQuantizedAddForward.

◆ SimdSynetQuantizedAddForward()

void SimdSynetQuantizedAddForward ( void *  context,
const uint8_t *  a,
const uint8_t *  b,
uint8_t *  dst 
)

Performs forward propagation of the quantized addition algorithm.

Parameters
[in] context - a pointer to the quantized addition context. It must be created by the function SimdSynetQuantizedAddInit and released by the function SimdRelease.
[in] a - a pointer to the input A tensor.
[in] b - a pointer to the input B tensor.
[out] dst - a pointer to the output tensor.
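A typical call sequence might look like the following sketch. Only SimdSynetQuantizedAddInit, SimdSynetQuantizedAddForward, and SimdRelease are taken from this page; the tensor shape, the quantization values, and the enumeration names SimdTensorData8u and SimdConvolutionActivationIdentity are illustrative assumptions and should be checked against the library headers.

```cpp
// Illustrative: add two UINT8 tensors of shape {1, 64, 32, 32}.
size_t shape[4] = { 1, 64, 32, 32 };
float aNorm = 0.5f, bNorm = 0.25f, dstNorm = 2.0f; // assumed quantization scales
uint8_t a[1 * 64 * 32 * 32], b[1 * 64 * 32 * 32], dst[1 * 64 * 32 * 32];

void* context = SimdSynetQuantizedAddInit(
    shape, 4, SimdTensorData8u, -128, &aNorm, // input A: bias is -zero point
    shape, 4, SimdTensorData8u, -128, &bNorm, // input B
    SimdConvolutionActivationIdentity, NULL,  // no fused activation
    SimdTensorData8u, &dstNorm, 128);         // output: norm is 1/scale
if (context)
{
    SimdSynetQuantizedAddForward(context, a, b, dst);
    SimdRelease(context);
}
```

Note that the same context can be reused for repeated forward calls on tensors of the same shape and quantization parameters; only SimdRelease must be called once at the end.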