Simd Library Documentation.

Other functions needed for quantization

These functions are used to accelerate quantization algorithms in the Synet Framework. More...

Functions

SIMD_API void SimdSynetDequantizeLinear (const uint8_t *src, size_t size, int32_t bias, const float *norm, float *dst)
 Performs UINT8 linear dequantization. More...
 
SIMD_API void SimdSynetQuantizedConcatLayerForward (size_t count, const uint8_t **src, size_t num, const size_t *size, const int32_t *bias, const float *norm, const float *scale, int32_t zero, uint8_t *dst)
 This function is used for forward propagation of QuantizedConcatLayer. More...
 
SIMD_API void SimdSynetQuantizedShuffleLayerForward (const uint8_t *src0, int bias0, const float *norm0, size_t srcC0, const uint8_t *src1, int bias1, const float *norm1, size_t srcC1, size_t spatial, uint8_t *dst0, uint8_t *dst1, const float *scale, int zero, SimdTensorFormatType format, int type)
 This function is used for forward propagation of QuantizedShuffleLayer. More...
 
SIMD_API void SimdSynetQuantizeLinear (const float *src, size_t size, const float *norm, int32_t zero, uint8_t *dst)
 Performs UINT8 linear quantization. More...
 

Detailed Description

These functions are used to accelerate quantization algorithms in the Synet Framework.

Function Documentation

◆ SimdSynetDequantizeLinear()

void SimdSynetDequantizeLinear ( const uint8_t *  src,
size_t  size,
int32_t  bias,
const float *  norm,
float *  dst 
)

Performs UINT8 linear dequantization.

Algorithm's details for SimdSynetDequantizeLinear:

for(i = 0; i < size; ++i)
    dst[i] = (src[i] + bias) * norm[0];
Parameters
[in] src - a pointer to the UINT8 input tensor.
[in] size - a size of the input and output tensors.
[in] bias - a dequantization bias (-zero).
[in] norm - a dequantization norm (scale).
[out] dst - a pointer to the FP32 output tensor.
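The loop above maps directly to a scalar reference implementation. The sketch below is not the SIMD-accelerated library code, just the documented formula spelled out in plain C++ (the name DequantizeLinearRef is made up for illustration):

```cpp
#include <cstddef>
#include <cstdint>

// Scalar reference of the documented algorithm:
// dst[i] = (src[i] + bias) * norm[0].
void DequantizeLinearRef(const uint8_t* src, size_t size, int32_t bias,
                         const float* norm, float* dst)
{
    for (size_t i = 0; i < size; ++i)
        dst[i] = float(int32_t(src[i]) + bias) * norm[0];
}
```

With bias = -zero this reproduces the standard affine dequantization f = (q - zero) * scale, which is why the parameter descriptions annotate bias as (-zero) and norm as (scale).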

◆ SimdSynetQuantizedConcatLayerForward()

void SimdSynetQuantizedConcatLayerForward ( size_t  count,
const uint8_t **  src,
size_t  num,
const size_t *  size,
const int32_t *  bias,
const float *  norm,
const float *  scale,
int32_t  zero,
uint8_t *  dst 
)

This function is used for forward propagation of QuantizedConcatLayer.

Note
This function is used in Synet Framework.
Parameters
[in] count - a number of input tensors.
[in] src - an array with pointers to the UINT8 input tensors.
[in] num - an outer size of the input/output tensors (number of concatenations).
[in] size - an array with concatenation sizes of the input tensors.
[in] bias - an array with dequantization bias parameters of the input tensors (-zero).
[in] norm - an array with dequantization norm parameters of the input tensors (scale).
[in] scale - an output quantization norm (1/scale).
[in] zero - an output quantization zero.
[out] dst - a pointer to the UINT8 output tensor.
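A hedged scalar sketch of the assumed semantics: for each of the num outer slices, every input part is dequantized with its own (bias, norm), requantized with the shared output (scale, zero), and written out in concatenation order. The function name and the exact memory layout are assumptions for illustration, not the library code:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>

// Hypothetical scalar reference (assumed semantics, not library code).
void QuantizedConcatRef(size_t count, const uint8_t** src, size_t num,
                        const size_t* size, const int32_t* bias,
                        const float* norm, const float* scale,
                        int32_t zero, uint8_t* dst)
{
    for (size_t n = 0; n < num; ++n)
        for (size_t c = 0; c < count; ++c)
            for (size_t j = 0; j < size[c]; ++j)
            {
                // Dequantize with the per-input parameters.
                float f = float(int32_t(src[c][n * size[c] + j]) + bias[c]) * norm[c];
                // Requantize with the shared output parameters and clamp to UINT8.
                int32_t q = int32_t(std::nearbyint(f * scale[0])) + zero;
                *dst++ = uint8_t(std::min<int32_t>(std::max<int32_t>(q, 0), 255));
            }
}
```

Because every input can carry its own quantization parameters, the concatenation cannot simply copy bytes: each part must pass through the dequantize/requantize round trip into the shared output scale.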

◆ SimdSynetQuantizedShuffleLayerForward()

void SimdSynetQuantizedShuffleLayerForward ( const uint8_t *  src0,
int  bias0,
const float *  norm0,
size_t  srcC0,
const uint8_t *  src1,
int  bias1,
const float *  norm1,
size_t  srcC1,
size_t  spatial,
uint8_t *  dst0,
uint8_t *  dst1,
const float *  scale,
int  zero,
SimdTensorFormatType  format,
int  type 
)

This function is used for forward propagation of QuantizedShuffleLayer.

Note
This function is used in Synet Framework.
Parameters
[in] src0 - a pointer to the 8-bit integer array with the first input tensor.
[in] bias0 - a dequantization bias parameter of the first input tensor (-zero).
[in] norm0 - a dequantization norm parameter of the first input tensor (scale).
[in] srcC0 - a number of channels in the first input tensor.
[in] src1 - a pointer to the 8-bit integer array with the second input tensor.
[in] bias1 - a dequantization bias parameter of the second input tensor (-zero).
[in] norm1 - a dequantization norm parameter of the second input tensor (scale).
[in] srcC1 - a number of channels in the second input tensor.
[in] spatial - a spatial size of the input/output tensors.
[out] dst0 - a pointer to the 8-bit integer array with the first output tensor.
[out] dst1 - a pointer to the 8-bit integer array with the second output tensor.
[in] scale - an output quantization norm (1/scale).
[in] zero - an output quantization zero.
[in] format - a format of the input/output tensors.
[in] type - a shuffle type (it can be 0 or 1).
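To make the data flow concrete, here is a heavily hedged scalar sketch covering only the NCHW format and shuffle type 0, under one plausible reading of the permutation: the two inputs form a virtual concatenation, its even channels go to dst0 and its odd channels to dst1, each value passing through the same dequantize/requantize round trip as above. The function name and the permutation are assumptions; the exact semantics of both shuffle types should be checked against the library sources:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>

// Hypothetical scalar sketch: NCHW format, shuffle type 0 only
// (assumed even/odd channel split of the virtual concat [src0; src1]).
void QuantizedShuffleRefNchw(const uint8_t* src0, int bias0, const float* norm0, size_t srcC0,
                             const uint8_t* src1, int bias1, const float* norm1, size_t srcC1,
                             size_t spatial, uint8_t* dst0, uint8_t* dst1,
                             const float* scale, int zero)
{
    size_t dstC = (srcC0 + srcC1) / 2;
    for (size_t c = 0; c < 2 * dstC; ++c)
    {
        // c-th channel of the virtual concatenation [src0; src1].
        const uint8_t* s = c < srcC0 ? src0 + c * spatial : src1 + (c - srcC0) * spatial;
        int bias = c < srcC0 ? bias0 : bias1;
        float norm = c < srcC0 ? norm0[0] : norm1[0];
        // Even channels go to dst0, odd channels to dst1 (assumed permutation).
        uint8_t* d = (c & 1) ? dst1 + (c / 2) * spatial : dst0 + (c / 2) * spatial;
        for (size_t i = 0; i < spatial; ++i)
        {
            float f = float(int(s[i]) + bias) * norm;          // dequantize
            int q = int(std::nearbyint(f * scale[0])) + zero;  // requantize
            d[i] = uint8_t(std::min(std::max(q, 0), 255));
        }
    }
}
```

Type 1 would be the inverse permutation (gathering the two outputs back into the interleaved layout); it is omitted here since the sketch is illustrative only.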

◆ SimdSynetQuantizeLinear()

void SimdSynetQuantizeLinear ( const float *  src,
size_t  size,
const float *  norm,
int32_t  zero,
uint8_t *  dst 
)

Performs UINT8 linear quantization.

Algorithm's details for SimdSynetQuantizeLinear:

for(i = 0; i < size; ++i)
    dst[i] = Min(Max(std::nearbyint(src[i] * norm[0]) + zero, 0), 255);
Parameters
[in] src - a pointer to the FP32 input tensor.
[in] size - a size of the input and output tensors.
[in] norm - a quantization norm (1/scale).
[in] zero - a quantization zero.
[out] dst - a pointer to the UINT8 output tensor.
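The documented loop as a scalar reference in plain C++ (the name QuantizeLinearRef is made up for illustration; this is not the SIMD-accelerated library code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>

// Scalar reference of the documented algorithm:
// dst[i] = Min(Max(nearbyint(src[i] * norm[0]) + zero, 0), 255).
void QuantizeLinearRef(const float* src, size_t size, const float* norm,
                       int32_t zero, uint8_t* dst)
{
    for (size_t i = 0; i < size; ++i)
    {
        int32_t q = int32_t(std::nearbyint(src[i] * norm[0])) + zero;
        dst[i] = uint8_t(std::min<int32_t>(std::max<int32_t>(q, 0), 255));
    }
}
```

Since norm is the reciprocal scale (1/scale), this is the inverse of SimdSynetDequantizeLinear: q = round(f / scale) + zero, clamped to the UINT8 range.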