Simd Library Documentation.

Other functions needed for quantization

These functions are used to accelerate quantization algorithms in the Synet Framework. More...

Functions

SIMD_API void SimdSynetDequantizeLinear (const uint8_t *src, size_t size, int32_t bias, const float *norm, float *dst)
 Performs UINT8 linear dequantization. More...
 
SIMD_API void SimdSynetQuantizeLinear (const float *src, size_t size, const float *norm, int32_t zero, uint8_t *dst)
 Performs UINT8 linear quantization. More...
 

Detailed Description

These functions are used to accelerate quantization algorithms in the Synet Framework.

Function Documentation

◆ SimdSynetDequantizeLinear()

void SimdSynetDequantizeLinear ( const uint8_t *  src,
size_t  size,
int32_t  bias,
const float *  norm,
float *  dst 
)

Performs UINT8 linear dequantization.

Algorithm's details for SimdSynetDequantizeLinear:

for(i = 0; i < size; ++i)
    dst[i] = (src[i] + bias) * norm[0];
Parameters
[in] src - a pointer to the UINT8 input tensor.
[in] size - a size of the input and output tensors.
[in] bias - a dequantization bias (-zero).
[in] norm - a dequantization norm (scale).
[out] dst - a pointer to the FP32 output tensor.
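The scalar algorithm above can be sketched as a plain C reference implementation. This is not the SIMD-accelerated library code, only a readable equivalent of the documented formula:

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar reference of the documented algorithm:
   dst[i] = (src[i] + bias) * norm[0].
   The library function SimdSynetDequantizeLinear computes the same
   result using SIMD instructions. */
static void DequantizeLinearRef(const uint8_t* src, size_t size,
    int32_t bias, const float* norm, float* dst)
{
    for (size_t i = 0; i < size; ++i)
        dst[i] = (float)((int32_t)src[i] + bias) * norm[0];
}
```

With bias = -zero and norm = scale, this maps the quantized value q back to (q - zero) * scale, the inverse of UINT8 linear quantization.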

◆ SimdSynetQuantizeLinear()

void SimdSynetQuantizeLinear ( const float *  src,
size_t  size,
const float *  norm,
int32_t  zero,
uint8_t *  dst 
)

Performs UINT8 linear quantization.

Algorithm's details for SimdSynetQuantizeLinear:

for(i = 0; i < size; ++i)
    dst[i] = Min(Max(std::nearbyint(src[i] * norm[0]) + zero, 0), 255);
Parameters
[in] src - a pointer to the FP32 input tensor.
[in] size - a size of the input and output tensors.
[in] norm - a quantization norm (1/scale).
[in] zero - a quantization zero.
[out] dst - a pointer to the UINT8 output tensor.