Functions to accelerate activation functions in Synet Framework. More...
Functions
SIMD_API void SimdSynetElu32f(const float *src, size_t size, const float *alpha, float *dst)
    Calculates ELU activation function for 32-bit float array. More...
SIMD_API void SimdSynetGelu32f(const float *src, size_t size, float *dst)
    This function is used for forward propagation of GeluLayer. More...
SIMD_API void SimdSynetHardSigmoid32f(const float *src, size_t size, const float *scale, const float *shift, float *dst)
    Calculates HardSigmoid activation function (https://pytorch.org/docs/stable/generated/torch.nn.Hardsigmoid.html) for 32-bit float array. More...
SIMD_API void SimdSynetHswish32f(const float *src, size_t size, const float *shift, const float *scale, float *dst)
    Calculates H-Swish activation function (https://arxiv.org/pdf/1905.02244.pdf) for 32-bit float array. More...
SIMD_API void SimdSynetMish32f(const float *src, size_t size, const float *threshold, float *dst)
    Calculates Mish activation function (https://arxiv.org/abs/1908.08681) for 32-bit float array. More...
SIMD_API void SimdSynetPreluLayerForward(const float *src, const float *slope, size_t channels, size_t spatial, float *dst, SimdTensorFormatType format)
    This function is used for forward propagation of PreluLayer (PReLU). More...
SIMD_API void SimdSynetRelu32f(const float *src, size_t size, const float *slope, float *dst)
    Calculates ReLU (rectified linear unit) function for 32-bit float array. More...
SIMD_API void SimdSynetRelu16b(const uint16_t *src, size_t size, const float *slope, uint16_t *dst)
    Calculates ReLU (rectified linear unit) function for 16-bit brain-float array. More...
SIMD_API void SimdSynetRestrictRange32f(const float *src, size_t size, const float *lower, const float *upper, float *dst)
    This function is used in order to restrict the range of a given 32-bit float array. More...
SIMD_API void SimdSynetSigmoid32f(const float *src, size_t size, const float *slope, float *dst)
    This function is used for forward propagation of SigmoidLayer. More...
SIMD_API void SimdSynetSoftplus32f(const float *src, size_t size, const float *beta, const float *threshold, float *dst)
    This function is used for forward propagation of SoftplusLayer. More...
SIMD_API void SimdSynetSwish32f(const float *src, size_t size, const float *slope, float *dst)
    This function is used for forward propagation of SwishLayer. More...
SIMD_API void SimdSynetTanh32f(const float *src, size_t size, const float *slope, float *dst)
    Calculates hyperbolic tangent for 32-bit float array. More...
Detailed Description
Functions to accelerate activation functions in Synet Framework.
Function Documentation
◆ SimdSynetElu32f()
void SimdSynetElu32f(const float * src, size_t size, const float * alpha, float * dst);
Calculates ELU activation function for 32-bit float array.
The input and output arrays must have the same size.
Algorithm's details:
for(i = 0; i < size; ++i) dst[i] = src[i] >= 0 ? src[i] : alpha*(Exp(src[i]) - 1);
- Note
- This function is used in Synet Framework.
- Parameters
  - [in] src - a pointer to the input 32-bit float array.
  - [in] size - a size of input and output arrays.
  - [in] alpha - a pointer to alpha parameter.
  - [out] dst - a pointer to the output 32-bit float array.
◆ SimdSynetGelu32f()
void SimdSynetGelu32f(const float * src, size_t size, float * dst);
This function is used for forward propagation of GeluLayer.
Algorithm's details:
for(i = 0; i < size; ++i) dst[i] = src[i] * (1 + erf(src[i]/sqrt(2))) / 2;
- Note
- This function is used in Synet Framework.
- Parameters
  - [in] src - a pointer to the input 32-bit float array.
  - [in] size - a size of input and output arrays.
  - [out] dst - a pointer to the output 32-bit float array.
◆ SimdSynetHardSigmoid32f()
void SimdSynetHardSigmoid32f(const float * src, size_t size, const float * scale, const float * shift, float * dst);
Calculates HardSigmoid activation function (https://pytorch.org/docs/stable/generated/torch.nn.Hardsigmoid.html) for 32-bit float array.
Input and output arrays must have the same size.
Algorithm's details:
for(i = 0; i < size; ++i) dst[i] = Max(0, Min(src[i] * scale + shift, 1));
- Note
- This function is used in Synet Framework.
- Parameters
  - [in] src - a pointer to the input 32-bit float array.
  - [in] size - a size of input and output arrays.
  - [in] scale - a pointer to scale parameter. This parameter is equal to 1/6 in Pytorch documentation.
  - [in] shift - a pointer to shift parameter. This parameter is equal to 1/2 in Pytorch documentation.
  - [out] dst - a pointer to the output 32-bit float array.
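The clamp above reads naturally as scalar C. A minimal reference sketch (helper name is ours, not the SIMD implementation):

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Scalar reference for SimdSynetHardSigmoid32f:
   dst[i] = Max(0, Min(src[i]*scale + shift, 1)). */
static void HardSigmoid32fRef(const float* src, size_t size,
                              const float* scale, const float* shift, float* dst)
{
    for (size_t i = 0; i < size; ++i)
    {
        float v = src[i] * scale[0] + shift[0];
        dst[i] = v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v);
    }
}
```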
◆ SimdSynetHswish32f()
void SimdSynetHswish32f(const float * src, size_t size, const float * shift, const float * scale, float * dst);
Calculates H-Swish activation function (https://arxiv.org/pdf/1905.02244.pdf) for 32-bit float array.
Input and output arrays must have the same size.
Algorithm's details:
for(i = 0; i < size; ++i) dst[i] = Max(Min(src[i], shift) + shift, 0)*scale*src[i];
- Note
- This function is used in Synet Framework.
- Parameters
  - [in] src - a pointer to the input 32-bit float array.
  - [in] size - a size of input and output arrays.
  - [in] shift - a pointer to shift parameter. It is equal to 3 in original paper.
  - [in] scale - a pointer to scale parameter. It is equal to 1/6 in original paper.
  - [out] dst - a pointer to the output 32-bit float array.
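A scalar C sketch of the H-Swish loop above with the Min/Max steps spelled out (helper name is illustrative):

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Scalar reference for SimdSynetHswish32f:
   dst[i] = Max(Min(src[i], shift) + shift, 0) * scale * src[i]. */
static void Hswish32fRef(const float* src, size_t size,
                         const float* shift, const float* scale, float* dst)
{
    for (size_t i = 0; i < size; ++i)
    {
        float t = src[i] < shift[0] ? src[i] : shift[0]; /* Min(src[i], shift) */
        t += shift[0];
        if (t < 0.0f)                                    /* Max(..., 0) */
            t = 0.0f;
        dst[i] = t * scale[0] * src[i];
    }
}
```

With shift = 3 and scale = 1/6 this reproduces the paper's x * ReLU6(x + 3) / 6.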
◆ SimdSynetMish32f()
void SimdSynetMish32f(const float * src, size_t size, const float * threshold, float * dst);
Calculates Mish activation function (https://arxiv.org/abs/1908.08681) for 32-bit float array.
Algorithm's details:
for(i = 0; i < size; ++i) dst[i] = src[i] > threshold ? src[i] : src[i] * tanh(log(exp(src[i]) + 1));
- Note
- This function is used in Synet Framework.
- Parameters
  - [in] src - a pointer to the input 32-bit float array.
  - [in] size - a size of input and output arrays.
  - [in] threshold - a pointer to 'threshold' parameter.
  - [out] dst - a pointer to the output 32-bit float array.
◆ SimdSynetPreluLayerForward()
void SimdSynetPreluLayerForward(const float * src, const float * slope, size_t channels, size_t spatial, float * dst, SimdTensorFormatType format);
This function is used for forward propagation of PreluLayer (PReLU).
Algorithm's details (example for NCHW tensor format):
for(c = 0; c < channels; ++c)
    for(s = 0; s < spatial; ++s)
        dst[c*spatial + s] = src[c*spatial + s] > 0 ? src[c*spatial + s] : slope[c]*src[c*spatial + s];
- Note
- This function is used in Synet Framework.
- Parameters
  - [in] src - a pointer to the 32-bit float array with input image tensor. The size of the array is equal to channels * spatial.
  - [in] slope - a pointer to the 32-bit float array with slope coefficients. The size of the array is equal to channels.
  - [in] channels - a number of channels in the (input/output) image tensor.
  - [in] spatial - a spatial size of the (input/output) image tensor.
  - [out] dst - a pointer to the 32-bit float array with output image tensor. The size of the array is equal to channels * spatial.
  - [in] format - a format of the (input/output) image tensor.
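The NCHW case above can be sketched as scalar C; note this covers only the NCHW layout, whereas the real function also handles other tensor formats via the format argument (helper name is illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Scalar NCHW reference for SimdSynetPreluLayerForward:
   each channel c has its own learned slope[c] for negative inputs. */
static void PreluNchwRef(const float* src, const float* slope,
                         size_t channels, size_t spatial, float* dst)
{
    for (size_t c = 0; c < channels; ++c)
        for (size_t s = 0; s < spatial; ++s)
        {
            size_t o = c * spatial + s;
            dst[o] = src[o] > 0 ? src[o] : slope[c] * src[o];
        }
}
```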
◆ SimdSynetRelu32f()
void SimdSynetRelu32f(const float * src, size_t size, const float * slope, float * dst);
Calculates ReLU (rectified linear unit) function for 32-bit float array.
Algorithm's details:
for(i = 0; i < size; ++i) dst[i] = src[i] > 0 ? src[i] : slope*src[i];
- Note
- This function is used in Synet Framework.
- Parameters
  - [in] src - a pointer to the input 32-bit float array.
  - [in] size - a size of input and output arrays.
  - [in] slope - a pointer to the 'slope' parameter.
  - [out] dst - a pointer to the output 32-bit float array.
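A scalar C sketch of the leaky-ReLU loop above (slope = 0 gives plain ReLU; helper name is ours):

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Scalar reference for SimdSynetRelu32f: negative inputs are scaled by slope. */
static void Relu32fRef(const float* src, size_t size, const float* slope, float* dst)
{
    for (size_t i = 0; i < size; ++i)
        dst[i] = src[i] > 0 ? src[i] : slope[0] * src[i];
}
```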
◆ SimdSynetRelu16b()
void SimdSynetRelu16b(const uint16_t * src, size_t size, const float * slope, uint16_t * dst);
Calculates ReLU (rectified linear unit) function for 16-bit brain-float array.
Algorithm's details:
for(i = 0; i < size; ++i) dst[i] = src[i] > 0 ? src[i] : slope*src[i];
- Note
- This function is used in Synet Framework.
- Parameters
  - [in] src - a pointer to the input 16-bit brain-float array.
  - [in] size - a size of input and output arrays.
  - [in] slope - a pointer to the 'slope' parameter.
  - [out] dst - a pointer to the output 16-bit brain-float array.
◆ SimdSynetRestrictRange32f()
void SimdSynetRestrictRange32f(const float * src, size_t size, const float * lower, const float * upper, float * dst);
This function is used in order to restrict the range of a given 32-bit float array.
Algorithm's details:
for(i = 0; i < size; ++i) dst[i] = Min(Max(lower, src[i]), upper);
- Note
- This function is used in Synet Framework.
- Parameters
  - [in] src - a pointer to the input 32-bit float array.
  - [in] size - a size of input and output arrays.
  - [in] lower - a pointer to lower restrict bound.
  - [in] upper - a pointer to upper restrict bound.
  - [out] dst - a pointer to the output 32-bit float array.
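The clamp above in scalar C, with the Min/Max calls expanded (helper name is illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Scalar reference for SimdSynetRestrictRange32f:
   dst[i] = Min(Max(lower, src[i]), upper). */
static void RestrictRange32fRef(const float* src, size_t size,
                                const float* lower, const float* upper, float* dst)
{
    for (size_t i = 0; i < size; ++i)
    {
        float v = src[i] > lower[0] ? src[i] : lower[0]; /* Max(lower, src[i]) */
        dst[i] = v < upper[0] ? v : upper[0];            /* Min(..., upper) */
    }
}
```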
◆ SimdSynetSigmoid32f()
void SimdSynetSigmoid32f(const float * src, size_t size, const float * slope, float * dst);
This function is used for forward propagation of SigmoidLayer.
Algorithm's details:
for(i = 0; i < size; ++i) dst[i] = 1/(1 + exp(-slope*src[i]));
- Note
- This function is used in Synet Framework.
- Parameters
  - [in] src - a pointer to the input 32-bit float array.
  - [in] size - a size of input and output arrays.
  - [in] slope - a pointer to the 'slope' parameter.
  - [out] dst - a pointer to the output 32-bit float array.
◆ SimdSynetSoftplus32f()
void SimdSynetSoftplus32f(const float * src, size_t size, const float * beta, const float * threshold, float * dst);
This function is used for forward propagation of SoftplusLayer.
Algorithm's details:
for(i = 0; i < size; ++i) dst[i] = src[i] > threshold ? src[i] : log(1 + exp(src[i]*beta))/beta;
- Note
- This function is used in Synet Framework.
- Parameters
  - [in] src - a pointer to the input 32-bit float array.
  - [in] size - a size of input and output arrays.
  - [in] beta - a pointer to 'beta' parameter.
  - [in] threshold - a pointer to 'threshold' parameter.
  - [out] dst - a pointer to the output 32-bit float array.
◆ SimdSynetSwish32f()
void SimdSynetSwish32f(const float * src, size_t size, const float * slope, float * dst);
This function is used for forward propagation of SwishLayer.
Algorithm's details:
for(i = 0; i < size; ++i) dst[i] = src[i]/(1 + exp(-slope*src[i]));
- Note
- This function is used in Synet Framework.
- Parameters
  - [in] src - a pointer to the input 32-bit float array.
  - [in] size - a size of input and output arrays.
  - [in] slope - a pointer to the 'slope' parameter.
  - [out] dst - a pointer to the output 32-bit float array.
◆ SimdSynetTanh32f()
void SimdSynetTanh32f(const float * src, size_t size, const float * slope, float * dst);
Calculates hyperbolic tangent for 32-bit float array.
Algorithm's details:
for(i = 0; i < size; ++i) { x = slope*src[i]; dst[i] = (exp(x) - exp(-x))/(exp(x) + exp(-x)); }
- Note
- This function is used in Synet Framework.
- Parameters
  - [in] src - a pointer to the input 32-bit float array.
  - [in] size - a size of input and output arrays.
  - [in] slope - a pointer to the 'slope' parameter.
  - [out] dst - a pointer to the output 32-bit float array.