Title: Analysis of quantization effects on high-order function neural networks
Authors: Jiang, Minghu ×
Gielen, Georges #
Issue Date: Feb-2008
Publisher: Springer
Series Title: Applied Intelligence, vol. 28, issue 1, pp. 51-67
Abstract: In this paper we investigate the combined effects of quantization and clipping on high-order function neural networks (HOFNN). Statistical models are used to analyze the effects of quantization in a digital implementation. We analyze the performance degradation as a function of the number of fixed-point and floating-point quantization bits under different assumed probability distributions for the quantized variables, and then compare the training performance with and without weight clipping. For a true nonlinear neuron, we establish and analyze the relationships between input and output bit resolution, training and quantization methods, network order, and performance degradation, all based on statistical models and covering both on-chip and off-chip training. Our simulation results verify the presented theoretical analysis.
ISSN: 0924-669X
Publication status: published
KU Leuven publication type: IT
Appears in Collections: ESAT - MICAS, Microelectronics and Sensors
× corresponding author
# (joint) last author
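
The abstract describes treating fixed-point weight quantization and clipping as statistical noise sources. The following is a minimal illustrative sketch, not the authors' model: it quantizes a set of weights to a given number of fractional bits, applies an assumed clipping range of [-1, 1], and compares the empirical quantization-error variance with the standard uniform-noise prediction of delta^2/12. The Gaussian weight distribution, clipping range, and bit widths are assumptions chosen for illustration only.

import numpy as np

def quantize_fixed_point(w, frac_bits, clip=True):
    """Round weights to `frac_bits` fractional bits; optionally clip to [-1, 1] first."""
    delta = 2.0 ** (-frac_bits)       # quantization step size
    if clip:
        w = np.clip(w, -1.0, 1.0)     # weight clipping to the assumed representable range
    return np.round(w / delta) * delta

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.3, size=100_000)   # assumed Gaussian weight distribution

for bits in (4, 8, 12):
    q = quantize_fixed_point(weights, bits)
    err = q - np.clip(weights, -1.0, 1.0)      # quantization error after clipping
    delta = 2.0 ** (-bits)
    print(f"{bits:2d} bits: empirical error variance = {err.var():.3e}, "
          f"uniform-noise model delta^2/12 = {delta**2 / 12:.3e}")

For sufficiently many bits the empirical variance approaches the delta^2/12 prediction, which is the kind of statistical quantization model the paper builds on; at low bit counts clipping and rounding interact and the match degrades.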
