Topographic map algorithms aimed at building "faithful representations" also yield maps that transfer the maximum amount of information available about the distribution from which they receive their input: the weight density (magnification factor) of these maps is proportional to the input density or, equivalently, the neurons have an equal probability of being active (equiprobabilistic map). Since mean squared error (MSE) minimization is, in general, not compatible with equiprobabilistic map formation, a number of heuristics have been devised to compensate for this discrepancy in competitive learning schemes, e.g. by adding a "conscience" to the neurons' firing behavior. Rather than minimizing a modified MSE criterion, however, we introduce a new unsupervised competitive learning rule for topographic map formation, called the kernel-based Maximum Entropy learning Rule (kMER), which optimizes an information-theoretic criterion directly. A radially symmetric kernel, with a given center and radius, is associated with each neuron, and both are updated in such a way that the (unconditional) information-theoretic entropy of the neurons' outputs is maximized. We review a number of competitive learning rules for building equiprobabilistic maps and, as benchmark tests of the faithfulness of the representations, consider two types of distributions, comparing the performance of these rules and kMER for batch and incremental learning. As a first example application, we consider non-parametric density estimation, where the maps are used to generate "pilot" estimates in kernel-based density estimation. The second application we envisage for kMER is "on-line" adaptive filtering of speech signals, using Gabor functions as wavelet filters. The topographic feature maps developed in this way differ in several respects from those obtained with Kohonen's Adaptive-Subspace SOM algorithm.
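The abstract states the mechanism only at a high level: each neuron carries a radially symmetric kernel (a center and a radius), and both are adapted so that all neurons end up equally likely to be active. The sketch below illustrates that equiprobabilistic idea in one dimension; it is not the paper's actual kMER update, which the abstract does not give. Every concrete choice here is an assumption for illustration: the names `w`, `s`, `eta`, `rho`, the sign-based center update, and the grow-while-inactive / shrink-while-active radius rule whose drift vanishes when each kernel's activation probability equals `rho / (N + rho)`.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10      # number of neurons (illustrative)
eta = 0.01  # learning rate (illustrative)
rho = 1.0   # scale factor in the assumed radius rule (illustrative)

w = rng.uniform(0.0, 3.0, N)  # kernel centers
s = np.full(N, 0.5)           # kernel radii

# Skewed 1-D input density: equal spacing would NOT give equal activity,
# so equalizing activation probabilities is a non-trivial outcome here.
for _ in range(200_000):
    v = rng.exponential(1.0)
    active = np.abs(v - w) < s  # kernels whose support contains the input
    k = active.sum()
    if k:
        xi = active / k  # share the credit among all active kernels
        # nudge the centers of active kernels toward the input
        w += eta * xi * np.sign(v - w)
    # assumed radius rule: grow while inactive, shrink while active;
    # the expected update is zero when P(active) = rho / (N + rho),
    # i.e. the same activation probability for every neuron
    s = np.maximum(s + eta * (rho / N * ~active - active), 1e-3)

# measure how often each kernel fires on fresh samples
test_v = rng.exponential(1.0, size=20_000)
freqs = (np.abs(test_v[:, None] - w) < s).mean(axis=0)
print(np.round(freqs, 3))  # each neuron fires on roughly the same fraction of inputs
```

Note the design point the abstract emphasizes: nothing in this loop minimizes a (modified) MSE; the radius dynamics directly drive all activation probabilities toward a common value, which is the equiprobabilistic property the entropy-maximizing kMER rule achieves.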