PTGNG: An Evolutionary Approach for Parameter Optimization in the Growing Neural Gas Algorithm
Keywords: Growing Neural Gas, Parameter tuning, Evolutionary algorithm

Abstract
The Growing Neural Gas (GNG) algorithm is an unsupervised learning algorithm belonging to the competitive learning family. Since its introduction, GNG has been the subject of various developments and implementations in the literature, for two main reasons. First, the number of neurons (i.e., nodes) is adaptive: new neurons are periodically added and old neurons removed so that the network best captures the topological structure of the given data and the overall representation error is reduced. Second, the GNG algorithm has fewer restrictions than other competitive learning algorithms, as it is unconstrained in both the placement and the number of its neurons. In this paper, we propose and implement an evolutionary approach, named PTGNG, to tune the GNG parameters for data in multiple dimensions, namely 2D, 3D, and 4D. The idea is to find the optimal set of parameter values for any given problem to be solved with the GNG algorithm. By its nature, the evolutionary algorithm searches a vast space of candidate solutions and evaluates each one individually. With the proposed parameter tuning, GNG captured the topological structure of the datasets with fewer neurons and better accuracy, and the same results held for datasets with three and four dimensions.
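The evolutionary tuning loop described above can be illustrated with a minimal sketch. The parameter names and ranges below follow the standard GNG formulation (winner/neighbor learning rates, insertion interval, edge-age limit, error decays) but the specific ranges, the mutation scheme, and the `toy_fitness` stand-in are assumptions for illustration; in PTGNG the fitness would come from actually training a GNG with the candidate parameters and scoring its representation error and neuron count.

```python
import random

# Hypothetical search ranges for standard GNG hyperparameters (assumed, not
# taken from the paper): (lower bound, upper bound) for each parameter.
PARAM_RANGES = {
    "eps_b":  (0.01, 0.2),     # learning rate of the winning neuron
    "eps_n":  (0.0001, 0.05),  # learning rate of the winner's neighbors
    "lambda": (50, 300),       # node-insertion interval (in input signals)
    "a_max":  (20, 100),       # maximum edge age before removal
    "alpha":  (0.1, 0.9),      # error decay applied at node insertion
    "beta":   (0.0005, 0.01),  # global error decay per step
}

def random_individual(rng):
    """Sample one candidate parameter set uniformly from the ranges."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def mutate(ind, rng, rate=0.3):
    """Gaussian-perturb each parameter with some probability, clipped to range."""
    child = dict(ind)
    for k, (lo, hi) in PARAM_RANGES.items():
        if rng.random() < rate:
            child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.1 * (hi - lo))))
    return child

def evolve(fitness, generations=30, pop_size=20, seed=0):
    """Simple truncation-selection loop: keep the best half, refill by mutation."""
    rng = random.Random(seed)
    pop = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)  # lower fitness = better parameter set
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(rng.choice(parents), rng)
                         for _ in range(pop_size - len(parents))]
    return min(pop, key=fitness)

# Stand-in fitness (assumption): rewards eps_b near 0.05 and alpha near 0.5.
# A real PTGNG fitness would run GNG and measure quantization error plus a
# penalty on the number of neurons.
def toy_fitness(p):
    return (p["eps_b"] - 0.05) ** 2 + (p["alpha"] - 0.5) ** 2

best = evolve(toy_fitness)
```

The design mirrors the abstract's description: each individual is a complete GNG parameter set, evaluated on its own, and selection gradually concentrates the population on configurations that represent the data well.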
License
Copyright (c) 2023 International Journal of Computational and Experimental Science and Engineering
This work is licensed under a Creative Commons Attribution 4.0 International License.