Growing Codebooks

Growing codebooks start out with a single vector per codebook. During training, each vector maintains two accumulators, and a vector is split in two whenever its accumulators indicate that the split is worthwhile. After n training iterations a codebook therefore contains at most 2^n vectors. One advantage of this method over starting with full-sized codebooks is that no k-means or neural gas algorithm is needed for initialization. Another advantage is that a codebook can stop growing once it reaches its optimal size, resulting in an optimal number of trainable parameters.
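
The text does not specify how samples are routed between a vector's two accumulators or exactly when a split counts as worthwhile, so the following is a minimal sketch under two assumptions: samples assigned to a vector are routed to one of its accumulators by a random projection, and a split happens when the two accumulator means drift further apart than a threshold. The names GrowingCodebook and split_threshold are hypothetical, not from the original.

    import numpy as np

    class GrowingCodebook:
        """Codebook that starts with one vector and at most doubles per
        training iteration, so after n iterations it has at most 2**n
        vectors. Split criterion (an assumption): the two accumulator
        means are more than split_threshold apart."""

        def __init__(self, dim, split_threshold=0.5, seed=0):
            self.rng = np.random.default_rng(seed)
            self.vectors = np.zeros((1, dim))  # start with a single vector
            self.split_threshold = split_threshold

        def train_iteration(self, samples):
            n, dim = self.vectors.shape
            # Two accumulators (running sum and count) per codebook vector.
            sums = np.zeros((n, 2, dim))
            counts = np.zeros((n, 2))
            # Assumed routing rule: a random direction per vector decides
            # which of its two accumulators a sample goes into.
            directions = self.rng.standard_normal((n, dim))

            for x in samples:
                # Assign the sample to its nearest codebook vector.
                i = np.argmin(np.linalg.norm(self.vectors - x, axis=1))
                # Route it to accumulator 0 or 1 via the random projection.
                side = int(np.dot(x - self.vectors[i], directions[i]) > 0)
                sums[i, side] += x
                counts[i, side] += 1

            new_vectors = []
            for i in range(n):
                if counts[i, 0] > 0 and counts[i, 1] > 0:
                    m0 = sums[i, 0] / counts[i, 0]
                    m1 = sums[i, 1] / counts[i, 1]
                    if np.linalg.norm(m0 - m1) > self.split_threshold:
                        # Worth splitting: replace the vector by both means.
                        new_vectors.extend([m0, m1])
                        continue
                total = counts[i].sum()
                if total > 0:
                    # Not worth splitting: just re-center on all samples.
                    new_vectors.append(sums[i].sum(axis=0) / total)
                else:
                    new_vectors.append(self.vectors[i])  # unused vector
            self.vectors = np.array(new_vectors)

A short usage example: after four iterations on synthetic data the codebook holds at most 2^4 = 16 vectors, and fewer if some splits were not worthwhile.

    data = np.random.default_rng(1).standard_normal((1000, 8))
    cb = GrowingCodebook(dim=8)
    for _ in range(4):
        cb.train_iteration(data)
    print(cb.vectors.shape)  # at most (16, 8)
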