Blog Post 8

Optimizing the NN

This week I began testing different configurations of neural nets with TensorBoard to find out which gave the most promising results.

Previously, when I was training on the raw wav data, I found the best results with models that included one or more 1D convolutional layers. Interestingly, when training on the cepstral data I got the best results using 3 dense layers of 128 nodes. An upside of this is that the model also trains far more quickly, taking less than half the time of a similar model that used convolutional layers.
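For reference, here is a minimal Keras sketch of the dense model described above. The three 128-node layers come from the post; the input size of 100 cepstral coefficients matches the dataset described later, but the 10 output classes, the ReLU activations, and the Adam optimizer are my assumptions.

```python
import tensorflow as tf

# A sketch of the 3 x 128-node dense model described above.
# Input: 100 cepstral coefficients; output: an assumed 10 IR classes.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(100,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```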

During my testing phase I also found some interesting behaviour when I changed the loss function of the model from 'sparse categorical crossentropy' to 'mean squared error'. A NN with 3 dense layers of 128 nodes went from reaching about 80% accuracy in around 100 epochs to classifying the validation set with up to 100% accuracy in just 13 epochs, but without any significant change in the loss. I would like to look deeper into why this NN performed this way, and to make sure that it's not caused by a bug in my code.
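One thing worth checking (this is my speculation, not something the results confirm) is what MSE is actually being computed against when the labels are integer class indexes: with sparse labels, MSE compares softmax probabilities to the raw index values, which is not a meaningful target and could explain a loss that barely moves while the accuracy metric behaves strangely. A small NumPy illustration with made-up numbers:

```python
import numpy as np

# Hypothetical 3-class example: softmax outputs vs. integer labels.
probs = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1]])
labels = np.array([1, 0])

# Sparse categorical crossentropy scores the probability
# assigned to the true class index:
scce = -np.mean(np.log(probs[np.arange(len(labels)), labels]))

# Naive MSE against the raw integer labels broadcasts each index
# against every probability in the row, which is not a meaningful
# comparison for classification:
mse = np.mean((probs - labels[:, None]) ** 2)

print(round(scce, 4), round(mse, 4))  # → 0.2899 0.3667
```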

ECL

After settling on a model for my NN I decided to export my dataset as a CSV, so I wrote the necessary Python code and exported my training data of cepstra as two CSVs, one containing the independent features (cepstral data) and the other the dependent classes (IR indexes).
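The export itself can be done with NumPy; this is a sketch with random stand-in data, since the post doesn't show the actual arrays or file names. The shapes (100 coefficients per cepstrum) come from the post; everything else here is illustrative.

```python
import numpy as np

# Hypothetical stand-ins for the training data: one 100-coefficient
# cepstrum per row, with a matching impulse-response class per row.
cepstra = np.random.rand(500, 100)
ir_indexes = np.random.randint(0, 10, size=500)

# One CSV for the features, one for the classes, row-aligned.
np.savetxt("cepstra_features.csv", cepstra, delimiter=",")
np.savetxt("ir_classes.csv", ir_indexes, fmt="%d", delimiter=",")
```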

I uploaded them to the landing zone of my cluster, but ran into a problem when I attempted to spray them to the cluster itself: every attempt to spray the files resulted in an 'SSH failed to connect' error. After a few emails, it was determined that someone had been changing the SSH configuration, which was promptly sorted out, allowing me to spray my files.

With the files sprayed, I began to write some ECL to define the layout of my datasets, so that they could be used to train NNs on the cluster with the GNN bundle.

Currently I'm having some trouble defining the structure of the features, since each cepstrum has 100 features and each one is stored in a separate cell.


Currently I'm getting around this by changing the separator from ',' to '\n\n\n\n', which denotes the end of a row in a CSV file; however, I assume there is a better way to do this.

Next week I plan to run a NN on the cluster using my datasets and the model from my research in TensorBoard.
