• 16 Posts
  • 63 Comments
Joined 3 years ago
Cake day: June 12th, 2023

  • Original answer:

    It’s hard to give you an answer to your first question with just those graphs because it looks like one run on one dataset split. To address this part specifically:

    My current interpretation is that the validation loss is slowly increasing, so does that mean that it’s useless to train further? Or should I rather let it train further because the validation accuracy seems to sometimes jump up a little bit?

    The overall trend is what matters, not the small variations: imagine the validation-loss curve smoothed out, and don’t train past the minimum of that smoothed curve. Technically, overfitting is indicated by a significant gap between the training loss and the validation (or test) loss.
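
    A minimal sketch of that idea, assuming NumPy is available: smooth the per-epoch validation losses with a moving average and pick the epoch at the smoothed minimum (the synthetic noisy V-shaped curve below is illustrative, not from the original run).

    ```python
    import numpy as np

    def best_stop_epoch(val_losses, window=5):
        """Smooth the validation-loss curve with a moving average and
        return the epoch at the smoothed minimum, i.e. the point beyond
        which further training likely overfits."""
        losses = np.asarray(val_losses, dtype=float)
        kernel = np.ones(window) / window
        smoothed = np.convolve(losses, kernel, mode="valid")
        # map back to an epoch index: centre of the best window
        return int(np.argmin(smoothed)) + window // 2

    # Hypothetical curve: trends down, bottoms out near epoch 30,
    # then creeps back up (overfitting), with small random jumps.
    epochs = np.arange(60)
    curve = np.abs(epochs - 30) * 0.02 + 1.0
    curve += np.random.default_rng(0).normal(0, 0.01, size=epochs.size)

    stop = best_stop_epoch(curve)
    ```

    The occasional upward jump in validation accuracy barely moves the smoothed curve, which is why the trend, not any single epoch, should decide when to stop.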



  • It would only be Quebec/NL/LB. That's a very small amount of salmon compared to the rest of the market, not something you would find in Walmart. Conservation groups have been calling for tighter restrictions for years, and it may be that they're only giving out licences to Indigenous or recreational/sport fishers at the moment. In Canada, we have special rules for Indigenous peoples that let them bypass certain limits/rules on hunting and fishing.


  • Original answer:

    Short answer: Use a convolutional autoencoder (add convolution layers to the exterior of the autoencoder)

    Long answer: From my experience with time series and autoencoders, it is best to do as much feature extraction as possible outside the autoencoder, since it's harder to train one to do both the feature extraction and the dimensionality reduction. Consider applying an FFT or wavelet transform to your data first; even if it doesn't extract your pattern exactly, it helps in many applications. After transforming the data, train the convolutional autoencoder on those features, and then, to evaluate the model, reverse the transformation and compare with the original signal.
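
    A sketch of that evaluation loop with NumPy's FFT, where keeping only the strongest frequency coefficients stands in for the autoencoder's bottleneck (the two-sine test signal and the `k=8` cutoff are illustrative assumptions, not part of the original answer):

    ```python
    import numpy as np

    def fft_reduce_reconstruct(x, k=8):
        """Transform to the frequency domain, keep only the k largest
        coefficients (a crude stand-in for an autoencoder bottleneck),
        then invert back to the time domain for evaluation."""
        spec = np.fft.rfft(x)
        keep = np.argsort(np.abs(spec))[-k:]   # strongest components
        reduced = np.zeros_like(spec)
        reduced[keep] = spec[keep]
        return np.fft.irfft(reduced, n=len(x))

    # Hypothetical time series: two sinusoids, so its spectrum is sparse.
    t = np.linspace(0, 1, 256, endpoint=False)
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

    # Evaluate in the original domain, as the answer suggests:
    # invert the transform and compare with the raw signal.
    x_hat = fft_reduce_reconstruct(x, k=8)
    mse = np.mean((x - x_hat) ** 2)
    ```

    In a real pipeline the `reduced` spectrum would come from the trained convolutional autoencoder rather than a top-k cutoff, but the final step is the same: apply the inverse transform and score the reconstruction against the untransformed data.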