It's interesting how amenable image classification networks are to the "take a working model, peel off the last layer or two, and retrain it for a new application" approach. I've seen this reported as working pretty well in a few instances.
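
A minimal sketch of that approach in Keras (the framework choice, the two-class head, and the commented-out `train_ds` dataset are all illustrative assumptions, not from the thread):

    import tensorflow as tf

    # Load a pretrained base and drop its ImageNet classification head.
    base = tf.keras.applications.MobileNet(
        weights="imagenet", include_top=False, pooling="avg")
    base.trainable = False  # freeze the pretrained feature extractor

    # "Peel off the last layer" and attach a new task-specific head.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(2, activation="softmax"),  # hypothetical 2 classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, epochs=5)  # trains only the new head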

I guess the interpretation is that the first few normalize -> convolution -> pool -> dropout layers achieve something broadly analogous to the initial feature-extraction steps that used to be the mainstay in this area (PCA/ICA, HOG, SIFT/SURF, etc.), and are therefore reasonably problem-independent.
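
One way to make that analogy concrete is to treat the frozen convolutional stack as a drop-in replacement for a hand-crafted descriptor and feed its output to a classical classifier, much as one would with HOG or SIFT features. A sketch under that reading (the random stand-in data and the linear SVM are assumptions for illustration):

    import numpy as np
    import tensorflow as tf
    from sklearn.svm import LinearSVC

    # Frozen pretrained conv stack, used purely as a feature extractor.
    extractor = tf.keras.applications.MobileNet(
        weights="imagenet", include_top=False, pooling="avg")

    # Stand-in batch of images and labels; real data would go here.
    images = np.random.rand(8, 224, 224, 3).astype("float32") * 255.0
    labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])

    feats = extractor.predict(
        tf.keras.applications.mobilenet.preprocess_input(images))
    clf = LinearSVC().fit(feats, labels)  # classical classifier on deep features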


For sure, although I should say that for this specific project I ended up training a network from scratch. I took inspiration from the MobileNets architecture, but I didn't keep any of the weights from its ImageNet training. That was shockingly affordable to do even on my very limited setup, and the results were better than anything I got by retraining (mostly because small networks can be finicky to retrain).
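
For reference, a sketch of what that from-scratch variant might look like, using Keras's MobileNet implementation with weights=None; the input shape, class count, and commented-out `train_ds` are placeholders, since the commenter's actual setup isn't specified:

    import tensorflow as tf

    # Reuse the MobileNet architecture but start from random weights:
    # weights=None means none of the ImageNet training is kept.
    model = tf.keras.applications.MobileNet(
        weights=None,
        input_shape=(224, 224, 3),  # placeholder input size
        classes=2,                  # placeholder class count
    )
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, epochs=30)  # full end-to-end training from scratch
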
That's very cool to hear; I'm a lot more interested in eGPUs (vs. something like an AWS P2 instance) after reading this. Thanks again for sharing.
