
They might be great for inference with TensorFlow, but from what I can tell from Google's documentation, Coral doesn't support training at all.

I'm sure an ML accelerator that doesn't support training will be great for applications like mass-produced self-driving cars. But for hobbyists - the kind of people who care about the difference between a $170 dev board and a $100 dev board - being unable to train is a pretty glaring omission.


MichaelBurge
You wouldn't want to use it for training: this chip does 4 INT8 TOPS at 2 watts. A Tesla T4 does 130 INT8 TOPS at 70 watts, plus 8.1 FP32 TFLOPS.

Assuming that FP32-to-INT8 ratio holds, you'd maybe get ~250 GFLOPS for training (4 TOPS × 8.1/130). The Nvidia 9800 GTX that I bought in 2008 gets 432 GFLOPS according to a quick Google search.

Hobbyists don't care about power efficiency for training, so buy any GPU made in the last 12 years instead, train on your desktop, and transfer the trained model to the board.
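
For anyone new to that workflow, a rough sketch of the train-on-desktop, quantize, compile-for-Edge-TPU flow is below. It assumes TensorFlow 2.x and Coral's edgetpu_compiler; the model, dataset, and file names are placeholders, not anything copied from Google's docs.

    # Rough sketch: train on a desktop GPU, then quantize and compile for the Edge TPU.
    # Assumes TensorFlow 2.x; model/dataset names are placeholders.
    import tensorflow as tf

    # 1. Train normally on the desktop GPU.
    model = tf.keras.applications.MobileNetV2(weights=None, classes=10)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # model.fit(train_images, train_labels, epochs=10)

    # 2. Convert to a fully int8-quantized TFLite model (the Edge TPU only runs int8 ops).
    def representative_data_gen():
        # Yield a few hundred real input samples here; random data is only a stand-in.
        for _ in range(100):
            yield [tf.random.uniform((1, 224, 224, 3))]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data_gen
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    with open("model_quant.tflite", "wb") as f:
        f.write(converter.convert())

    # 3. Compile for the Edge TPU and copy the result to the board:
    #    $ edgetpu_compiler model_quant.tflite
    #    $ scp model_quant_edgetpu.tflite mendel@<board-ip>:~/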

rewq4321
On the other hand, it would be useful for people experimenting with low-compute online learning. Also, those types of projects tend to have novel architectures that benefit from the generality of a GPU.
Last I heard, COVID was making GPUs about as difficult to find as the other things it’s jacked the prices up on, too.
gridlockd
You can get pretty much any GPU at pre-COVID prices right now, except for the newest generation of NVIDIA GPUs, which just launched to higher-than-expected demand.
omgwtfbyobbq
As a hobbyist in a state with relatively high electricity prices, I do care about the power efficiency of training.
jnwatson
Training is what the cloud is for.
wongarsu
That makes a $170 board that can also do training look dirt cheap in comparison.
lawrenceyan
Good luck training anything in any reasonable time on it.
R0b0t1
Useful for adapting existing models. Not everything needs millions of hours of input.
tachyonbeam
If you want to train yet-another-convnet, sure, but there could be applications where you want to train directly on a robot with live data, as in interactive learning.

See this paper for an example of interactive RL: https://arxiv.org/abs/1807.00412
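
For illustration, here is a minimal sketch of what such an interactive/online loop looks like in code: a small model updated batch by batch from live observations rather than from a fixed dataset. The model, the sizes, and the simulated read_sensor() are made up for the example; a real robot would feed in camera frames plus a reward or correction signal.

    # Minimal online-learning loop: update a small model from live data as it arrives.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
        tf.keras.layers.Dense(4),  # e.g. 4 action values or control outputs
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

    def read_sensor():
        # Stand-in for live observations plus a supervision signal
        # (a reward, a human correction, etc.).
        x = np.random.randn(8, 16).astype("float32")
        y = np.random.randn(8, 4).astype("float32")
        return x, y

    for step in range(1000):
        x, y = read_sensor()
        loss = model.train_on_batch(x, y)  # incremental update on the latest data
        if step % 100 == 0:
            print(f"step {step}: loss {loss:.4f}")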

suyash
Or a heavily rigged machine. This looks more like it's meant for fast real-time ML inference at the edge.
debbiedowner
You can adapt the final layer of weights on the Edge TPU (rough sketch below).

Training on a dev board should be a last resort.

Even hobbyists can afford to rent GPUs for training on vast.ai or emrys.
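
To make the "adapt only the final layer" point above concrete, here is one hedged sketch: use an Edge TPU model as a frozen feature extractor through the TFLite runtime, then fit a tiny softmax head on the CPU with plain numpy. The delegate name is the usual libedgetpu shared library, but the model file is a placeholder, and none of this is from Coral's docs verbatim (Coral's own docs also describe dedicated on-device last-layer retraining, which would be the more idiomatic route).

    # Sketch: Edge TPU as a frozen feature extractor, tiny softmax head trained on the CPU.
    # The model file is a placeholder; the delegate name is the usual libedgetpu library.
    import numpy as np
    import tflite_runtime.interpreter as tflite

    interpreter = tflite.Interpreter(
        model_path="embedding_extractor_edgetpu.tflite",  # placeholder model
        experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
    )
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    def embed(image_uint8):
        # Run one uint8 image through the frozen backbone on the Edge TPU.
        interpreter.set_tensor(inp["index"], image_uint8[None, ...])
        interpreter.invoke()
        return interpreter.get_tensor(out["index"])[0].astype(np.float32)

    def fit_head(embeddings, labels, num_classes, lr=0.1, epochs=50):
        # Plain softmax regression on the embeddings: this is the "final layer" being adapted.
        w = np.zeros((embeddings.shape[1], num_classes))
        b = np.zeros(num_classes)
        onehot = np.eye(num_classes)[labels]
        for _ in range(epochs):
            logits = embeddings @ w + b
            probs = np.exp(logits - logits.max(axis=1, keepdims=True))
            probs /= probs.sum(axis=1, keepdims=True)
            grad = (probs - onehot) / len(labels)
            w -= lr * (embeddings.T @ grad)
            b -= lr * grad.sum(axis=0)
        return w, b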
