
The USB accelerator is designed for that: you can target models for a (small-scale) TPU and then scale up on the cloud.

In particular, this lets you verify that the TensorFlow ops your models use are supported on a TPU.

https://coral.withgoogle.com/products/accelerator/
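
A minimal sketch of that op-support check, assuming the usual Coral workflow of exporting a fully integer-quantized TFLite model and then running it through the `edgetpu_compiler` CLI (the compiler reports how many ops map to the Edge TPU and how many fall back to the CPU). The saved-model path, input shape, and representative dataset below are placeholders, not anything from the thread:

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a handful of sample inputs so the converter can calibrate
    # int8 quantization ranges (shape/dtype are model-specific placeholders).
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# The Edge TPU requires full integer quantization, including the I/O tensors.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())

# Then, on the command line:
#   edgetpu_compiler model_quant.tflite
# The compiler's summary shows which operations run on the Edge TPU
# and which stay on the CPU.
```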


solomatov
USB has too little bandwidth to do real training. Modern GPUs use 16 PCIe lanes, which is ~126 Gbps for PCIe 3.0; USB doesn't come close.
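
For reference, a rough back-of-the-envelope version of that comparison, assuming the accelerator sits on a 5 Gbps USB 3.0 port:

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
pcie3_lane_gbps = 8 * (128 / 130)
pcie3_x16_gbps = 16 * pcie3_lane_gbps      # ~126 Gbps (~15.75 GB/s)

# USB 3.0: 5 Gbps raw with 8b/10b encoding.
usb3_gbps = 5 * (8 / 10)                   # ~4 Gbps effective

print(pcie3_x16_gbps, usb3_gbps, pcie3_x16_gbps / usb3_gbps)  # ~126, 4, ~30x
```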
izacus
Hence the "scale up on the cloud" part - USB units aren't meant to replace GPUs, c'mon.
elithrar OP
The ask here was inference, not training.
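
For the inference case, a minimal sketch using the `tflite_runtime` interpreter with the Edge TPU delegate; the model path is a placeholder for the `edgetpu_compiler` output, and the zero-filled input is just a stand-in:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load the compiled model and attach the Edge TPU delegate.
interpreter = Interpreter(
    model_path="model_quant_edgetpu.tflite",              # hypothetical compiled model
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Dummy uint8 input matching the quantized model's expected shape.
dummy = np.zeros(input_details["shape"], dtype=np.uint8)
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details["index"]))
```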
