Preferences

The Nvidia Jetson Nano costs the same and is less likely to be killed off after you've invested in the platform.

Jetson is actually an important product for Nvidia, and Google tends to kill off this type of pet project.

Google/Alphabet might have more success with their side bets if they spun them out as separate companies, as Xiaomi and Haier (both Chinese) seem to do.


howlgarnish
Coral is powered by an Edge TPU (Tensor Processing Unit), which wipes the floor with GPU-based boards like the Jetson Nano when it comes to running TensorFlow:

https://blog.usejournal.com/google-coral-edge-tpu-vs-nvidia-...

...and Google is pretty invested in TPUs, since it uses lots of them in house.

https://en.wikipedia.org/wiki/Tensor_Processing_Unit
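
For context, "running TensorFlow" on the Coral means running a quantized TensorFlow Lite model through the Edge TPU delegate. A minimal sketch of that flow (the model path is a placeholder, and the model must first be compiled with Google's edgetpu_compiler):

    import numpy as np
    import tflite_runtime.interpreter as tflite

    # Load a quantized, Edge-TPU-compiled model; the delegate routes
    # supported ops to the TPU instead of the CPU.
    interpreter = tflite.Interpreter(
        model_path="model_edgetpu.tflite",  # placeholder path
        experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"],
                           np.zeros(inp["shape"], dtype=inp["dtype"]))
    interpreter.invoke()  # inference runs on the Edge TPU
    out = interpreter.get_output_details()[0]
    print(interpreter.get_tensor(out["index"]).shape)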

michaelt
They might be great for inference with TensorFlow, but from what I can tell from Google's documentation, Coral doesn't support training at all.

I'm sure an ML accelerator that doesn't support training will be great for applications like mass-produced self-driving cars. But for hobbyists, the kind of people who care about the difference between a $170 dev board and a $100 dev board, being unable to train is a pretty glaring omission.

MichaelBurge
You wouldn't want to use it for training: this chip can do 4 INT8 TOPS at 2 watts. A Tesla T4 can do 130 INT8 TOPS at 70 watts, plus 8.1 FP32 TFLOPS.

Assuming that INT8-to-FP32 ratio holds, you'd maybe get around 250 GFLOPS for training. The Nvidia GeForce 9800 GTX that I bought in 2008 gets 432 GFLOPS according to a quick Google search.

Hobbyists don't care about power efficiency for training, so buy any GPU made in the last 12 years instead, train on your desktop, and transfer the trained model to the board.
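
For the curious, the scaling estimate above as explicit arithmetic (a back-of-envelope sketch using the T4 figures quoted in the comment; real training throughput depends on far more than peak FLOPS):

    # Scale the Coral's INT8 throughput by the T4's FP32:INT8 ratio
    # to guess what an FP32 training figure might look like.
    coral_int8_tops = 4.0    # Edge TPU peak, INT8
    t4_int8_tops = 130.0     # Tesla T4 peak, INT8
    t4_fp32_tflops = 8.1     # Tesla T4 peak, FP32

    ratio = t4_fp32_tflops / t4_int8_tops            # ~0.062 FP32 FLOPs per INT8 op
    estimate_gflops = coral_int8_tops * ratio * 1000
    print(f"~{estimate_gflops:.0f} GFLOPS")          # ~249 GFLOPS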

rewq4321
On the other hand, it would be useful for people experimenting with low-compute online learning. Also, those types of projects tend to have novel architectures that benefit from the generality of a GPU.
Last I heard, COVID was making GPUs about as hard to find as all the other things it's jacked the prices up on, too.
gridlockd
You can get pretty much any GPU at pre-COVID prices right now, except for the newest-generation NVIDIA GPUs that just launched to higher-than-expected demand.
omgwtfbyobbq
As a hobbyist in a state with relatively high electricity prices, I do care about the power efficiency of training.
jnwatson
Training is what the cloud is for.
wongarsu
That makes a $170 board that can also do training look dirt cheap in comparison.
lawrenceyan
Good luck training anything in any reasonable time on it.
R0b0t1
Useful for adapting existing models. Not everything needs millions of hours of input.
tachyonbeam
If you want to train yet another convnet, sure, but there could be applications where you want to train directly on a robot with live data, as in interactive learning.

See this paper for an example of interactive RL: https://arxiv.org/abs/1807.00412

suyash
Or a highly rigged machine. This looks more suited to fast real-time ML inference at the edge.
debbiedowner
You can adapt the final layer of weights on the Edge TPU.

Training on a dev board should be a last resort.

Even hobbyists can afford to rent GPUs for training on vast.ai or emrys.
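
As a sketch of the "adapt only the final layer" idea, here is generic last-layer transfer learning in Keras (this is the desktop-side version, not Coral's on-device API; the backbone choice and class count are placeholders):

    import tensorflow as tf

    # Freeze a pretrained backbone; only the new classification head trains.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(5, activation="softmax"),  # placeholder: 5 classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, epochs=5)  # train_ds: your labeled dataset

Because only the small head is trainable, this is cheap enough to run on almost any GPU, after which the model can be quantized and deployed to the board.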

pinewurst
Google is pretty invested in TPUs for their own workloads, but I fail to see any durable encouragement of them as an external product. At best they're there to encourage standalone development of applications/frameworks to be deployed on Google Cloud (IMHO of course).
tachyonbeam
AFAIK, apart from toy dev boards like this, you can't buy a TPU; you can only rent access to them in the cloud. I wouldn't want my company to rely on that. What if Google decides to lock you out? If you've adapted your workload to rely on TPUs, you'd be fucked.
akiselev
What's the difference between Coral's production line of Edge TPU modules and chips [1] and Google's cloud TPU offering?

Note: I haven't tried sourcing these in production (100k+) quantities so I have no idea what guarantees that product line gives customers.

[1] https://coral.ai/products/#production-products

usmannk
They're nothing alike at all, similar to how a low-end laptop GPU differs from a top-of-the-line NVIDIA datacenter offering. Google's cloud TPU offering is the strongest ML training hardware that exists; the edge devices simply support the same API.
debbiedowner
The Edge TPU is 2 TFLOPS at half precision; a Cloud TPU starts at 140 TFLOPS at single precision and scales further.

Also, the Edge TPU draws 2-5 watts. Supposedly Cloud TPUs are more power-efficient than GPUs, and, for example, the 14 TFLOPS 2080 regularly ran at 300 W.

popinman322
Coral can only run inference, and is optimized for models using 8-bit integers (via quantization).

A full TPU v2/v3 can train models and use 16/32-bit floats. They also support bfloat16, a Google-originated 16-bit float that keeps FP32's exponent range but with reduced mantissa precision.
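
For reference, a minimal sketch of the 8-bit post-training quantization step using the standard TensorFlow Lite converter (`model` and the calibration data are placeholders; the output still needs Google's edgetpu_compiler before it runs on the Edge TPU):

    import numpy as np
    import tensorflow as tf

    # Calibration samples let the converter pick INT8 scales/zero-points.
    def representative_data():
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)  # placeholder model
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data
    # Require full-integer kernels, as the Edge TPU expects.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    tflite_model = converter.convert()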

kanwisher
Until you want to use PyTorch or another non-TensorFlow framework; then the support drops off dramatically. The Jetson Nano supports more frameworks out of the box quite well, and it ends up being the same CUDA code you run on your big Nvidia cloud servers.
panpanna
Not only that, Nvidia cares deeply about PyTorch. Visit the PyTorch forums and look at the most upvoted answers. All by Nvidia field engineers.
sorenbouma
That benchmark appears to compare full-precision FP32 inference on the Nano with uint8 inference on the Coral; that floor-wiping comes with a lot of caveats.
There seems to be more than one Jetson board.
tasty_freeze
The Jetson Nano has 2GB of RAM and is $59.

https://www.arrow.com/en/products/945-13541-0000-000/nvidia

agumonkey
what a nice price point
panpanna
This is RPi4 territory.

But I would not recommend the 2GB version. Even the 4GB version is barely usable without a swap file on an SSD.

owowow
Why not just use zram and go headless? There are plenty of good ncurses apps out there.
panpanna
Note that zram and ML are not best friends, for a number of reasons.
tasty_freeze
Maybe 2GB is barely usable (I'll take your word for it), but the Google Coral dev board this thread is about is $100 and has 1GB.
coredog64
The base board for the 2GB version removed the mini-PCIe slot, meaning you can't swap in a cheap drive either.
jasonvorhe
Google Brillo was renamed to Android Things four years ago, but apart from the name change, the boards are still supported: https://developer.android.com/things/get-started/kits
paxswill
While they may be supported, the store links for the kits either 404 or the item is discontinued.
panpanna
It's Google, what did you expect?

This is why I would never recommend this SBC to anyone. Besides being locked to TensorFlow, you are also at the mercy of some random manager at Google.

rektide
Jetson's X1 core (note: not ARM's new X1 architecture!) is already 5 years old. Once upon a time that would scare me, but now it seems almost comically safe to say "I guess it's not going anywhere!"
And it's still faster than the Coral Dev Board Mini... (the Cortex-A35 is a CPU tier _below_ the A53; there's no contest).

The fastest SBC at CPU tasks priced below $100 is the Raspberry Pi.

squarefoot
"The fastest SBC at CPU tasks priced below $100 is the Raspberry Pi."

The Odroid N2+ costs $79 and is over twice as fast as the Pi4. The Khadas Vim3 costs $100 and is about 30-40% faster than the Pi4.

The number of SBC boards out there is becoming huge; although the Pi's price has dropped significantly relative to performance and features (especially RAM), there's a lot of competition, and it's growing.

https://hackerboards.com/spec-summaries/ https://all3dp.com/1/single-board-computer-raspberry-pi-alte...

> Cortex-A73 at 2.4GHz

That's indeed much faster than the Pi4. Do you know the state of kernel support for that board?

rektide
It uses an Amlogic S922X, aka the G12B. Support is generally pretty good; there's a dedicated community that has been very active pushing upstream[1].

Except for the ARM G51 Bifrost GPU, which has only recently started to see viability[2] thanks to one hacker's reverse engineering. If you want to read a lot of words, there's a year-old status report from the LibreELEC Kodi-based media player distribution that lays out a lot of what needs to be done, from a very video-intense perspective[3]; it predates the recent reverse-engineering efforts and largely discusses using closed proprietary blobs, but it's still interesting. Most recently, and very interestingly, there are signs that ARM itself may be willing to start helping out the reverse-engineered development[4], which would be a new and potentially interesting state of affairs.

[1] http://linux-meson.com/

[2] https://www.phoronix.com/scan.php?page=news_item&px=Bifrost-...

[3] https://forum.libreelec.tv/thread/21134-what-aspects-of-hard...

[4] https://www.phoronix.com/scan.php?page=news_item&px=Arm-Panf...

squarefoot
According to the Armbian (one distro to support them all:^) page, mainline kernel support is complete, although they say there still could be some network problems. From what I read on their forum, the Hardkernel Ubuntu-based image is currently more stable than the Armbian one.

https://www.armbian.com/odroid-n2/ https://forum.armbian.com/search/?q=odroid%20n2%2B&fromCSE=1

https://wiki.odroid.com/getting_started/os_installation_guid...

DietPi also supports the N2, which is very similar to the N2+. https://dietpi.com/

snowAbstraction
Aren't there better Odroid options if you mainly care about compute?
rektide
The $63 N2+ has the latest "C" rev of the S922X, which is a quad 2.4GHz A73 + 2x A53 with an "MP6" variety of Bifrost GPU, a G51. The C4 has the newer S905X3, which has 4x 2GHz A55 cores and a smaller G31 Bifrost GPU. Those A55s, while improved over the A53s, are going to be significantly outmatched by the A73 cores on the N2+.

The H2+ has an Intel Celeron J4115 running 2.3GHz all-core, which I expect would trounce these ARM chips. It's also $120.

Alas, there hasn't been any update to the excellent Exynos 5422 that started HardKernel's/Odroid's ascent as the XU4. Lovely 2GHz 4x Cortex-A15 + 4x Cortex-A7 with (2x! wow! thanks!) USB3 root hosts and on-package RAM: really an amazing chip, way ahead of its time. These days it's way outgunned, but this chip really led the way for SBCs with its bigger-for-the-time cores, USB3, and on-package RAM (which we really need to see make a comeback).

Worth noting that the A73 on the N2/N2+ is ARM's Artemis core, which hails from 2016 (the RPi4's A72 is older still). Maybe some year SBCs won't all be running half-decade-old architectures, but at least we're at the point where half a decade ago we were doing something right. ;) Still, one can't help but imagine what a wonder it would be if a chip & SBC were to launch with an ARM X1 core available.

rektide
It's an A57 on the X1 (an architecture from 2012, but a big core), so this Coral Mini's A35 (newer but quite small) is very significantly below it.

The attraction of Coral is supposed to be the inference engine. 4 TOPS at 2 watts is... impressive. Jetson takes 10 or 15 watts and tops out a little under 0.5 TOPS. Those are much more flexible GPU cores, but that's roughly a 60x efficiency gain, centered on a chip that is much easier to integrate into consumer products.
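
A quick sanity check of that efficiency claim, using the figures quoted above (a rough sketch; real numbers vary by workload and precision):

    # Rough perf-per-watt comparison from the figures quoted in this thread.
    coral_tops, coral_watts = 4.0, 2.0       # Edge TPU
    jetson_tops, jetson_watts = 0.5, 15.0    # Jetson Nano, worst-case power

    coral_eff = coral_tops / coral_watts     # 2.0 TOPS/W
    jetson_eff = jetson_tops / jetson_watts  # ~0.033 TOPS/W
    print(f"Coral is ~{coral_eff / jetson_eff:.0f}x more efficient")  # ~60x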

Compared to the Nano, yeah, which has been the same SoC for a long time.

The Xavier NX is 21 TOPS at 15W for the whole SoC... but the pricing at $399 puts it in a different category...

Google should just start selling the USB sticks at the same price as the M.2 Corals, since they're mostly being used on RPis, I think...

ekianjo
> Jetson is actually an important product for Nvidia, and Google tends to kill off this type of pet project.

Jetson has obsolete distros though? Linux support is probably better with Google, if anything.

conception
Which is odd because they created Alphabet to do that very thing.
ethbr0
And here I thought they created Alphabet because of Wall Street pressure.
