It is a bit of a turnoff that you need to use their cloud model ‘compiler’, but I still think I might get the USB dev device.
I am retiring in a couple of weeks from my job managing a machine learning team, and I intend to be a ‘gentleman scientist’ studying things of interest, without worrying about immediate practicality. Of most interest is local ML using tensorflow.js and devices like the Edge TPU, and also hybrid symbolic AI and deep neural net systems.
Anyway, good to see competition for edge devices.
frozenport
Can't you afford a real GPU? Or already own one?
wyldfire
> Upload your model
> It should take about one minute for compilation to complete.
...also, it should take about six months for Google to lose interest in this product, at which point whatever you built around the Edge TPU is stuck without updates.
crazygringo
This kind of comment is getting really tired.
Can you show me statistically that Google is any more likely to discontinue something than any other startup? Or than Apple or Amazon?
A few people got upset about Google discontinuing Reader, but that was a looong time ago. And they've certainly discontinued other things too... but just like every other company.
They seem to discontinue a lot of products, including ones with fairly large user bases. It seems like a valid concern if you're going to try to build something on top of their stuff.
Disclaimer: I work at Google.
I think the reasoning is that it would be much nicer to have a compiler that runs locally so that you aren't dependent on Google to run the hardware even if they do EoL it.
It's a major issue for actual deployments of hardware in e.g. medical, education, or research settings, where a machine may end up supporting a piece of machinery for a couple of decades with no support, just some spare duplicate parts that can be swapped in.
I once used a fiber optic splicer at MIT that was 2.5 decades old and ran DOS. Nobody gave a crap that it was DOS. We just needed fibers spliced and a new shiny touch screen splicer would cost $30K.
scrollaway
I'm usually the first one to say this type of comment is tired, but that's because people say it about Google Cloud, where it's patently untrue.
This, however, seems to be a product with no SLA and no guarantees, outside of the cloud offerings etc. I kinda agree with OP: Google's track record is bad when it comes to this kind of product.
And yes, I think they are worse than other companies. Google isn't a hardware company, so they're worse than Apple in that regard. And Amazon would do it through AWS, which would also make this fall inside their core competency.
arendtio
There are at least two websites [1][2] dedicated to listing past Google products.
[1]: https://killedbygoogle.com
[2]: https://gcemetery.co
The HW is still there, and if there is enough interest people can keep on hacking on it. There are many alternatives though. The spec for the dev board is interesting; I am curious about the ML accelerator coprocessor & cryptographic coprocessor. Interesting choice of operating system. If they were also releasing their new OS for these, that would make this project infinitely more interesting to me.
They mentioned previously that you had to compile your models on the cloud, and not locally on your computer. Not sure if they've changed this policy.
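For context, the part of the pipeline that still runs locally is an ordinary TensorFlow Lite conversion with full-integer quantization (which the Edge TPU requires); the resulting .tflite file is what gets uploaded to the cloud compiler. A minimal sketch using the TF 2.x converter API, with a hypothetical model path and a stand-in calibration generator:

    import numpy as np
    import tensorflow as tf

    # Load a trained Keras model (hypothetical path; substitute your own).
    model = tf.keras.models.load_model("my_model.h5")

    # The converter needs representative inputs to calibrate quantization
    # ranges. Random data is only a stand-in; use real samples in practice.
    def representative_dataset():
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Force full-integer quantization, as the Edge TPU only runs int8 ops.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8

    tflite_model = converter.convert()
    with open("model_quant.tflite", "wb") as f:
        f.write(tflite_model)  # this file is what the Edge TPU compiler takes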
mindcrime
> They mentioned previously that you had to compile your models on the cloud
Wow, I was interested in this, right up until I read that. Talk about "weak sauce".
Sorry Google, but no, I will not use your proprietary compiler, especially when it's only available in the cloud, and become beholden to hardware which could instantly become a very expensive paperweight when you shut down the compiler service. No f'in way.
Release an open source compiler and I'm on board. Otherwise, stuff it.
The proprietary compiler thing sucks, but it is where a lot of the secret sauce is, unfortunately. But a binary wouldn't be too much to ask for...
ipsum2
Well then, it's DOA. Not sure why any company would agree to these terms.
jononor
Agree it might be a dealbreaker for some. But right now there is not that much competition in the embedded TPU space. NVidia Jetson and Intel Nervana are the only ones shipping? So if the TPU allows some company to do something not possible with a Jetson, or to do it much better, they will probably be willing to play that game.
monocasa
It's starting to heat up. K210s are supposed to be pretty cool if you can get your hands on one.
jononor
The baseboard and SOM module split looks very well done. The module includes CPU+RAM+EMMC in addition to the TPU, so a custom baseboard can be quite simple.
A lot of audio input, ready for microphone arrays.
Curious to see what role the M4F microcontroller will play; hopefully it's there for some sleep/low-power usage where it can wake up the beefy CPU (and TPU).
solomatov
I wish Google created a development version of the TPU for inference, so that it's possible to debug models locally and then send them to GCP for training.
This would specifically let you make sure that the TensorFlow ops your algorithms use are supported on a TPU.
CoolGuySteve
Ya, I feel uneasy about this business model of creating hardware that you have to connect to a cloud service to actually use. Instead of vendor lock-in or proprietary drivers or whatever, it's a new form of locality-based lock-in.
Meaning, if I have an application that needs a big hot PCI-E card attached to a physical server I own somewhere, comparable to GPUs now, the TPU is not for me. But meanwhile, a bunch of NN research and frameworks on top of TensorFlow will treat these proprietary things as first-class citizens.
est31
The lock-in, while bad, only affects the development of new models. These devices exist so that you can avoid the cloud for inference.
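To make that concrete: once a model has been compiled for the Edge TPU, inference is a local tflite_runtime call with the Edge TPU delegate loaded, no network involved. A minimal sketch (model path hypothetical; "libedgetpu.so.1" is the Linux delegate library name, other platforms differ):

    import numpy as np
    from tflite_runtime.interpreter import Interpreter, load_delegate

    # The delegate routes supported ops to the Edge TPU accelerator.
    interpreter = Interpreter(
        model_path="model_quant_edgetpu.tflite",  # hypothetical path
        experimental_delegates=[load_delegate("libedgetpu.so.1")],
    )
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Dummy uint8 input of the right shape; a real app would feed
    # camera frames or sensor data here.
    interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.uint8))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])
    print(scores.shape)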
dguaraglia
Well, you could get one of the development boards? Not sure what use case you have in mind, but these Edge TPUs are not for use inside their GCP AI solutions, but rather on an 'edge' device. For anything else, your 'local TPU' should be a hefty desktop with a couple of NVidia cards.
The Edge TPU can do MobileNet V2 at 100 FPS.
An iPhone 7 can do it at 145 FPS (source: https://machinethink.net/blog/mobilenet-v2/)
The deal, though, is that the Edge TPU is able to do it at much lower power.
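Numbers like these are straightforward to sanity-check yourself: invoke() blocks until inference finishes, so a simple timing loop gives a rough FPS figure. A sketch reusing the same hypothetical model path and delegate as above:

    import time
    import numpy as np
    from tflite_runtime.interpreter import Interpreter, load_delegate

    interpreter = Interpreter(
        model_path="model_quant_edgetpu.tflite",  # hypothetical path
        experimental_delegates=[load_delegate("libedgetpu.so.1")],
    )
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.uint8))

    interpreter.invoke()  # warm-up run, not timed
    n = 100
    start = time.perf_counter()
    for _ in range(n):
        interpreter.invoke()
    print(f"~{n / (time.perf_counter() - start):.0f} FPS")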
syntaxing
The Edge TPU devices that Google has been promising since last year are now available under a new company called Coral. Would love to get one to compare to my Jetson TX2. The downside is that the unit can only use TensorFlow Lite.
https://coral.withgoogle.com/products/accelerator/
http://linuxgizmos.com/google-launches-i-mx8m-dev-board-with...
E: Hah, seems like my topic got merged with this one. Interesting how I missed OP's post by like a minute or two. Such a coincidence!
est31
> a new company called Coral.
On that website, each page has the Google logo at the bottom and "Copyright 2019 Google LLC. All rights reserved.".
Also, at [1], Google LLC is mentioned as manufacturer of the devices. At this point, Coral still seems to be a brand only, not a company. Maybe they just didn't want to harm/affect their "main" trademark with this. Or they actually do want to create a separate company and this is the first step.
[1]: https://coral.withgoogle.com/legal/
Software Eng. with The Coral Project [0] here. Feels a little odd seeing the same color (even the logo, a bit) + name combo used for their TPU that we've used for The Coral Project for years now...
Kinda surprised they went with the internal code name for this.
[0]: https://coralproject.net/
Ecco
The datasheet says it features a "Cortex M4 with 16 KB of instruction cache and 16 KB of data cache". As far as I know, the M4 doesn't have an L1 cache. Maybe they're using an M7? Or there's simply no cache?
https://coral.withgoogle.com/tutorials/devboard-datasheet/
The M4 application notes (http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc....) say the M0 through M4 don't have any internal cache, but that it can be provided by the SoC. Presumably that's what's happening here, although it seems weird that this could be called an L1 cache (I'm by no means an expert on this, so can't really comment!).
makapuf
Maybe they're not talking about RAM but about a flash I/D cache. For example, STM32F4s come with a flash cache (they call it the ART Accelerator) to prefetch instructions from flash and enable "zero" wait states.
emcq
The NXP i.MX 8M SoC has a Cortex-A53 and an M4F.
yRetsyM
Interesting that it's Debian Linux support only for the peripherals. I'd be interested to see if that support grows to other OSes, especially if it's a barrier to adoption.
I'm not in the space per se, but what are the predominant OS choices for ML/AI devs?
jononor
I think they just want to get things out quickly. Plenty of people will be willing to deal with limitations in an early phase. I'm sure that for the USB stick other Linux systems will follow, and probably Mac/Windows also.
For the SOM they might stick with just Debian, I guess. It is normal in embedded to have just one platform provided by the vendor, with everything else being "at your own risk".
dheera
They were handing the USB ones out today to attendees at the TensorFlow Dev Summit. I'll test mine later.
However, I really wish they would make something beefier, to compete with e.g. Nvidia's Xavier.
bcatanzaro
Any details on how much this board costs?
Also, how many TOPS does the Edge TPU have?
mitfahrener
The Dev Board costs $149.99, according to the website.
bcatanzaro
Thanks. Somehow I missed that, even though it's in large print at the top. :)
How about the Edge TPU specifications? Did I overlook those too?