The answer is "almost never". Qualifications below.
Learn the Cortex-M4 ecosystem first. You can do a remarkable amount of stuff by abusing microcontroller systems in weird ways. The tools for dealing with microcontrollers are MUCH better than the FPGA ecosystem tools (which suck--and that's on a good day).
If you need an FPGA, generally you know it. The first sign is: "There's no way to do this with my microcontroller." That step is generally followed by: "Is there a different microcontroller that could do this?" followed by asking the old greybeard "Is there really no way to do this on a microcontroller?" And finally, "Crud, this can't be done on a microcontroller"--at that point you can consider an FPGA.
As someone who has been dealing with this for mumble too many years, I've been slowly migrating my clients' functionality out of their FPGAs and into microcontrollers. It's just way easier on too many fronts--and they don't have to learn Verilog/VHDL. The FPGAs will never go away, but they will have the minimum functionality we can get away with.
> I've no experience with FPGA, but would like to learn.
I'm likely to get some static for this, but get an actual supported dev board and not an open-source hacker board. I would recommend something like this that has an academic version:
https://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=...
Use the manufacturer tools (which, in this instance, are free) as using an "open-source toolchain" is NOT for beginners.
Once you understand FPGAs a bit more thoroughly, then you can take the leap to the open-source stuff.
"You want to learn FPGAs! Let's simulate with Verilator. No, it can't simulate delays, or your vendor's IP library. No, it's not mixed language - hope all your IP is Verilog! And you'll need to know C or C++ to write your test bench, but that's ok right? The examples online involve templates (ZipCPU), so you're OK with Makefiles, and templated C++, obviously. Now, you have the FOSS P&R tools! They use a primitive simulated annealing placement algorithm that's like what we did in the 90's. Yeah, that's terrible, but don't worry, it works fine because the only parts you can target are really, really tiny Lattice parts! No, you can't even use all the hard blocks on the 7 series Xilinx parts - you can fit ONE whole RISC-V superscalar OoO core on the $3500 VC707 (https://github.com/csail-csg/riscy-OOO), running at ~120MHz - but I'll bet you'll do some primitive microcontroller RISC-V instead, something that fits on these really, really, REALLY tiny parts. Then you'll obviously want to augment that with Migen - don't you also know Python? Or Chisel! How great is Scala right? Yeah, it couldn't simulate a simple tri-state until version 2, but you're OK with the concept of domain specific languages already, right? You love the first-class functions and code as data concept! I know, I know. No, nobody in industry cares about any of this stuff, they're too busy using actual FPGA knowledge to like build real systems and circuits. Yeah, a microcontroller-level RISC-V style processor is a handful of undergraduate labs, but RISC-V IS OPEN! Anyway, back to FPGAs...hey where are you going?!"
As someone who learned the ropes with the open toolchain, I have to strongly disagree.
It's perfectly feasible, and from what I've seen (videos and such), the closed toolchains are way messier and do not provide much in terms of tools unless you pay for time-limited licenses. No thanks.
Perhaps Xilinx-land is different, but I have seen an intern go from a blank Windows machine to "blink an LED" on an Altera board in about an hour or so with no Verilog/VHDL.
I have never seen this with the open-source toolchains.
Your experience differs. YMMV. Disclaimers. etc.
I could count the time to "install open tools", too, but it would be even less favorable to the proprietary toolchain. It was, after all, a single command (on Arch Linux) that finished in a few seconds at worst.
There are a lot of tutorials these days based on the iCE40 and the open toolchain.
Then I went on to read yosys's documentation and, just by doing so, learned a great deal in a very short amount of time about how the flow from Verilog to hardware works.
When someone says "beginner", I never think of someone who already knows the Unix command line.
I've been doing this for far too many years, and I'm struggling to think of anybody I know who does FPGA work and knows the UNIX command line--even among the experienced hands.
Installation instructions (basically just extract the folder somewhere): https://github.com/im-tomu/fomu-toolchain
Workshop: https://github.com/esden/wtfpga
Sadly this frequently manifests as huge, Eclipse based bloatware with byzantine installation processes and comical fragility. Precious few understand the value of lightweight, robust tools and confuse elaborate graphical wizards and code generators with quality.
This is highly dependent upon how often you use the tool.
If I do an FPGA project once every two years and the scope is less than 3 months, user-friendliness is very much a useful metric for my tools.
If I'm doing an FPGA project that is going to take 15 months, then lack of user-friendliness certainly won't stop me (but it will make me grouchy).
[a]: They can be reconfigured (like flash program code in your PIC) instead of being write-once (like ye old PALs and GALs)
[b]: Essentially a configurable array of logic gates (technically, lookup tables and programmable interconnect rather than raw transistors)
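To make footnote [b] concrete: the basic building block of most FPGA fabrics is the lookup table (LUT), and a k-input LUT is nothing more than a 2^k-entry truth table. "Configuring" the fabric largely means filling in those tables plus the routing between them. A toy sketch in Python (illustrative only; the names here are made up):

```python
# Illustrative sketch: a 4-input LUT is just a 16-entry truth table.
# "Configuring" an FPGA largely means filling in tables like this
# (plus the routing between them). make_lut4 is a made-up name.

def make_lut4(truth_table):
    """truth_table: list of 16 output bits, indexed by the 4 input bits."""
    assert len(truth_table) == 16

    def lut(a, b, c, d):
        index = (d << 3) | (c << 2) | (b << 1) | a
        return truth_table[index]

    return lut

# Configure one LUT as a 4-input AND: output is 1 only for index 15.
and4 = make_lut4([0] * 15 + [1])

# The very same "hardware", re-programmed as a 4-input XOR (parity).
xor4 = make_lut4([bin(i).count("1") & 1 for i in range(16)])

print(and4(1, 1, 1, 1))  # 1
print(and4(1, 0, 1, 1))  # 0
print(xor4(1, 0, 0, 0))  # 1
```

The same physical block becomes an AND, an XOR, or any other 4-input function depending only on the bits loaded into it--which is why reconfiguring an FPGA (footnote [a]) is just rewriting those bits.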
On a big scale, an STM32 (or whatever SoC you choose) is designed for running code. You tell it what to do. FPGAs, OTOH, are for when you want to tell the individual gates how to connect to each other. When you just need to make an IoT widget, use a SoC. If you need to prototype an ASIC, use an FPGA. FPGAs lend themselves to hard real-time, parallel applications precisely because they aren't processors.
Then there’s the whole issue of FPGA vendors being terrible about interoperability. Want to use a Lattice part? You need Lattice’s toolchain. Want to use a Cyclone part? You need Intel’s “Quartus” toolchain. And those toolchains are multi-gigabyte monsters. Contrast that with a PIC where you can use pretty much any retargetable compiler and any USB-COM tool.
Of course, you still have to make the FPGA do something. An advantage of an FPGA over an STM32 is worthwhile if you're willing to invest some time in hardware description languages (HDLs) like VHDL or Verilog to design your own system, or want to drop in novel cores (like RISC-V) that aren't physically available yet or are still in a state of flux. RISC-V isn't fully specified yet, so a "soft" core on an FPGA lets you upgrade your device without having to buy a new part and swap out the old one.
In theory, an Arduino compatible FPGA-based device could be used to develop applications totally sans software (that is, all the logic is encoded in the HDL) as well, which can be advantageous in prototyping, and maybe in performance depending on the FPGA in question.
FPGAs can't do that (or at least ones mere mortals can buy), but Cypress have a line of parts called PSoC (programmable system on chip) which can effectively synthesize analogue cells like op-amps in an analogous way to an FPGA. They also have a cut-down (surprised?) version of Verilog, so you can write a few CPLD-y Verilog modules to go on it as well.
The dev boards are very reasonably priced, well worth having one around to play with--can't comment on using them at scale. Cypress have some pretty good video lectures/tutorials on using them by Alan Hawse (who is the very stereotype of an engineer: https://youtu.be/0IKuUgEWAqg)
Essentially for the same reason you choose to use a GPU instead of a CPU. FPGAs deliver application specific circuitry that outperforms a general purpose device. Typically these are high throughput applications; signal processing (SDR), image processing (find the license plates), audio processing etc. While it might be conceivable to do many of these tasks with very fast MCUs, often there are power budgets or other constraints that can't be met.
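A back-of-envelope comparison shows why the parallelism matters even at a much lower clock. All the numbers below are assumptions for illustration (tap count, cycles per tap), not benchmarks: an MCU with a single multiply-accumulate unit works through a filter's taps one at a time, while an FPGA can instantiate one multiplier per tap and pipeline the whole thing to accept a sample every clock.

```python
# Back-of-envelope throughput comparison (all numbers are assumptions,
# not benchmarks): a 32-tap FIR filter on an MCU vs. on an FPGA.

TAPS = 32

# MCU: one multiply-accumulate unit, taps processed sequentially.
mcu_clock_hz = 160e6
cycles_per_tap = 2            # assumed load + MAC cost per tap
mcu_cycles_per_sample = TAPS * cycles_per_tap
mcu_samples_per_sec = mcu_clock_hz / mcu_cycles_per_sample

# FPGA: 32 multipliers in parallel, pipelined to 1 sample per clock.
fpga_clock_hz = 50e6
fpga_samples_per_sec = fpga_clock_hz   # one sample accepted every cycle

print(f"MCU:  {mcu_samples_per_sec / 1e6:.1f} Msamples/s")   # 2.5
print(f"FPGA: {fpga_samples_per_sec / 1e6:.1f} Msamples/s")  # 50.0
```

Under these assumptions the 50 MHz FPGA out-runs the 160 MHz MCU by 20x on this one task, which is the whole "application specific circuitry" argument in miniature.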
For example, imagine your application might be handled by either a.) STM32F4 running full tilt at 160 MHz or b.) an iCE5LP4K at 50 MHz. The former could consume just south of 200mW, whereas the latter might be around 10mW. That's a very big difference when you're running on a 100mAh embedded lipo battery or 40mAh coin cell.
When would you choose FPGA over something like an STM32? From what an internet search leads to, they can be faster / do more things in parallel for the price. And you can add capabilities (More timers? More op amps?) with firmware updates. Is this at the core of when you'd choose one?
I've no experience with FPGA, but would like to learn. This is listed as "no longer available for sale". Which dev board do you recommend? Thank you.