inamberclad
The problem is that the tools are still weak. The languages are difficult to use, and nobody has made something more widely adopted than Verilog or VHDL. In addition, the IDEs are proprietary, and the tools are fragile and not reproducible. Synthesis results can vary from run to run on the exact same code with the same parameters, with real-world impacts on performance. This all conspires to make FPGA development suitable only for bespoke products with narrow use cases.

I would love to see the open source world come to the rescue here. There are some very nice open source tools for Lattice FPGAs and Lattice's lawyers have essentially agreed to let the open source tools continue unimpeded (they're undoubtedly driving sales), but the chips themselves can't compete with the likes of Xilinx.


JoachimS
SystemVerilog (SV) is the dominant language for both ASIC and FPGA development. SV is evolving, and the tools are updated quite fast. SV allows you to do abstraction through interfaces, enums, types, etc. The verification part of the language contains a lot of modern-ish language constructs and support for formal verification. The important thing is really to understand that what is being described is hardware. Your design is supposed to be possible to implement on a die, with physical wires, gates, registers, I/Os, etc. There will be clocks and wire delays. This is actually one of the problems one encounters when more SWE-minded people try to implement FPGAs and ASICs. The language and tools may help you, but you also need to understand that it is not programming but design you are doing.
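
A minimal sketch of the kind of abstraction SV gives you (typedef, enum, interface with modports); all the names here are invented for the example:

```systemverilog
// Hypothetical example: a typedef'd data width, an enum for an FSM state,
// and an interface bundling a valid/ready handshake.
typedef logic [7:0] byte_t;
typedef enum logic [1:0] {IDLE, BUSY, DONE} state_t;

interface bus_if (input logic clk);
  byte_t data;
  logic  valid;
  logic  ready;
  modport producer (output data, output valid, input ready);
  modport consumer (input data, input valid, output ready);
endinterface

// A consumer that is always ready and tracks a trivial state machine.
module sink (bus_if.consumer bus, input logic clk, input logic rst_n);
  state_t state;
  assign bus.ready = 1'b1;
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)         state <= IDLE;
    else if (bus.valid) state <= BUSY;
    else                state <= IDLE;
endmodule
```

Note that everything above still describes physical structure: the interface becomes wires, the enum becomes register bits, and the always_ff block becomes flip-flops clocked by clk.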

https://en.wikipedia.org/wiki/SystemVerilog

hardolaf
SV requires a linter for literally every single line change that you make because the language is rotten to the core by being based on Verilog. Heck, it has an entire chapter of its LRM dedicated to the non-deterministic behavior inherent to its description of the hardware. VHDL has no such section because it is deterministic.

Both languages suck for different reasons but no one has figured out how to make a better language and output a netlist from it (yes, there is an open interchange standard that almost every proprietary tool supports).

sweetjuly
I don't disagree that the tools are rough, but to the point of the person you're replying to: would perfect tools and languages actually solve the underlying problem?

As much as I love FPGAs, GPUs really ate their lunch in the acceleration sphere (trying to leverage the FPGA's parallelism to overcome a >20x clock speed disadvantage is REALLY hard, especially if power is a concern) and so it seems the only niche left for them is circuit emulation. Of course, circuit emulation is a sizable market (low volume designs which don't make sense as ASICs, verification, research, etc.) and so it's not exactly a death sentence.

hardolaf
The FPGA market has been growing in size despite GPGPU taking off. And the clock speed difference is closer to 4-5x, not 20x. Despite that and the lower area efficiency of FPGAs, price- and power-competitive FPGA accelerator cards have been released over the last 5 years. Sure, you're not going to get an A100's performance, but you can get deterministic latency below 5us for something that the A100 would take a minimum of 50us to process. GPGPU isn't ideal for its current use cases either, so FPGA-based designs have a lot of room to grow into better, application-specific accelerators.
JoachimS
The non-determinism of the toolchain is not a universal truth. Most, if not all, tools allow you to set and control the seeds, and you can get deterministic results. Tillitis uses this fact to allow you to verify that the FPGA bitstream used is the exact one you get from the source. Just clone the design repo, install the tkey-builder Docker image for the release and run 'make run-make'. And of course all tools in tkey-builder are open source with known versions, so that you can verify the integrity of the tools.

And all this is due to the actually very good open source toolchain, including synthesis (Yosys), P&R (nextpnr, Trellis, etc.), Verilator, Icarus, Surfer and many more. Lattice, being more friendly than other vendors, has seen an uptick in sales because of this. They make money on the devices, not their tools.

And even if you move to ASICs, open source tools are being used more and more, especially for simulation and front-end design. As an ASIC and FPGA designer for 25-odd years, I spend most of my time in open source tools.

https://github.com/tillitis/tillitis-key1 https://github.com/tillitis/tillitis-key1/pkgs/container/tke...

bjourne
I never understood why FPGA vendors think the tools should do this and not the designer. Most do a terrible job at it too. E.g., Quartus doing place and route in a single thread and then bailing out after X hours/days with a cryptic error message... As a designer I would be much happier to tell it exactly where to put my adder, where to place the sram, and where to run the wires connecting the ports. You'd build your design by making larger and larger components.
variadix
As I understand it, the physical FPGA layout and timing information used for placement and routing is proprietary, and the vendors don’t want to share it. They’ll let you specify constraints for connections, but it has to go through their opaque solver. And to be fair, they do have to try to solve an NP-complete problem, so the slowness isn’t unjustified compared to all the other slow buggy software people have to deal with nowadays.
JoachimS
The competitiveness between Lattice and Xilinx is also not a universal truth. It totally depends on the application. For small to medium designs, Lattice has very competitive offerings. Hard ARM cores, not so much. Very large designs, not at all. But if you need internal config memory (for some devices), a small footprint, etc., Lattice is really a good choice. And then there's support in open source tools to boot.
zxexz
I’m sure the Lattice open source situation is driving sales in a more than substantial way. I’ve definitely spent >2k USD in the past five years on Lattice boards and chips (not counting used or otherwise aftermarket). Most people I’ve met with any disposable income and an interest in hardware or DSP have done similar. I know 2k USD is nothing compared to even a single high-end chip from another vendor, but it sticks. I’ve known more amazing devs than I can count on two hands who ended up actually using Lattice FPGAs in their jobs (several times changing industries to do so!). I honestly feel that if Lattice embraced open source further, they could be a leading player in the space. Hell, I’m sure they could do that and still find a way to make money on software. Something, something, enterprise.
fake-name
> Synthesis results can vary from run to run on the exact same code with the same parameters, with real world impacts on performance.

This is because some of the challenges in the synthesis/routing process are effectively NP-hard. Instead of searching for the best possible solution, the compiler uses heuristics and a random process to try to find a valid solution that meets the timing constraints.

I believe you can control the synthesis seed to make things repeatable, but the stochastic nature of the process means that any change to the input can substantially change the output.

JoachimS
Yes, you can control the seeds and get deterministic bitstreams. Depending on the device and tools, you can also assist the tools by providing floorplanning constraints. And one can of course try out seeds to get designs that meet the results you need. Tillitis uses this to find seeds that generate implementations meeting the timing requirements. It's in their custom tool flow.
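
As a sketch of what such a seed sweep can look like, here is a dry-run script that only prints the place-and-route invocations it would try. nextpnr's `--seed` option is real, but the device choice, file names and target frequency below are assumptions made up for the example:

```shell
# Dry-run seed sweep: print the nextpnr commands a timing-hunt script
# would run. --seed is a real nextpnr flag; --up5k, design.json,
# pins.pcf and the 48 MHz target are illustrative placeholders.
sweep_seeds() {
    for seed in "$@"; do
        echo "nextpnr-ice40 --up5k --json design.json --pcf pins.pcf --freq 48 --seed $seed --asc out_seed_${seed}.asc"
    done
}

sweep_seeds 1 2 3
```

A real flow would execute each command, parse the achieved Fmax from the log, and keep the first seed whose result meets timing.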
