checker659
I think FPGAs (or CGRAs really) will make a comeback once LLMs can directly generate FPGA bitstreams.

throwawayabcdef
No need. I gave ChatGPT this prompt: "Write a data mover in Xilinx HLS with Vitis flow that takes in a stream of bytes, swaps pairs of bytes, then streams the bytes out"

And it did a good job. The code it made probably works fine and will run on most Xilinx FPGAs.
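For reference, a minimal sketch of what such a kernel might look like (my own reconstruction, not the model's actual output; assumes hls::stream I/O and an even byte count):

    #include <hls_stream.h>
    #include <ap_int.h>

    // Sketch only: read bytes in, swap each adjacent pair, stream them out.
    // n is the total number of bytes, assumed even for simplicity.
    void pair_swap(hls::stream<ap_uint<8> > &in,
                   hls::stream<ap_uint<8> > &out, int n) {
    #pragma HLS INTERFACE axis port=in
    #pragma HLS INTERFACE axis port=out
        for (int i = 0; i < n; i += 2) {
    #pragma HLS PIPELINE II=2
            ap_uint<8> a = in.read();
            ap_uint<8> b = in.read();
            out.write(b); // emit the pair in swapped order
            out.write(a);
        }
    }

The II=2 is because two blocking reads from one stream can't complete in a single cycle.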

pjc50
> The code it made probably works fine

Solve your silicon verification workflow with this one weird trick: "looks good to me"!

throwawayabcdef
It's how I saved cost and schedule on this project.

ben_w
I don't even work in hardware, and yet even I have heard of the Pentium FDIV bug, which happened despite people looking a lot more closely than "probably works fine".

15155
What does "directly generate FPGA bitstreams" mean?

Placement and routing is an NP-Complete problem.

duskwuff
And I certainly can't imagine how a language model would be of any use here, in a problem which doesn't involve language.

15155
They are "okay" at generating RTL, but are likely never going to be able to generate actual bitstreams without some classical implementation flow in there.

buildbot
I think that, in theory, given terabytes of bitstreams, you might be able to get an LLM to output valid designs. Excepting hardened IP blocks, a bitstream is literally a sequence of SRAM configuration bits that set the routing tables and LUTs. Given the right type of positional encoding, I think you could maybe get simple designs working at a small scale.
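To make "configuration bits" concrete, here's a toy C++ model of a single 4-input LUT (illustrative only, not any vendor's actual bitstream layout). The 16-bit config word is the truth table; the inputs just index into it:

    #include <cstdint>
    #include <iostream>

    // Toy model: a 4-input LUT is a 16-bit truth table held in config SRAM.
    // The four inputs form an address that selects one bit of the table.
    static int lut4(uint16_t config, int a, int b, int c, int d) {
        int addr = (d << 3) | (c << 2) | (b << 1) | a;
        return (config >> addr) & 1;
    }

    int main() {
        const uint16_t XOR2 = 0x6666; // truth table for a ^ b (c, d ignored)
        std::cout << lut4(XOR2, 1, 0, 0, 0) << "\n"; // prints 1
        std::cout << lut4(XOR2, 1, 1, 0, 0) << "\n"; // prints 0
    }

A real bitstream is mostly long runs of config words like that plus routing mux selects, which is part of why the positional structure matters so much.
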
CamperBob2
I'd expect a diffusion model to outperform autoregressive LLMs dramatically.

buildbot
Certainly possible! Or perhaps a block diffusion + autoregressive model, or something like GPT-4o's image gen.

checker659
The AI could just drive existing EDA tools instead.

imtringued
AMD's FPGAs already come with AI engines.
