Neywiny
1,271 karma

  1. I've had to take a beat to find the right words. All the frustration in the issue ticket affected me too, and it left a very bad taste in my mouth after I'd been initially curious about and open to the new feature.

    I think you're doing a disservice to newcomers by creating a new method of autocompletion. And I say that as somebody who has mentored a lot of newcomers in high school, in university, and now professionally. Very often, including just yesterday, I'll hear something like "I don't really know how to use [very standard thing], we had [esoteric helper] instead." Yesterday it was makefiles. Their school just abstracted them away to make things easier, so they don't know how to write a simple makefile to compile a few source files together (a sketch of one follows below), or use literally any other build system, including CMake. So, Lord have mercy on my soul if a new hire ever tells me "I don't know how to use a regular terminal. All I can use is VSCode's terminal." I think some things should be hard, but I don't think terminal autocompletion is one of them. Just hit tab a few times and it'll do its thing, or pass -h.
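
    That makefile, by the way, is only a few lines. A minimal sketch, with the file names and flags invented for illustration:

    ```make
    # Build `app` from a few C source files.
    CC     = cc
    CFLAGS = -Wall -O2

    SRCS = main.c util.c io.c
    OBJS = $(SRCS:.c=.o)

    app: $(OBJS)
    	$(CC) $(CFLAGS) -o $@ $(OBJS)

    %.o: %.c
    	$(CC) $(CFLAGS) -c $< -o $@

    .PHONY: clean
    clean:
    	rm -f app $(OBJS)
    ```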

    Where it might come in handy, and I haven't tested this, is programs that haven't registered their completions. For example, I'm often cross compiling, and it would be nice if it knew that ...-objcopy had the same completion as the host objcopy. But I am not going to take the hit of the bad pathing just for that.

    I'll conclude with a lesson in biases: your insiders are biased. You need to recognize that only egregious errors might show up as statistically significant. Not only are they more power-user-heavy, they're new-feature hunters, and more than that, they want new VSCode features specifically. Also, it's very creepy that y'all are looking at my command success rate even though I'm not an insider. And if you look at the issue ticket, you'll see that a lot of the problems wouldn't register as failures: `git add` on the wrong file doesn't return a nonzero exit code, and the user might muscle-memory press enter before seeing they need to edit the suggestion. A possibly better metric is how many times the user re-ran the same command up to the completion point. But please don't collect that data; that's creepy too. I'm going to have to look through my settings to try and turn all of that off.

  2. I self host. I also have no paying customers and negligible compute needs. It's free minus the power cost, which is, again, negligible. If it were in Docker on my main computer instead of on 18-year-old junk, it would probably cost even less.
  3. It did, yes. On an architecture without bit field extracts.
  4. Even worse, the Windows drivers know they're knock-offs: they let you use them for a bit and then error out. It's the perfect lesson. They give you that high of "yes, I got one over on the big guys, we're cruising now," followed by "I don't know, boss, it just all of a sudden stopped working. Yes, I know we're in a production crunch. Yes, we should've just bought the real one." On Linux, though, no issues for me.
  5. I'm once again surprised at GCC being slower than clang. I would have thought that GCC, with its roughly 20-year head start, would generate faster code. And yet occasionally I look at the assembly and go "what are you doing?", and the same flags and source through clang come out better optimized, or using better instructions, or whatever. One time it was bit extraction using shifts: clang did it in 2 steps, shift left then shift right; GCC did it in 3, I think, maybe shifting right first, or doing a logical shift instead of an arithmetic one and then sign-extending. Point is, it was just slower.
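
    For reference, the 2-step pattern looks like this in C. A minimal sketch (function name and types are mine, not from the original discussion):

    ```c
    #include <stdint.h>

    /* Extract a signed field of `width` bits starting at bit `lsb`.
       Two steps: shift the field's top bit up to bit 31, then
       arithmetic-shift back down, which sign-extends the field.
       (Right-shifting a negative signed value is arithmetic on the usual
       targets, though strictly implementation-defined in C.) The slower
       3-step variant would be a logical shift right followed by a
       separate sign extension. */
    static int32_t extract_signed(uint32_t word, unsigned lsb, unsigned width)
    {
        return (int32_t)(word << (32u - lsb - width)) >> (32u - width);
    }
    ```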
  6. I think it depends on your skill set and goals. If you want to run other people's software and compilers and whatnot, it doesn't hurt to build to an existing spec. If you're going to write your own, or need a lot of modifications, you may as well roll your own architecture.
  7. You need to choose another field if it takes you 6 months to get to hello world.
  8. I'm all for your endeavor, but didn't see a device support list on your front page. Clicked the first of 2 links in your sidebar (docs) and got a 404. I'm not saying it's telling that your issues page works when your docs page doesn't, but it's not the foot I would have put forward.
  9. I think there will always be vendor lock-in. The same way there have been architectural differences between Intel's and AMD's x86 implementations, or even cases of one specific chip or family tanking performance because a single instruction was implemented differently, you won't be able to guarantee efficient utilization across different vendors and families.

    For example, I've taken code optimized for Xilinx, run it through another vendor's tools, and watched the resource count balloon because things that were built in, effectively free, on one weren't on the other. It's a lot of work to write truly generic code, and it usually just means swapping out modules per vendor.

  10. Not for anything mid to higher range, but I believe there's open source tooling for some of the older Lattice and Xilinx parts. I would say for me it's not as big a deal as on the software side, because each vendor's hardware tends to be pretty different from each other anyway.
  11. I do the vast majority of my work on Xilinx, and it's easiest to just use the built-in simulator. It's free and supports both VHDL and Verilog; most simulators support just one. For Lattice and Microchip work I use whatever the tool provides, which is usually a cut-down ModelSim or something.
  12. I recommend getting started like the author did: simulation first, then FPGA. Honestly, FPGA will take you very far. I always get a kick out of being able to design my own SoC: "Hmmm, I need 9 separate I2C ports... OK, copy block, paste paste paste..." Or, if you have an operation in software that's taking forever, you can write an accelerator for it.
  13. That's a very nebulous metric. Microseconds of overhead depend on a lot of runtime factors and on hardware options and design details that I'm just not privy to.
  14. Header and FCS, interpacket gap, and preamble. What do you think "Ethernet overhead" is?
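
    Tallied up for a standard frame, the back-of-the-envelope numbers look like this (standard 802.3 sizes; the little program is mine, just for illustration):

    ```c
    #include <stdio.h>

    /* Per-frame Ethernet overhead, item by item. */
    int main(void)
    {
        const double preamble_sfd = 8;    /* 7-byte preamble + start-of-frame delimiter */
        const double header       = 14;   /* dst MAC (6) + src MAC (6) + EtherType (2)  */
        const double fcs          = 4;    /* frame check sequence (CRC-32)              */
        const double ipg          = 12;   /* minimum interpacket gap                    */
        const double payload      = 1500; /* maximum standard (non-jumbo) payload       */

        double wire = preamble_sfd + header + payload + fcs + ipg;  /* 1538 bytes */
        printf("overhead: %.0f bytes/frame, efficiency at max payload: %.1f%%\n",
               wire - payload, 100.0 * payload / wire);  /* 38 bytes, 97.5% */
        return 0;
    }
    ```

    The same fixed 38 bytes hit much harder at minimum-size frames (46-byte payload), where efficiency drops to roughly 55%.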
  15. But software engineering as a concept still isn't just writing code. I'm a bit of a stickler about it, as somebody who has an engineering degree, but when programmers with a CS degree say they're software engineers, they're not. Software engineering, as far as I understand it from the little bit I did in school, is actual engineering: requirements analysis, breaking down the problem, following a methodology, etc. It's not just writing software.

    So really there should be 3 fields of study:

    1. The theory: computer science
    2. How to apply the theory: software engineering
    3. How to turn those designs into reality: programmers

    It's like the mech engineering side. You have materials science and stuff, then mechanical engineers, then machinists.

  16. No. As somebody who lives in the trenches, I'm not too far behind. I've met people who say they can only do theory and can't do the practical side of this area. I think they might mean it derogatorily, but anybody who does clearly doesn't understand that both sides are necessary. So if anybody says it in that tone, just walk away.
  17. I work with pretty much everything (except GPUs, I guess). "Embedded" is extremely relative. To some, embedded means a rack-mount server that's, idk, embedded in a vehicle instead of a datacenter. That's not me. To others, embedded means a 4-bit, low-power, mask-ROM-fed micro inside a sensor IC. That's also not me.

    So I work with microcontrollers from various vendors, I do FPGA work with hard and soft processors, I recently got just past the smoke test bringing up embedded Linux on a SoC, and I've done plenty of desktop code on Linux and Windows for interfacing. I get to work with a wide range of devices and a wide range of tasks for them. It might not pay as much, but my goodness is it fun.

  18. I mean, idk, I'm living comfortably and, as the adage goes, not working a day in my life. But if you're at a spot where you need the pay more than you want to write ring buffers, I understand.
  19. Well, what I'll say is this: my job never had leetcode. Embedded engineering, especially if you do FPGA work, is very different from what leetcode tests. Honestly, if recruiters are using it for jobs like mine, they really don't get it. But I don't know you nearly well enough to say. There are so many different fields up and down the stack: front end, backend, embedded, cloud, edge, consumer, IoT... the list goes on. I would cast a wider net, I guess.
  20. And yet here I sit, writing ring buffers, and never thinking about this idea. Probably because of the power-of-two issue. Which isn't actually a problem, because, as he points out, who would do that? But it makes it feel like a restriction when in practice it just isn't one.
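
    For anyone who hasn't seen the style in question, here's a minimal sketch (names and sizes mine): the indices run free and are only masked on access, which is exactly where the power-of-two size requirement comes from.

    ```c
    #include <stdbool.h>
    #include <stdint.h>

    #define RB_SIZE 64u  /* must be a power of two so the mask stays
                            consistent with the counters' natural wraparound */

    static uint8_t  rb_buf[RB_SIZE];
    static uint32_t rb_head, rb_tail;  /* free-running; never reduced mod RB_SIZE */

    static bool rb_push(uint8_t v)
    {
        if (rb_head - rb_tail == RB_SIZE)      /* full; all RB_SIZE slots usable */
            return false;
        rb_buf[rb_head++ & (RB_SIZE - 1u)] = v;
        return true;
    }

    static bool rb_pop(uint8_t *v)
    {
        if (rb_head == rb_tail)                /* empty */
            return false;
        *v = rb_buf[rb_tail++ & (RB_SIZE - 1u)];
        return true;
    }
    ```

    The payoff is that rb_head - rb_tail is the element count even after the 32-bit counters wrap, and every slot is usable; the catch is exactly that RB_SIZE has to divide 2^32.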

    But in all honesty, look for more embedded jobs, then. We can certainly use the help.
