Preferences

fitzn
Joined 250 karma
VP, AI and Architecture at SmartBear (smartbear.com)

Co-Founder at Reflect S20 (https://reflect.run)


  1. The problem with IPv6 jokes is that very few people are making them.
  2. Non-Linear Compression! We had a tiny idea back in the day in this space but never got too far with it (https://www.usenix.org/conference/hotstorage12/workshop-prog...).

    I am pumped to see this. Thanks for sharing.

  3. Yep - agreed. Thanks!
  4. Just making sure I understand the "one round trip" point. If the client has chained 3 calls together, that still requires 3 messages sent from the client to the server. Correct?

    That is, the client is not packaging up all its logic and sending a single blob that describes the fully-chained logic to the server on its initial request. Right?

    When I first read it, I was thinking it meant 1 client message and 1 server response. But I think "one round trip" more or less message "1 server message in response to potentially many client messages". That's a fair use of "1 RTT", but took me a moment to understand.

    Just to make that distinction clear from a different angle, suppose the client were _really_ _really_ slow and it did not send the second promise message to the server until AFTER the server had computed the result for promise1. Would the server have already responded to the client with the result? That would be a way to incur multiple RTTs, albeit the application wouldn't care since it's bottlenecked by the client CPU, not the network in this case.

    I realize this is unlikely. I'm just using it to elucidate the system-level guarantee for my understanding.

    As always, thanks for sharing this, Kenton!

  5. Reflect tests mobile apps by converting plain text instructions into appium commands at runtime using AI. Your tests are just the text steps.

    https://reflect.run/mobile-testing/

    disclaimer: I co-founded Reflect.

  6. It would have been cool to read stories about a few more examples of slingshots, as the author calls them.
  7. > The best engineers make more than your entire payroll. They have opinions on tech debt and timelines. They have remote jobs, if they want them. They don’t go “oh, well, this is your third company, so I guess I’ll defer to you on all product decisions”. They care about comp, a trait you consider disqualifying. They can care about work-life balance, because they’re not desperate enough to feel the need not to. And however successful your company has been so far, they have other options they like better.

    Yep

  8. I read it quickly, but I think all of the attack scenarios rely on there also being an MCP Server that advertises the tool for reading from the local hard disk. That seems like a bad tool to have in any circumstance, other than maybe a sandboxed one (e.g., container, VM). So, biggest bang for your security buck is to not install the local disk reading tool in your LLM apps.
  9. They won in 3.6 secs, but second place took 3.73 (or 3.74 if being consistent with the winning time). So, did second place also optimize the PoW or were they presumably on an FPGA as well?

    The previous submission the author describes as being some expensive FPGA one was 4+ seconds. You'd think he'd mention something about how second place on his week was potentially the second fastest submission of all time, no?

  10. Regarding Internet connectivity regardless of the orbit or location, something like YC co Bifrost Orbital (https://bifrostorbital.com/), might be an option.
  11. This article resonated with me and puts into words some of the feelings I had towards the end of my PhD. This part:

    > An interesting case in software engineering is dismissal for lack of “evaluation.” It would be, of course, ridiculous to deny the benefits that the emphasis on systematic empirical measurement has brought to software engineering in the last three decades. But it has become difficult today to publish conceptual work not yet backed by systematic quantitative studies.

    struck a chord with me. The top-tier CS systems conferences for me (OSDI and SOSP) have gotten to the point where you basically have to be writing the paper about the system you built at a FAANG that serves 1B users daily to get accepted.

    It's hard for a novel idea and first-cut implementation to compete with systems built over many years with a team of a dozen software engineers. Obviously, those big systems deserve tons of credit and it's amazing that Big Tech publishes those papers! Credit to them. But it's also the case that novel ideas with an implementation that hasn't seen 1B users yet still have value.

    I suppose the argument is that workshops serve that purpose of novel ideas with unproven implementation. There's some truth to that, but as the article highlights, the full conference papers are the real currency.

  12. Thank you very much for writing this up. Good, thought-provoking ideas here.
  13. Doctors, lawyers, teachers and other licensed professionals do continuing education every two or three years. Software engineers are not licensed professionals, so there is no legal standard of quality that all software engineers are guaranteed to have met (and continue to meet). Hence, the interview is an assessment along with all other parts of the application.
  14. Cool stuff. I'm probably missing this, but where in the code are you ensuring that all feature vectors have the same number of dimensions (i.e., length)? From what I can tell, for a text value from sqlite, the code converts each char to a float and stores those bits in the vector. This could work if the hamming distance accounts for different length vectors, but that function appears to assume they are the same. Thanks for the clarification.
  15. What open source model are you using when you hit groq?

    I just benchmarked some perf for some of my larger context window queries last week and groq's API took 1.6 seconds versus 1.8 to 2.2 for OpenAI GPT-3.5-turbo. So, it wasn't much faster. I almost emailed their support to see if I was doing something wrong. Would love to hear any details about your workload or the complexity of your queries.

This user hasn’t submitted anything.

Keyboard Shortcuts

Story Lists

j
Next story
k
Previous story
Shift+j
Last story
Shift+k
First story
o Enter
Go to story URL
c
Go to comments
u
Go to author

Navigation

Shift+t
Go to top stories
Shift+n
Go to new stories
Shift+b
Go to best stories
Shift+a
Go to Ask HN
Shift+s
Go to Show HN

Miscellaneous

?
Show this modal