yid
Joined 3,329 karma
A world authority on practically nothing, and a pseudorandom pseudonym to boot.

  1. I think that's just called "code".
  2. > Am I naive for thinking that nothing like that should take as long as 6-9 months in the happy case and that it's absurd for it to not succeed at all?

    Bluntly, yes. And so is every other reply to you that says "no this isn't naive", or "there's no reason this project shouldn't have finished". All that means is that you've not seen a truly "enterprise" codebase that may be bringing in tons of business value, but whose internals are a true human centipede of bad practices and organic tendrils of doing things the wrong way.

  3. Twilio owns Authy
  4. > Did you actually use the command you ended up with?

    Yes! Note that I had to use my domain knowledge to sift through the options and eliminate the garbage, but the experience was just _faster_ than repeated searches and digging through ad-laden garbage sites.

  5. Here's my pet example...feel free to google around yourself on this.

    Problem: I want an AWS CLI command line that requests a whole bunch of wildcard certificates from AWS Certificate Manager (ACM) for a TLD.

    Ostensible solution: the AWS official docs have a small snippet to achieve this, BUT -- the snippet on the official page is inadvisable as it leads to a browser cert warning.

    So I (skeptically) asked ChatGPT for a command line to achieve what I was trying to do.

    Try 1: got basically the snippet from the AWS official docs (but with the inadvisable flag set to the _Correct_ value, strangely)

    Prompt 2: please give me more best practice options

    Try 2: get back a bunch of new CLI options and their meanings. 3 are useful. 1 is hallucinated. 1 is deprecated.

    Prompt 3: keep going with more options

    Try 3: 2 more useful new options, 2 more options I chose not to use

    As a skeptic, the overall experience was much more efficient than googling around or even reading a manpage. I put it down to the fact that context is maintained between questions, so you don't have to repeat yourself when asking for clarifications.
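    For reference, a sketch of the sort of command involved (`example.com` is a placeholder; the key detail is that a certificate for only `*.example.com` does not cover the bare apex domain, which is one way to end up with a browser warning):

    ```shell
    # Request one certificate covering both the apex domain and all
    # first-level subdomains, validated via DNS. A wildcard-only cert
    # ("*.example.com" alone) would NOT cover "example.com" itself.
    aws acm request-certificate \
      --domain-name "example.com" \
      --subject-alternative-names "*.example.com" \
      --validation-method DNS
    ```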

  6. A lot of the value comes from follow-up questions. Imagine being able to interrogate a StackOverflow answer with new constraints and details. Not always correct, but in some cases, faster than typing in a new search term and parsing a screen full of links.
  7. > What we're showing is that Rama creates a new era in software engineering where the cost of building applications at scale is radically reduced.

    Bold of you to come to HN with the breathless hyperbolic marketing fluff that may work on Twitter...

  8. ts-node [1]: am i a joke to you?

    [1] https://www.npmjs.com/package/ts-node

  9. > but it WILL surplant IP eventually.

    The painful move from IPv4 to IPv6 suggests that this is unlikely. More likely is an overlay over IP, TCP, or even HTTPS.

  10. There may be a clue in the story we’re commenting on.
  11. Your comment represents some of the best of HN (detailed, illuminating, informative), but is incredibly depressing for someone with an idle curiosity in FPGAs. This is what I've long suspected, and it seems that the barrier to entry is generally a bit too high for "software" people.
  12. > I'm hopeful but there's a lot of proprietary baggage around FPGAs that I think have kept them from truly reaching their potential.

    I don't really know much about this aspect, could you elaborate? I'm genuinely curious.

  13. > I don't know why people put up with it?

    Lock-in? Once you have a few gigs up on Dropbox, it's a bit of a challenge transferring it elsewhere.

  14. This aspect of their new chips is massively underrated. An FPGA is the future-proof solution here, not chip-level instructions for the soup-du-jour in machine learning.

    Edit: which is not to say that I'm not welcoming the new instructions with open arms...

  15. Google is putting a lot of arrows behind Firebase (and cloud in general under Diane Greene), so I'd say it's not going anywhere anytime soon.
  16. > First, a class action suit for 48 hours of downloads on an app is not likely.

    You'd be amazed at what a few billion in the bank will attract, especially when it's cheaper to settle than litigate.

  17. May I ask why this is a "nanodegree"? Is having this qualification likely to improve someone's chances of getting a self-driving engineering position in industry?
  18. I think people also forget that the Star Trek AI was in a semi-militarized scenario where efficiency and information greatly outweighed individual privacy needs.
  19. So the root cause is that WiredTiger locks up and SIGTERMs when it fills the cache? If this is indeed the cause, I must say this does shake my faith in WiredTiger. That's a pretty basic scenario that a company like 10gen should be testing for regularly, certainly before releases.

    And before the Mongo haters come out, remember that WiredTiger was written by about as stellar a database team as you can have.

  20. Whenever I see "base64" mentioned in a security article, I get cautious.

    The "split token" password reset is snake oil. Just store the hash of the token (ideally stretched like any password) in the database and mail the original token out. No need for "split tokens". A password reset token is a temporary password and should be treated like one.
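    A minimal sketch of that approach (hypothetical function names; SHA-256 is shown for brevity, though as noted above you could stretch it like a password, and a real system would also record an expiry and a single-use flag):

    ```python
    import hashlib
    import secrets

    def issue_reset_token():
        """Generate a reset token; return (token_to_email, digest_to_store)."""
        token = secrets.token_urlsafe(32)  # high-entropy, URL-safe random token
        digest = hashlib.sha256(token.encode()).hexdigest()
        return token, digest               # only the digest touches the database

    def verify_reset_token(presented, stored_digest):
        """Hash the presented token and compare against the stored digest."""
        candidate = hashlib.sha256(presented.encode()).hexdigest()
        # Constant-time comparison to avoid timing side channels.
        return secrets.compare_digest(candidate, stored_digest)
    ```

    A database leak then exposes only digests, which cannot be replayed as reset tokens, and since the token itself is 256 bits of randomness, offline brute force against the digest is hopeless.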

  21. Depends on too many factors for even a ballpark. Take Google's Machine Vision API for instance. The limiting factor here is that the larger your model (and deep networks are very large models in terms of free parameters), the more training data you need to make a good approximation. To come close to "stealing" their entire trained model, my guess is that your API use would probably multiply Google's annual revenue by a small positive integer.

    Alternatively, you could restrict your "stolen" model to a smaller domain and use fewer, more targeted examples for training. But at this point, you might as well start blending in predictions from other APIs, perhaps even training one off the errors of another. This is basically a technique that has been around for a long time, and in one incarnation is called "boosting" (see AdaBoost).
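    As a toy illustration of the boosting idea mentioned above, here is AdaBoost over one-dimensional decision stumps in plain Python (labels are +/-1; this is a sketch of the textbook algorithm, not any particular API's):

    ```python
    import math

    def train_stump(xs, ys, weights):
        """Pick the (threshold, polarity) stump with lowest weighted error."""
        best = None
        for thresh in sorted(set(xs)):
            for polarity in (1, -1):
                preds = [polarity if x >= thresh else -polarity for x in xs]
                err = sum(w for w, p, y in zip(weights, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, thresh, polarity)
        return best

    def adaboost(xs, ys, rounds=10):
        """Return a list of weighted weak learners (alpha, threshold, polarity)."""
        n = len(xs)
        weights = [1.0 / n] * n
        learners = []
        for _ in range(rounds):
            err, thresh, pol = train_stump(xs, ys, weights)
            err = max(err, 1e-10)                    # avoid log(0) on a perfect stump
            alpha = 0.5 * math.log((1 - err) / err)  # learner's vote weight
            learners.append((alpha, thresh, pol))
            # Re-weight the data: misclassified examples gain weight.
            preds = [pol if x >= thresh else -pol for x in xs]
            weights = [w * math.exp(-alpha * y * p)
                       for w, y, p in zip(weights, ys, preds)]
            total = sum(weights)
            weights = [w / total for w in weights]
        return learners

    def predict(learners, x):
        """Sign of the alpha-weighted vote of all stumps."""
        score = sum(a * (p if x >= t else -p) for a, t, p in learners)
        return 1 if score >= 0 else -1
    ```

    The "train one model off another's errors" step is exactly the re-weighting loop: each round, examples the current ensemble gets wrong are boosted so the next weak learner focuses on them.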

  22. "Extracting a model" refers to approximating someone else's black box outputs. You would be dissecting your own approximation, which could systematically be very different from whatever black box you're aiming to make inferences about, even if they both produce similar outputs.
  23. > Just brute force queries at the API, log the results, and start working on your own model based on theirs.

    To add insult to injury, you could outsource the training of your own model to the API too.

  24. > Large internet companies like Amazon, Netflix, and Twitter have shown that single monolithic codebases do not scale to large numbers of users,

    And yet, both Facebook and Google have large, monolithic codebases.

  25. They're not breaking with ZooKeeper; it sounds like they're refactoring to make ZooKeeper use transparent to producers and consumers.
  26. > If Oracle thinks they can beat that, then more power to them but I'm not quite convinced.

    Don't rule out the power of acquisitions (see: Firebase/Google, Parse/Facebook). Word on the street is that they're also throwing silly mounds of cash at cloud talent.
