
methodOverdrive (23 karma)

  1. Cool. I plan to look at the source code later.

    One issue I noticed: when playing with the mouse, if I click on a tile, then click on some other tile, the FIRST tile clicked is the one that moves - which seems like a bug and is certainly a counterintuitive input behavior.
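
    As a guess at what might be going on (a hypothetical sketch, not the game's actual code): if the click handler keeps the first selection around instead of replacing it, a later click ends up moving the originally selected tile. Something like this would reproduce the behavior:

    ```python
    # Hypothetical selection logic (I haven't read the source); illustrates how
    # keeping the first selection makes the FIRST clicked tile the one that moves.
    class Board:
        def __init__(self):
            self.selected = None

        def on_tile_click(self, tile):
            if self.selected is None:
                self.selected = tile          # first click: remember the tile
            else:
                self.move(self.selected)      # second click: moves the FIRST tile
                self.selected = None
            # the intuitive behavior would be to re-select on every click,
            # e.g. self.selected = tile, so the latest click wins

        def move(self, tile):
            print("moving", tile)

    board = Board()
    board.on_tile_click("tile A")
    board.on_tile_click("tile B")   # prints "moving tile A"
    ```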

  2. I didn't read the study, but the WSJ article also didn't even mention the amount of fat or carbohydrates in the diets. I wouldn't be surprised if a "low-carb, but not actually ketogenic" diet was bad for you - if you're restricting carbs to be lower than the control group's 50% but still consuming, say, 20% carbohydrates, then you would never even become keto-adapted.
  3. D'oh, missed the NaNs and wrote out a list without referring to the book... then checked the length of the list to make sure it had 8 things. Silly me!
  4. Bonus points if values near 0 are treated as 0 (encourages sparsity!)
  5. That's true - and I did enjoy it. I'm biased - still in university, so I'm used to dry papers. The lack of stodginess wouldn't have bothered me if I had been able to obtain key details on the format for free. I was a little annoyed to have to buy a 50-dollar paperback book instead of just downloading a short paper via the college library, though - I was convinced of the potential benefits of the format by one of Gustafson's earlier presentations and want to help in the efforts towards software/hardware implementations, so I didn't need enhanced accessibility to remain interested in the book.
  6. He's a bit showy about the format. Wish he would just put out a technical paper.

    Anyway, I guess his motivation might be "you can represent any real number (with finite bits, and therefore finite precision)". In the book, he presents an interesting case: little 4-bit versions of the Unum that can represent:

    -inf, (-inf, -2), -2, (-2, -1), -1, (-1, -1/2), -1/2, (-1/2, -0), -0, 0, (0, 1/2), 1/2, (1/2, 1), 1, (1, 2), 2, (2, inf), inf.

    Putting together a pair of them, the book outlines simple interval arithmetic (where pairs of numbers can represent any interval between numbers on the line above, and single numbers can represent some of the closed intervals above). The reason these are kind of neat is that using the standard Unum algorithms (without any fudging), you can get "correct" (albeit terribly imprecise) results for many real-number computations. Questions like "is there a number satisfying a numerical predicate in some range?", or the value of a trigonometric or exponential expression, will come out "correct" (though you might get an answer like (-inf, inf)).

    If things work out as well as he claims (and demonstrates for some cases), then you can basically do the math to figure out how precise you want to be and choose an appropriate specialization of the format - or take advantage of the format's flexibility and do computations starting at a low precision, increasing precision until you are satisfied. In particular, it's kind of cool that you can do computations with little 8-bit intervals and possibly circumvent more expensive computations (e.g. if you test whether a property will hold anywhere in the Unum range and it won't hold anywhere, then, assuming you (and Gustafson) have done the math right, you can avoid more expensive checks at increased precision).

    Anyway, point is, the presentations are kind of flashy and misleading - and you're right, you can't represent any real number (just finitely representable dyadic intervals)... but the format itself _does_ seem promising...
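
    To make that concrete, here's a toy sketch of the outward-rounding idea in Python (my own illustration, not the book's algorithms, and it ignores the open/closed endpoint bookkeeping that real unums carry): endpoints are restricted to the small set of exact values above, and results are widened outward so the true answer is always contained.

    ```python
    # Toy interval addition over a tiny lattice of exact values; results are
    # rounded outward, so they stay correct but may be very imprecise.
    from fractions import Fraction as F

    LATTICE = [F(-2), F(-1), F(-1, 2), F(0), F(1, 2), F(1), F(2)]   # exact points from above
    NEG_INF, POS_INF = float("-inf"), float("inf")

    def round_down(x):
        """Largest lattice point <= x, or -inf if x is below the lattice."""
        below = [p for p in LATTICE if p <= x]
        return max(below) if below else NEG_INF

    def round_up(x):
        """Smallest lattice point >= x, or +inf if x is above the lattice."""
        above = [p for p in LATTICE if p >= x]
        return min(above) if above else POS_INF

    def add(a, b):
        """Interval sum: contains every x + y with x in a and y in b."""
        (alo, ahi), (blo, bhi) = a, b
        return (round_down(alo + blo), round_up(ahi + bhi))

    # (1/2, 1) + (1/2, 1) lands in (1, 2): imprecise, but never wrong.
    print(add((F(1, 2), F(1)), (F(1, 2), F(1))))
    ```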

  7. I would argue that unums are a "superior replacement" for doubles in many cases, though: if you support unums that are "wide" enough, you can represent doubles exactly, plus you get additional values, plus some nice rules about when approximation error occurs/is propagated, and not as many bits need to be stored or moved around on buses. It'll be a while before there's an implementation anywhere near as fast as existing FPUs, but Gustafson makes a good argument for his format. Personally, I'm more interested in the correctness benefits than the space/power/time savings - even if unums are never faster than 64-bit floats, they present an interesting way to do real-number arithmetic, and my brief exposure to them leaves me much more confident that I could write numerical algorithms correctly with them than with doubles. I implement numerical/statistical algorithms with doubles in my work, and it's really a pain to reason about things that the "uncertain" (open-interval) values of the unum format would greatly simplify.

    They're also an inferior replacement in cases where you want to take advantage of highly optimized hardware and getting a correct answer doesn't really matter. I don't see unums replacing floats for, say, video game graphics. But for numerical computation, it seems like the only real flaws with unums compared to doubles are the nonexistence of a hardware implementation and the entrenched popularity of doubles.

  8. New hardware would be needed. I think libraries like LAPACK are fast basically because they do a good job of taking advantage of the hardware implementation of floating-point math, and are written with sufficient understanding of the format to mitigate approximation errors. You would therefore probably need to rewrite huge parts of such libraries to use new Unum hardware or software - though doing so might be relatively straightforward; as you say, Unum can be seen as a superset of floating point, and it supports all of the same operations.
  9. Not many. I did go ahead and purchase his book. It's... kind of a weird read, honestly. Useful - it clarified some things about the proposed Unum format - but also written very casually, like "pop science" prose, which seems inappropriate - who would buy a book about a floating-point number format if they didn't want a dry, boring book full of technical details? Some of the arguments are made as though to convince a non-technical audience - maybe Gustafson wants managers (or former engineers who haven't worked as engineers for a long time) to read his book, but I think it would have been better to publish all of the details without any fluff, first.

    Also worth noting that half of the book is about the "ubox" method for solving optimization problems - also cool, but may be overkill if you are just interested in the numeric format itself. Personally, I've been working on an implementation of the format that I can toy around with - I have no real interest in learning a lot about the cool algorithms I could do with it until I can show myself that it works for basic arithmetic, etc., as well as the author claims.

    Gustafson also makes the code available (I think [here](https://www.crcpress.com/The-End-of-Error-Unum-Computing/Gus...)). It's Mathematica code; there is a free viewer for that format (if you don't have Mathematica) which can print out a PDF with richly formatted equations.

    Also, Googling for that link led me to this [Python implementation someone whipped up](https://github.com/jrmuizel/pyunum).

  10. I came here to say something similar - my exact choice of subreddits varies (ever so slightly) from this list, but there are a lot of small, focused communities that are full of knowledgeable, kind people posting useful information that I would otherwise have no easy mechanism to find. Hacker News is great - and there exist other sites to find interesting news, reading, and products - but only the various specialized subreddits thus far cater to my more esoteric interests. Esoteric interests like academic subjects, not like racism or dumb jokes.

    I think the key to getting a lot out of reddit is to make an account right away and tailor a list of actually-interesting subreddits. The front page is a cesspool not just in terms of vitriolic, hateful posts - it's also full of gifs of cats, silly jokes, etc. - and if you use reddit without filtering out such subreddits you'll end up wasting a lot of time on vapid content. All the dumb racist or sexist "humor" is vapid content too - but I'm willing to put up with both sources of worthless links, since I can easily log in and participate in interesting discussions/link-sharing with people who have similar interests. And I have not yet seen a community that enables this as well as reddit does, so I'll be sticking with it for the time being.
  11. You can explicitly add it as an extension from the Chrome Store. (I actually did this, because I like the manager/link sharing functionality... even though it means putting up with the awful bookmark-adding dialogue).
  12. I didn't read the paper (so I might be full of shit), but I think the idea is that the semantic sliders represent vectors in a vector space that is learned by a machine learning model, based on a sample set of examples. So the idea would be to take a bunch of examples (made by a 3D artist, taken from existing work, scanned in from real objects, etc.) and first rate them in each category (which is subjective, and does still take a fair amount of human work)... but then you use an algorithm to train a model that can generate new examples in that vector space. Then, the sliders just modify the values of the different components of a vector representing a new, generated example. I doubt that the behavior of the semantic sliders is hard-coded - instead, there's a general algorithm for coming up with new sets of semantic sliders. So, for problems this model works well for, you would ideally be able to dramatically reduce the number of 3D models you need to characterize a whole space of parametrized variants.

    EDIT:

    Just actually went and read the paper. It's not machine learning, it's crowd-sourced. So it really needs a lot of people working on it - I think your concerns are totally well placed.

    (Maybe we'll see the machine-learning version of this in the near future!)
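
    For what it's worth, here's roughly what I had pictured for the machine-learning version (a hedged sketch of my own, definitely not the paper's method, with made-up sizes and axis names): fit a simple map from semantic ratings to shape parameters, then let the sliders drive it.

    ```python
    # Sketch of "semantic sliders" as a learned linear map from human ratings
    # to shape parameters (my own toy illustration, not the paper's approach).
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical training set: 50 example models, each with a vector of shape
    # parameters and human ratings on a few semantic axes ("sporty", "boxy", ...).
    n_examples, n_params, n_axes = 50, 12, 3
    shape_params = rng.normal(size=(n_examples, n_params))
    ratings = rng.uniform(0.0, 1.0, size=(n_examples, n_axes))

    # Least-squares fit of ratings (plus a bias column) -> shape parameters.
    X = np.hstack([ratings, np.ones((n_examples, 1))])
    W, *_ = np.linalg.lstsq(X, shape_params, rcond=None)

    def shape_from_sliders(sliders):
        """Generate a new parameter vector for a given setting of the sliders."""
        x = np.append(np.asarray(sliders, dtype=float), 1.0)
        return x @ W

    print(shape_from_sliders([0.8, 0.2, 0.5]).shape)   # (12,) shape-parameter vector
    ```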

  13. I read the paper - it's interesting and definitely improves on prior efforts. But I wouldn't call it a "breakthrough" - a few percent better accuracy on some datasets (with no real discussion of other measures of performance), and the algorithm they use is dead simple: a recurrent neural network with rectified linear units (as opposed to Long Short Term Memory). It sounds to me like the major improvements they made were to use a ton of data, and a ton of processing power - the interesting part of the paper is largely about data partitioning to take advantage of multiple GPUs, not about a novel learning algorithm or network architecture.

    Not to discredit work by what I'm sure is a very effective machine learning research team - this paper is probably important, but as an incremental improvement on prior algorithms that takes advantage of modern hardware, not a dramatically new approach.

    I guess the "breakthrough" is showing that pure deep learning (without fancy acoustic models, etc) can perform well - which is pretty cool.
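
    To be clear about how plain the architecture is (a minimal sketch with made-up sizes, not the paper's actual network or training setup): the recurrence is just a matrix multiply followed by a rectified linear unit, with none of an LSTM's gating.

    ```python
    # One timestep of a vanilla ReLU recurrent layer:
    #   h_t = max(0, W_in x_t + W_rec h_{t-1} + b)
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden = 40, 128                 # hypothetical feature / state sizes

    W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
    W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
    b = np.zeros(n_hidden)

    def relu_rnn_step(h_prev, x_t):
        return np.maximum(0.0, W_in @ x_t + W_rec @ h_prev + b)

    # Run the recurrence over a short sequence of (random stand-in) feature frames.
    h = np.zeros(n_hidden)
    for x_t in rng.normal(size=(10, n_in)):
        h = relu_rnn_step(h, x_t)
    print(h.shape)                           # (128,)
    ```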

  14. This concept is interesting, but the relative lack of mathematical argument or detail made the paper unconvincing.

    There was also at least one obvious, major typo ("Gausian"), which is the sort of thing that increases my skepticism - whether rightly so or not.

