- Android is open source; macOS and Windows aren’t. This gives me more control over my computer, especially since this means LineageOS and GrapheneOS on the desktop soon.
- GNU/GNU
- Ubuntu replaced its core userland utilities with uutils, so that’s the bulk of it. I’m guessing most other distros will follow suit.
- Not jmp?
- This is a really good explanation of why I find Julia (effectively a Lisp in terms of these features) to be indispensable. The ability to generate code on the fly makes life so much easier that I just can't live without it.
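For a taste, here’s a minimal sketch of the kind of on-the-fly code generation I mean (the twice_* names are made up for illustration):

    # Define a whole family of related functions in one loop with @eval:
    for op in (:sin, :cos, :exp)
        fname = Symbol("twice_", op)      # twice_sin, twice_cos, twice_exp
        @eval $fname(x) = 2 * $op(x)
    end

    twice_sin(pi / 2)   # 2.0

This is the same trick Julia’s Base library uses to define dozens of near-identical methods without repeating itself.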
- Never have I seen a title so perfectly encapsulate the very problem it's trying to solve.
If you keep trying to "protect" your research from any kind of competition, you're doomed from the start.
- Answer: they don’t
(Seriously, I’ve gotten so fed up with Python package management that I just use CondaPkg.jl, which uses Julia’s package manager to take care of Python packages. It is just so much cleaner and easier to use than anything in Python.)
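For the curious, a minimal sketch of that workflow, assuming PythonCall.jl is installed alongside CondaPkg.jl:

    using CondaPkg
    CondaPkg.add("numpy")    # resolved into a project-local conda environment
                             # and recorded in CondaPkg.toml

    using PythonCall
    np = pyimport("numpy")   # PythonCall picks up CondaPkg's environment
    np.sum(np.ones(3))       # 3.0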
- You’re 100% correct, I think. But it’s notable that, in that case, an extra functional programming language would make things worse, not better, by dividing effort.
- Because 14 wasn’t enough. https://xkcd.com/927/
- Actual AI researchers have wildly different opinions on this; in my experience, the ones I’ve talked to split about 50/50.
- Self-appointed by the boards of directors at major companies and universities? https://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-r...
And no peer-reviewed research, apart from the dozens of widely-cited academic papers on this? And those are all from just one lab, not even counting groups like CHAI: https://www.anthropic.com/#papers
- I feel like you’re misunderstanding how Julia syntax works. Julia syntax is Python syntax, for the most part. They’re not identical, but the differences are very small and typically favor Julia. For example, compare:
x*(y.^2)
To the Python equivalent:
np.matmul(x, list(map(lambda v: v**2, y)))
Note also that the first will be much faster, because the compiler sees the whole expression and can fuse the broadcasted exponentiation into the surrounding computation. That’s because Julia is a single language, and the whole thing is compiled together; Python can’t fuse the code above because NumPy is written in C, not Python.
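To make the fusion point concrete, a quick sketch (the array contents are arbitrary here):

    y = rand(1_000)
    z = @. 2 * y^2 + sin(y)   # @. fuses all the operations into one loop,
                              # with no intermediate temporary arrays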
Julia is just Python but fast.
- Julia already is a very high-level mathematics-oriented programming language, though.
The reason Mathematica is so much faster here is that it’s using a different algorithm. When you compare implementations of the same algorithm, Julia is 10-100x faster than Mathematica. https://julialang.org/benchmarks/
- This is a perfectly reasonable concern/criticism and one that plenty of EAs make. Going off funding, which is overwhelmingly directed towards global health and development, it’s probably one that most of them agree with. It also aligns with my experience talking to EAs, who mostly don’t work in or donate to AI.
But this is a criticism about effectiveness made within an EA framework. It assumes the thing we want to do is maximize the amount of good we do with our resources, and provides rational arguments for why AI won’t do that.
The AI folks think their cause is the one that does the most good, and they have rational arguments for that position. That’s why they’re considered part of the EA movement (despite not fitting in with the original vision).
That also means we have to listen and provide counterarguments before we reject their position. What we definitely shouldn’t do is write them off as “neckbeards” just because they work in tech and have unusual concerns. That’s how you end up dismissing some 1930s physicist worried about the existential risk of nuclear fission weapons as a “weird neckbeard nerd.”
- Yeah, and there’s also GiveWell. Probably 80% of EA money goes to development, with the rest evenly split between the other three cause areas.
I’d be very shocked if AI research got more than 5% of EA funding.
- The main benefit is that they’re more likely to actually read it. I dunno about you, but if someone gave me a book, I’d probably read it; I definitely wouldn’t read a PDF someone emailed to me.
- It’s not; they’re pretty much completely unrelated fields. AI ethics focuses very little on alignment, which tends to worry about much bigger and more general problems.
The fact that you’re grouping AI alignment with AI ethics is kind of an indication of the problem: most people have heard so little about alignment that they assume it’s the same thing as AI ethics.
- AI alignment research is still extremely neglected. There’s a handful of researchers looking at it, and that’s about it. There’s plenty of coverage and criticism of AI, but it tends to be very different from the kinds of things EAs worry about.
The only place I know of where alignment actually gets covered in mainstream media is Vox’s Future Perfect (and Matt Yglesias, who used to work there), and that’s because EAs literally pay them to cover it.
- Why? I think it’s perfectly reasonable to think that distributing books about effective altruism would be a very effective use of $30,000 in donations. Most EAs give 10% of their income; if you assume the average EA makes $100,000 a year over a 40-year career (pretty reasonable for people at the IMO), that’s $400,000 in lifetime donations, so the books only need to turn one of those people into an EA to break even. (In fact, they don’t even need a full convert: a 7.5% chance of producing a single EA already covers the $30,000.)
My Fermi estimate is probably off, but it’s a pretty sensible argument to me.
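Spelling out the arithmetic, with the same assumed numbers as above:

    income, giving_share, years = 100_000, 0.10, 40
    lifetime_donations = income * giving_share * years   # 400_000 per convert
    breakeven_chance = 30_000 / lifetime_donations       # 0.075, i.e. 7.5%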
- Yep, exactly, and the average EA donates way more than $30,000 over their lifetime.
This is just a creative way to advertise, and I don’t see what’s wrong with charities trying to get their message out there.