For three years, I led AI R&D at Paige, a startup aimed at revolutionizing the detection and treatment of cancer. This resulted in the first FDA-cleared AI system for helping pathologists to detect cancer. During my time at Paige, the company grew from a handful of employees to almost 200. I currently serve on Paige's Scientific Advisory Board.
Previously, I was a professor at the Rochester Institute of Technology (RIT). For four years I was visiting faculty at Cornell Tech, where I taught a popular course on deep learning. Before becoming a professor, I worked at NASA JPL. I received my PhD from UC San Diego.
Web: www.chriskanan.com
- I think we need to distinguish among kinds of AGI, as the term has become overloaded and redefined over time. I'd argue we need to retire the term and use more appropriate terminology to distinguish between economic automation and human-like synthetic minds. I wrote a post about this here: https://syntheticminds.substack.com/p/retiring-agi-two-paths...
- See this study, which is consistent with your thesis: https://www.eurekalert.org/news-releases/814485
Essentially, it claims that modern humans and our ancestors starting with Homo habilis were primarily carnivores for 2 million years. The hypothesis is that we moved back to an omnivorous diet around 85,000 years ago, after killing off the megafauna.
- That's the practical reason why one might care. Keep in mind that the solar system orbits the galactic center, so over time different stars become closer or farther away.
As the Kurzgesagt video points out, a supernova within 100 light-years would make space travel very difficult for humans and machines due to the immense amount of radiation lingering for many years.
Still, I think the primary value is in expanding our understanding of science and the nature of the universe and our location within it.
- A Type II supernova within 26 light-years of Earth is estimated to destroy more than half of the Earth's ozone layer. Some have argued that supernovas within 100-250 light-years can have a significant impact on Earth's environment, increase cancer rates, and kill a lot of plankton. They can potentially cause ice ages and extinctions. Within about 25 light-years, we would be inside a supernova's "kill range." Fortunately, nothing close to us should go supernova for a long time.
Wikipedia article: https://en.wikipedia.org/wiki/Near-Earth_supernova
Kurzgesagt video on the impact on Earth of supernovas at varying distances: https://www.youtube.com/watch?v=q4DF3j4saCE
- Read the paper; the media is leaving out a lot of context. The paper points out problems like leadership failures for those efforts, lack of employee buy-in (potentially because employees already use their personal LLMs), etc.
A huge fraction of people at my workplace use LLMs, but only a small fraction use the one the company provides. Almost everyone is using a personal license.
- This is so shortsighted. The US needs a huge increase in its electricity generation capabilities, and nowadays renewables, especially solar, are the cheapest option.
This video from a few days ago analyzes the issue: https://www.youtube.com/watch?v=2tNp2vsxEzk
Regardless of climate change issues, the anti-renewable policy doesn't seem to make any sense from an economic, growth, or national security standpoint. It is even contrary to the administration's _stated_ anti-regulation, pro-capitalism stance.
- That's my assessment of the report as well... some news truly is "fake" in that outlets push whatever narrative they think will drive clicks and eyeballs, and the media is severely misrepresenting what is in this report.
The failure is not AI, but that a lot of existing employees are not adopting the tools, or at least not the tools provided by their company. The "Shadow AI economy" they discuss is a real issue: people are just using their personal subscriptions to LLMs rather than internal company offerings. My university made an enterprise version of ChatGPT available to all students, faculty, and staff so that it can be used with data that should not go into cloud-based LLMs, but it lacks a lot of features and has many limitations compared to, for example, GPT-5. So adoption and retention of users of that system are relatively low, which is almost surely due to its limitations compared to cloud-based options. Most use cases don't necessarily involve data that would be illegal to use with a cloud-based system anyway.
- Where is the actual paper that makes these claims? I'm seeing this story repeated all over today, but the link doesn't actually seem to go to the study.
I am not going to trust it without actually going over the paper.
Even then, if it isn't peer-reviewed and properly vetted, I still wouldn't necessarily trust it. The MIT study on AI's impact on scientific discovery that made a big splash a year ago was fraudulent even though it was peer reviewed (so I'd really like to know about the veracity of the data): https://www.ndtv.com/science/mit-retracts-popular-study-clai...
- Sam Altman way oversold GPT-5's capabilities, in that it doesn't feel like a big leap in capability from a user's perspective; however, the idea of a trainable dynamic router enabling them to run inference using a lot less compute (in aggregate) seems like a major win to me. Just not necessarily a win for the user; rather, a win for the electric grid and for making OpenAI's models more cost-competitive.
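To illustrate the idea (a toy sketch with hypothetical names; OpenAI hasn't published how GPT-5's router actually works): a small trained scorer estimates how hard each prompt is, and only hard prompts are sent to the expensive model, so aggregate inference compute drops.

    def estimate_difficulty(prompt: str) -> float:
        # Stand-in for a small trained scorer that predicts whether the
        # prompt needs the big model (here, crudely, longer prompt = harder).
        return min(1.0, len(prompt.split()) / 50)

    def cheap_model(prompt: str) -> str:
        return f"[small model] answer to: {prompt}"

    def strong_model(prompt: str) -> str:
        return f"[large model] answer to: {prompt}"

    def route(prompt: str, threshold: float = 0.5) -> str:
        # Only prompts scored as hard pay the cost of the large model.
        model = strong_model if estimate_difficulty(prompt) > threshold else cheap_model
        return model(prompt)

    print(route("What's 2 + 2?"))  # handled by the small model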
- If they are going to do this, they really ought to corroborate the face recognition with fingerprints. Even if an AI algorithm were near perfect, many people have unrelated doppelgangers: https://twinstrangers.net/
- Is there a list of the papers that were flagged as doing this?
A lot of people are reviewing with LLMs, despite it being banned. I don't entirely blame people nowadays... the person inclined to review using LLMs without double checking everything is probably someone who would have given a generic terrible review anyway.
A lot of conferences now require that one or even all authors of a submission also review for the conference, but they may be very unqualified. I've been told that I must review for conferences where collaborators are submitting a paper I contributed to, even though I really don't know much about the field. I also have to be pretty picky about the venues I review for nowadays, just because my time is way too limited.
Conference reviewing has always been rife with problems: the majority of reviewers wait until the last day, which means they aren't going to do a very good job evaluating 5-10 papers.
- This will be huge in the next decade, powered by AI. There are so many competitors currently that it is hard to know who the winners will be. Nvidia is already angling for humanoid robotics with its investments.
- The game is only about 30 hours and has no microtransactions. It is addictive until you beat it. Easily game of the year.
- Different gyms have very different cultures. Try going to different ones to see if there is one you like. For example, Gold's Gym has a lot of bodybuilders whereas I've found the YMCA is mostly older folks trying to stay active.
- What I do is always set the context first: I give my "background" and some papers as reading material, so that I've conditioned the model on whatever topic will be discussed before asking anything else.
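Roughly, the workflow looks like this (a sketch using the OpenAI Python client; the file names and model choice are placeholders, not my actual setup):

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()

    # Placeholder files: a short bio/"background" plus the papers to condition on.
    background = Path("background.txt").read_text()
    papers = "\n\n".join(Path(p).read_text() for p in ["paper1.txt", "paper2.txt"])

    messages = [
        {"role": "user",
         "content": f"My background:\n{background}\n\n"
                    f"Reading material for this conversation:\n{papers}\n\n"
                    "Read these carefully; my questions will follow."},
    ]
    first = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant", "content": first.choices[0].message.content})
    # Later questions are appended to `messages`, so every answer stays
    # conditioned on the background and papers supplied up front.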
- They address this in the AlphaEvolve paper:
"While AI Co-Scientist represents scientific hypotheses and their evaluation criteria in natural language, AlphaEvolve focuses on evolving code, and directs evolution using programmatic evaluation functions. This choice enables us to substantially sidestep LLM hallucinations, which allows AlphaEvolve to carry on the evolution process for a large number of time steps."
- If they only studied remote work around the time of COVID, I'm not sure the findings will generalize. I think the pandemic caused a lot of people to reassess their lives and careers, and I don't know if increases in new venture creation can be entirely attributed to remote work.
- My dishwasher from the 1990s dried much better than the one I got in 2021, which required rinsing each dish after washing to get rid of the soap taste. It then broke after only 1.5 years. My new one is better, but still leaves dishes pretty wet and I still have to rinse a lot of cups to get rid of the soap residue.
- I've been thinking deeply about a lot of the same topics while teaching a course on AGI this semester (defining it, conflicting definitions, socioeconomic impact, etc.). That said, I disagree with him vehemently on some of his opinions, such as that open-weight models should be banned. He equates model weights with fissile material for building nuclear bombs.
On some topics, I 100% agree with him, e.g., on the issues with value alignment (humans disagree). I devoted an entire lecture to that in my course.
I also think he doesn't understand the romance of space exploration and why, in the very long term, that is critical (learning cosmology is one of my hobbies).
- This is exactly how I feel. I use an AI-powered email client, and I specifically requested this from its dev team a year ago; they were pretty dismissive.
Are there any email clients with this function?
- That was my initial reaction as well, but upon looking more closely, I don't think that's fair. Equation 1 of their paper is a unifying equation such that different choices for the terms result in the various classical and new algorithms.
I still wouldn't call it a periodic table, though.
- From the perspective of an outsider who has studied the history of science, it does feel a bit like the majority of the cosmology community refuses to consider alternative explanations that may also be consistent with observations but don't assume dark matter.
For example, this relativistic MOND-inspired theory is supposedly consistent with observations from CMB and gravitational lensing: https://physics.aps.org/articles/v14/143
There is also Jonathan Oppenheim's body of work on stochastic spacetime which supposedly accounts for these observations without needing dark matter (and it also doesn't need dark energy): https://www.theguardian.com/science/2024/mar/09/controversia...
It feels like the community is stuck in dogma and in need of a paradigm shift. I'd like to see more effort and funding toward testing alternative theories. I'd also argue strongly for more investment in space telescopes that could further our understanding, especially given how much evidence JWST has already provided against the standard model of cosmology.
Roughly $6 to $11 billion has been spent trying to detect dark matter over the past 30 years or so, and nothing has turned up. Investing in testing alternatives and in efforts to gather data that would help refine hypotheses seems like money well spent to me.
- I think this is the first time an article I've posted has been flagged, which is surprising given that the source is the journal Science.
As a professor working in AI, I'll probably be fine, but if I cannot get funding it will be challenging to stay in academia. Then again, both of my last two major NSF proposals had such requirements: one, an NSF MRI proposal, required an Institutional DEI Commitment Statement, and the other, a major AI proposal, required about a third of the proposal to be devoted to broader impacts largely aimed at increasing diversity in AI. I do care about increasing the number of women, Black, and Hispanic people in AI research, although I also had a section about how we need to increase our domestic production of AI scientists, given that 70%+ of the PhDs produced in AI in the USA go to non-citizens.
Both are still in review. I'm not optimistic, but those proposals took hundreds of hours to create...
- I think this is at least a major part of the problem, and I share this hypothesis (and have been saying as much for the past two years). We never train them to know what they know and what they don't know (meta-cognition). We only train them to output the next token, not to self-reflect and ask: Is this true based on my prior knowledge?
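For concreteness, here is a toy version of the standard objective (generic PyTorch, not any particular lab's training code): the loss only rewards predicting the next token, and nothing in it asks the model to check whether what it just generated is consistent with what it already "knows."

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab, dim = 100, 32
    model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
    tokens = torch.randint(0, vocab, (4, 16))        # (batch, sequence) of token ids

    logits = model(tokens[:, :-1])                   # predict each next position
    loss = F.cross_entropy(logits.reshape(-1, vocab),    # (batch*positions, vocab)
                           tokens[:, 1:].reshape(-1))    # shifted next-token targets
    loss.backward()                                  # that's the whole training signal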