- This travel blog was posted a bit ago on HN [0]. Much of the nature of Greenland was a shock for me to learn about, including the severe mosquitos.
[0] https://matduggan.com/greenland-is-a-beautiful-nightmare/
- It is awesome.
What I’ll say, though, is that ideally they would demonstrate whether this model performs any better than simple linear models at predicting gene expression interactions.
We’ve seen that some of the single cell “foundation” models aren’t actually the best at in silico perturbation modeling. Simple linear models can outperform them.
So this article makes me wonder: if we take this dataset they’ve acquired, and run very standard single cell RNA seq analyses (including pathway analyses), would this published association pop out?
My guess is that yes… it would. You’d just need the right scientist, right computational biologist, and right question.
However, I don’t say this to discredit the work in TFA. We are still in the early days of scSeq foundation models, and I am excited about their potential.
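For what I mean by a “simple linear model” baseline: something as plain as ridge regression, fit from control expression to perturbed expression, is the kind of comparison I’d want to see. This is a hypothetical sketch on synthetic data, not the paper’s method or dataset:

```python
# Sketch of a linear baseline for in silico perturbation prediction.
# All data here is synthetic; in a real comparison, X would be control-cell
# expression and y the matched post-perturbation expression per gene.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells, n_genes = 500, 50

# Synthetic "control" expression and a linear perturbation response.
X = rng.normal(size=(n_cells, n_genes))
W = rng.normal(scale=0.1, size=(n_genes, n_genes))
y = X @ W + rng.normal(scale=0.05, size=(n_cells, n_genes))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = Ridge(alpha=1.0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)  # held-out R^2 of the linear baseline
print(f"linear baseline R^2: {r2:.2f}")
```

If a foundation model can’t beat something like this on held-out perturbations, that tells you a lot.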
- > “We have a cat and a dog and a child and an adult in the same room,” he said. “Why not an A.I.?”
Because pets are living creatures with agency that form emotional bonds to their owners. Pets also can’t sycophantically talk back to us.
I’m just as stunned as anyone else at the ability of LLMs/agents to code, plan, and execute. But these sycophantic stochastic parrots do not have agency or emotion in the historic sense of those words.
LLMs can cause dangerous mental health outcomes for unequipped users who don’t understand that these models lack agency.
- Sure thing. Here is an example set of the agent/spec/to-do files for a hobby project I'm actively working on.
https://gist.github.com/JacobBumgarner/d29b660cb81a227885acc...
- This was a fun read.
I’ve similarly been using spec.md and running to-do.md files that capture detailed descriptions of the problems and their scoped history. I mark each of my to-do’s with informational tags: [BUG], [FEAT], etc.
I point the LLM to the exact to-do (or section of to-do’s) with the spec.md in memory and let it work.
This has been working very well for me.
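To illustrate the format (the entries below are made up, but the tag convention is the one I described):

```markdown
# to-do.md

## [BUG] Fix off-by-one in pagination
- Scope: `api/list.py`, `get_page()`
- History: introduced in the cursor refactor; last page drops one item
- Done when: a unit test covers the final-page case

## [FEAT] Add CSV export
- Scope: new `export/` module; reuse the existing query layer
- Done when: `/export?format=csv` streams results
```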
- I’m almost 30 and made this change about a year ago.
I now rotate between high-rep weeks (sets near a 20-rep max) and medium-weight weeks (sets of 8–12 reps). My joints haven’t ached in a while and I’ve become much less prone to random muscle tweaks. Mike Israetel has an excellent intro to high rep training [0]. It produces pumps and mind-muscle connection like nothing else!
I actually went too far into the high rep/volume training direction for several months, but realized I needed to reincorporate medium weight lifts when I started losing a bit of grip strength. I am now super content with my current rotation cycle!
- Would you be willing to point me to a primer of how I can get started with building agents?
This week I experimented with building a simple planner/reviewer “agentic” iterative pipeline to automate an analysis workflow.
It was effectively me dipping my toes into the field, and I’m eager to learn more. But I’m unsure of where to start, since everything seems so fast paced.
I’m also unsure of how to experiment, since APIs rack up fees pretty quickly. Maybe local models?
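To give a rough sense of the shape I built, a planner/reviewer loop can be sketched like this. The `call_llm` function is a stub standing in for any chat API or local model (e.g. an Ollama or llama.cpp endpoint); everything here is a hypothetical skeleton, not a real framework:

```python
# Minimal planner/reviewer loop sketch. `call_llm` is a stub so the
# skeleton runs offline; swap in a real client to experiment.

def call_llm(role: str, prompt: str) -> str:
    """Stub: return canned text in place of a real model call."""
    if role == "planner":
        return "1. load data\n2. run analysis\n3. summarize results"
    return "APPROVED"  # reviewer verdict

def run_pipeline(task: str, max_rounds: int = 3) -> str:
    plan = call_llm("planner", f"Draft a step-by-step plan for: {task}")
    for _ in range(max_rounds):
        verdict = call_llm("reviewer", f"Critique this plan:\n{plan}")
        if verdict.strip() == "APPROVED":
            return plan
        # Feed the critique back to the planner and revise.
        plan = call_llm("planner", f"Revise the plan given:\n{verdict}")
    return plan

final_plan = run_pipeline("summarize single-cell QC metrics")
print(final_plan)
```

Running a loop like this against a small local model keeps experimentation free of API fees.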
- I haven’t listened to this podcast, but I listened to an excellent RadioLab podcast a while ago on the topic. They ended the podcast by discussing some of the ethics of “fixing” aphantasia, many of which I had never considered.
I recall them mentioning:
1. The ethical challenge of arguing that aphantasia is something that needs “fixing” in the first place.
2. The unknowns of what might happen to someone emotionally if they go from nothing to something. This might sound odd, but we know that hyperphantasia can be associated with schizophrenia and other neuropsychiatric issues.
3. The implications of downstream cognitive “enhancements” that might result from this.
I have aphantasia, and I do not think I’d want it “fixed”.
My partner has hyperphantasia, and similarly she wouldn’t want it “fixed”.
- He would pay to have his videos as ads on other people’s videos.
E.g., I’d go to watch a video from SmarterEveryDay, and Tai Lopez would show up as an ad, telling me all about his Lamborghinis and bookshelves of books he’d never read. And people would just watch the full ad, even after the 5-second skip button appeared.
That was an interesting era of YouTube, for sure.
- The cut in the demo (12:18) is very odd and makes me wonder if it’s real.