> the goal being to find a novel way to get neural networks to generalize from a few samples
Remove "neural networks". Most ARC competitors aren't using NNs or even machine learning. I'm fairly sure NNs aren't needed here.
> why is the MindsAI solution accepted?
I hope you're not serious. They obviously haven't broken any rule.
ARC is a benchmark. The point of a benchmark is to compare differing approaches. It's not rigged.
I also don't understand why MindsAI is included. ARC is supposed to grade LLMs on their ability to generalize, i.e. the higher the score, the more useful they are. If MindsAI scores twice as high as the current SOTA, then why are we wasting our $20 on inferior LLMs like ChatGPT and Claude when we could be using the one true god, MindsAI?
If the answer is "because it's not a general-purpose LLM", then why is ARC marketed as the ultimate benchmark, the litmus test for AGI? (I know, I know, passing ARC doesn't mean AGI, but the converse holds: a system that can't pass ARC isn't AGI.)
ARC was never supposed to grade LLMs! I designed the ARC format back when LLMs weren't a thing at all. It's a test of AI systems' ability to generalize to novel tasks.
I believe the MindsAI solution does feature novel ideas that do indeed lead to better generalization (test-time fine-tuning). So it's definitely the kind of research that ARC was supposed to incentivize -- things are working as intended. It's not a "hack" of the benchmark.
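For readers unfamiliar with the term: test-time fine-tuning means adapting the model on each task's own demonstration pairs at inference time, before predicting the test output. A toy sketch of that idea (this is purely illustrative and not MindsAI's actual method, which fine-tunes a real neural network; here the "model" is just a color lookup table "fitted" on the task's demos):

```python
# Toy illustration of test-time adaptation on an ARC-like task.
# Hypothetical setup: the task's hidden rule is a per-color recoloring,
# the "model" is a color->color lookup table, and "fine-tuning" means
# fitting that table on the task's own demo pairs at inference time.

def fit_color_map(demos):
    """Fit a color->color mapping from (input_grid, output_grid) demos."""
    mapping = {}
    for inp, out in demos:
        for row_in, row_out in zip(inp, out):
            for a, b in zip(row_in, row_out):
                mapping[a] = b
    return mapping

def predict(mapping, grid):
    """Apply the fitted mapping cell by cell; unseen colors pass through."""
    return [[mapping.get(c, c) for c in row] for row in grid]

# One demonstration pair encodes the rule 1 -> 3, 2 -> 4.
demos = [([[1, 2], [2, 1]], [[3, 4], [4, 3]])]
mapping = fit_color_map(demos)          # "fine-tune" on this task only
print(predict(mapping, [[2, 2], [1, 1]]))  # [[4, 4], [3, 3]]
```

The point is that all the fitting happens inside the task, on its few demo pairs, rather than relying on a frozen model trained once offline.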
And yes, they do use a lot of synthetic pretraining data, which is much less interesting research-wise (no progress on generalization that way...) but ultimately it's on us to make a robust benchmark. MindsAI is playing by the rules.
I have one question regarding your ARC Prize competition: the current leaderboard leader (MindsAI) seems not to be following the original intention of the competition (they fine-tune a model on millions of tasks similar to the ARC tasks). IMO this is against the goal/intention of the competition, the goal being to find a novel way to get neural networks to generalize from a few samples. You can solve almost anything by brute-forcing it (fine-tuning on millions of samples). If you agree with me, why is the MindsAI solution accepted?