Andreas Horn’s Post

🔥 𝗠𝗨𝗦𝗧-𝗪𝗔𝗧𝗖𝗛: 𝗜𝗹𝘆𝗮 𝗦𝘂𝘁𝘀𝗸𝗲𝘃𝗲𝗿 𝗶𝘀 𝗼𝗻𝗲 𝗼𝗳 𝘁𝗵𝗲 𝗰𝗹𝗲𝗮𝗿𝗲𝘀𝘁 𝗺𝗶𝗻𝗱𝘀 𝗶𝗻 𝗔𝗜 - 𝗮𝗻𝗱 𝘁𝗵𝗶𝘀 𝗱𝗶𝘀𝗰𝘂𝘀𝘀𝗶𝗼𝗻 𝗮𝗱𝗱𝘀 𝗮 𝗽𝗼𝘄𝗲𝗿𝗳𝘂𝗹 𝗽𝗲𝗿𝘀𝗽𝗲𝗰𝘁𝗶𝘃𝗲: "𝘀𝗰𝗮𝗹𝗶𝗻𝗴 𝗮𝗹𝗼𝗻𝗲 𝘄𝗶𝗹𝗹 𝗻𝗼𝘁 𝗴𝗲𝘁 𝘂𝘀 𝘁𝗼 𝗔𝗚𝗜"

If you are into AI and care about where it is truly headed - not the hype cycle - grab a coffee and give this a listen.

𝗔 𝗳𝗲𝘄 𝗵𝗶𝗴𝗵𝗹𝗶𝗴𝗵𝘁𝘀 𝘁𝗵𝗮𝘁 𝘀𝘁𝗼𝗼𝗱 𝗼𝘂𝘁: ⬇️

1. The Scaling Era is ending → Ilya says we no longer get automatic breakthroughs from more data plus more compute. The next leap requires new science and new learning principles.

2. Benchmarks lie → Models can score extremely well on benchmarks yet still make basic, repeated mistakes in deployment (e.g., introducing and re-introducing the same bugs). His analogy is striking: these models resemble students who memorised proof techniques rather than ones who understood the concepts.

3. Humans learn with extreme efficiency → A person learns to drive in a few hours; robots need millions of simulations. We are still missing the principle that makes human learning work.

4. The alignment target: care for sentient life → Human values are messy and hard to define, so Ilya suggests a simpler target: build AI that genuinely cares about sentient life. If it cares about itself and sees humans as fellow sentient beings, alignment becomes more natural.

5. Exponential change feels boring → We are living through the biggest technological shift in human history, with billions pouring into AI and frontier models landing every few months. But because progress happens behind the scenes and humans adapt instantly to new baselines, it all feels… normal. That is why a slow takeoff can surprise us - it does not look like science fiction until it is already here.

6. Research taste matters → Ilya explains that real breakthroughs are not found by chasing benchmarks. They come from strong intuition about what is beautiful, simple, and biologically plausible. That “taste” guides researchers to the right ideas long before there is data to prove them.
And the people who bet early on those ideas end up defining the future.

𝗧𝗟;𝗗𝗥: Scaling took us here. Research takes us further.

Link to the full interview: https://lnkd.in/d5WxNeDt

↓ 𝗜𝗳 𝘆𝗼𝘂 𝗮𝗽𝗽𝗿𝗲𝗰𝗶𝗮𝘁𝗲 𝗔𝗜 𝗶𝗻𝘀𝗶𝗴𝗵𝘁𝘀 𝘄𝗶𝘁𝗵 𝘀𝘂𝗯𝘀𝘁𝗮𝗻𝗰𝗲, 𝘆𝗼𝘂 𝘄𝗶𝗹𝗹 𝗲𝗻𝗷𝗼𝘆 𝗺𝘆 𝗻𝗲𝘄𝘀𝗹𝗲𝘁𝘁𝗲𝗿 𝗮𝘀 𝘄𝗲𝗹𝗹: https://lnkd.in/dbf74Y9E


Andreas Horn, great share. “Scaling took us here, research takes us further” hits hard. Feels like the real frontier now is new learning principles + better inductive biases, not just bigger clusters.

Really solid breakdown. The part about scaling hitting a ceiling was especially interesting.

Scaling built the foundation; new science will unlock what comes next. Andreas Horn

AI won’t get smarter just by scaling. It needs to actually learn and adapt like we do.


The next breakthroughs will come from new architectures and learning principles, not just more scale. Benchmarks hide this, but real systems reveal it quickly.


Really interesting discussion! The way humans learn is amazing. If we can figure that out for AI, we’ll make huge strides!


Ilya’s point about AI needing to genuinely care about sentient life really hits home. It’s a reminder that behind every algorithm, there’s a deeper purpose we shouldn’t lose sight of. Excited to see where this next chapter takes us.


The perspective on AI alignment and how we define sentient life is also something worth pondering. Andreas Horn


Thanks for sharing these key takeaways Andreas Horn. Very insightful breakdown.


Andreas Horn, if scaling is no longer enough, what kind of research directions should we be watching or learning more about?


