Why I’m Feeling the A.G.I. - The New York Times


Metadata

  • Author: Kevin Roose
  • Full Title: Why I’m Feeling the A.G.I. - The New York Times
  • Category: articles
  • Summary: The author argues that we should take the progress toward artificial general intelligence (A.G.I.) seriously, as A.I. systems are rapidly improving and beginning to outperform humans in various tasks. He believes that A.G.I. may be announced within the next few years, leading to significant changes in our world. Despite skepticism from some, the advancements in A.I. are evident, and many in the industry are preparing for its arrival.
  • URL: https://www.nytimes.com/2025/03/14/technology/why-im-feeling-the-agi.html/

Highlights

  • the employees and executives of the leading A.I. labs — tend to be the most worried about how fast it’s improving.
  • when I was covering the rise of social media, nobody inside Twitter, Foursquare or Pinterest was warning that their apps could cause societal chaos.
  • The leading A.I. companies are actively preparing for A.G.I.’s arrival, and are studying potentially scary properties of their models, such as whether they’re capable of scheming and deception, in anticipation of their becoming more capable and autonomous.
  • just as persuasive as expert opinion is the evidence that today’s A.I. systems are improving quickly, in ways that are fairly obvious to anyone who uses them.
  • In A.I., bigger models, trained using more data and processing power, tend to produce better results, and today’s leading models are significantly bigger than their predecessors.
  • “reasoning” models, which are built to take an additional computational step before giving a response.
  • Ezra Klein recently wrote that the outputs of ChatGPT’s Deep Research, a premium feature that produces complex analytical briefs, were “at least the median” of the human researchers he’d worked with.
  • software engineers tell me that A.I. does most of the actual coding for them, and that they increasingly feel that their job is to supervise the A.I. systems.
  • Most of the advice I’ve heard for how institutions should prepare for A.G.I. boils down to things we should be doing anyway: modernizing our energy infrastructure, hardening our cybersecurity defenses, speeding up the approval pipeline for A.I.-designed drugs, writing regulations to prevent the most serious A.I. harms, teaching A.I. literacy in schools and prioritizing social and emotional development over soon-to-be-obsolete technical skills. These are all sensible ideas, with or without A.G.I.
    • Note: Some interesting business ideas in here.
  • A bigger risk, I think, is that most people won’t realize that powerful A.I. is here until it’s staring them in the face — eliminating their job, ensnaring them in a scam, harming them or someone they love.