OpenAI’s CEO, Sam Altman, recently revealed how much he leaned on ChatGPT while caring for his newborn baby. In an interview on the official OpenAI podcast, Altman admitted he doesn’t know how he would have managed those first few weeks without constantly asking his AI chatbot for help. It’s kinda wild to think about.
Altman started by acknowledging that people have been caring for babies long before AI was a thing, but then he quickly confessed, “I don’t know how I would’ve done that” without ChatGPT’s assistance. He mentioned turning to the AI repeatedly during those early days, instead of the usual books, family advice, or even Google searches. It’s funny and a bit surprising coming from a tech titan, but also kinda relatable if you think about how much we rely on tech for everything these days.
He’s head over heels for AI, saying he spends a lot of time thinking about how his kids will use AI in the future. According to Altman, his children might never be smarter than AI itself, but they’ll grow up far more capable than previous generations, able to do things we can’t even imagine yet. Whoa, that’s a bold claim!
But here’s a question: Will AI make people more capable, or will it just do the work for them? If AI writes everything from day one, will people ever learn to write for themselves? This raises some tricky questions about dependency and skill loss that Altman’s super-optimistic view doesn’t address.
He even called the current era “prehistoric” compared to what’s coming, which is a bit odd since prehistory means before any human records, but you get his point — he sees AI transforming everything in ways we can’t fully grasp yet. Still, it’s worth noting that today’s AI models are trained largely on pre-AI, human-made data, so it’s not like AI just popped out of nowhere.
One big challenge facing AI right now is chatbot contamination. Since ChatGPT launched in 2022, synthetic data produced by chatbots has been mixing into the training pools for new AI models. This could make future AI less accurate and less dependable, a problem researchers call model collapse. Some experts say fixing this issue might be too expensive or even impossible. Yikes.
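To see why that feedback loop worries researchers, here’s a toy simulation (my own illustration, not anything from the article or OpenAI’s actual pipeline): fit a simple Gaussian “model” to some data, sample from it, and train the next generation only on those samples. Over many generations, the spread of the data tends to shrink — the model slowly forgets the diversity of the original data.

```python
import random
import statistics

# Toy sketch of the "model collapse" feedback loop. Each generation's
# "model" is just a Gaussian fit to samples produced by the previous
# generation; the parameters here are hypothetical.
random.seed(42)

def run_generations(n_generations=500, sample_size=20):
    # Generation 0 trains on real, diverse "human" data.
    data = [random.gauss(0, 1) for _ in range(sample_size)]
    sigmas = []
    for _ in range(n_generations):
        mu = statistics.fmean(data)      # fit the "model" to current data
        sigma = statistics.stdev(data)
        sigmas.append(sigma)
        # The next generation sees only synthetic output, never real data.
        data = [random.gauss(mu, sigma) for _ in range(sample_size)]
    return sigmas

sigmas = run_generations()
print(f"spread at gen 1: {sigmas[0]:.3f}, at gen 500: {sigmas[-1]:.3f}")
```

Each refit amplifies small sampling errors, so the simulated “model” drifts away from the original distribution — which is the gist of the contamination worry, in miniature.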
Altman’s views come off as one-sided, painting a picture where everything before AI is clunky and everything after is perfect. But anyone who’s used chatbots knows they have obvious flaws, like hallucinating facts or misunderstanding questions. It’d be nice if his enthusiasm included some acknowledgment of those bumps in the road.