As artificial intelligence finds its way into our phones, cars, homes and lives, making it easier for us to shop, work and live, some of the biggest names in AI got together in Davos, Switzerland, to talk about what they fear when it comes to machine learning.
"The OpenAI-style of model is good at some things, but not good at sort of like a life and death situations," said Sam Altman, CEO of OpenAI.
But then things took an even darker turn.
"We don't want to have a Hiroshima moment,” said Salesforce CEO, Marc Benioff. “We've seen technology go really wrong, and we saw Hiroshima, we don't want to see an AI Hiroshima."
"The notion of an AI Hiroshima makes most of us, I think, think about the military context, and it becomes obvious that we need to have a lot more public conversations about the uses of AI now being proposed and potentially adopted by the military," said Irina Raicu, Markkula Center's head of internet ethics.
And yes, the military is looking at AI. But Raicu says several other AI-related concerns should not get glossed over.
"The ones that are less discussed but should be more are issues of perpetuating bias, or impacting people's privacy, or just using people's work in communications without permission," said Raicu.
On top of all that, there are job concerns. Earlier this week, the International Monetary Fund released a report estimating that up to 40% of all jobs worldwide could be affected by the rise of AI.