7/2/2023

Quotes about decisions

The worst case could be "lights out" for humanity, Altman said in an interview with StrictlyVC. "The bad case-and I think this is important to say-is, like, lights-out for all of us," he said. "I'm more worried about an accidental misuse case in the short term."

Bad dreams

Altman loses sleep thinking that releasing ChatGPT might have been "really bad." "What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT," he said at an Economic Times event on Wednesday. "That maybe there was something hard and complicated in there that we didn't understand and have now already kicked it off."

Strange moves

OpenAI might make strange decisions in the future that won't make investors happy, and Altman therefore won't take the company public anytime soon. "When we develop superintelligence, we're likely to make some decisions that public market investors would view very strangely," he said at an event in Abu Dhabi.

Fake news

A.I. could launch cyberattacks and sow disinformation, he said. When asked by Fox News about what dangerous things A.I. could do, he pointed to A.I. that could design novel biological pathogens. "I think these are all scary." "I'm particularly worried that these models could be used for large-scale disinformation," Altman said in an interview with ABC News. "Now that they're getting better at writing computer code, they could be used for offensive cyberattacks."

Axis of evil

Bad actors could use the technology, and the world has only limited time to prevent it, Altman warned. "We do worry a lot about authoritarian governments developing this," he said in an interview with ABC News. "A thing that I do worry about is we're not going to be the only creator of this technology. There will be other people who don't put some of the safety limits that we put on it. Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it."

An atomic level problem

Altman signed a statement warning that A.I. poses a risk of extinction. "Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," said the statement, signed by other tech leaders such as Elon Musk and Bill Gates.

Risky business

A.I. has the potential to go "quite wrong," Altman fears. "We understand that people are anxious about how it can change the way we live. We are too," he said at a Senate subcommittee hearing in May. "If this technology goes wrong, it can go quite wrong." "The current worries that I have are that there are going to be disinformation problems or economic shocks, or something else at a level far beyond anything we're prepared for," Altman told Lex Fridman on his podcast. "And that doesn't require superintelligence."

Mental decline

"I worry that as the models get better and better, the users can have less and less of their own discriminating thought process," Altman said in his first appearance before Congress. A.I. can provide "one-on-one interactive disinformation" and, he said, potentially impact the 2024 presidential election. "The more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation," Altman said during a Senate hearing. "Given that we're going to face an election next year and these models are getting better, I think this is a significant area of concern."

Frankenstein's monster?

He's a "little bit scared" of his own creation. "We've got to be careful here," Altman told the senators during a committee hearing. He cautioned people who use ChatGPT that it could lie. "The thing that I try to caution people the most is what we call the 'hallucinations problem,'" Altman told ABC News. "The model will confidently state things as if they were facts that are entirely made up."

Out of work

Certain jobs will be wiped out fast by A.I., he said. "I think a lot of customer service jobs, a lot of data entry jobs get eliminated pretty quickly," Altman told David Remnick on the New Yorker Radio Hour. "I think there are people in the world who don't want to work and get fulfillment in other ways, and that shouldn't be stigmatized either."