'It could go quite wrong': ChatGPT inventor Sam Altman admits A.I. could cause 'significant harm to the world' as he testifies in front of Congress
- OpenAI CEO Sam Altman spoke to Congress about the dangers of AI
- Altman said ChatGPT is a 'printing press moment' - and not the atomic bomb
OpenAI CEO Sam Altman urged Congress Tuesday to establish regulations for artificial intelligence, admitting that the technology 'could go quite wrong.'
Lawmakers grilled the CEO for five hours, stressing that ChatGPT and other models could reshape 'human history' for better or worse, likening it to either the printing press or the atomic bomb.
Altman, who looked flushed and wide-eyed during the exchange over the future AI could create, admitted his 'worst fears' are that his technology could cause 'significant harm' to the world.
'If this technology goes wrong, it could go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening,' he continued.
Tuesday's hearing is the first of a series intended to write rules for AI, which lawmakers said should have been done years ago.
Senator Richard Blumenthal, who presided over the hearing, said Congress failed to seize the moment with the birth of social media, allowing predators to harm children - but that moment has not passed with AI.

OpenAI CEO Sam Altman spoke in front of Congress about the dangers of AI after his company's ChatGPT exploded in popularity in just a few months
San Francisco-based OpenAI rocketed to public attention after it released ChatGPT late last year.
ChatGPT is a free chatbot tool that answers questions with convincingly human-like responses.
Senator Josh Hawley said: 'A year ago we couldn't have had this discussion because this tech had not yet burst [into] public [view].
'But [this hearing] shows just how rapidly [AI] is changing and transforming our world.'
Tuesday's hearing aimed not to control AI, but to start a discussion on how to make ChatGPT and other models transparent, ensure risks are disclosed and establish scorecards.
Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee's subcommittee on privacy, technology and the law, opened the hearing with a recording that sounded like him but was actually a voice clone, trained on his floor speeches, reciting remarks written by ChatGPT after he asked the chatbot how he would open the hearing.
The result was impressive, said Blumenthal, but he added: 'What if I had asked it, and what if it had provided an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin's leadership?'
Blumenthal said AI companies should be required to test their systems and disclose known risks before releasing them.


Altman told senators that generative AI could be a 'printing press moment,' but he is not blind to its faults, noting policymakers and industry leaders need to work together to 'make it so'
One issue raised during the hearing has also dominated public discussion - how AI will impact jobs.
'The biggest nightmare is the looming industrial revolution of the displacement of workers.' Blumenthal said in his opening statement.
Altman addressed this concern later in the hearing, predicting that while the technology will eliminate some jobs, it will also create new ones.
'I think it will entirely automate away some jobs, and it will create new ones that we believe will be much better,' he said.
'I believe that there will be far greater jobs on the other side of this, and the jobs of today will get better.'
'There will be an impact on jobs. We try to be very clear about that,' he added.
Also testifying at the hearing was Christina Montgomery, IBM's chief privacy officer, who acknowledged AI will change everyday jobs but said it will also create new ones.
'I am a personal example of a job that didn't exist [before AI],' she said.
The 2024 election was also raised during the hearing by Hawley, who fears AI could sway people's opinions, while Senator Amy Klobuchar voiced concerns about it spreading misinformation.
Altman shared their apprehensions about the upcoming presidential election.
'It's one of my areas of greatest concern - the more general capability of these models to manipulate, to persuade, to provide sort of one-on-one disinformation,' said Altman.
The public uses ChatGPT to write research papers, books, news articles, emails and other text-based work, while many see it as a virtual assistant.
In its simplest form, AI is a field that combines computer science and robust datasets to enable problem-solving.
The technology allows machines to learn from experience, adjust to new inputs and perform human-like tasks.
These systems, which include the sub-fields of machine learning and deep learning, consist of algorithms that seek to create expert systems which make predictions or classifications based on input data.
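The kind of system described above can be illustrated with a short, hypothetical code sketch. The example below is not drawn from OpenAI's software; it simply uses the open-source Python library scikit-learn and its built-in iris flower dataset to show a model 'learning from experience' - fitting a classifier on example data and then making predictions on data it has never seen.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load a small, well-known dataset of flower measurements and species labels
# (illustrative only - not related to ChatGPT or OpenAI's systems)
X, y = load_iris(return_X_y=True)

# Hold back a quarter of the data to test how well the model generalises
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# 'Learning from experience': fit a simple classifier to the training examples
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Make predictions on input data the model has never seen, then measure accuracy
predictions = model.predict(X_test)
print(f"Test accuracy: {accuracy_score(y_test, predictions):.2f}")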
From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible.
Machine learning algorithms also improved and people got better at knowing which algorithm to apply to their problem.
In 1970, MIT computer scientist Marvin Minsky told Life Magazine, 'From three to eight years, we will have a machine with the general intelligence of an average human being.'
And while the timing of the prediction was off, the idea of AI reaching human-level intelligence was not.
ChatGPT is evidence of how fast the technology is growing.


Musk, Wozniak and other tech leaders are among the 1,120 people who have signed the open letter calling for an industry-wide pause on the current 'dangerous race'
In just a few months, it has passed the bar exam with a higher score than 90 percent of human test-takers and achieved 60 percent accuracy on the US Medical Licensing Exam.
Tuesday's hearing appeared to be an attempt to make up for lawmakers' failures with social media - getting a grip on AI before it becomes too large to contain.
Senators made it clear that they do not want industry leaders to pause development - something Elon Musk and other tech tycoons have been lobbying for - but to continue their work responsibly.
Musk and more than 1,000 leading experts signed an open letter published by the Future of Life Institute, calling for a pause on the 'dangerous race' to develop ChatGPT-like AI.
Kevin Baragona, one of the signatories, told DailyMail.com in March that 'AI superintelligence is like the nuclear weapons of software.'
'Many people have debated whether we should or shouldn't continue to develop them,' he continued.
Americans wrestled with a similar dread while developing the atomic bomb - a feeling dubbed 'nuclear anxiety' at the time.
'It's almost akin to a war between chimps and humans,' Baragona told DailyMail.com.
'The humans obviously win since we're far smarter and can leverage more advanced technology to defeat them.
'If we're like the chimps, then the AI will destroy us, or we'll become enslaved to it.'