In a November 23 report, Reuters cited two sources as saying that, ahead of the November 17 firing of Sam Altman, co-founder and CEO of OpenAI, several researchers had written to the company's board warning of a significant AI breakthrough. According to these sources, the previously unreported letter was one factor in a longer list of grievances that led to the board's decision to fire Altman.
The researchers who wrote and sent the letter did not immediately respond to a request for comment.
Altman made a triumphant comeback late on November 21, after more than 700 OpenAI employees threatened to quit and join Microsoft along with the fired CEO. This ended nearly a week of turmoil with a series of unexpected developments at OpenAI, one of the most prominent AI research companies in the world today and owner of the popular application ChatGPT.
Mr. Altman at an APEC event in the US on November 16.
According to one of the sources, longtime OpenAI executive Mira Murati mentioned a project called Q* (pronounced "Q Star") to employees on November 22 and said a letter about it had been sent to the company's board of directors before last weekend's upheaval.
After this story was reported, an OpenAI spokesperson said Ms. Murati had told staff what the media was about to report, but declined to comment on the accuracy of the information.
One of the sources revealed that OpenAI has made progress on Project Q*. Some people at the company believe the project could be a breakthrough in OpenAI's pursuit of what is known as artificial general intelligence (AGI), which the company defines as AI systems that are smarter than humans.
Given vast computing resources, the new model was able to solve certain mathematical problems, according to the source. Although it performs math only at a grade-school level, acing such tests made researchers very optimistic about Q*'s future success.
Researchers consider mathematics a frontier in the development of generative AI. Today's generative AI is good at writing and translating between languages, but its answers to the same question can vary widely. Mastering mathematics, where there is only one right answer, would imply that AI can reason in a way closer to human intelligence. AI researchers believe this capability could be applied to novel scientific research.
Unlike a calculator, which can perform only a limited set of operations, AGI can generalize, learn, and understand problems. In their letter to the OpenAI board, the researchers flagged both the power and the potential dangers of the AI, according to the sources. Computer scientists have long debated the risks posed by superintelligent machines, including whether they might decide that destroying humanity is in their interest.
Against that backdrop, Mr. Altman led the effort to make ChatGPT one of the fastest-growing software applications in history and to attract the investment, and the computing resources, needed from Microsoft to move closer to AGI.
In addition to announcing a slew of new tools at an event this month, Mr. Altman told world leaders in San Francisco last week that he believes AGI is within reach.
A day later, OpenAI's board fired Mr. Altman.