Prior to OpenAI CEO Sam Altman's abrupt firing, a group of staff researchers sent a letter to the board of directors warning of a new artificial intelligence algorithm that they said could pose a threat to humanity, according to two sources familiar with the matter who spoke to Reuters.

Project Q* might be the big reason

One person said that OpenAI had sent an internal message to staffers about a project called Q* and a letter to the board before the weekend's events. According to a source with knowledge of the matter, some members of OpenAI believe that Q* (pronounced Q-Star) could be a major breakthrough in the company's quest for artificial general intelligence (AGI). OpenAI defines AGI as autonomous systems capable of completing most economically valuable tasks better than humans. The letter, which had not been previously disclosed, and the discovery of the AI algorithm were significant events that preceded the board's decision to remove Altman, as per the sources.
An internal message from CTO Mira Murati acknowledged the project and the letter to the board. The board fired former CEO Sam Altman over an alleged lack of transparency.
An internal memo sent to OpenAI staff on Saturday clarified that the decision was not due to "malfeasance or anything related to our financial, business, safety, or security/privacy practices," but rather a "breakdown in communication."

OpenAI's new AI model reportedly outperformed grade-school students

The new model was able to solve certain mathematical problems with ease, thanks to vast computing resources.
While it was only capable of performing maths on the level of a grade-school student, the researchers at OpenAI were very optimistic about Q*'s potential. The sources stated that the letter was one of several factors that led to Altman's dismissal, with the board also concerned about the hastened commercialisation of advancements without fully understanding the consequences. Researchers had warned the board about the dangers of such an AI model.
However, they did not specify the exact nature of the safety concerns.
They also highlighted the work of an "AI scientist" team exploring how to optimise AI models for better reasoning and scientific work. A few days after Altman's sacking and Emmett Shear's appointment, more than 700 staff members threatened to resign and join Microsoft in support of the terminated boss, leading to Altman's reinstatement on Tuesday.