OpenAI's human-threatening model may be the reason for Altman's sacking

INSUBCONTINENT EXCLUSIVE:
Prior to OpenAI CEO Sam Altman's abrupt firing, a group of staff researchers sent a letter to the board of directors cautioning them about a new artificial intelligence algorithm that had the potential to pose a threat to humanity, according to two sources familiar with the matter. The sources pointed to a project called Q* and a letter sent to the board before the weekend's events.

According to a source with knowledge of the matter, some members of OpenAI believe that Q* (pronounced Q-Star) could be a major breakthrough in the company's quest for artificial general intelligence (AGI). OpenAI defines AGI as autonomous systems that are capable of completing most economically valuable tasks better than humans.

The letter, which had not been previously disclosed, and the discovery of the AI algorithm were significant events that occurred before the board's decision to remove Altman, the sources said.
An internal message from CTO Mira Murati acknowledged the project and the letter to the board. The board fired former CEO Sam Altman over an alleged lack of transparency. An internal memo sent to OpenAI staff on Saturday clarified that the decision was not due to "malfeasance or anything related to our financial, business, safety, or security/privacy practices".
AI model reportedly outperformed grade-school students

The new model was able to solve certain mathematical problems with ease, thanks to vast computing resources.
While it was only capable of performing maths at the level of a grade-school student, the researchers at OpenAI were very optimistic about Q*'s future success. The board was reportedly also concerned about the hastened commercialisation of advancements without fully understanding the consequences. Researchers had warned the board about the dangers of such an AI model; however, they did not specify the exact nature of the safety concerns.
They also highlighted the work of an "AI scientist" team exploring how to optimise AI models for better reasoning and scientific work. A few days after the firing, more than 700 OpenAI employees threatened to quit in support of the terminated boss, leading Altman to be reinstated on Tuesday.