OpenAI’s Employees Saved Sam Altman & Secured His Return, Says Report
Dec 30, 2024 9:06 AM

This is not investment advice. The author has no position in any of the stocks mentioned. Wccftech.com has a disclosure and ethics policy.

As the dust settles on the saga of board and executive conflict at the artificial intelligence startup OpenAI, more details about behind-the-scenes events are surfacing. OpenAI, the firm behind the popular chatbot ChatGPT, took the media by surprise in November after a short blog post announced the ouster of its chief, Sam Altman. While this alone was sufficient to generate a plethora of stories, Altman, who initially agreed to step down, waged a comeback that saw him negotiate his return to the firm amid widespread support from employees, according to fresh details shared in a report by the New York Times.

Sam Altman's Return To OpenAI Came On The Back Of Strong Employee Support, As Mass Resignation Threats Risked Collapsing The Company

According to the report, Mr. Altman's stunning return to OpenAI came after he marshaled support for his role at the company at his house in San Francisco. The details come courtesy of the Times's interviews with dozens of people. While matters came to a head in November, trouble at OpenAI had started brewing two months earlier, in September, due to a conflict between Mr. Altman and the former OpenAI board over filling vacant board seats.

Altman's decision to create a for-profit OpenAI subsidiary, along with his move to elevate an OpenAI researcher to the same corporate rank as the firm's chief scientist, Ilya Sutskever, deepened mistrust between stakeholders within the firm. While OpenAI was founded in 2015 as a counter to DeepMind, the A.I. division owned by Alphabet subsidiary Google, Altman took the top role at the company in 2019.

Four years later, the board, wary of Mr. Altman using his professional network to reverse a potential removal, quietly voted to remove him in an online meeting before informing him. Altman's initial reaction was acceptance, but, encouraged by others, he mounted a successful comeback. While he and the board initially agreed to work together to pick new members, the negotiations fell apart.

However, Altman, backed by an offer from Microsoft, was confident that he could force the board's hand and become OpenAI's CEO again after hundreds of employees signed a letter questioning the board's motives and threatening to resign if their leader was not reinstated.

Fresh details about Altman's ouster and return come amid other reports suggesting that the FTC is now interested in the nature of Microsoft's investment in OpenAI. The Redmond, Washington-based technology giant does not own a controlling stake in the firm, and OpenAI's nonprofit structure removes acquisition or investment reporting requirements.

OpenAI's large language model (LLM) based artificial intelligence software is widely believed to lead the pack in the global A.I. industry. LLMs such as GPT are dubbed transformer models by researchers and industry members, after the neural network architecture they are built on, and an A.I. model's 'computational prowess' is often equated with the number of parameters it carries. For instance, Tesla, which uses machine learning for its semi-autonomous driving software system called Autopilot, revealed last year that its cars are powered by "Neural networks with 1 billion parameters, completing 144 trillion operations per second."
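
For a sense of where such parameter figures come from, here is a minimal sketch of how a transformer's parameter count follows from its width and depth. The accounting is a common back-of-the-envelope approximation (attention projections plus a 4x-wide feed-forward block per layer, plus an embedding table; biases and layer norms ignored), not any vendor's disclosed architecture, and the example dimensions are GPT-3's published configuration rather than GPT-4's undisclosed one.

```python
# Back-of-the-envelope parameter count for a decoder-only transformer.
# The per-layer accounting below is a common approximation (it ignores
# biases and layer norms), not any vendor's disclosed architecture.

def transformer_params(d_model: int, n_layers: int, vocab_size: int) -> int:
    """Estimate total parameters from model width, depth, and vocabulary size."""
    attention = 4 * d_model * d_model            # Q, K, V, and output projections
    feed_forward = 2 * d_model * (4 * d_model)   # up- and down-projections, 4x width
    per_layer = attention + feed_forward
    embeddings = vocab_size * d_model            # token embedding table
    return n_layers * per_layer + embeddings

# GPT-3's published configuration lands close to its 175-billion figure;
# GPT-4's dimensions are not public, so they are not modeled here.
print(f"{transformer_params(d_model=12288, n_layers=96, vocab_size=50257):,}")
# -> 174,563,733,504 (~175B)
```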

While OpenAI has not publicly shared details about the number of parameters that power its latest product, GPT-4, media reports have claimed that GPT-4 has 1.8 trillion parameters across 120 layers. If true, this figure places it at the top of the global A.I. food chain, making it larger than Google's Generalist Language Model (GLaM). According to Google, GLaM's full version has 1.2 trillion parameters across 32 layers. The A.I. does not use all of these parameters simultaneously when producing outputs, or 'inferences.' Instead, it activates only 97 billion of them per prediction.
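
The mechanism behind that sparse activation is a mixture-of-experts design: a lightweight router sends each token to a few expert sub-networks, leaving the rest of the model's weights idle for that prediction. The toy sketch below illustrates the routing idea; the top-2-of-64 routing matches GLaM's published description, but the tiny dimensions and the single-matrix 'experts' are illustrative simplifications, not Google's implementation.

```python
import numpy as np

# Toy mixture-of-experts router: each token activates only top_k of
# n_experts sub-networks, so most parameters stay idle per prediction.
# Sizes here are illustrative; a real expert is a full feed-forward block.
rng = np.random.default_rng(0)
n_experts, d_model, top_k = 64, 8, 2

experts = rng.standard_normal((n_experts, d_model, d_model))  # one matrix per "expert"
router = rng.standard_normal((d_model, n_experts))            # scores tokens against experts

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a token vector to its top-k experts and blend their outputs."""
    scores = x @ router                        # one gating score per expert
    top = np.argsort(scores)[-top_k:]          # indices of the k highest-scoring experts
    gates = np.exp(scores[top])
    gates /= gates.sum()                       # softmax over the winners only
    # Only top_k of the 64 expert matrices are ever touched: that is the sparsity.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(d_model)
print(moe_forward(token).shape)                # (8,) -- same output shape, a fraction of the compute
```

Scaled up, this is how a 1.2-trillion-parameter model can produce each token with only about 97 billion parameters' worth of computation.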
