

Source : PortMac.News | Street | News Story:

Inside OpenAI, a rift between billionaires and altruists
A chaotic battle has played out - on one side some guys who may hold the keys to the most advanced generative AI ever, on the other a bunch of 'Sell your mother for a buck' entrepreneurs.

News Story Summary:

The tech world watched as the board of OpenAI, the company behind ChatGPT, abruptly sacked its CEO only to bring him back and dump half the board six days later.

At the heart of the saga appears to have been a cultural schism between the profitable side of the business, led by CEO Sam Altman, and the company's non-profit board.

Altman, a billionaire Stanford drop-out who founded his first tech company at the age of 19, had overseen the expansion of OpenAI including the runaway success of ChatGPT.

But according to numerous accounts from company insiders, the safety-conscious board of directors had concerns that the CEO was on a dangerous path.

The drama that unfolded has exposed an inevitable friction between business and public interests in Silicon Valley, and raises questions about corporate governance and ethical regulation in the AI race.

How an altruistic AI start-up became a successful tech juggernaut:

Before its brainchild bot became a household name, OpenAI began as a non-profit research centre aiming to "build safe artificial general intelligence for the benefit of humanity".

Its investors, including Altman as well as Elon Musk and a string of venture capitalists, initially pledged $US1 billion in the name of a project that would be "free from financial obligations", allowed to focus solely on exploring this new technology and sharing their findings with the world.

It was seen by many as a collaborative and exciting step forward for the competitive industry.

By 2019, OpenAI needed money for the expensive computing power required to train and repeatedly test its AI models.

So a commercial arm of the research organisation was established under a hybrid for-profit, non-profit model and Sam Altman became its CEO.

At the time, the OpenAI announcement said, "no pre-existing legal structure we know of strikes the right balance", so the founders created a "capped-profit" company that would be governed by the not-for-profit board.

It was spelled out in black and white: shareholders could only ever earn a certain amount, and any value beyond that would be reinvested into the OpenAI machine.

"We sought to create a structure that will allow us to raise more money — while simultaneously allowing us to formally adhere to the spirit and the letter of our original OpenAI mission as much as possible," Ilya Sutskever, one of the company's co-founders, told Vox in 2019.

In this story, Ilya Sutskever becomes a name to remember.

From 2019, Altman became the face of generative AI. OpenAI's mission was referenced repeatedly in company documents, in interviews and on stage as Altman travelled the world, warning about the power generative AI would one day have.

He was both raising the alarm, and raising OpenAI's commercial value.

"I think what they've done is taken a very great technology and created a lot of drama around it from the very beginning," Steven Kelts, a lecturer on ethics in AI at Princeton University, told the ABC this week.

With the release of ChatGPT and its widespread popularity, OpenAI's value skyrocketed, reaching an estimated peak of nearly $US90 billion as employees sought to sell off their shares.

But this very valuable organisation was ultimately controlled by a board of directors who had been brought together to uphold the original mission of making technology that was good for the world, not making money.

And on that board was an Australian scientist who is now believed to be central to the Succession-style drama that came next.

A rift emerges and the board turns on its CEO:

Perhaps an organisational fabric woven together by altruists, billionaires and millionaire altruists was always bound to unravel, although few may have predicted it would happen in the space of a single week.

Last Friday afternoon, OpenAI's board of directors made the shock announcement that it was firing Altman as CEO.

Altman reportedly found out via a Google Meet, just hours before the statement went public.

Fellow co-founder Greg Brockman was also dropped from the board but allowed to stay on as president. But he soon announced he was leaving too, adding that he and Altman had been totally blindsided.

The key decision makers had been chief scientist and co-founder Ilya Sutskever along with the three non-employee board members: Adam D'Angelo, Helen Toner and Tasha McCauley.

Toner is an Australian who studied at the University of Melbourne before forging a career in the safe use of emerging technology. It was that cause that brought her to OpenAI and then reportedly put her at odds with its rockstar founder.

It wasn't long after Altman and Brockman were gone that three senior researchers followed suit, and soon the company was facing a bigger problem.

Over the weekend, more and more employees were threatening to quit, while major investors pressed the OpenAI board to reverse its decision. Altman even appeared back in the office on Sunday, sporting a visitor's pass.

Microsoft CEO Satya Nadella, whose company has bankrolled OpenAI to the tune of $US13 billion, was reportedly involved in the discussions to bring Altman back into the fold.

Earlier in the week, he told US-based technology journalist and commentator Kara Swisher he was not consulted on the original decision to remove Altman, despite being a major investor.

On her podcast, Swisher asked Nadella: "They didn't consult you, correct?"

"Yeah. I mean, you know, we were mostly working with Sam and the management team, and the for-profit entity and we didn't have any relationship with the nonprofit board which has the governance of this entity. That's correct," he said.

By Sunday, a deal to bring Altman back had not been made and OpenAI named a new interim CEO, Twitch co-founder Emmett Shear.

In a tweet announcing his acceptance, Shear promised to hire an independent investigator to look into the process that led to Altman's ousting.

The following day, Nadella announced Altman and Brockman would be joining a new advanced research team at Microsoft.

OpenAI was in crisis, with hundreds of employees threatening to quit and join their former leader. Nadella made clear there would be space for them at Microsoft.

"We want to make sure that whatever Sam does, he will definitely do it with us," he told Swisher.

"The thing that we didn't want is the team to get splintered. And the mission to get jeopardised."

Remember Sutskever? 

The only employee board member to have voted to oust Altman, he tweeted his regret over the decision. A petition began circulating demanding the board directors step down.

Within two days, their demands were met. 

In the early hours of Wednesday morning, OpenAI announced Altman was officially back as CEO and that the majority of the board was gone.

McCauley and Toner were out, replaced by two new members: Bret Taylor and Larry Summers.

Sutskever was also dropped from the board, but kept his job as chief scientist. He declared his indescribable happiness at the return of his co-founders, with the ousted pair equally excited to "get back to coding".

Less than 72 hours after accepting the role, Shear announced he was over the moon that Altman was returning.

The ethical divide around AI that split the board

While nobody at OpenAI has publicly revealed many details about the reasons behind this week's chaotic movements, there are plenty of clues.

Several observers have pointed at a cultural mismatch between the profitable side of the business, led by Altman, and the board's altruistic mission to put public safety at the forefront of its endeavours — a schism that had been growing for years.

The most intriguing example of this split was revealed on Thursday when Reuters dropped a report about a secret letter and an OpenAI project called Q*.

Ahead of Altman's ousting, several staff researchers wrote to the board of directors warning of a powerful discovery that they said could threaten humanity, according to Reuters.

The discovery is believed to be something called project Q*, or Q star.

Some at OpenAI believe Q* could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), according to the report.

OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Two sources told Reuters the letter raising concerns about project Q* was one factor, among a longer list of the board's grievances, that led to Altman's firing.

Also on the list were concerns over commercialising advances in the technology before fully understanding its consequences.

"It's a massive fight over two divisions of the future of AI safety," Professor Kelts told the ABC.

"My analysis would be more that it's go-fast and go-slow. Altman is team go-fast.

"They actually are both concerned about risk, but team go-fast thinks that if you put these products out there, the quicker you get people basically red-teaming them on their own, [the better].

"So they think they're putting out safe products, but they expect people to try to break them.

"Team go-slow is more like, 'No, let's actually, internal to OpenAI, do the research on safety so that we never put out a product with guardrails that can be broken.'"

It's a divide that seems to have been reflected in comments attributed to anonymous company insiders across much of the reporting on the unfolding chaos.

According to the New York Times, one person at a company meeting on the Friday afternoon that Altman was dismissed recounted Sutskever arguing the dismissal was "necessary to protect OpenAI's mission of making artificial intelligence beneficial to humanity".

The Financial Times also reported internal concerns about Altman's efforts to raise $US100 billion from investors to establish a new microchip company.

In its statement announcing the change in leadership, the board claimed Altman had been "not consistently candid", which many have interpreted as suggesting he was not entirely truthful.

The day before he was ousted, Altman had been preaching on the transformative capabilities of generative AI at the APEC summit.

"Just in the last couple of weeks, I have gotten to be in the room, when we ... push the sort of the veil of ignorance back and the frontier of discovery forward," he told the audience listening to a panel with executives from Google and Meta.

In the same discussion, he went on to say that heavy regulation wasn't necessary just yet.

"It's a hard message to explain to people that current models are fine," he said.

"We don't need heavy regulation here. Probably not even for the next couple of generations.

"But at some point when the model can do the equivalent of a whole company, and then a whole country and then the whole world, maybe we do want some collective global supervision of that and some collective decision-making."

It struck a tone seemingly at odds with Altman's previous public statements on the subject of regulation and that of some members of the OpenAI board.

Swisher reported that the key tension was between Altman and Toner, whose position on the board had been focused on safety and "deep thinking around the long-term risks" of AI.

Toner had recently co-authored a paper on AI policy that discussed private sector signalling, which some have interpreted as suggesting the release of ChatGPT sparked "a race to the bottom" among OpenAI's competitors.

A Financial Times article published this week included comments from an interview with Toner last month, about the ethical conundrum of AI executives deciding their own destinies.

"I think for the most part [executives of AI companies] are taking the risks seriously and sort of wanting to do the right thing.

"At the same time, they're obviously the ones building these systems. They're the ones who potentially stand to profit from them," she was quoted as saying.

"So I think it's really important to make sure that there is outside oversight not just by the boards of the companies but also by regulators and by the broader public.

"Even if their hearts are in the right place, we shouldn't rely on that as our primary way of ensuring they do the right thing."

Original story by Lucy Sweeney & Emily Clark



All Content & Images Copyright Portmac.news & Xitranet© 2013-2024 | Site Code : 03601