Meta is building a massive new language model for AI - and it's giving it away for free

In an unusual gesture for Big Tech, the Meta AI team has built a vast new language model that matches the extraordinary capabilities of OpenAI's groundbreaking GPT-3 neural network, and it is sharing the details of how the model was built and trained.

"We feel that allowing people to critique your work is an important part of the research."

"We wholeheartedly welcome that collaboration," says Joelle Pineau, managing director of Meta AI and a long-time advocate of transparency in how technology is developed.

Meta's move marks the first time that a fully trained large language model has been made available to any researcher interested in studying it.

Many welcomed the news, having worried about how this powerful technology is being developed by small teams behind closed doors.

Emily M. Bender, a computational linguist at the University of Washington and a frequent critic of how language models are produced and applied, says, "I admire the transparency here."

"It's a fantastic step," says Thomas Wolf, chief scientist of Hugging Face, the AI firm behind BigScience, an open-source language model project involving over 1,000 volunteers from across the world. 

He believes that the more open models are, the better.

In the last several years, large language models—powerful algorithms that can create pages of text and simulate human conversation—have become one of the hottest developments in AI. 

They are, however, prone to repeating misinformation, prejudice, and toxic language.

Putting more people to work on the problem should, in principle, help. Language models, however, have remained projects for wealthy tech companies because they require enormous quantities of data and computing power to train.

The rest of the research community, including ethicists and social scientists worried about their misuse, has had to watch from the sidelines. Meta AI says it wants to change that.

As Pineau recalls, "A lot of us were university researchers."

"We recognize the difference between universities and companies in terms of their ability to build these models." 

It has been difficult for scientists to get access to this work.

You want others to scrutinize your work, to improve on it or pick it apart. Pineau believes that getting more people involved will make the work progress more quickly.

For non-commercial use, Meta is making its Open Pretrained Transformer (OPT) model freely available. It is also releasing its code, along with a logbook that documents the training process.

The logbook contains daily updates from team members on the training data, including how and when it was introduced to the model, as well as what worked and didn't. 

In more than 100 pages of notes, the researchers logged every issue, crash, and reboot during a three-month training run from October 2021 to January 2022.

OPT is the same size as GPT-3, with 175 billion parameters (the values in a neural network that are adjusted during training). Pineau says this was deliberate.
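A parameter count like "175 billion" is simply the total number of trainable values in the network: every weight and bias that training adjusts. As a rough illustration (a toy sketch, not Meta's code, and a plain feed-forward net rather than OPT's transformer architecture), one can count the parameters of a small dense network by hand:

```python
def count_params(layer_sizes):
    """Count trainable parameters (weights + biases) of a dense feed-forward net.

    layer_sizes: list of (n_in, n_out) pairs, one per fully connected layer.
    """
    total = 0
    for n_in, n_out in layer_sizes:
        total += n_in * n_out  # entries of the weight matrix
        total += n_out         # entries of the bias vector
    return total

# Toy two-layer network: 4 inputs -> 8 hidden units -> 2 outputs
print(count_params([(4, 8), (8, 2)]))  # prints 58
```

Scaled up to hundreds of layers with dimensions in the thousands, the same bookkeeping yields totals in the billions, which is what makes models of this size so expensive to train.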

The team built OPT to match GPT-3 in both accuracy on language tasks and levels of toxicity. OpenAI has made GPT-3 available as a paid service but has not released the model itself or its code.

According to Pineau, the goal was to give researchers a similar language model to study.

Google, which is exploring the use of large language models in its search products, has also been criticized for a lack of transparency.

In 2020, the company sparked outrage when it fired key members of its AI ethics team after they published a paper highlighting serious flaws in the technology.

Culture clash

So, what is Meta's motivation for doing this? 

After all, Meta is a company that has said little about how the algorithms behind Facebook and Instagram work and has a history of burying unfavorable findings from its own research teams.

Pineau, who has worked for several years to increase openness in AI, is a key reason for Meta AI's unusual approach.

Pineau was instrumental in changing the way research is published at some of the world's most prestigious conferences by proposing a checklist of items that researchers must provide with their findings, including code and instructions about how experiments are carried out. 

She has championed that culture in Meta's AI lab since joining the company in 2017.

Margaret Mitchell, one of the AI ethics researchers Google fired and now at Hugging Face, sees OPT's release as a welcome step. However, she believes that transparency has its limits.

Has the language model been sufficiently tested?

Do the expected benefits balance the expected risks, such as the spread of misinformation or the use of racist and sexist language?

Releasing a large language model to the public, where it is likely to be widely used and to have real effects, comes with responsibilities, she continues.

Mitchell notes that this model will be able to generate harmful content, not only on its own but also through downstream applications that researchers build on top of it.

Meta AI audited OPT to remove some harmful behaviors, but the goal, according to Pineau, is to release a model that researchers can learn from, flaws and all.

There has been a lot of discussion about how to do this in a way that lets us sleep at night, knowing that there are reputational risks and risks of harm, she continues.

She disagrees with OpenAI's assertion that a model can be too dangerous to release, the reason given when GPT-3's predecessor, GPT-2, was initially withheld. "I recognize these models' shortcomings," she says, "but that's not a research mentality."
