
Huawei Develops PanGu-Alpha to Fill Missing Link in GPT-3

The rise of GPT-3 over the last year has been phenomenal. Largely through OpenAI's API, use cases are everywhere, from writing emails, adverts, and marketing copy to determining website layouts, producing complex Python code, and more. There has, though, been one gaping hole in OpenAI's model despite the nearly 50 terabytes of data it draws from: most of its applications have been predominantly in English.

Huawei has, over the last few months, sought to rectify that problem by developing its own Chinese-language dataset and a pseudo-GPT-3 of its own. This week Huawei unveiled its solution, known as PanGu-Alpha (styled PanGu-α): a 750-gigabyte model that currently contains up to 200 billion parameters, 25 billion more than GPT-3, trained on 1.1 terabytes of Chinese-language text including ebooks, news, social media posts, encyclopedias, and web pages.

The company trained the system on its Ascend 910 AI chipset, which delivers 256 teraflops per chip, using modules built for high-throughput computing. To assemble the training corpus, the PanGu-α team analyzed about 80 terabytes of raw data spanning the Common Crawl dataset, public datasets, and open web platforms.

Huawei says the model delivers "superior" performance in Chinese-language tasks such as question answering, text summarization, and dialogue generation. The company says it is exploring ways to give nonprofit research institutes and companies access to the pretrained PanGu-α models, either by releasing the code, model, and dataset or by offering APIs.
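If Huawei does release pretrained checkpoints in a format compatible with common open-source tooling, using one for the kind of question-answering task described above could look something like the sketch below. This is illustrative only: the model identifier huawei/pangu-alpha and the Hugging Face-style loading interface are assumptions, not a confirmed release.

```python
# A minimal sketch of querying a released PanGu-α checkpoint, assuming it
# were published in a Hugging Face-compatible format. The model ID below
# is hypothetical; Huawei has not confirmed a distribution channel.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "huawei/pangu-alpha"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)

# Chinese-language prompt: "The capital of China is"
prompt = "中国的首都是"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short continuation, the kind of question-answering /
# dialogue-generation task Huawei says the model targets.
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```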
