AI 360: 08/03/2021. A Chinese PLM, Multi-modal Neurons, Productionising ML/DL, PyTorch 1.8 and SEER

For the full experience, and for links to all referenced content, visit our website.

Alibaba announce M6
Alibaba announce M6, the Multi-Modality to Multi-Modality Multitask Mega-transformer. It is the largest Chinese pretrained (multi-modal) language model, trained on a large corpus of images and 292GB of text. The data was collected from a wide variety of sources, such as online encyclopedias, crawled webpages and e-commerce stores (such as Alibaba). They introduce a 10B-parameter model and a 100B-parameter model, and demonstrate that its multi-task capabilities allow the model to perform very well across a large selection of tasks, including text-to-image synthesis.

OpenAI show multi-modal neuron behaviour in CLIP
OpenAI probe CLIP to show us some extremely interesting behaviours and results. In humans, specific neurons in the brain are activated when certain (well-known) people are shown to us, regardless of whether the stimulus is a photograph, a drawing, or the person's name. The multi-modal asp
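The multi-modal neuron idea can be illustrated with a toy sketch (this is not OpenAI's actual probing code, and the embeddings below are hand-made for illustration): a single unit, modelled here as a weight vector, responds to one concept no matter which modality the input came from.

```python
# Toy model of a "multi-modal neuron": a weight vector that fires strongly
# whenever the input embedding encodes its concept, whether that input was
# derived from a photo, a drawing, or text. All vectors are illustrative.

def dot(u, v):
    """Activation of a linear unit: dot product of weights and input."""
    return sum(a * b for a, b in zip(u, v))

# Hypothetical 4-d embeddings of the same person in three modalities,
# plus one unrelated input for contrast.
photo_of_person   = [0.9, 0.1, 0.0, 0.2]
drawing_of_person = [0.8, 0.2, 0.1, 0.1]
name_of_person    = [0.85, 0.0, 0.2, 0.1]
unrelated_image   = [0.0, 0.9, 0.8, 0.1]

# The neuron's weights point along the "concept" direction of the space.
neuron_weights = [1.0, 0.0, 0.0, 0.0]

for label, emb in [("photo", photo_of_person),
                   ("drawing", drawing_of_person),
                   ("name", name_of_person),
                   ("unrelated", unrelated_image)]:
    print(label, round(dot(neuron_weights, emb), 2))
```

In this toy setup the unit activates strongly for all three modality variants of the concept and stays near zero for the unrelated input, which is the qualitative behaviour OpenAI report for real CLIP neurons.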