Alibaba Cloud has unveiled a serverless version of its Platform for AI (PAI)-Elastic Algorithm Service (EAS), designed to offer individuals and enterprises a cost-efficient solution for model deployment and inference.
During its inaugural AI & Big Data Summit in Singapore, the company also announced that its vector engine technology has been integrated into more of its product offerings, including its data warehouse Hologres and its search services Elasticsearch and OpenSearch. The integration is designed to make it easier for enterprises to access various large language models (LLMs) and build customised generative AI applications.
The PAI-EAS platform enables users to tap into computing resources on demand, removing the need to manage and maintain physical or virtual servers. Users are billed only for the computing resources they actually use, resulting in a 50% reduction in inference costs compared with the traditional pricing model.
The serverless offering, which is currently undergoing beta testing, is accessible for image generation model deployment. In March 2024, the serverless version is scheduled to expand its capabilities to support the deployment of prominent open-source LLMs and models from Alibaba's AI model community, ModelScope. This includes models tailored for tasks such as image segmentation, summary generation, and voice recognition.
With LLM serving and training services combined with its vector engine technology, Alibaba Cloud can support a Retrieval-Augmented Generation (RAG) workflow, enabling enterprises to augment LLMs with their own knowledge bases for improved outcomes. This translates into improved accuracy, faster retrieval of relevant information, and more nuanced insights for enterprises, contributing to greater efficiency and better decision-making across a wide range of applications.
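To make the pattern concrete, the sketch below outlines a minimal RAG flow in Python. It is illustrative only: `embed_text` and `call_llm` are hypothetical placeholders rather than Alibaba Cloud APIs, and a production system would query a vector-enabled database instead of scanning a Python list.

```python
# Minimal RAG sketch (illustrative only; embed_text and call_llm are
# hypothetical placeholders, not Alibaba Cloud APIs).
import numpy as np

def embed_text(text: str) -> np.ndarray:
    """Placeholder: in practice this would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(768)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 1. Index the enterprise knowledge base as vectors.
documents = [
    "Refund requests are processed within 14 days.",
    "Premium support is available 24/7 for enterprise plans.",
]
index = [(doc, embed_text(doc)) for doc in documents]

# 2. Retrieve the most relevant document for a user query.
query = "How long do refunds take?"
query_vec = embed_text(query)
ranked = sorted(index, key=lambda d: cosine_similarity(query_vec, d[1]), reverse=True)
context = ranked[0][0]

# 3. Augment the LLM prompt with the retrieved context.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
# response = call_llm(prompt)   # hypothetical call to a served LLM
```

In a production setup, the retrieval step would typically be handled by a vector-enabled database, with the embedding and generation models served from a platform such as PAI-EAS.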
“Alibaba Cloud continues to remain at the forefront of AI and cloud technology innovation. Our technology updates underscore our commitment to empowering enterprises with the latest intelligence-driven solutions for heightened efficiency and performance. This marks a significant stride in our mission to provide innovative solutions that redefine the possibilities of artificial intelligence in diverse applications,” commented Zhou Jingren, Chief Technology Officer (CTO), Alibaba Cloud, during the summit.
Making model training easier
Alibaba Cloud also announced an upgrade to its big data service MaxCompute: MaxFrame, a distributed Python data processing framework, designed to meet the growing demand for data preprocessing and offline/online data analysis in AI-related computing tasks. It enables users to process massive amounts of data more efficiently and flexibly when launching AI tasks such as LLM training.
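As a rough illustration of the kind of preprocessing such a framework distributes at scale, the snippet below cleans a tiny text corpus with plain pandas: deduplication, filtering out very short records, and whitespace normalisation. It is a generic sketch under those assumptions, not the MaxFrame API, and the sample rows are invented.

```python
# Generic illustration of typical LLM training-data preprocessing;
# this uses plain pandas and is NOT the MaxFrame API.
import pandas as pd

# Hypothetical raw corpus for LLM training.
raw = pd.DataFrame({
    "text": [
        "Alibaba Cloud unveils serverless PAI-EAS.",
        "Alibaba Cloud unveils serverless PAI-EAS.",   # duplicate record
        "ok",                                          # too short to be useful
        "Vector engines enable semantic retrieval for RAG applications.",
    ]
})

# Typical cleaning steps: deduplicate, drop very short records,
# and normalise whitespace.
clean = (
    raw.drop_duplicates(subset="text")
       .loc[lambda df: df["text"].str.split().str.len() >= 4]
       .assign(text=lambda df: df["text"].str.strip())
       .reset_index(drop=True)
)

print(clean)
```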
To foster enhanced creativity among designers, Alibaba Cloud has introduced PAI-Artlab, a comprehensive platform for model training and image generation. This solution empowers designers to quickly produce professional-grade designs and unlock greater creative potential.
Designers can leverage the platform to generate design images for a variety of applications, including interior home design, product promotional posters, gaming character creation, and gaming scene development. The platform also provides a rich ecosystem of ready-to-use tools that let designers with no coding background develop and train custom models that generate images tailored to their specific requirements. The platform is currently available in mainland China and is slated to launch in the Singapore region in the coming months.
In a landmark move last year, Alibaba Cloud upgraded its entire range of database solutions, including the cloud-native database PolarDB, the cloud-native data warehouse AnalyticDB, and the cloud-native multi-model database Lindorm, by integrating its proprietary vector engine technology to significantly enhance their performance and capabilities.
Vector engines transform text and other data into embeddings, points in a high-dimensional space that compactly capture large volumes of structured and unstructured context. This representation streamlines tasks such as similarity comparison and semantic analysis, which particularly benefits LLMs and underpins a range of advanced AI functionalities.
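As a concrete illustration, the snippet below shows the core operation of a vector engine in miniature: comparing a query vector against stored item vectors by cosine similarity and returning the closest matches. The vectors and item names are invented for the example; real deployments rely on model-generated embeddings and optimised indexes rather than brute-force search.

```python
# Miniature nearest-neighbour search by cosine similarity.
# Toy vectors only; not representative of any specific product's API.
import numpy as np

# Toy "embeddings" for three stored items (rows) in a 4-dimensional space.
item_vectors = np.array([
    [0.9, 0.1, 0.0, 0.2],   # "refund policy"
    [0.1, 0.8, 0.3, 0.0],   # "shipping times"
    [0.8, 0.2, 0.1, 0.3],   # "return window"
])
items = ["refund policy", "shipping times", "return window"]

# Embedding of the incoming query (also invented for illustration).
query = np.array([0.85, 0.15, 0.05, 0.25])

# Cosine similarity between the query and every stored vector.
norms = np.linalg.norm(item_vectors, axis=1) * np.linalg.norm(query)
scores = item_vectors @ query / norms

# Top-2 most similar items, highest score first.
top_k = np.argsort(scores)[::-1][:2]
for i in top_k:
    print(f"{items[i]}: similarity {scores[i]:.3f}")
```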
Supporting customer success
Global customers, ranging from large enterprises to start-ups, use Alibaba Cloud’s latest technologies to support their digital transformation journeys.
“There is an increasing demand for AI technologies among our global customers. By open-sourcing our proprietary language models, we are well-equipped to offer powerful computing solutions and cutting-edge AI innovations to support clients in developing customised generative AI applications, addressing their unique challenges and positioning them to harness the wave of opportunities emerging from the dynamic generative AI sector,” Selina Yuan, President of International Business at Alibaba Cloud Intelligence, told the summit.
Using the capabilities of Alibaba Cloud's LLM Tongyi Qianwen together with RAG technology, Haleon, a world-leading consumer health company, has introduced a specialised AI nutritionist for its Chinese consumers. The AI-powered assistant interprets consumer inquiries precisely and engages users with comprehensive, efficiently delivered nutritional guidance. By combining Tongyi Qianwen's capabilities with Haleon's extensive internal nutritional knowledge base, the AI nutritionist stays accurate and relevant.
Shivani Saini, Haleon's Global Vice President of Digital & Tech Business Units, emphasises the importance of digital innovation in the healthcare domain: "As the role of digital services becomes increasingly critical in the consumer health industry, our collaboration with Alibaba Cloud to harness the potential of artificial intelligence reflects our commitment to offering personalised health advice to our Chinese consumers. Our aim is to empower our customers with the tools they need for enhanced diet and nutritional management."
rinna, a Japanese startup specialising in pre-trained foundation models for processing Japanese, has launched its latest innovation: the Nekomata models. The new models are built on Alibaba Cloud's open-source Tongyi Qianwen LLMs, Qwen-7B and Qwen-14B. The Nekomata series has shown exceptional performance on the Stability-AI/lm-evaluation-harness, one of the prominent benchmarks for assessing Japanese language model capabilities. In addition, the comprehensive Qwen vocabulary allows the Nekomata models to process Japanese text more efficiently than rinna's earlier series, which was based on the Llama2 architecture.
“We were impressed by the capabilities of Alibaba Cloud’s LLMs, which help us greatly enhance the performance of our models in a cost-effective way,” said Tianyu Zhao, Researcher from rinna. “We believe Alibaba Cloud’s contribution to the open-source community can help other SMEs and startups to accelerate their AI innovation as well.”