Ruijing Guo
Intel Cloud Software Architect
Cloud Computing Software Architect at Intel Asia Pacific Research & Development Ltd, with more than 20 years of infrastructure software development experience spanning networking, storage and big data, and many years of work on cloud computing infrastructure open source projects (OpenStack, OPNFV, Kubernetes, Istio/Envoy). He currently focuses on the enterprise AI open source project OPEA (https://github.com/opea-project). He serves as a technical committee member of the China Cloud Infrastructure Developer Conference, was a programme committee member of the 2023 European Open Source Summit, and holds 5 US patents in infrastructure software.
Topic
Enterprise AI on the ground: in-depth OPEA modular architecture design and application practice
In the era of large models, enterprises face many challenges when deploying AI applications, including the complexity of model selection, the difficulty of choosing among the many AI components, multi-environment compatibility, resource management, security compliance, and performance optimisation. Starting from these core pain points, this presentation discusses in depth how OPEA (Open Platform for Enterprise AI) helps enterprises efficiently land generative AI applications through modular architectural design and cloud-native microservice solutions. Initiated by Intel and hosted by the Linux Foundation's LF AI & Data Foundation, OPEA aims to simplify the integration and management of AI components through a standardised, scalable open source architecture. The presentation will cover OPEA's core technical architecture, including its microservice design and its resource orchestration and management components, as well as real-world cases that demonstrate how to optimise model inference, simplify component selection, protect data privacy and improve system performance. Through these practical experiences, we aim to help enterprises overcome obstacles in AI deployment, drive intelligent transformation and maximise business value.

Outline:
- Background: major challenges in enterprise AI landing - Analyse the core pain points enterprises encounter when landing AI applications, including the complexity of model and component selection, resource management in multi-cloud environments, security compliance requirements, system performance optimisation, and adaptability to different application scenarios.
- Current enterprise AI solutions - Introduce the major enterprise AI solutions on the market, including traditional proprietary AI architectures, modular solutions backed by open source ecosystems, and cloud-native architectures, and analyse their characteristics in terms of adaptability, flexibility, cost and ease of use.
- OPEA technical architecture and modular design - Analyse OPEA's modular architecture in detail, introducing core modules such as the microservice architecture and the resource orchestration and management components, and demonstrate how OPEA achieves cross-platform compatibility, flexible component integration and optimised resource allocation for enterprise AI.
- OPEA application practice cases - Share practice cases of enterprise generative AI applications, demonstrating OPEA's advantages in large model inference, performance optimisation, data privacy protection and cross-platform integration, and helping enterprises land AI applications efficiently and securely.
- Summary - Review the role of OPEA in addressing the challenges of enterprise AI landing and look ahead to its potential in the future AI ecosystem, summarising its contributions to system compatibility, resource management, performance optimisation and security compliance in support of enterprises' intelligent transformation.
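The modular, microservice-composed pipeline the talk describes can be sketched in miniature. The stage names and the chaining mechanism below are illustrative assumptions for exposition, not OPEA's actual API; the point is only that each stage (embedding, retrieval, generation) is an independently swappable component behind a uniform interface:

```python
# Illustrative sketch of composing swappable pipeline stages, in the spirit
# of a microservice orchestrator. Stage names are hypothetical, not OPEA's API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Stage = Callable[[Dict], Dict]

@dataclass
class Pipeline:
    """Chains independent stages; each could be a separate service."""
    stages: List[Stage] = field(default_factory=list)

    def add(self, stage: Stage) -> "Pipeline":
        self.stages.append(stage)
        return self  # fluent chaining

    def run(self, request: Dict) -> Dict:
        for stage in self.stages:
            request = stage(request)
        return request

# Each stage only reads/writes the shared request dict, so any one of them
# can be replaced (e.g. a different embedder or LLM) without touching the rest.
def embed(req: Dict) -> Dict:
    req["embedding"] = [float(len(w)) for w in req["query"].split()]
    return req

def retrieve(req: Dict) -> Dict:
    req["context"] = f"docs matching {len(req['embedding'])} tokens"
    return req

def generate(req: Dict) -> Dict:
    req["answer"] = f"Answer using {req['context']}"
    return req

pipeline = Pipeline().add(embed).add(retrieve).add(generate)
result = pipeline.run({"query": "what is OPEA"})
```

In a real deployment each stage would sit behind its own service endpoint and the orchestrator would route requests between them; this toy version keeps everything in-process to show only the composition pattern.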