Google Cloud Expands AI Offerings Amid Competitive Shifts in the Cloud Market
April 2025 – Las Vegas
At Google Cloud Next 2025, CEO Thomas Kurian provided updates on the company’s cloud and AI strategies, emphasizing platform openness, model flexibility, and ongoing innovation through internal and partner-led development. The event comes as Google Cloud continues to grow substantially, reporting quarterly revenue growth of around 30%, attributed in part to generative AI adoption across enterprise sectors.
AI as a Catalyst for Cloud Growth
Kurian confirmed that generative AI is playing a key role in driving adoption of Google Cloud’s platform. Use cases span multiple industries, from foundation model training at companies like Midjourney and Anthropic to simulation workloads at traditional firms like Ford, which uses Google’s custom TPU chips.
Google offers three main AI access points to customers:
- Infrastructure Use: Clients utilize Google’s hardware and systems to train and deploy their own models.
- Model Access: Businesses tap into Google’s proprietary and third-party models such as Gemini, Imagen, and Veo for tasks ranging from content generation to application development (see the code sketch after this list).
- Prepackaged Agents: Organizations adopt AI-powered services for functions like customer support, food ordering, cybersecurity, and automotive assistance.
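To make the “Model Access” tier concrete, here is a minimal sketch of calling a Gemini model through the Vertex AI Python SDK. The project ID, region, and model ID are placeholders; this is an illustrative pattern, not an official Google example.

```python
# Minimal sketch: calling a hosted Gemini model via the Vertex AI Python SDK.
# "your-gcp-project" and the model ID are placeholders; substitute any Gemini
# model available in your project and region.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-gcp-project", location="us-central1")

model = GenerativeModel("gemini-1.5-pro")  # placeholder model ID
response = model.generate_content(
    "Draft a short product description for a reusable water bottle."
)
print(response.text)
```

Swapping models is a one-line change in this pattern, which is essentially the flexibility the “Model Access” tier is selling.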
Kurian highlighted that AI is becoming increasingly central to the cloud business but emphasized that the company is taking a measured approach to avoid overhyping the technology.
Platform Openness and Model Variety
Google currently supports more than 200 AI models on its platform, including offerings from Anthropic, AI21 Labs, the Allen Institute, and open-source models such as Llama, Mistral, and DeepSeek. Kurian noted that model availability is determined by customer demand and developer traction, with Google continuously monitoring adoption metrics and leaderboard rankings.
While Google does not yet offer models from OpenAI, Kurian stated that the company remains open to including them should the opportunity arise.
Integration with DeepMind
A major differentiator for Google is its close collaboration with DeepMind. Kurian described an integrated development workflow where infrastructure, model training, inference, and reinforcement learning are tightly coordinated. Feedback from live deployments across services like YouTube and Search feeds directly into model improvement cycles.
This alignment enables rapid deployment of advanced models, such as Gemini, into enterprise applications. For instance, Google Cloud’s cybersecurity tools integrate Gemini to accelerate threat detection and investigation by identifying patterns in log data. Other enterprise use cases include customer service automation and fast-response conversational agents, such as the system currently being piloted with Wendy’s drive-thru ordering.
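As an illustration of the log-analysis pattern described above (not Google’s actual security tooling), a sketch along these lines would send a suspicious log excerpt to a Gemini model for triage. The log lines, project ID, and model ID are all hypothetical.

```python
# Illustrative only: asking a Gemini model to flag suspicious patterns in
# auth logs. Logs, project ID, and model ID are hypothetical; Google's
# security products integrate Gemini through their own pipelines.
import vertexai
from vertexai.generative_models import GenerativeModel

LOG_EXCERPT = """\
2025-04-01T12:03:11Z sshd[812]: Failed password for root from 203.0.113.7
2025-04-01T12:03:12Z sshd[812]: Failed password for root from 203.0.113.7
2025-04-01T12:03:14Z sshd[812]: Failed password for root from 203.0.113.7
2025-04-01T12:04:02Z sshd[913]: Accepted password for admin from 203.0.113.7
"""

vertexai.init(project="your-gcp-project", location="us-central1")
model = GenerativeModel("gemini-1.5-flash")  # placeholder model ID

response = model.generate_content(
    "You are assisting a security analyst. Identify suspicious patterns in "
    "these authentication logs and explain your reasoning briefly:\n"
    + LOG_EXCERPT
)
print(response.text)
```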
Competition and Cost Considerations
In comparing Google’s approach to that of competitors like Microsoft and Amazon, Kurian pointed to pricing and performance advantages, particularly with regard to inference costs. He stressed that inference, not training, is the key driver of long-term scalability and ROI for enterprise AI.
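A rough back-of-envelope calculation, using entirely hypothetical figures, shows why inference tends to dominate: a one-time training bill is fixed, while serving costs accumulate with every request.

```python
# Back-of-envelope: one-time training spend vs. ongoing inference spend.
# Every figure here is a hypothetical placeholder, not real pricing.
TRAINING_COST_USD = 50_000_000     # one-time model training bill
PRICE_PER_1K_TOKENS = 0.001        # serving price, USD per 1,000 tokens
TOKENS_PER_REQUEST = 2_000         # prompt + response, averaged
REQUESTS_PER_DAY = 100_000_000     # a large consumer-scale workload

daily_spend = REQUESTS_PER_DAY * TOKENS_PER_REQUEST / 1_000 * PRICE_PER_1K_TOKENS
breakeven_days = TRAINING_COST_USD / daily_spend

print(f"Daily inference spend: ${daily_spend:,.0f}")                  # $200,000
print(f"Inference exceeds training after {breakeven_days:.0f} days")  # 250
```

Under these made-up numbers, cumulative serving costs pass the training bill in well under a year, which is why per-token serving efficiency, rather than training spend, sets the long-run unit economics.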
Kurian also addressed industry discussions around training costs and AI development strategy. He differentiated between “frontier research” (which explores new capabilities such as planning and reasoning) and actual training execution. Google’s investment model, he said, is structured with a clear understanding of cost-to-value ratios across R&D, training, and deployment.
Rise of Reasoning in AI Applications
Kurian highlighted that reasoning capabilities are beginning to reshape how enterprise users apply AI. Unlike earlier models that provided single-step answers, newer systems can evaluate multiple options, critique outputs, and explain decision pathways. This enables more sophisticated use cases, such as personalized travel planning or financial data analysis, where AI systems now generate plans that reflect real-world variables and user intent.
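One common way applications surface this kind of multi-step behavior, sketched below with the same placeholder Vertex AI setup, is a propose-critique-revise loop. This illustrates an application-level workflow pattern, not how Gemini reasons internally.

```python
# Toy propose/critique/revise loop to surface multi-step reasoning in an app.
# Project ID, model ID, and prompts are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-gcp-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")  # placeholder model ID

task = "Plan a three-day Tokyo itinerary for a traveler who dislikes crowds."

# 1. Propose: draft an initial plan.
draft = model.generate_content(f"Draft a step-by-step plan. {task}").text

# 2. Critique: have the model evaluate its own draft.
critique = model.generate_content(
    "Critique this plan for feasibility, hidden assumptions, and missed "
    f"alternatives:\n{draft}"
).text

# 3. Revise: fold the critique back in, keeping the decision path visible.
final = model.generate_content(
    "Revise the plan to address the critique. List the options you "
    f"considered and why you chose each one.\nPlan:\n{draft}\nCritique:\n{critique}"
).text
print(final)
```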
Trust in model reliability has improved, Kurian said, partly due to increased transparency around decision steps and accuracy improvements. As models become more capable of multi-step thinking, enterprise users are beginning to adopt reasoning-based features more widely.
Inference Costs and Efficiency
Kurian acknowledged that reasoning-based inference carries higher computational costs, echoing comments from NVIDIA CEO Jensen Huang, but emphasized that Google’s TPUs and architecture help mitigate those costs through efficiency. The company aims to remain competitive on price-performance, especially in serving large-scale customers with heavy inference needs.