Succeeding in the agentic era requires a transformation in your data strategy: moving from human-scale to agent-first workloads, evolving from reactive intelligence to proactive action, and shifting from raw data to semantic knowledge that agents can use to reason accurately.
For over a decade, BigQuery’s continuous innovations have helped tens of thousands of organizations build a scalable data and AI foundation and navigate several industry and technological transformations. BigQuery has evolved into an autonomous data-to-AI platform, growing over 30x in data processed with Gemini, 25x in AI functions processing unstructured data, and 20x in agent-building tools with Model Context Protocol (MCP).
Customers like Definity are building data platforms to enhance their customers’ experience, improve back-office operations, and boost data team productivity.
“We stood up our data platform in Google Cloud and ingested all critical insurance data in 10 months, which is about half of the time that people see in the industry. The technology that BigQuery provides, processing large amounts of data very quickly, is giving our practitioners and engineers tools that are advanced and a platform that has AI and ML built in. We have doubled the number of users [in a very short period of time].” — Tatjana Lalkovic, Chief Technology Officer, Definity
Today, we are announcing new BigQuery capabilities in lakehouse, built-in AI processing and reasoning, and agentic experiences, all anchored by our commitment to industry-leading price-performance and enterprise readiness.
Open, cross-cloud lakehouse
Enterprise data is often scattered across applications, multiple cloud environments, and on-premises systems. While early lakehouse solutions reduced data duplication, the agentic era demands a foundation that is natively multimodal, cross-cloud, and AI-ready. Our approach blends Apache Iceberg’s interoperability and Google’s differentiated infrastructure with new capabilities, including:
- Managed Iceberg tables in Lakehouse (GA, formerly BigLake) enable the openness of Iceberg with advanced BigQuery capabilities, including automatic table management, Iceberg partitioning, multi-table transactions, change data capture, enhanced vectorization, and history-based optimizations.
- Iceberg REST catalog (preview) enables read/write interoperability on Iceberg tables between BigQuery, Spark, and other OSS and third-party engines, so you don’t have to make complex engine trade-offs.
- Cross-cloud Lakehouse (preview) brings BigQuery AI and analytics to other clouds, starting with AWS and Azure. Using open standards like the Iceberg REST catalog, high-bandwidth networking via Cross-Cloud Interconnect, and transparent caching, BigQuery achieves performance and total cost of ownership comparable to native warehouses, enabling true cross-cloud for enterprises.
- Catalog federation (preview) enables easy discovery, analysis, and zero-copy sharing of data across AWS Glue, Databricks, SAP, Salesforce, Snowflake, and Confluent Tableflow (coming later this year).
- Real-time data replication closes the loop between raw data and operational action by letting you replicate data from Spanner, AlloyDB, and Cloud SQL instantly into BigQuery tables (GA) and Iceberg tables (preview).
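To make the Iceberg REST catalog item above concrete, the sketch below builds the Spark catalog properties an engine like Spark would use to read the same Iceberg tables BigQuery manages. The property names (`type=rest`, `uri`, `warehouse`, `token`) follow standard Apache Iceberg Spark catalog conventions; the endpoint URI, warehouse value, and token are placeholders, not documented BigQuery values.

```python
# Sketch: Spark config keys for attaching an external engine to an
# Iceberg REST catalog. Property names follow the standard Apache
# Iceberg Spark runtime; the URI, warehouse, and token values below
# are placeholders (assumptions), not real endpoints or credentials.

def rest_catalog_conf(catalog: str, uri: str, warehouse: str, token: str) -> dict:
    """Build Spark SQL config entries for one Iceberg REST catalog."""
    prefix = f"spark.sql.catalog.{catalog}"
    return {
        prefix: "org.apache.iceberg.spark.SparkCatalog",
        f"{prefix}.type": "rest",          # speak the Iceberg REST catalog protocol
        f"{prefix}.uri": uri,              # REST catalog endpoint (placeholder)
        f"{prefix}.warehouse": warehouse,  # project/location identifier (placeholder)
        f"{prefix}.token": token,          # bearer token for auth (placeholder)
    }

conf = rest_catalog_conf(
    catalog="bq",
    uri="https://example.googleapis.com/iceberg/v1/restcatalog",  # placeholder
    warehouse="my-project",                                       # placeholder
    token="<oauth-token>",                                        # placeholder
)
# Each key/value pair would be passed to SparkSession.builder.config(...)
# before querying, e.g. spark.sql("SELECT * FROM bq.my_dataset.my_table").
```

Because the catalog is addressed over open REST semantics rather than an engine-specific connector, the same table metadata is visible to BigQuery and to any other Iceberg-aware engine, which is what removes the engine trade-off the bullet describes.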