All about Google Cloud Next 2024: Announcements and Insights

Mohtasham Sayeed Mohiuddin
20 min read · Apr 13, 2024


Thomas Kurian, CEO of Google Cloud

The dust has settled on Google Cloud Next 2024, and the announcements are still buzzing. From AI-powered video creation to enhanced data security, Google Cloud is setting its sights on a future that’s not just innovative, but fundamentally transformative. This year’s conference wasn’t just about flashy new features; it was a glimpse into a world where intelligent automation empowers businesses, data becomes a strategic weapon, and the very fabric of the cloud is reimagined. In this blog, we’ll dissect the key announcements, analyzing their potential impact and what it means for businesses looking to stay ahead of the curve. So, buckle up and get ready to explore the exciting — and sometimes challenging — possibilities on the horizon with Google Cloud.

A3 Mega VMs (General Availability):

  • Supercharged AI Workloads: Packed with the latest NVIDIA H100 GPUs, A3 Mega VMs offer double the GPU-to-GPU networking bandwidth of A3 VMs, making them ideal for complex AI training and inference.
  • Enhanced Security: With Confidential Computing coming to the A3 family later this year, A3 Mega VMs will allow encryption of data transfers within the VM itself, fortifying security for sensitive AI workloads.

NVIDIA GB200 NVL72 (Coming to Google Cloud in early 2025):
The NVIDIA Blackwell GPU platform will be available on the AI Hypercomputer architecture in two configurations: NVIDIA HGX B200 for the most demanding AI, data analytics, and HPC workloads; and the liquid-cooled GB200 NVL72 system for real-time LLM inference and training of massive-scale models.

TPU v5p (General Availability):

Google’s most powerful Tensor Processing Unit (TPU) accelerator yet. Designed specifically for large-scale AI training, the v5p boasts several key improvements:

  • Unprecedented Power and Scalability: Cloud TPU v5p features 8,960 chips interconnected at 4,800 Gbps/chip in a 3D torus topology, delivering more than 2X greater FLOPS and 3X more high-bandwidth memory (HBM) compared to TPU v4.
  • Performance Enhancements: TPU v5p can train large language models (LLMs) 2.8X faster than TPU v4, and with second-generation SparseCores, it can train embedding-dense models 1.9X faster than TPU v4.
  • Optimized for Large-Scale AI Workloads: With double the per-chip floating-point operations per second (FLOPS) and larger pods, TPU v5p offers 4X more total available FLOPS per pod than TPU v4.
  • Cost-Effective Solutions: Despite its enhanced capabilities, Google Cloud remains committed to providing cost-efficient solutions, with a focus on performance per dollar metrics.
  • Integration and Compatibility: TPU v5p integrates seamlessly with popular machine learning frameworks like TensorFlow and PyTorch, leveraging open software to optimize AI workflows and deployment across diverse use cases.

AI-Optimized Storage (Preview):

At Google Cloud Next 2024, a significant focus was placed on streamlining storage solutions for demanding AI workloads. This three-pronged approach tackles different aspects of the data pipeline:

  • Cloud Storage Fuse Caching: This feature brings high-performance caching capabilities directly to Google Cloud Storage. Imagine having frequently accessed data readily available locally, even when working with vast datasets stored in the cloud. This translates to faster training times and smoother AI workflows.
  • Parallelstore Caching: Google Cloud announced deeper integration with Parallelstore, a high-performance managed parallel file system. This means you can leverage Parallelstore's caching capabilities for frequently accessed AI datasets, improving throughput for data-intensive training and inference jobs.
  • Hyperdisk ML: This is an intriguing new offering. While details are still emerging, Hyperdisk ML appears to be a high-performance block storage service optimized for AI workloads. Think of it as a dedicated storage layer designed to meet the demanding I/O requirements of AI training and inference tasks.

Dynamic Workload Scheduler (General Availability):

  1. Flex Start Mode:

Purpose: Designed for jobs with flexible start times.
Benefits:

  • Jobs are queued to run as soon as resources become available, maximizing resource utilization.
  • Simplifies obtaining TPU and GPU resources for jobs that can start at varying times.

Integration:

  • Now integrated across Compute Engine Managed Instance Groups, Batch, Vertex AI Custom Training, and Google Kubernetes Engine (GKE).
  • Enables running thousands of AI/ML jobs with increased obtainability across various TPU and GPU types.

  2. Calendar Mode:

Purpose: Offers short-term reserved access to AI-optimized computing capacity.
Benefits:

  • Reserves co-located GPUs for up to 14 days, bookable up to 8 weeks in advance.
  • Extends Compute Engine’s future reservation capabilities, ensuring confirmed capacity on requested start dates.

Integration:

  • Seamlessly integrated into Compute Engine, enabling easy VM creation targeting reserved capacity for the entire reservation duration.

GKE Enterprise on GDC: GKE Enterprise is Google Cloud’s premium edition of Kubernetes, offering advanced features for managing containerized workloads at scale. By integrating it with Google Distributed Cloud (GDC), Google is extending these capabilities to edge computing environments and beyond.

AI models on GDC (General Availability):

GDC now supports open models like Llama and Gemma. This opens doors for businesses to leverage these powerful pre-trained models for various tasks, including:

  • Generative AI: Models like Llama can be used for tasks like text generation, code completion, and creative content creation.
  • Enhanced Search: Gemma, a lightweight 7B-parameter open model, can be integrated with GDC’s AI search solution to enable natural language search over on-premises data.

NVIDIA GPUs to Google Distributed Cloud (General Availability):

A major announcement at Google Cloud Next 2024 was the integration of NVIDIA GPUs into Google Distributed Cloud (GDC). This move signifies Google Cloud’s commitment to empowering businesses with high-performance computing capabilities at the edge and beyond the traditional cloud environment. Here’s a breakdown of what this means:

  • Unleashing Compute Power at the Edge: GDC can now leverage the power of NVIDIA GPUs, allowing businesses to run demanding workloads like AI training, scientific simulations, and high-performance graphics rendering at the edge. This can be particularly beneficial for applications that require real-time processing and analysis of data closer to its source, reducing latency and bandwidth limitations.
  • Flexibility for Diverse Needs: The announcement didn’t specify which specific NVIDIA GPUs will be available on GDC. However, it’s likely that a range of options will be offered, catering to businesses with varying performance and budget requirements.

Vector search on GDC (General Availability):
Vector search allows for efficient searching of high-dimensional data like images, videos, and complex documents. It goes beyond traditional keyword searches to find similar data points based on meaning and context.

Google Axion Processors (Coming Soon):
Google Cloud’s big announcement is the Axion processor, its first custom-designed Arm-based CPU for data centers. Here’s the gist:

  • Focus on performance and efficiency: Google claims Axion offers up to 50% better performance and 60% better energy efficiency compared to current x86 processors, and 30% better performance than other Arm-based options in the cloud.
  • Built for diverse workloads: Designed to handle web serving, data analytics, AI training, and more.
  • Leverages Arm technology: Axion utilizes Arm’s Neoverse V2 cores, known for high performance.
  • Open for business: Google plans to offer cloud instances powered by Axion soon, allowing customers to run their Arm-compatible workloads without modification.

Gemini 1.5 Pro (Preview):
Gemini 1.5 Pro represents a significant leap in Google’s large language model capabilities, particularly in its ability to handle complex and information-rich tasks.

  • Breakthrough long-context understanding: It boasts a standard context window of 128,000 tokens, significantly exceeding previous models. In private preview, a limited group can try an even more extensive 1 million token window. This allows it to process massive amounts of information at once.
  • Multimodal capabilities: Unlike prior versions, Gemini 1.5 Pro can now understand and reason across different data types, including text, code, images, and audio.
  • Enhanced performance: Achieves performance similar to the larger Gemini 1.0 Ultra model in a more efficient architecture.
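To build intuition for what a 128,000-token window means in practice, here is a rough sketch using the common heuristic of about 4 characters per English token. The heuristic is an assumption for illustration only, not Gemini's actual tokenizer:

```python
def fits_in_context(text: str, context_window: int = 128_000,
                    chars_per_token: float = 4.0) -> bool:
    """Rough check of whether a document fits in a model's context window.

    Uses the common ~4-characters-per-token heuristic for English text;
    real tokenizers, including Gemini's, will count differently.
    """
    return len(text) / chars_per_token <= context_window

# A 300-page book at roughly 2,000 characters per page is ~150k tokens:
book = "x" * 300 * 2000
print(fits_in_context(book))                            # False: ~150k tokens > 128k
print(fits_in_context(book, context_window=1_000_000))  # True
```

By this estimate, a full-length book overflows the standard 128k window but fits comfortably in the 1 million token window offered in private preview.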

Claude 3 (General Availability):
Claude 3 was a major announcement at Google Cloud Next 2024! Here’s the key information:

  • Generally Available: This means Claude 3, a state-of-the-art large language model from Anthropic, is now readily accessible to Google Cloud customers through Vertex AI, their enterprise AI platform.
  • Multiple Variants: Claude 3 comes in three versions: Sonnet, Haiku, and Opus. Sonnet and Haiku are available now, with Opus coming soon.
  • Focus on Generative AI: Claude 3 is designed for applications involving generative AI tasks, where the model can create new text formats, translate languages, write different kinds of creative content, and answer your questions in an informative way.
  • Partnership with Anthropic: This announcement highlights Google Cloud’s collaboration with Anthropic, a leading AI research company, to bring advanced AI capabilities to its customers.

Supervised Tuning for Gemini Models (Preview):
Supervised tuning is now available for Gemini models on Vertex AI. It adapts model behavior using a labeled dataset, adjusting the model’s weights to minimize the difference between its predictions and the actual labels. This can improve performance on tasks such as classification, sentiment analysis, entity extraction, summarization of straightforward content, and writing domain-specific queries.
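The weight-adjustment idea above can be illustrated with a toy example: plain gradient descent fitting a single weight to a labeled dataset. This is a minimal sketch of the underlying principle, not how Vertex AI tunes Gemini, which updates billions of parameters as a managed service:

```python
# Toy illustration of the supervised tuning loop: gradient descent on a
# labeled dataset, nudging a weight to minimize squared prediction error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, label) pairs; true relation is y = 2x

w = 0.0    # single model weight, deliberately initialized far from the answer
lr = 0.05  # learning rate
for _ in range(200):
    # Gradient of mean squared error between predictions (w * x) and labels (y)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges to ~2.0
```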

Grounding with Google Search (Preview):
Google emphasized a new capability in Vertex AI where AI models can be grounded with Google Search.

  • Reduced Hallucinations: A major challenge with large language models is generating outputs that seem plausible but are factually incorrect. Grounding models with Google Search allows them to access and leverage the vast knowledge base of Google Search, improving the factual accuracy of their responses.
  • Enhanced Responsiveness: By incorporating search results, the models can tailor their responses to be more relevant and informative, addressing the user’s query comprehensively.
  • Increased Trustworthiness: Grounding in verified search results increases the trustworthiness of information provided by AI agents built on Vertex AI. This is particularly important for applications in public bodies and sensitive industries that rely on accurate and reliable information.
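The pattern behind grounding can be sketched in a few lines: retrieve supporting snippets for the query, then build a prompt that asks the model to answer only from them. A toy in-memory index stands in for Google Search here; this is not the Vertex AI grounding API:

```python
# Toy sketch of the grounding pattern: look up supporting snippets for a
# query and build a prompt that tells the model to answer only from them.
# The in-memory "index" below stands in for live Google Search results.
SNIPPETS = {
    "tpu": "TPU v5p pods contain 8,960 chips interconnected in a 3D torus.",
    "axion": "Axion is Google's custom Arm-based CPU built on Neoverse V2 cores.",
}

def retrieve(query: str) -> list[str]:
    """Return snippets whose key appears in the query (stand-in for search)."""
    q = query.lower()
    return [text for key, text in SNIPPETS.items() if key in q]

def grounded_prompt(query: str) -> str:
    """Build a prompt that instructs the model to answer from sources only."""
    sources = "\n".join(f"- {s}" for s in retrieve(query))
    return f"Answer using only these sources:\n{sources}\n\nQuestion: {query}"

print(grounded_prompt("How many chips are in a TPU v5p pod?"))
```

Because the model is steered toward the retrieved evidence rather than its parametric memory, answers become easier to verify, which is the hallucination-reduction effect described above.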

Prompt Management (Preview):

  • Collaboration: Enables teams to collaborate on crafting prompts, adding notes, and tracking version history. This fosters a shared understanding of effective prompts and facilitates knowledge transfer within teams.
  • Performance Comparison: Allows users to compare the quality of responses generated by different prompts. This helps identify the most effective prompts for achieving desired outcomes.
  • Improved Efficiency: Streamlines the process of iterating on prompts and testing variations. This saves time and resources compared to manual processes.

Automatic Side-by-Side (AutoSxS) (General Availability):
Automatic Side-by-Side (AutoSxS) is a feature within Google Cloud’s Vertex AI platform specifically designed for evaluating large language models (LLMs).

  • Pairwise Model Evaluation: AutoSxS allows you to compare the performance of two different LLMs side-by-side. This is useful for tasks like:
    i. Choosing the best model for a specific application.
    ii. Tracking the improvement of a model over different versions.
    iii. Benchmarking your custom LLM against pre-trained models offered by Google.
  • Supported Tasks and Criteria: AutoSxS can evaluate models for various tasks, including:
    i. Summarization: Compares the quality and accuracy of summaries generated by different models.
    ii. Question Answering: Evaluates the factual correctness and comprehensiveness of answers provided by different models for the same question.
  • Pre-generated Predictions Required: For AutoSxS to work, you need to provide pre-generated predictions from both models you want to compare. These predictions are the outputs the models generate for a specific set of inputs (text or data).
  • Integration with Vertex AI Pipeline: AutoSxS leverages the Vertex AI evaluation pipeline service. This means you can integrate model evaluation using AutoSxS into your existing Vertex AI workflows for a streamlined process.
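A minimal sketch of the pairwise idea: given pre-generated predictions from two models and a judge, count how often model A wins. AutoSxS uses an LLM "autorater" as the judge; the word-overlap judge below is purely a stand-in for illustration:

```python
# Toy pairwise evaluation in the spirit of AutoSxS: for each example, a
# judge compares pre-generated predictions from two models and we report
# model A's win rate over the evaluation set.
def judge(reference: str, a: str, b: str) -> str:
    """Pick the summary sharing more words with the reference (toy judge)."""
    ref = set(reference.split())
    return "A" if len(ref & set(a.split())) >= len(ref & set(b.split())) else "B"

examples = [  # (reference, model A summary, model B summary)
    ("the cat sat on the mat", "cat sat on mat", "a feline was seated"),
    ("stocks rose sharply today", "stocks rose today", "markets moved"),
    ("rain expected tomorrow", "sunny skies ahead", "rain likely tomorrow"),
]

wins_a = sum(judge(ref, a, b) == "A" for ref, a, b in examples)
print(f"Model A win rate: {wins_a / len(examples):.2f}")  # 0.67
```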

Rapid Evaluation (Preview):
Rapid Evaluation helps customers quickly evaluate models on smaller data sets when iterating on prompt design.

Vector Search (General Availability):
Vector search is a powerful technique for efficiently searching through massive datasets of high-dimensional data. Google Cloud’s Vertex AI platform offers Vector Search capabilities, and here’s a breakdown of its key aspects:

  • Represents data points as vectors: Each data point is converted into a multidimensional vector, allowing comparison based on semantic similarity rather than exact keyword matching.
  • Enables efficient similarity search: Given a query vector, Vector Search quickly retrieves data points from the collection that are most similar to the query.
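A minimal sketch of the core mechanic, using hand-made stand-in vectors in place of real embeddings (Vertex AI Vector Search does this at scale with approximate nearest-neighbor indexes):

```python
import math

# Minimal sketch of similarity search: items and the query are embedded as
# vectors, and results are ranked by cosine similarity instead of keyword
# overlap. The hand-made vectors below stand in for real embeddings.
def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

corpus = {
    "puppy photo":  [0.9, 0.1, 0.0],
    "kitten photo": [0.7, 0.3, 0.2],
    "tax form":     [0.0, 0.1, 0.9],
}

query = [0.85, 0.15, 0.05]  # pretend embedding of "young dog picture"
best = max(corpus, key=lambda name: cosine(query, corpus[name]))
print(best)  # the semantically closest item, not a keyword match
```

Note that the query never mentions "puppy"; the match comes from vector proximity, which is exactly what keyword search cannot do.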

AI Meetings and Messaging Add-on (General Availability):
Google Cloud introduced the AI Meetings and Messaging Add-on for Google Workspace, bringing a suite of AI-powered features to enhance communication and collaboration. Here’s a breakdown of its key functionalities:

  • Improved Communication Efficiency:
    i. Translate for Me:
    Enables real-time translation within Google Meet and Chat, facilitating communication across language barriers. The add-on automatically detects spoken languages and translates them for a seamless experience.
    ii. Adaptive Audio: Synchronizes audio and eliminates feedback when multiple participants join a meeting from the same room on their laptops, ensuring clear, uninterrupted audio for everyone.
  • Enhanced Content Management:
    i. Screenshare Watermark:
    Helps discourage the unauthorized distribution of shared content during meetings. You can add custom watermarks to protect sensitive information displayed on the screen.
  • Streamlined Information Access:
    i. On-Demand Conversation Summaries:
    Provides summaries of past conversations within Google Chat directly in the home view. This allows users to quickly catch up on missed discussions or refresh their memory on key points.

AI Security Add-on (General Availability):

  • Automatic Data Classification: Leverages privacy-preserving AI models to automatically identify, classify, and label sensitive files across your organization’s Google Drive. This includes things like financial data, customer information, or intellectual property.
  • Customizable Training: The AI models can be trained on your organization’s specific data classification definitions, ensuring they recognize and label sensitive information relevant to your business.
  • Integration with DLP (Data Loss Prevention): Classified files can be automatically flagged and protected with existing DLP controls within Google Workspace. This might include restricting access, preventing downloads, or encrypting sensitive data.
  • Improved Security Posture: By proactively identifying and classifying sensitive data, the AI Security Add-on helps organizations tighten their security posture and minimize the risk of data breaches or leaks.
  • The AI Security Add-on is now available for select Workspace plans with a cost of $10 per user, per month.

Gemini in Google Chat (Preview):
Gemini is currently available in Google Chat but as part of a limited preview for enterprise users. Within Google Chat, Gemini can help with tasks like writing, summarizing information, and brainstorming ideas.

Imagen 2.0 on Vertex AI (General Availability):
Google Cloud offers access to Imagen 2.0 through its Vertex AI platform. This allows developers to leverage its capabilities for various creative applications within a controlled environment.

  • Higher Photorealism: Imagen 2.0 builds upon its predecessor’s capabilities, generating even more photorealistic and high-fidelity images. This is achieved through advancements in diffusion models and training techniques.
  • Enhanced Style Control: The diffusion process allows for more precise control over the artistic style of the generated images. You can provide reference images or detailed descriptions to achieve a specific aesthetic look.
  • Text-to-Image with Multiple Languages: Imagen 2.0 can now create images from text prompts in multiple languages, expanding its accessibility and global reach.
  • Text-Filtering and Watermarking: To mitigate potential misuse, Imagen 2.0 integrates safeguards like text filters to prevent the generation of violent or offensive content. Additionally, Google DeepMind’s SynthID watermarks are embedded imperceptibly into the images, helping identify AI-generated content.
  • Captions and question-answer: Imagen 2’s enhanced image understanding capabilities enable customers to create descriptive, long-form captions and get detailed answers to questions about elements within the image.

Gemini in BigQuery (Preview):

  1. Data Exploration and Analysis Assistance:
  • Natural Language Query Explanation: Prompt Gemini in natural language to explain complex SQL queries within BigQuery. This can be particularly helpful for understanding queries you may not have written yourself or for those involving intricate logic.
  • Data Summarization: Use Gemini to generate summaries of your BigQuery tables or query results. This can help you quickly grasp the key insights from your data.
  • Data Insights Generation: Prompt Gemini to identify trends, patterns, or anomalies within your BigQuery data. This can assist you in uncovering hidden insights you might have missed through traditional analysis methods.

  2. AI-powered Data Preparation and Transformation:

  • Automatic Data Cleaning: Gemini might assist with identifying and correcting inconsistencies or errors within your BigQuery datasets. This can help ensure the quality and accuracy of your data for analysis.
  • Data Anonymization: Leverage Gemini to anonymize sensitive data in your BigQuery tables while preserving its analytical value. This can be crucial for adhering to data privacy regulations.
  • Feature Engineering Suggestions: Gemini could analyze your data and suggest potential features to be derived or engineered from existing data points within BigQuery. This can help improve the performance of machine learning models trained on your data.

Gemini in Databases:
Gemini in Databases, which is part of Gemini for Google Cloud, is an AI-powered database assistant that helps you optimize your database fleet and work with the data in your databases. Gemini in Databases helps simplify all aspects of database operations, including programming, performance optimization, fleet management, governance, and migrations.

Gemini in Databases provides AI assistance to help you in the following ways:

  • Reduce risk and optimize your database fleet with Database Center.
  • Provide code assistance in Database Studio.
  • Stay ahead of potential performance issues with Enhanced Query Insights.
  • Utilize assisted code and schema conversion in Database Migration Service.

BigQuery data canvas (Preview):
The new BigQuery data canvas provides a reimagined natural language-based experience for data exploration, curation, wrangling, analysis, and visualization, allowing you to explore and scaffold your data journeys in a graphical workflow that mirrors your mental model.

Natural Language-Driven Analytics:

  • Query by Example: Instead of writing complex SQL code, you can describe your desired analysis in natural language. BigQuery Data Canvas interprets your intent and generates the corresponding SQL queries. This makes data exploration more accessible, especially for users less familiar with SQL.
  • Interactive Refinement: As you explore your data, you can refine your analysis with natural language prompts. This allows you to iterate on your queries and delve deeper into specific aspects of your data without needing to rewrite SQL code from scratch.

Visual Data Exploration:

  • Directed Acyclic Graph (DAG) Visualization: BigQuery Data Canvas presents your analysis workflow as a DAG. This graphical representation allows you to visualize the connections between data sources, transformations, and visualizations, making it easier to understand complex data pipelines.
  • Integrated Data Visualization: You can directly create and customize charts and graphs within BigQuery Data Canvas to visualize your analysis results. This eliminates the need to switch between different tools for data exploration and visualization.
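The DAG idea can be sketched with Python's standard library: analysis steps are nodes, their inputs are edges, and a valid execution order is a topological sort of the graph. The step names are hypothetical illustrations, not data canvas API objects:

```python
from graphlib import TopologicalSorter

# Sketch of the DAG behind a data canvas workflow: each analysis step is a
# node mapped to the set of steps it depends on, and a valid execution
# order is a topological sort of the graph.
dag = {
    "load_sales_table": set(),
    "filter_2024_rows": {"load_sales_table"},
    "join_with_regions": {"filter_2024_rows"},
    "aggregate_by_region": {"join_with_regions"},
    "bar_chart": {"aggregate_by_region"},
}

order = list(TopologicalSorter(dag).static_order())
print(" -> ".join(order))
```

Representing the workflow this way is what lets a tool re-run only the downstream steps when an upstream node changes.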

Collaboration Features:

  • Shared Data Canvases: Collaborate with team members by sharing your BigQuery Data Canvas projects. This allows everyone involved to view, modify, and contribute to the analysis, fostering a collaborative data exploration experience.
  • Version Control: BigQuery Data Canvas provides version control functionality, enabling you to track changes made to your analysis and revert to previous versions if needed.

Direct Access to Vertex AI from BigQuery (Preview):
The direct integration between BigQuery and Vertex AI now enables seamless preparation and analysis of multimodal data such as documents, audio, and video files. BigQuery features rich support for analyzing unstructured data using object tables and the Vertex AI Vision, Document AI, and Speech-to-Text APIs. Google has now enabled BigQuery to analyze images and video using Gemini 1.0 Pro Vision, making it easier than ever to combine structured and unstructured data in data pipelines using the generative AI capabilities of the latest Gemini models.

BigQuery makes it easier than ever to execute AI on enterprise data by providing the ability to build prompts based on your BigQuery data, and to use LLMs for sentiment extraction, classification, topic detection, translation, data enrichment, and more.
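The prompt-building pattern can be sketched in Python. In BigQuery itself this happens in SQL against a remote Gemini model; the rows and prompt wording below are illustrative assumptions:

```python
# Sketch of the prompt-building pattern: turn each table row into an LLM
# prompt for sentiment extraction. The rows stand in for a BigQuery table
# and the wording is illustrative only.
rows = [
    {"review_id": 1, "text": "Shipping was fast and the product works great."},
    {"review_id": 2, "text": "Arrived broken and support never replied."},
]

def sentiment_prompt(row: dict) -> str:
    return (
        "Classify the sentiment of this customer review as "
        f"positive, negative, or neutral:\n\n{row['text']}"
    )

prompts = [sentiment_prompt(r) for r in rows]
print(prompts[1])
```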

Gemini in Looker(Preview):

Introduction of Conversational Analytics:

  • Natural Language Exploration: Looker now offers conversational analytics powered by Gemini. This allows you to ask questions about your data in plain English directly within the Looker interface. Gemini interprets your intent and retrieves relevant insights or generates visualizations based on your query. This makes data exploration more accessible for users who may not be familiar with writing complex Looker expressions.
  • Multi-Turn Conversations: You can engage in a back-and-forth conversation with Gemini to refine your analysis. Imagine asking an initial question about a specific trend, and then using follow-up questions to drill down into specific aspects or compare it to other metrics. This conversational approach allows for a more iterative and interactive exploration of your data.

Beyond Basic Exploration:

  • Report and Visualization Generation: In addition to answering questions, Gemini can assist with creating reports and visualizations. You can provide a high-level description of the insights you want to convey, and Gemini can generate draft reports or suggest visualizations tailored to your needs. This streamlines the process of creating data-driven reports.
  • Formula Assistance: Stuck on a complex Looker formula? Gemini can offer suggestions and guidance to help you build the formulas you need for your analysis. This can save time and reduce errors when working with Looker expressions.

Gemini integration in Looker represents a significant step forward in making business intelligence (BI) more accessible and user-friendly. By leveraging natural language interaction and AI-powered assistance, Gemini empowers users to get more value out of their data within the Looker platform.

Gemini Code Assist (formerly Duet AI for Developers):
Gemini Code Assist is an AI-powered coding assistant unveiled at Google Cloud Next 2024. It integrates with various development environments and offers functionalities that empower developers throughout the software development lifecycle (SDLC).

  • Code Completion: As you type your code, Gemini Code Assist suggests relevant code snippets, function calls, and variable names, accelerating your development process. This feature supports over 20 programming languages!
  • Full Codebase Awareness (Preview): This advanced capability, powered by Gemini’s large language model, allows you to make large-scale changes across your entire codebase with a single prompt. Imagine adding new features, updating dependencies, or performing comprehensive code reviews, all through intuitive natural language instructions.
  • AI-powered Code Generation: Stuck on a specific functionality? Gemini Code Assist can generate complete code blocks or even entire functions based on your descriptions, saving you time and effort.
  • Natural Language Prompts: No need to memorize complex syntax! You can interact with Gemini Code Assist using natural language to ask questions about specific coding tasks or request code snippets for functionalities you want to implement.

Gemini Cloud Assist (Preview):
Gemini Cloud Assist is a powerful tool that leverages cutting-edge AI to streamline application lifecycle management within Google Cloud Platform (GCP).

Core functionalities:

  • Design and Deployment: Assists with designing new cloud architectures and efficiently deploying workloads onto GCP. This might involve infrastructure provisioning, configuration management, and resource allocation based on your specific requirements.
  • Troubleshooting and Optimization: Helps diagnose issues within your GCP environment, pinpoint root causes of problems, and identify areas for performance optimization. This can save valuable time and resources in resolving cloud-related issues.
  • Management and Automation: Simplifies key cloud workflows through automation and intelligent guidance. Gemini Cloud Assist can automate repetitive tasks and offer suggestions for optimizing your cloud configuration.

Gemini in Threat Intelligence (Preview):
Gemini can answer questions related to threat intelligence about topics such as threat actors, their associations, and their behavior patterns. You can enter your questions into the Gemini pane.

  • Leveraging Generative AI: Gemini utilizes generative AI techniques to enhance threat intelligence analysis within the Mandiant security platform, now integrated with Google Cloud.
  • Conversational Search: Security analysts can interact with Gemini through a conversational search interface, allowing them to ask questions about threat actors, indicators of compromise (IOCs), and other threat intelligence topics in natural language. This eliminates the need for complex queries and simplifies information retrieval.
  • Enhanced Research Efficiency: Gemini automates tasks like web crawling for relevant Open-Source Intelligence (OSINT) articles. It then summarizes the information and presents it to the analyst, saving them valuable time and effort during threat research.

Gemini in Security Operations (Preview):
Gemini provides investigation assistance that can be accessed from anywhere in Chronicle. Gemini can assist with your investigations by providing support for the following:

  • Search: Gemini can help you build, edit, and run searches targeted toward relevant events using natural language prompts. Gemini can also help you iterate on a search, adjust the scope, expand the time range, and add filters. You can complete all these tasks using natural language prompts entered into the Gemini pane.
  • Search summaries: Gemini can automatically summarize search results after every search and subsequent filter action. The Gemini pane summarizes the results of your search in a concise and understandable format. Gemini can also answer contextual follow-up questions about the summaries it provides.
  • Rule generation: Gemini can create new YARA-L rules from the UDM search queries it generates.
  • Security questions and threat intelligence analysis: Gemini can answer general security domain questions. Additionally, Gemini can answer specific threat intelligence questions and provide summaries about threat actors, IOCs, and other threat intelligence topics.
  • Incident remediation: Based on the event information returned, Gemini can suggest follow-on steps. Suggestions might also appear after filtering search results. For example, Gemini might suggest reviewing a relevant alert or rule, or filtering for a specific host or user.

Gemini in Security Command Center (Preview):
Gemini integrates with SCC, a central platform for security operations within Google Cloud, to enhance threat management through:

  • AI-powered Alert Summarization: Security analysts are bombarded with alerts. Gemini can automatically analyze and summarize critical and high-priority alerts, highlighting potential exploits and attack paths within SCC. This helps analysts prioritize the most critical threats and take action quickly.
  • Threat Scenario Simulation: Beyond summaries, Gemini can analyze security telemetry data and simulate potential attack paths based on the identified threats. This allows analysts to understand the potential scope and impact of a threat, enabling proactive mitigation strategies.
  • Recommendation for Remediation Actions: By simulating attack paths, Gemini can suggest potential remediation actions to mitigate the identified threats. This can expedite the response process and minimize potential damage.

Google Cloud Next 2024 has come to a close, and what a whirlwind it’s been! From the unveiling of Imagen 2.0’s revolutionary image generation capabilities to the introduction of Gemini’s transformative presence across various Google Cloud platforms, this year’s conference showcased a clear theme: AI is fundamentally changing the way we interact with and utilize cloud technologies.

This blog post has explored a multitude of these exciting announcements, delving into how Gemini empowers data exploration in BigQuery with BigQuery Data Canvas, simplifies threat intelligence analysis, and streamlines security operations within the Security Command Center. We’ve also discussed the potential of Vertex AI’s deeper integration with BigQuery and the future possibilities of editing within Imagen 2.0.

While there’s still much to discover as these advancements evolve, Google Cloud Next 2024 has painted a vivid picture of a future where AI acts as our partner, assisting us in unlocking the full potential of the cloud.


Feel free to connect on LinkedIn!
