
Google Research

Technology, Information and Internet

Impossible? Let's see.

About us

From conducting fundamental research to influencing product development, our research teams have the opportunity to impact technology used by billions of people every day. We aspire to make discoveries that impact everyone, and sharing our research and tools to fuel progress in the field is fundamental to our approach.

Website
https://research.google/
Industry
Technology, Information and Internet
Company size
1,001-5,000 employees

Updates

  • The biggest barrier for AI applications in Africa isn't model complexity - it's the scarcity of data for the 2,000+ languages spoken there. We just released WAXAL, an open-access dataset that delivers 2,400+ hours of high-quality speech data for 27 Sub-Saharan African languages, serving 100M+ speakers. Crucially, this community-rooted effort - led by African organizations - changes the roadmap for truly inclusive voice AI. Learn more and check out the WAXAL dataset here: goo.gle/4cy4dqC

    Note: We have updated this post with a corrected map. We apologize for the previous error and appreciate the community feedback that helped us correct it.

    • Launched in 2021 as a multi-year collaboration with African academic and community organizations, WAXAL provides the high-quality, permissively licensed data needed to build robust speech systems. 

  • Google Research reposted this

    Today, we are introducing Groundsource ✨, a new AI-powered methodology that transforms public information into a high-quality record of historical disaster data - starting with flash floods in urban areas.

    When disaster strikes, high-quality information is a lifeline. For years, our Crisis Resilience efforts at Google Research have focused on providing early warnings for natural hazards, including severe riverine flood prediction covering 2 billion people in over 150 countries. However, a significant challenge remained: for many disasters, like flash floods, the high-fidelity historical data needed to train predictive AI models simply did not exist.

    Groundsource addresses this data gap by using Gemini to analyze decades of public information, transforming unstructured data into a structured, high-quality archive. This allows us to map the historical footprint of disasters with high precision and build a robust scientific baseline. Here is how we are applying this research to keep communities safe:

    ✨ Transforming Public Information: Groundsource used 20 years of news reports in 80 languages to identify over 2.6 million historical flood events spanning more than 150 countries.
    ✨ Precise Mapping: By integrating with Google Maps, the system determines exact geographic boundaries for these events, creating a dataset for flash floods.
    ✨ Predicting Urban Flash Floods: Using this new dataset, we trained a model capable of predicting flash floods in urban areas up to 24 hours in advance.
    ✨ Expanding Flood Hub: These new forecasts are now available on Google’s Flood Hub, complementing our existing riverine flood models that already cover 2 billion people in more than 150 countries.

    We are open-sourcing this dataset to provide a massive benchmark for partners and scientists to scale their impact. Groundsource joins our Google Earth AI family of geospatial models and datasets - while we are starting with flash floods in urban areas, this methodology has the potential to be applied to other hazards like landslides or heat waves, turning verified global reports into actionable datasets. By turning the records of the past into high-quality data for the future, we are moving closer to our goal: a world where no one is surprised by a natural disaster.

    Read about the research: https://lnkd.in/dqc9S2XY
    More about the Groundsource methodology: https://lnkd.in/dYshXEmS
    More about protecting cities with AI-driven flash flood forecasting: https://lnkd.in/dQ32uRkG
    Google Earth AI: ai.google/earth-ai
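    The core extraction step described above - turning a pile of unstructured reports into a deduplicated archive of structured event records - can be sketched in miniature. This is an illustrative Python sketch only, not Google's implementation: `FloodEvent` and `parse_extracted_reports` are hypothetical names, and the JSON below stands in for what an LLM extraction pass (Gemini, in Groundsource's case) might emit from raw news text.

```python
import json
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen -> hashable, so records can be deduplicated in a set
class FloodEvent:
    """One structured record distilled from unstructured reporting."""
    date: str    # ISO date of the reported event
    place: str   # place name, to be geocoded downstream
    hazard: str  # e.g. "flash_flood"


def parse_extracted_reports(raw_json: str) -> list[FloodEvent]:
    """Turn structured records (as an LLM extraction pass might emit
    them) into a deduplicated event archive."""
    seen: set[FloodEvent] = set()
    events: list[FloodEvent] = []
    for rec in json.loads(raw_json):
        ev = FloodEvent(rec["date"], rec["place"], rec.get("hazard", "flash_flood"))
        if ev not in seen:  # the same flood is often covered by many outlets
            seen.add(ev)
            events.append(ev)
    return events


# Two outlets reporting the same event collapse into a single record.
sample = json.dumps([
    {"date": "2019-07-14", "place": "Lagos", "hazard": "flash_flood"},
    {"date": "2019-07-14", "place": "Lagos", "hazard": "flash_flood"},
    {"date": "2021-03-02", "place": "Jakarta", "hazard": "flash_flood"},
])
print(len(parse_extracted_reports(sample)))  # 2
```

    The real pipeline adds the hard parts this sketch omits: the LLM extraction itself across 80 languages, and geocoding each place name into precise geographic boundaries via Google Maps.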

  • Introducing Groundsource: Scaling Global Disaster Resilience with AI and Open Data

    We are excited to share a major step forward in our crisis resilience efforts. A critical data gap has long hindered the training of AI models for predicting flash floods. Our response is Groundsource, a new AI-powered methodology that transforms decades of public information into a high-quality historical archive. By utilizing Gemini to analyze over 5 million reports, we identified 2.6 million flood events across 150 countries.

    Why this matters for the scientific community:
    • Open-Source Benchmark: A massive new dataset to train better prediction models, enable more effective disaster risk management, optimize emergency response, and revolutionize urban planning.
    • Closing the Gap: Critical data for regions, like Africa and Southeast Asia, that have lacked historical records.
    • Scalable Approach: The same methodology can be applied to other disasters, like landslides and heatwaves.

    The flash flood model, trained on the Groundsource dataset, is now active in Google’s Flood Hub tool, designed to provide up to 24 hours of advance notice of flash floods in urban areas.

    Dive into the details and access the dataset: https://goo.gle/4bgv7B2
    Learn how this work enabled urban flash flood forecasting: goo.gle/4sELTAO

    #GoogleResearch #FloodHub #Gemini #MachineLearning #Hydrology #CrisisResilience


    Introducing our latest research, in partnership with Beth Israel Deaconess Medical Center (BIDMC), exploring the feasibility of conversational diagnostic AI in real-world clinical workflows. While previous demonstrations of our Articulate Medical Intelligence Explorer (AMIE) utilized simulated settings, this prospective study assessed AMIE’s performance during pre-visit history taking with 100 adult patients in an ambulatory primary care clinic, prior to their scheduled urgent care visit with a primary care provider (PCP).

    Key findings from the study include:
    • Zero safety stops were required by human AI supervisors across all patient interactions.
    • AMIE’s differential diagnosis (DDx) accuracy was 90% within its top 7 possibilities, compared to the patient’s final diagnosis as extracted from the chart 8 weeks post-encounter.
    • Clinical evaluators rated AMIE’s management plans on par with PCPs’ for appropriateness and safety, while PCPs outperformed AMIE on cost-effectiveness and practicality of plans.
    • Patient trust in AI increased significantly after interacting with AMIE.
    • PCPs noted that preparing for the visit with AMIE helped to shift the dynamic from simple data gathering to more collaborative care.

    This research marks a crucial milestone in our evidence-based roadmap toward using generative AI to assist clinicians and increase access to care. Learn more: https://goo.gle/4rkyqgJ
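    The "90% within its top 7" figure above is a standard top-k accuracy metric: a case counts as a hit if the confirmed diagnosis appears anywhere among the model's k highest-ranked possibilities. A minimal sketch of how such a number is computed (illustrative only; the function name and toy data are invented, not from the study):

```python
def topk_ddx_accuracy(ddx_lists, final_dx, k=7):
    """Fraction of cases whose final diagnosis appears in the
    model's top-k differential diagnosis (DDx) list."""
    hits = sum(dx in ddx[:k] for ddx, dx in zip(ddx_lists, final_dx))
    return hits / len(final_dx)


# Toy example: the final diagnosis is in the top 7 for 2 of 3 cases.
ddx = [
    ["flu", "covid", "strep"],         # final: covid    -> hit
    ["gerd", "ulcer"],                 # final: angina   -> miss
    ["migraine", "tension headache"],  # final: migraine -> hit
]
print(topk_ddx_accuracy(ddx, ["covid", "angina", "migraine"]))
```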

  • Today we’ve published two studies in Nature Cancer on how AI can improve breast cancer detection. This new research, conducted with Imperial College London and NHS England, marks a turning point in screening technology and reveals how AI can strengthen early detection efforts.

    Breast cancer affects one in every eight women in the UK, and early detection is crucial. Our latest research shows how AI can strengthen those early detection efforts.

    ⚕️ Today, in Nature Cancer, we are sharing a pair of new studies conducted with Imperial College London and NHS England, showing that our experimental research AI-based screening system identified 25% more “interval cancers” - cases typically missed by traditional screening - than conventional methods alone. Additionally, the research found AI is capable of reducing screening workloads for radiologists by an estimated 40%.

    This work marks a significant milestone in a long-term progression. These findings build on our 2020 retrospective study published in Nature, which found an earlier version of this AI-based screening system could detect cancers in a single-reader setting. Collectively, these studies show how AI has the potential to strengthen early detection efforts, paving the way for more people to be diagnosed and treated sooner, with the ultimate goal of helping save lives.

    Key insights from the published studies:
    ⚕️ Closing the Detection Gap: In a study of over 125,000 women, the AI-based process identified 25% of the total interval cancers (cancers detected between scans) previously missed.
    ⚕️ Giving radiologists more time for patient care: Our second study, of over 50,000 women, showed that using AI as a “second reader” can safely reduce specialist workloads by an estimated 40%, helping to address the global shortage of radiologists.
    ⚕️ Human-AI Interaction: We conducted research into the “arbitration” process, analyzing how specialists interact with AI to resolve diagnostic disagreements. While arbitration successfully filters out false positives, we observed a critical tension during our simulated review: arbitration panel specialists occasionally overruled AI-detected cancers that would otherwise have gone undetected. These findings highlight the need for continued research on human-AI interaction to build specialist trust in AI’s ability to catch subtle, early-stage cancers.
    ⚕️ Feasibility study: We evaluated the challenges of integrating AI into clinical workflows through a feasibility study across 12 NHS screening sites in London, processing over 9,000 cases in real time without using AI results to influence patient care. A key lesson was that AI isn’t a “plug-and-play” solution: it requires careful and continuous calibration to the unique heartbeat of each hospital, and adaptation to shifting workflows, evolving equipment, and diverse patient populations.

    Read more about this milestone in this blog: https://lnkd.in/dZ_-U9d3
    Full papers in Nature Cancer: https://lnkd.in/dFrDqgh3 https://lnkd.in/d-tkEBXE
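    The “second reader” workflow behind the 40% figure follows the UK double-reading protocol: every mammogram gets two opinions, agreement is final, and disagreement goes to an arbitration panel. A schematic Python sketch of that triage logic (illustrative only; function names are invented and no clinical thresholds or imaging are modeled):

```python
def triage(reader1_recall: bool, reader2_recall: bool) -> str:
    """Double-reading decision rule: agreement is final,
    disagreement is escalated to an arbitration panel."""
    if reader1_recall == reader2_recall:
        return "recall" if reader1_recall else "routine"
    return "arbitration"


def second_reads_saved(cases) -> int:
    """Count human second reads avoided when AI stands in as reader 2:
    only arbitration cases still need an extra human opinion.
    `cases` is a list of (human_recall, ai_recall) pairs."""
    return sum(triage(human, ai) != "arbitration" for human, ai in cases)


cases = [(False, False), (True, True), (False, True), (True, False)]
print([triage(h, a) for h, a in cases])
# ['routine', 'recall', 'arbitration', 'arbitration']
print(second_reads_saved(cases))  # 2
```

    In this toy run, half the cases resolve without a second human read; the studies' point is that in practice the agreement rate is high enough to cut specialist workload by an estimated 40% while arbitration still catches disagreements.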

  • At 11 AM today, David Fleet, Researcher at @GoogleDeepMind, will be at #SALA2026 to present a Foundational Session on "The surprising effectiveness of generative diffusion models". He will address crucial questions about model capabilities beyond text-to-image/video generation.

  • At 1:30 PM this afternoon, Pablo Samuel Castro, Research Scientist at @GoogleDeepMind, will kick off the #SALA2026 afternoon sessions with a talk on "Vibe Coding". Check it out to learn the history of vibe coding and how to use it to make your coding workflow faster and more creative.

