YouTube Updates Profanity Guidelines & Deploys AI to Identify Teen Users.

YouTube Update for Creators
Key Takeaway.
  • YouTube relaxes profanity rules, allowing limited strong language in monetized videos.
  • New AI tech will detect teen users and auto-enable safety features starting August 13, 2025.

YouTube has announced a dual rollout: updated profanity rules for creators under its Advertiser-Friendly Guidelines, alongside AI-powered age estimation technology to identify teen viewers and automatically enforce protections.

Looser Profanity Rules for Ad Monetization.

YouTube is loosening its stance on strong language in monetized videos. Under the revised guidelines, videos containing stronger language, such as the f-word, may still be eligible for ad revenue, depending on placement and frequency. This offers creators more flexibility, particularly when language is used for artistic or expressive purposes.

While still discouraged early in a video, profanity beyond the first 7 seconds may no longer automatically disqualify a video from monetization. This shift reflects YouTube’s increasingly nuanced approach to content that balances realistic dialogue with advertiser comfort.

AI-Driven Teen Identification & Automatic Safeguards.

Simultaneously, YouTube is implementing an AI age estimation system in the U.S., set to begin rolling out on August 13, 2025, to automatically detect users under 18—even if they misreport their birthdate. 

If the AI flags an account as underage, YouTube will activate existing protections for teens:

  • Disabling personalized ads
  • Enabling digital well-being tools: screen time reminders, bedtime alerts
  • Restricting repeated exposure to sensitive or body-image content
  • Blocking age-restricted videos unless the user verifies they are over 18 

Users mistakenly identified as teens can still contest the decision by verifying their age through government-issued ID, credit card, or selfie.

YouTube says it will initially test the system with a small group in the U.S. before a broader rollout, and closely monitor its performance.

Important Update for Creators.

These two updates together mark a shift in YouTube's efforts to balance creator freedom with safety and brand trust:

The relaxed profanity policy offers creators more flexibility while maintaining advertiser-friendly standards.

AI-based teen detection enables broader enforcement of protections without relying on user honesty or manual reporting.

Creators, especially those targeting younger audiences or using strong language, should understand these changes. Teen users are now subject to stricter content delivery protections regardless of what age they enter during sign-up.

Google Photos Perspective Correction Tool Goes Missing for Many Users.

Google Photos Logo Open on Android
Key Takeaway.
  • Google Photos removes the perspective correction tool.
  • Users report missing feature with no official response.

Recently, Google Photos has removed its long-standing perspective correction (crop/keystone) tool, frustrating photographers and everyday users who relied on it for straightening skewed shots and emulating scans. Reports across Reddit and Google’s own support forums confirm that the option has disappeared in recent app versions.

This feature, embedded within the Crop editing tools, was appreciated for correcting angular distortions in photos of documents, artwork, or real estate. Though not the most widely used tool, its removal has sparked surprise and complaints.

The vanished function was once easily accessible as a skew-adjustment overlay in Google Photos’ Crop tool. Now, affected users—across Android and the web interface—report the feature is simply gone from editing menus. 

Google Photos Editing

The change seems to have appeared abruptly, likely tied to a recent app update that removed the tool without warning. Notably, it's still visible in older app versions like Google Photos v7.38, suggesting the removal is limited to newer releases. 

Community reactions echo frustration: Reddit users on r/GooglePixel chimed in with comments like:

“I used it a lot to correct the perspective of photos I took too fast … Keystone correction is missing from the Crop tools.”

“At least a few times per week. ... I’m especially bummed that it's gone.” 

Some speculate the tool was removed due to low usage, or that user metrics excluded power users who disabled anonymous data sharing.

The functionality is still accessible via alternative apps such as Snapseed or Google Drive’s document scanner, but these are not ideal substitutes for seamless in‑Photos editing.

Affected users can try downgrading the Photos app or uninstalling and reinstalling via the Play Store to revert to an older build. However, this is a temporary fix and may become unavailable as Google continues updating the app.

As of now, Google has not officially addressed whether the removal is intentional, a bug, or part of a broader redesign. With the Pixel 10 launch drawing attention, some community members feel the timing suggests lower-priority users are being overlooked.

Google Messages Gets Full‑Screen “Details” Page.

Google Messages
Key Takeaway.
  • Google Messages has replaced the old pop-up with a full-screen message details view for better readability.
  • New visual icons now clearly indicate if a message is sent, delivered, or read.

The Google Messages app is receiving a significant UI enhancement to its message details page, switching from cramped pop-ups to a striking full‑screen redesign based on Material 3's Expressive layout elements.

Previously, long‑pressing a message and opening “View details” showed a small dialog overlaying part of the chat. Now, users see a clean full‑screen view that previews the selected message along with delivery metadata like sent, delivered, and read status. These indicators use new visual cues: a checkmark circle means sent, two checkmarks mean delivered, and a filled‑in circle after two checkmarks means read.

The new “Details” page also displays the sender’s name and phone number—but omits fields like message type (e.g. RCS with end‑to‑end encryption) and priority levels. This marks the first time Google Messages uses M3 Expressive containers, setting the stage for more such UI upgrades across the app.

What is Included in the Design Upgrade?

The redesign is rolling out broadly, including on both stable (build 20250713_01_RC04) and beta (20250725_02_RC00) versions of Google Messages. While full redesigns for Android phones are still in beta, Wear OS versions already display M3 Expressive styling, with tinted buttons, sleek bubbles, and refreshed icons.

Earlier this month, Google began blending camera and gallery access into a unified interface and now supports sending media in two quality levels: HD for optimized sharing and HD+ for original quality. The revamped message field limit now spans up to 14 lines, up from just four.

These UI refinements coincide with broader messaging improvements like group chat customization, spam and sensitive content warnings, and better support for RCS and MLS encryption across platforms.

Why Does the Google Messages Update Matter?

This update enhances both usability and presentation: no more cropping screenshots to hide irrelevant chat content, and the full‑screen preview delivers visual clarity. “View details” is now a functional hub, not just a modal box.

By adopting Material 3 Expressive design, Google is unifying the look and feel of Messages across devices, offering users a consistent experience whether on Android or Wear OS. Enhanced status indicators and clearer UX also improve message tracking and reliability.

Looking ahead, expect Google to extend M3 Expressive styling to other areas of the app—potentially conversation view, media viewer, and group settings. Additional message details like encryption status and priority labels may also be included. If you're using the beta or stable version cited above, look for updates via the Play Store.

Google’s NotebookLM Introduces AI‑Powered Video Overviews.

Google is rolling out significant upgrades to NotebookLM, expanding its AI-powered research tool with a new Video Overviews format and a revamped Studio panel for enhanced content creation and multitasking.

The newly launched Video Overviews feature transforms dense information into narrated slideshow-style presentations. These AI-generated visuals integrate diagrams, quotes, data points, and images extracted directly from user-uploaded documents, making complex ideas more intuitive to understand. Users can tailor the output by specifying learning goals, audience, and specific segments to focus on, such as chapter-specific content or expert-level theories.

Video Overviews act as a visual counterpart to NotebookLM’s existing Audio Overviews and are now available to all English-language users, with additional languages and styles expected in upcoming updates.

Studio Panel Upgrades: Smarter Creation & Multi‑Output Workflows

NotebookLM’s Studio panel is also receiving a major upgrade. Users can now create and store multiple versions of the same output type (e.g., several Audio Overviews or Video Overviews) within a single notebook. This flexibility supports various use cases:

  • Publish content in multiple languages or perspectives.
  • Tailor outputs for different roles or audiences (e.g., student vs. manager).
  • Segment study material by chapters or modules using separate overview videos or guides.

The updated Studio interface introduces a clean layout featuring four tiles—Audio Overview, Video Overview, Mind Map, and Report—for quick access. All generated content is indexed below the tiles, and users can multitask—for instance, listening to an Audio Overview while exploring a Mind Map or reviewing a Study Guide.

NotebookLM, first launched in July 2023 and powered by Google’s Gemini AI, is also known for its Audio Overviews, which present document insights in conversational, podcast-style formats.
These new Video Overviews bring a visual dimension, essential for explaining data, workflows, diagrams, and abstract ideas more effectively.

According to internal disclosures, Google introduced Audio Overviews across more than 80 languages earlier this year, which doubled daily audio usage and significantly expanded user engagement. User feedback has driven numerous updates, including enhanced customization, in-app feedback tools, community-driven enhancements, and broader accessibility.

These additions cap a series of recent improvements, like “Featured Notebooks” (curated content from partners such as The Atlantic and The Economist) and automatic source discovery.

Google Pixel Phones Get Major GPU Boost with GameHub Support.

PC Emulator
Key Takeaway.
  • Pixel phones now support GameHub with improved Mali GPU performance.
  • Google’s updates boost GPU power by up to 60%, improving gaming and graphics.

GameSir, the company behind the GameHub PC game emulation platform, has confirmed that its latest update now supports Mali GPUs, including those used in Google Pixel devices. This support brings better performance in game emulation, especially for high-end titles, even though the feature was originally aimed at MediaTek-powered phones.

At the same time, Google's own updates to Pixel phones have quietly delivered a huge leap in GPU performance. Benchmarks show a boost of up to 60% on some Pixel models, including the Pixel 7a, Pixel 6a, and Pixel 8.

GameHub Now Enhances Pixel Gaming.

GameSir’s GameHub app allows Android phones to emulate PC games, and it now runs better on Pixel devices. According to the company, the new update optimizes Mali GPU performance using MediaTek’s GPU driver enhancements, and this extends to Pixel phones that use the same GPU family. Although no Pixel model uses a MediaTek chip, they still use Mali GPUs built into Google’s custom Tensor processors, which benefit from these changes.

With this update, Pixel users can expect smoother gameplay, better frame rates, and improved emulation experiences in demanding games.

Pixel Phones Quietly Get Driver-Based Performance Boost.

In addition to GameHub support, Google has been pushing GPU driver updates that dramatically improve performance. For example:

  • Pixel 7a: ~62% improvement in GPU benchmark scores
  • Pixel 8: ~32% increase
  • Pixel 6a: ~23% boost

These improvements were delivered via software updates, not hardware changes, showing how powerful optimization can be. Users have reported noticeable improvements in real-world games like Fortnite and Genshin Impact, where higher frame rates and more stable gameplay are being observed.

However, it’s worth noting that future Pixel models like the rumored Pixel 10 may use a different GPU (from Imagination), which may not benefit from these changes.

Chromebooks Get Closer to Linux with New Terminal Feature for Graphical Apps.

Linux Terminal
Key Takeaway.
  • Google is testing a new glaunch command that lets Chromebook users start graphical Linux apps directly from the Terminal.
  • This feature simplifies the Linux app experience on ChromeOS and could become a game-changer for developers and power users.

Google is quietly testing a powerful update for ChromeOS that could reshape how developers and Linux enthusiasts use Chromebooks. A new terminal command, glaunch, is being introduced to let users run graphical Linux applications directly from the Linux Terminal, marking a big step forward for Crostini, the built-in Linux environment on ChromeOS.

What’s New with glaunch?

Until now, using Linux apps on ChromeOS required navigating through app drawers or setting up complex launch commands. With glaunch, users can start Linux-based graphical apps like GIMP or Inkscape directly from the Terminal, making the experience faster and more intuitive.

For example, typing glaunch gimp in the Terminal would launch GIMP with its full graphical user interface, just as if you opened it from the system menu. This is especially useful for developers, creators, and power users who rely heavily on Terminal workflows.

How to Use It (Step-by-Step Guide)

Here’s how to test the new feature if you’re on a supported Chromebook:

Step 1: Enable the Linux Environment

  • Go to Settings > Developers > Turn on Linux development environment.
  • Follow the prompts to install Crostini.

Step 2: Install a Graphical Linux App

Open Terminal and update packages:

sudo apt update && sudo apt upgrade

Install a GUI app like GIMP:

sudo apt install gimp

Step 3: Launch the App Using glaunch

glaunch gimp

If your app supports a GUI and you have the right permissions, the application should open immediately.

This feature is particularly valuable for developers who often test open-source Linux apps, educators using scientific software, and students learning code or design. It lowers the entry barrier for using full-fledged Linux programs within the lightweight Chromebook ecosystem.

Currently, glaunch is only available in the Canary channel of ChromeOS, which is reserved for experimental features. There’s no official timeline for its stable release, but its appearance suggests it may soon be part of ChromeOS’s mainstream Linux experience.

Google Workspace Strengthens Account Security with Passkeys and Device-Bound Credentials.

Google Workspace
Credit: Google
Key Takeaway.
  • DBSC binds session cookies to the user’s device, making stolen cookies unusable on other devices, even if credentials are compromised.
  • Google recommends enabling DBSC with passkeys and context-aware access to safeguard enterprise accounts from phishing and cookie-based attacks.

Google Workspace has introduced a new security layer called Device Bound Session Credentials (DBSC) to help prevent attackers from hijacking accounts using stolen session cookies. The feature is now available in beta for Chrome users on Windows and is part of Google’s effort to strengthen enterprise account security.

How DBSC Enhances Session Security.

DBSC ties session cookies to the specific device used during authentication. When a user logs in, Chrome generates a unique public/private key pair—ideally stored in a Trusted Platform Module (TPM)—and binds the session cookie to this key. This means that stolen cookies cannot be reused from another device, significantly reducing the risk of remote account takeovers.

Google says this approach helps block malware-based attacks that steal session tokens after login, including those that bypass multi-factor authentication (MFA). By binding sessions to devices, attackers lose the value of exfiltrated cookies unless they have full access to the original hardware.
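The binding idea can be sketched in a few lines of Python. This is an illustrative toy, not Google's protocol: real DBSC registers a public key whose private half stays in the TPM and answers server challenges with signatures, whereas here a symmetric HMAC key stands in for the key pair, and the `Device`, `Server`, `prove`, and `verify` names are invented for the sketch.

```python
# Toy sketch of the idea behind Device Bound Session Credentials (DBSC).
# Assumption: real DBSC uses an asymmetric TPM-held key and signed challenges;
# an HMAC key registered at login stands in for the key pair here.
import hashlib
import hmac
import secrets


class Device:
    """Holds a secret that never leaves the 'device' (the TPM in real DBSC)."""

    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)

    def register_key(self) -> bytes:
        # Stand-in for registering a public key at login; a real TPM would
        # never export the private key like this.
        return self._key

    def prove(self, challenge: bytes) -> bytes:
        # Answer the server's challenge using the device-bound secret.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()


class Server:
    """Binds each session cookie to the key registered at login."""

    def __init__(self) -> None:
        self._sessions: dict[str, bytes] = {}

    def login(self, registered_key: bytes) -> str:
        cookie = secrets.token_hex(16)
        self._sessions[cookie] = registered_key  # cookie is now device-bound
        return cookie

    def new_challenge(self) -> bytes:
        return secrets.token_bytes(16)

    def verify(self, cookie: str, challenge: bytes, proof: bytes) -> bool:
        key = self._sessions.get(cookie)
        if key is None:
            return False
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)


device = Device()
server = Server()
cookie = server.login(device.register_key())

# The legitimate device answers the server's fresh challenge correctly.
challenge = server.new_challenge()
print(server.verify(cookie, challenge, device.prove(challenge)))    # True

# An attacker who stole only the cookie lacks the device-bound key.
attacker = Device()  # different hardware, different key
challenge = server.new_challenge()
print(server.verify(cookie, challenge, attacker.prove(challenge)))  # False
```

The point the sketch illustrates is the one Google makes above: the cookie string alone is worthless, because every request must also carry a fresh proof that only the original device can produce.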

Session cookie theft has become a major threat, especially when targeted at enterprise users or high-profile accounts. Attackers use malware, malicious browser extensions, or man-in-the-middle phishing tools to capture authentication tokens, then reuse them to access services like Gmail, Google Drive, or Microsoft 365 without needing passwords or MFA codes.

By rolling out DBSC, Google is responding to a surge in token theft attacks observed in 2025. The feature aims to reduce account compromise even when login credentials are stolen.

How to Enable and What It Requires.

Workspace administrators can enable DBSC for their organization through Chrome policies or settings. The feature is currently supported on Chrome for Windows operating systems where TPM capabilities are available. Google also recommends combining DBSC with passkeys and context-aware access (CAA) to further reinforce its effect.

As Google rolls out broader support for DBSC, identity platforms like Okta and other browsers, including Microsoft Edge, have expressed interest in participating. Google is also working on open web standards to promote widespread adoption.

Looking Ahead

DBSC represents a shift in how session security is managed. Traditional cookie-based authentication, even when hardened with MFA, remains vulnerable if cookie theft occurs after login. With DBSC, even if attackers steal authentication tokens, they cannot exploit them from another device.

Google plans to extend DBSC to more platforms in the future and advance threat detection via its Shared Signals Framework (SSF), allowing security tools and identity providers to share risk signals in near-real time.

Google Tests AI-Powered Icon Theming for Pixel Phones.

Pixel Phone Theme Update
Key Takeaway.
  • Pixel phones are gaining AI-powered icon theming that can unify app icons even when developers haven’t added monochrome support.
  • A new “Create” option suggests users will be able to manually design custom styles, potentially including icon shapes and color variations.

Google is planning to enhance Pixel phone customization with a new feature that lets users create custom AI-powered app icon themes. The code discovered in the latest Android Canary build suggests that Pixel users may soon have more flexible styling options beyond the current themed icons.

In the Wallpaper & Style app for Pixel phones, hidden strings now reference four distinct icon style choices: Default, Minimal, AI icon, and Create. Currently, the "Minimal" style applies monochromatic-themed icons to supported apps. The upcoming “AI icon” option appears to automatically generate styled versions for apps that lack support, while “Create” likely offers a manual customization tool.

These changes aim to fix the inconsistent look of Android’s current themed icons feature, which only works with apps providing monochrome icons. The AI-powered theme could apply cohesive styling across all apps, even those without native support. 

Pixel launchers have long lacked built-in icon customization. Users currently rely on third-party launchers or manual shortcuts to style their home screens. With AI-generated themes and design tools integrated into the stock launcher, Pixel users can achieve unified aesthetics without leaving Google’s ecosystem.

The potential for user-created icon sets also expands customization possibilities. Users might choose shapes, color accents, or editing features, similar to Android’s wallpaper customization and soon-to-return icon shape options introduced in Android 16 Beta.

At this stage, the feature is only visible in specialized Canary builds. There is no official timeline from Google, and activation isn’t available via app settings. Given the early stage, this could arrive with Android 16’s Material 3 Expressive redesign, which is expected mid‑2025.

Google Brings Live Camera Input Into Search AI Mode.

Google has officially rolled out Search Live, a major enhancement to its AI Mode that lets users interact with Google Search using live camera input. This update allows users to point their Android device camera at objects and speak their questions while the AI responds in real time, a fusion of visual and voice interaction designed to enrich search experiences.

What is Search Live, and how does it work?

Search Live builds on Project Astra’s live capabilities and integrates into Google’s AI Mode interface within the official Google app. Once enabled in Search Labs, users will see a new Live button in AI Mode at the top or bottom right. Tapping it opens a live camera viewfinder. In this mode, users can ask questions about what the camera sees, such as food ingredients, plants, and street signs, and receive detailed, contextual responses alongside relevant links, videos, and summaries.

The interface also adapts visually when active. Google’s signature colored arc dips down during AI responses, and integrated options let users mute the microphone or view transcripts without interrupting the conversation.

Search Live echoes the capabilities of Gemini Live, which previously supported voice and screen sharing. The new feature takes that experience directly into Search, weaving together Lens and generative AI to create a seamless multimodal tool.

Live AI Mode Search

Why the Search Live Feature Is Useful.

Search Live represents a new level of interactivity in everyday search behavior. Instead of typing or tapping into apps, users can now ask questions about their environment and receive AI responses based on what they see. This opens possibilities for real-time assistance—such as meal prep help, plant care tips, translation of signage, or even product lookups in stores.

Because the feature works within Search’s AI Mode, it benefits from Google’s query fan‑out system. That means it can cross-reference multiple data sources and generate concise answers with links to sources—all while keeping the interaction in a conversational format. 

Availability of the Search Live Feature.

Search Live is currently rolling out to users enrolled in Search Labs in the U.S. Users on recent Google app versions, specifically 16.28 (stable) or 16.29 (beta) on Android, have already reported seeing the Live icon and viewfinder during AI Mode sessions. The search bar or AI Mode interface adapts on the fly to include the Live camera option.

Google may expand the feature globally over time. Because it is managed server-side, users may need to wait a few days or restart the app to see the option, even if they meet the version requirements.

AI-Powered Weather Forecasts Are Now Available on Pixel 8 and 8a.

Google Pixel Weather Forecasts

Key Takeaway.
  • Pixel 8 and Pixel 8a users with Gemini Nano can now access on-device AI Weather Reports previously exclusive to the Pixel 9 series.
  • The AI-powered summary offers a clear, conversational overview of weather patterns and alerts, improving usability and speed.

A weather app is something we use almost every day. The Play Store has plenty of options to choose from, but Pixel users have their own Weather app, and they now have a reason to look forward to its next update. The AI Weather Report, once exclusive to the Pixel 9 series, is now appearing on Pixel 8 and Pixel 8a phones equipped with Gemini Nano. 

Previously, only Pixel 9 and newer devices received on-device AI weather forecasting. Now, users in multiple regions, including Australia and the U.S., are seeing the feature activate on Pixel 8 and 8a devices. These reports confirm that the AI Weather model automatically updates via AICore in the device’s developer settings. Once enabled, users receive an AI-generated summary of current and upcoming weather conditions within the Pixel Weather app. That summary appears above the hourly and 10‑day forecast sections.

To access this feature, users may need to enable Gemini Nano via Developer Options and allow the latest Nano model to download. Then, launching Pixel Weather may trigger the AI Weather Report to appear automatically.

What the AI Weather Report Offers

The AI Weather Report provides a concise, insightful overview that goes beyond simple data. It highlights notable details such as changing precipitation, upcoming temperature shifts, or weather alerts, all written in natural, easy-to-read language. While the full forecast features like maps and pollen counts remain unchanged, this new summary helps users quickly grasp the day ahead without sifting through numbers.

Expansion of AI in All Possible Directions.

AI Forecasts on older Pixels mean more users can benefit from Google’s evolving on-device AI capabilities. Loading the model locally ensures faster responses and greater privacy since raw data doesn’t need to be processed remotely.

This rollout reflects Google’s ongoing strategy to extend AI-first features to devices like the Pixel 8 through lightweight on-device models like Gemini Nano. It highlights how Google is turning generative AI into everyday tools on consumer devices.

The feature is deploying gradually via server-side updates. It requires the Pixel Weather app and an enabled Gemini Nano installation. Users in the U.S., Australia, and elsewhere have reported seeing the AI summary over the past week. Since it is not tied to a standard app update, the feature might take a few days to reach everyone, even on eligible devices.

Google Pixel Reclaims Top Four Spot in U.S. Phone Market.

Google Pixel 9 Pro
Key Takeaway.
  • Pixel has reclaimed its spot among the top four smartphone brands in the U.S., overtaking TCL thanks to rising demand for new devices and stronger market distribution.
  • Despite a limited global share, Google’s Pixel brand is gaining traction in North America and premium segments by emphasizing AI, design cohesion, and software longevity.

Google Pixel has officially reentered the top four smartphone brands in the United States. According to the latest market figures, Pixel’s share in the U.S. market edged ahead of TCL during the second quarter of 2025, thanks to consistent growth in shipments and demand for its latest models.

Steady Growth Pays Off for Pixel.

After years of hovering below major manufacturers, Pixel now holds a strong position in the U.S. smartphone rankings. In 2023, IDC estimated that Pixel held about 4.6% of the U.S. market, slightly above TCL’s 4.2% share. That put Google in fourth place, behind Apple, Samsung, and Motorola. In Q2 2025, Pixel’s consistent growth allowed it to overtake TCL for good.

Much of Pixel’s momentum stems from the success of flagship models like the Pixel 9 series, as well as strategic pricing and promotional efforts. Manufacturing investments in India and stronger distribution have also helped expand Pixel’s presence in both urban and suburban markets.

While Pixel enjoys this milestone in the U.S., its global share remains modest. Samsung and Apple continue to dominate worldwide phone shipments, holding nearly 20% each according to IDC and other analysts. Xiaomi and vivo trail behind, capturing between 9% and 14% of global shipments, depending on the report.

Within North America, Pixel’s growth is more notable. In Canada, Pixel's market share reportedly climbed from 6.5% in late 2024 to around 8% in mid-2025, indicating growing regional acceptance.

Why Does Pixel’s Growth Matter to Google?

Google’s return to the top four signifies recovering strength in its hardware strategy. A meaningful increase in market share shows that timely product innovation, AI integration, and extended software support are resonating with users. Pixel continues positioning itself as a premium alternative to Samsung and Apple in the Android ecosystem.

Furthermore, surpassing TCL reflects Google’s ability to outpace a competitor that once had a larger presence in budget segments. It suggests Google is moving beyond a niche player to becoming a more visible contender in the U.S. phone market.

With Pixel now in the top four in the U.S., attention turns to whether this growth can be sustained. Google’s upcoming Pixel 10 series, stronger retail partnerships, and AI-driven features across devices like Pixel Watch and Buds could further enhance home ecosystem appeal.

The company’s continued push into fast-growing markets like India, where shipments already increased significantly in 2024, will also be key to future gains.

Android’s QR Code Scanner Interface Receives Redesign.

QR Code Scanner
Key Takeaway.
  • Android’s QR code scanner now features bottom-anchored controls and a polished launch animation for improved one-handed use.
  • The redesign simplifies post-scan actions with “Copy text” and “Share” options integrated directly into the interface.

Google has quietly rolled out a refined user interface for Android’s built-in QR code scanner through a recent Play Services update. The refreshed design brings controls within thumb reach and streamlined animations, making scanning smoother and more intuitive on modern smartphones.

When users activate the QR scanner via the Quick Settings tile or on-device shortcut, they now see a brief launch animation featuring a rounded square viewfinder. Key buttons like flashlight toggle, feedback, and “Scan from photo” are consolidated into a single pill-shaped control near the bottom of the screen.

QR Code Scanner in Android

This layout contrasts sharply with the old format, where controls were placed at the top of the UI, which often made them hard to reach with one hand.

Once a QR code is detected, the scanner overlays the decoded content in a subtle scalloped circle centered on the viewfinder. The bottom panel now offers not only an “Open” option but also convenient “Copy text” and “Share” actions, eliminating the need to navigate away from the scanning screen.

This design refresh improves usability in real-world scenarios where users often scan QR codes with one hand while multitasking. By repositioning interaction points lower on the screen, the interface reduces strain and increases accessibility.

The new layout also adds functionality by including quick-choice options right after scanning. Whether opening the link, copying content, or sharing the result, users can act faster without leaving the app.

Although Google originally previewed this redesign in its May 2025 release notes for Play Services version 25.19, the visual overhaul is only now becoming widely available as part of the v25.26.35 rollout. Since the update is delivered via Google Play Services, users may need to restart their device or wait a few hours for it to appear even if they are on the latest build.

Google Chrome Rolls Out AI-Powered Store Reviews to Help Shoppers.

AI Generated Review
Credit: Google

Key Takeaway.
  • Google Chrome now offers AI-generated store reviews within the browser’s Site Info menu to help users assess online shopping sites more easily.
  • The feature gathers reviews from platforms like Google Shopping, Trustpilot, and ScamAdvisor, summarizing them into quick, digestible insights.

Google Chrome is adding a new AI-powered feature that makes it easier for users to determine whether an online store is trustworthy. The update, now available in the United States, adds a “Store Reviews” section to the browser’s Site Info panel, giving shoppers quick summaries of retailer reputations based on customer feedback from trusted sources.

This feature is aimed at improving online shopping safety. By clicking the lock icon next to a site’s address bar, users can now view a condensed review summary highlighting key points such as product quality, shipping speed, customer service, and return policies. The reviews are collected and analyzed from Google Shopping and major third-party platforms like Trustpilot and ScamAdviser.

For example, if a user visits a lesser-known retailer, Chrome will now display aggregated feedback and let shoppers know if others have had a good or poor experience. This helps users make informed purchasing decisions without needing to leave the page or search manually for reviews.
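Google hasn’t published how these summaries are computed, but the aggregation step it describes can be illustrated with a toy sketch. The function name and data shapes below are hypothetical, not Chrome’s actual code:

```javascript
// Toy sketch: combine per-source store ratings into one summary.
// Source names and fields are illustrative, not Chrome's real schema.
function summarizeStoreReviews(sources) {
  var totalWeighted = 0;
  var totalCount = 0;
  sources.forEach(function (s) {
    totalWeighted += s.rating * s.reviewCount; // weight each source by review volume
    totalCount += s.reviewCount;
  });
  return {
    averageRating: totalCount ? Math.round((totalWeighted / totalCount) * 10) / 10 : null,
    totalReviews: totalCount,
    sources: sources.map(function (s) { return s.name; })
  };
}

var summary = summarizeStoreReviews([
  { name: 'Trustpilot', rating: 4.5, reviewCount: 200 },
  { name: 'Google Shopping', rating: 4.0, reviewCount: 100 }
]);
```

In practice, Chrome also distills themes like shipping speed and return policies from review text, which would require language-model summarization rather than the simple weighted averaging shown here.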

The feature comes at a time when online scams and unreliable e-commerce sites continue to target unsuspecting buyers. Google says this tool is part of its broader effort to make browsing safer and smarter using artificial intelligence. The browser already offers security checks, phishing alerts, and shopping-specific features such as price tracking and coupon detection.

Currently, the AI-based store reviews are only available to Chrome users in the U.S., though a global rollout may follow. Google has not announced support for mobile browsers yet, but the feature is active in the desktop version of Chrome for users running the latest update.

As AI continues to shape the way users interact with digital content, features like this show how Google is leaning into practical, real-time applications that enhance user trust and reduce friction in everyday tasks like shopping.

Google Home Voice Control for Lights Fails, Users Report Flickering.

Google Home Voice
Key Takeaway.
  • Google Home’s voice control for smart lights is malfunctioning: users report unresponsive commands and flickering bulbs even when the lights are connected properly.
  • Google has acknowledged the issue and suggests reconnecting light services in the Home app as a temporary workaround until a fix is released.

Users of Google Home and Assistant are experiencing problems with voice commands for smart lights. Many report that asking Google Assistant to turn lights on or off triggers flickering or fails entirely, causing frustration over disrupted automation routines. These issues have drawn attention from Android Police and user forums across multiple regions.

When users issue voice commands like “Hey Google, turn off the lights,” the Assistant often replies that the device is offline or does nothing at all. Some smart bulbs flicker repeatedly during the "off" state. In many cases, lights only respond when controlled via the Google Home app, suggesting that the problem lies with the Assistant’s voice interface rather than the connected bulbs.

Reports indicate that these glitches affect both "Made for Google" smart bulbs and third-party models connected via the Home app. Users across Reddit and support forums have shared that clearing the cache or rebooting devices generally fails to resolve the issue.

Google has publicly acknowledged the problem and confirmed it is working on a fix. A statement from the official Nest account reassures users that the issue is under investigation and that updates will follow soon.

Meanwhile, a temporary workaround is to reconnect smart light services from the Google Home app rather than resetting or repairing devices. This approach has worked for some users, although results remain inconsistent. Users also report that manually controlling lights through the app continues to work reliably.

Smart lighting is a core function in many homes using Google’s ecosystem. When voice commands fail or cause unintended flickering, it disrupts daily automation routines and undermines user trust. It also adds friction for users relying on Google Assistant for routine tasks. The issue comes at a time when users have voiced broader concerns about reliability in Google’s smart home platform. Google plans to address many of these issues with major updates later this year. 

Google Adds AI Mode Shortcut to Android Search Widget.

Google AI Mode Search
Key Takeaway.
  • Android users can now launch AI Mode from the Search widget with a dedicated shortcut, boosting access to Gemini-powered search.
  • The customizable widget shortcut is rolling out with Google app 16.28 and enhances usability without needing Search Labs enrollment.

Google is now rolling out a convenient shortcut to AI Mode directly on the Android Search widget, giving users one-tap access to its AI-powered search interface. The AI Mode icon appears in its own circle next to the voice and Lens shortcuts, making it quick to launch full-screen Gemini‑powered search responses.

What’s New in Google AI Mode and How to Use It.

Starting with app version 16.28, both beta and stable users can now customize the Google Search widget to include the AI Mode button. Long-pressing the widget brings up a Customize menu where you can enable AI Mode under Shortcuts. It will then appear alongside existing icons for voice search and Lens.

Here is a step-by-step process of how to enable Google AI Mode:

Step 1: Open the Google Search App on your Android phone.

Step 2: Tap your profile icon in the top-right corner and go to Settings.

Step 3: Select Customise Search widget and then select Shortcuts.

Google Search Settings

Step 4: Inside Shortcuts, you will see an option to add AI Mode to the Google Search widget.

AI Mode in Google Search

When you tap the AI Mode shortcut, it launches a full-screen interface where you can enter any prompt and receive AI-generated responses. It functions like a conversational search tool, using Gemini’s query fan-out technique to break down your question into subtopics and provide comprehensive information.
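Google hasn’t detailed the fan-out mechanics, but the core idea of splitting one question into parallel subqueries can be sketched like this. Every name here, including the stub answer function, is hypothetical:

```javascript
// Toy illustration of query fan-out: derive subqueries from one question,
// answer each, then collect the results for a combined response.
function fanOutQuery(question, subtopics, answerFn) {
  return subtopics.map(function (topic) {
    var subquery = question + ': ' + topic;
    return { topic: topic, answer: answerFn(subquery) };
  });
}

// Stub "answerer" standing in for the real model call.
var results = fanOutQuery(
  'best budget phone 2025',
  ['cameras', 'battery life', 'software support'],
  function (q) { return 'answer for: ' + q; }
);
```

The real system presumably answers subqueries with model calls and synthesizes them into a single response; the sketch only shows the decomposition step.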

Users not enrolled in Search Labs may see the older interface, where AI Mode is available as a pill-style button within the Discover feed instead of the widget area. Google encourages users to join Search Labs for a cleaner and more integrated experience.

Also Read: Google Launches AI-Powered Web Guide to Organize Search Results.

How Google AI Mode is Useful for the User.

The widget shortcut makes AI Mode more accessible and intuitive. It removes the need to open the Google app first and streamlines access for users who want next-generation search directly from their home screen.

This update reflects Google’s broader push to integrate AI deeply across its products. While new AI tools like Deep Search and Gemini 2.5 Pro are reserved for subscribers, the widget shortcut brings AI Mode to more casual users in a familiar format.

Google Pixel 9a Review With Specifications.

Google Pixel 9a

Launched on April 10, 2025, the Google Pixel 9a brings flagship‑level features to the mid‑range segment at $499. It’s designed for users seeking stellar cameras, smooth performance, and long-term software support without premium pricing. This review covers design, display, performance, cameras, battery life, software, connectivity, real‑world use, comparisons, pros & cons, and overall verdict.

✅ Pros
  • Powerful Tensor G4 chip at a budget-friendly price
  • Bright 120Hz OLED display
  • Flagship-level camera quality
  • 7 years of OS & security updates
  • Excellent AI features like Call Screening
  • IP68 water and dust resistance

❌ Cons
  • No telephoto or macro lens
  • Slower charging speeds
  • Plastic back feels less premium
  • Limited availability of higher storage options

Google Pixel 9a Specification.

The Google Pixel 9a may be a mid-range phone, but it packs a serious punch when it comes to specifications. At the heart of the device lies the Tensor G4 chipset, the same processor found in Google’s flagship Pixel 9 series. Paired with 8 GB of LPDDR5X RAM and UFS 3.1 storage, this combination delivers a fast and responsive experience for everyday tasks, app switching, and even moderate gaming.

The display is one of the standout features. You get a 6.3-inch Actua pOLED panel with a smooth 120Hz refresh rate and support for HDR10+. But what really grabs attention is the peak brightness of up to 2,700 nits, which makes outdoor visibility excellent, even under direct sunlight. This kind of screen performance is rare at this price point.

On the camera front, the Pixel 9a includes a 48 MP main sensor with optical image stabilization (OIS) and a 13 MP ultrawide lens. It may not be a triple camera setup, but Google’s computational photography ensures excellent results in most conditions. On the front, there's a 13 MP ultrawide selfie camera, which not only fits more people into the frame but also supports 4K video recording.

Battery life is impressive too. The phone houses a 5,100 mAh battery, making it the largest ever in a Pixel. It supports 23W wired charging and 7.5W wireless charging. While not the fastest in the industry, Google includes features like Battery Saver, Extreme Battery Saver, and even an option to limit charging to 80% to preserve long-term health.

Other highlights include IP68 water and dust resistance, stereo speakers, and face + fingerprint unlock. It ships with Android 15, and Google promises 7 years of OS and security updates, which is unheard of in this segment and easily one of the Pixel 9a’s biggest selling points.

Display 6.3-inch Actua pOLED, FHD+ (2424x1080), 120Hz refresh rate
Processor Google Tensor G4
RAM 8 GB LPDDR5X
Storage 128 GB UFS 3.1 (no SD card slot)
Rear Camera 48MP (main, OIS) + 13MP (ultrawide), 4K@60fps video
Front Camera 13MP, 4K@30fps video
Battery 5,100mAh, 23W wired / 7.5W wireless charging
Operating System Android 15 (out of the box)
Build & Design Plastic back, aluminum frame, Gorilla Glass 3 front
Water Resistance IP68 certified
Security Under-display fingerprint scanner, Face Unlock
Connectivity 5G, Wi-Fi 6E, Bluetooth 5.3, NFC, USB-C
Dimensions 152.1 x 72.6 x 8.9 mm
Weight 188 grams
Colors Obsidian Black, Porcelain, Mint
Price (USA) $499 (128 GB variant)

Google Pixel 9a Performance.

After using the Pixel 9a as my daily driver for over two months, I’m genuinely impressed by how smooth and responsive it feels. The Tensor G4 chip, paired with 8GB RAM, handles everyday tasks like browsing, messaging, and switching between apps effortlessly. I never ran into any stutters or lag, even with multiple apps running in the background.

I tried a few games like COD Mobile and Asphalt 9, and the experience was solid at medium settings. The phone did get a little warm during extended play or when downloading large files on 5G, but it never felt too hot or slowed down noticeably.

What really stood out to me were the smart AI features, things like Call Screening, Live Translate, and voice typing actually make a difference in daily use. They run smoothly and add real value.

Overall, the performance feels reliable and fluid, especially for a phone in this price range. It’s not a gaming beast, but for most users, it’s more than enough.

My Experience With Pixel 9a Camera.

From the moment I started shooting with the Pixel 9a, it felt like Google had once again worked its magic in computational photography. The 48 MP main camera with OIS and a wider f/1.7 aperture amazed me, especially in dimly lit places like art installations or evening scenes. I felt like every shot had remarkable detail, punchy yet realistic colors, and solid dynamic range. As Android Faithful wrote, “camera performance is where the 9a shines”, and they backed it up with extensive low-light testing at places such as Meow Wolf and Garden of the Gods.

I tried the new macro focusing mode too, and it produced some stunning close-ups, although focus sometimes centered only in the middle. Even so, I felt it added creative flexibility.

Pricing and Availability of Google Pixel 9a.

The Google Pixel 9a is priced at $499 in the United States, which positions it squarely in the upper mid-range category. For that price, you get the base model with 128 GB of storage, and there's also a 256 GB variant available for a bit more at $599, though Google hasn't officially listed that price across all retailers yet.

You can buy it unlocked directly from the Google Store, or through major carriers like Verizon, AT&T, and T-Mobile, often with deals or trade-in offers that can bring the price down significantly. It's also available at retailers like Best Buy, Amazon, and Target, both online and in-store.

Considering it packs the Tensor G4 chip, a flagship-grade OLED display, and 7 years of software support, the $499 price point feels very competitive, especially when compared to other mid-range phones from Samsung or Motorola that don’t offer the same level of long-term updates or software features.

Final Verdict

The Google Pixel 9a is a standout mid-range smartphone for 2025, offering a premium display, solid camera performance, a long-lasting battery, and unmatched software update support (7 years). It brings most of the Pixel flagship experience at a significantly lower price. However, buyers should be aware of connectivity concerns, slower charging, and missing advanced AI features present in higher-end Pixel models.

Google Introduces Opal: A Vibe-Coding Tool for Building Web Apps.

Google Opal Vibe-Coding
Key Takeaway.
  • Google’s Opal lets users create and share mini web apps using only text prompts, backed by a visual workflow editor and optional manual tweaks.
  • The platform targets non-technical users and positions Google in the expanding "vibe-coding" space alongside startups and design platforms.

Google has begun testing an experimental app builder called Opal, available through Google Labs in the U.S. The new tool allows users to create functional mini web applications using natural language prompts, with no coding required. Opal aims to simplify app development, making it more accessible to creators, designers, and professionals without engineering backgrounds.

What Is Opal and How Does It Work?

Opal enables users to write a plain-language description of the app they want to build. Google's models then generate a visual workflow composed of inputs, AI prompts, outputs, and logic steps that form the backbone of the application. You can click each step to see or edit the prompt, adjust functionality, or add new steps manually using the built-in toolbar. When you are satisfied, you can publish the app and share it using a Google account link.
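The generated workflow is essentially a pipeline: each step transforms the output of the step before it. As a rough mental model (not Opal’s actual implementation), such a chain might look like:

```javascript
// Minimal pipeline runner: each step transforms the previous step's output.
function runWorkflow(steps, input) {
  return steps.reduce(function (value, step) { return step(value); }, input);
}

// Example "app": normalize user input, then format a reply.
var steps = [
  function (text) { return text.trim().toLowerCase(); },  // input-normalization step
  function (text) { return 'You asked about: ' + text; }  // output-formatting step
];
var output = runWorkflow(steps, '  Travel Plans  ');
```

In Opal, the equivalent of each step would be an input, an AI prompt, or an output node in the visual editor; clicking a node to edit its prompt corresponds to swapping one function in the chain.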

This interactive, visual-first approach is designed to overcome limitations of text-only vibe coding by providing clear, editable workflows. Opal supports remixing apps from a gallery of templates or building from scratch, promoting rapid experimentation.

Where Opal Fits in Google’s Vision.

While Google already offers an AI-based coding platform through AI Studio, Opal represents a broader push toward design-first and low-code tools. The visual workflow makes app logic easier to understand and edit, lowering the barrier to app creation for non-technical users. Google’s intention is to expand access to app prototyping beyond developers.

Opal positions Google alongside startups like Replit and Cursor, and design platforms like Canva and Figma. These tools are capturing attention by democratizing software creation through prompts and visual editors, meeting growing demand for intuitive generative coding.

What It Means for Developers and Creators.

Creators and innovators can use Opal to prototype generative workflows, interactive tools, or productivity automations without writing code. Educators could also leverage it to build simple teaching aids or demonstrations. With a public beta released in the U.S., users with access through Google Labs can begin exploring and testing apps, providing feedback for future development.

The turn toward a visual workflow also offers more clarity and control, reducing confusion between prompt input and actual behavior. This can help users fine-tune apps step by step, something that traditional prompt-only systems struggle to offer.

How To Share Google Drive Documents With View-Only Access.

Share Google Drive File With View Only Access

Google Drive is a powerful tool for storing and sharing files online, whether you're working on a project, organizing personal documents, or collaborating with others. But not every file needs to be edited by everyone. Sometimes, you just want to share a folder so others can view the contents without being able to change anything. That’s where view-only access comes in handy.

You can restrict any external user from editing your Google Drive document before sharing it for team activities or collaborations. To prevent accidental changes, you can also set the document to view-only mode for everyone, including yourself. 

Let's learn both methods to make our documents and files more secure and safe from any kind of accidental editing.

Share Google Drive Documents With View-Only Access.

To follow this tutorial, all you need is an active Google Account and a document that has already been created and uploaded to Google Drive.

Step 1: Open Google Drive.

To begin, open your preferred web browser and go to https://drive.google.com. If you're not already signed in, you’ll be prompted to log in to your Google account. Once signed in, you'll land on the Google Drive homepage, where all your stored files and folders are displayed.

Step 2: Locate the Document You Want to Share

Scroll through your list of files, or use the search bar at the top to quickly find the document you intend to share. Once you locate it, you can either right-click on the file and select “Share” or open the document first and then click the “Share” button located in the top-right corner of the screen.

Google Drive Document Screenshot

Step 3: Share with Specific People as Viewers

In the sharing dialog box that appears, you will see a field labeled “Add people and groups.” Type the email address of the person or group you want to share the document with. After entering the email, a drop-down menu will appear where you can select their permission level. 

Choose “Viewer” to ensure they can only view the document, but cannot comment on or edit it. Once done, click the “Send” button to share the document with them.

Adding Email id to share Google Docs

Step 4: Share via a View-Only Link (Optional)

If you prefer to share the document via a link rather than individual email addresses, look toward the bottom of the sharing dialog box. Under “General access”, click the dropdown that may say “Restricted” by default. Change it to “Anyone with the link”.

Once you do that, another dropdown will appear beside it—make sure it is set to “Viewer.” Then click “Copy link” to copy the shareable URL and send it via email, chat, or wherever needed.

Sharing Google Drive Doc Link

Pro Tip:
 Before sending or sharing the link, always double-check the access level to make sure the document is not mistakenly being shared with editing or commenting privileges.

Alternative Way to Set Everyone's Role to View-Only Access.

First, open the sharing settings for the document using the same steps described above. For each listed user, including yourself, make sure the access level is set to “Viewer.” Click the dropdown beside each name and manually change the role if needed. Once this is done, no one will be able to modify the document; everyone will only be able to view its content.

Changing Document Role to Viewer

Change Editing To View-Only Access in Google Docs.

There might be a scenario where you have already granted Editor access to many users for one Google Document, and now you want to change everyone to Viewer (view-only) access. You can follow the above method to change the access type for each user one at a time, or use a quicker alternative: Google Apps Script.

Changing Google Drive Document Permissions Using Google Apps Script.

Step 1: First, go to https://script.google.com and click on "New Project" to create a blank script editor. This is where you'll write the automation code. Inside the script editor, paste the following code:
function restrictEditingToViewOnly() {
  var fileId = 'YOUR_FILE_ID_HERE'; // Replace with your actual file ID
  var file = DriveApp.getFileById(fileId);
  
  var editors = file.getEditors();
  
  for (var i = 0; i < editors.length; i++) {
    var userEmail = editors[i].getEmail();
    file.removeEditor(userEmail);
    file.addViewer(userEmail);
    Logger.log("Changed " + userEmail + " to viewer.");
  }
  
  var myEmail = Session.getActiveUser().getEmail();
  if (myEmail !== file.getOwner().getEmail()) {
    file.removeEditor(myEmail);
    file.addViewer(myEmail);
    Logger.log("You (" + myEmail + ") are now a viewer.");
  } else {
    Logger.log("You are the owner; transfer ownership manually if needed.");
  }
}

Step 2: Replace 'YOUR_FILE_ID_HERE' with the actual file ID from your Google Drive document URL. This ID is the long string found in the URL of the file, typically located between /d/ and /edit.
https://docs.google.com/document/d/1XiYBcFw4VTHOmaD1pMmMTNlt2btERcxe0us3pHR4D4tNs/edit?usp=sharing
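If you would rather not copy the ID by hand, the same between-/d/-and-/edit extraction can be done with a small helper. The function name is ours, not part of Apps Script, but the code is plain JavaScript and also runs in the script editor:

```javascript
// Extract the file ID from a Google Drive/Docs URL: the segment after "/d/",
// up to the next "/", "?", or "#".
function extractDriveFileId(url) {
  var match = url.match(/\/d\/([^\/?#]+)/);
  return match ? match[1] : null;
}

var id = extractDriveFileId(
  'https://docs.google.com/document/d/1XiYBcFw4VTHOmaD1pMmMTNlt2btERcxe0us3pHR4D4tNs/edit?usp=sharing'
);
```

You could call this at the top of `restrictEditingToViewOnly()` and pass the full URL instead of the bare ID.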

Step 3: Give your project a descriptive name and click the Save icon to save the script.

Step 4: Now, click the Run button (the triangular ▶️ icon) to execute the function. The first time you run the script, Google will prompt you to review and authorize the required permissions. Click on Review Permissions.
Google Script App

Step 5: If you are running Google Apps Script for the first time, you will see a pop-up saying "Google hasn't verified this app." Click "Advanced" to open the advanced settings, click your project name, and grant the required permissions to run the app.
Advance Setting for Google App Script

Step 6: Grant the script permission to access your Google Account and select the checkbox shown below so it can change your Google Drive document's sharing settings. Click Continue to proceed.
Google Drive Permission
Step 7: After the script runs, all existing editors will be converted to viewers, and your own access will be downgraded unless you are the owner.

Note: Google doesn't allow you to remove your own access if you're the owner. You must transfer ownership manually through the Drive UI.

Be cautious with this script, especially if you choose to remove your own editing access. For safety, it is always recommended to test the script on a duplicate file first to avoid losing access to important content.
