
Google Files Lawsuit Over BadBox 2.0 Android Botnet Infecting Over 10 Million Devices.

Google Lawsuit Over BadBox 2.0
Key Takeaway
  • Google sues alleged BadBox 2.0 operators over a global Android botnet that infected over 10 million uncertified devices.
  • The botnet was used for ad fraud and residential proxy schemes, prompting Google to update Play Protect and pursue legal action.

Google has taken legal action, filing a lawsuit in federal court in New York against 25 unnamed individuals, believed to be Chinese nationals, accused of operating the BadBox 2.0 botnet—a malicious network that has compromised more than 10 million uncertified Android-based devices globally.

According to the complaint, the botnet targets a range of off-brand hardware—TV streaming boxes, tablets, digital projectors, and car infotainment systems—which run on the Android Open Source Project (AOSP) and lack protections like Google Play Protect. Devices were infected either through supply chain malware (preinstalled before purchase) or via malicious apps downloaded after setup. Once compromised, these devices connect to a remote command-and-control (C2) server, effectively becoming part of a vast criminal network.

The attackers monetized the compromised devices through several illicit schemes:

  • Selling access to the infected devices as residential proxies, enabling account takeovers, DDoS attacks, and other crimes.

  • Ad fraud—generating millions of fake ad impressions and clicks using hidden browsers and deceptive “evil twin” apps that mimic legitimate ones.

Google argues the botnet has damaged its reputation and financial bottom line by causing it to pay for fake ad traffic and divert resources to combat the fraud.

Google’s Response & Legal Aims

To counter this threat, Google has:

  • Updated Google Play Protect to detect and block BadBox-related apps, even if they’re sideloaded.

  • Filed the lawsuit seeking an injunction and damages, and legal authority to dismantle the botnet infrastructure, including disabling command servers and disrupting proxy access.

Despite involvement from the FBI, extraditing suspects from China remains improbable due to limited international cooperation.

What Users Should Know

  • If you’re using cheap Android devices sold without Google certification, consider upgrading or installing trusted security software.

  • Watch for suspicious preinstalled apps and check whether the device is Play Protect certified; a rough way to spot apps installed from outside the Play Store is sketched after this list.

  • Regularly scan using Play Protect or reputable security tools, ensuring any infected apps are promptly removed.
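For readers comfortable with a little Android development, one rough way to act on the last point is to list user-installed apps whose installer was not the Play Store. The Kotlin sketch below is illustrative only: the helper name listNonPlayStoreApps is invented here, a non-Play installer is not by itself proof of infection, and it relies on the standard PackageManager.getInstallSourceInfo API available on Android 11 (API 30) and later.

import android.content.Context
import android.content.pm.ApplicationInfo
import android.content.pm.PackageManager
import android.os.Build

// Hypothetical helper: returns package names of user-installed apps that were
// not installed by the Play Store. Treat the result as a list to review, not
// as a malware verdict.
fun listNonPlayStoreApps(context: Context): List<String> {
    // getInstallSourceInfo requires Android 11 (API 30) or newer.
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.R) return emptyList()
    val pm = context.packageManager
    return pm.getInstalledApplications(PackageManager.GET_META_DATA)
        .filter { (it.flags and ApplicationInfo.FLAG_SYSTEM) == 0 }   // skip system apps
        .mapNotNull { app ->
            val installer = try {
                pm.getInstallSourceInfo(app.packageName).installingPackageName
            } catch (e: PackageManager.NameNotFoundException) {
                null
            }
            // "com.android.vending" is the Play Store's package name; anything
            // else (or null) means the app arrived some other way.
            if (installer != "com.android.vending") app.packageName else null
        }
}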


Google Phone App on Wear OS Gains M3 Expressive.

Pixel Watch 3
Key Takeaway
  • Google's Phone app for Wear OS gets a sleek Material 3 Expressive redesign, enhancing usability and visual clarity.
  • The update introduces a revamped in-call screen, easier navigation, and consistent UI with Android 16 and Wear OS 6.

In a notable move toward unifying design across its platforms, Google has begun rolling out a striking redesign of its Phone app for Wear OS smartwatches. This update introduces the Material 3 (M3) Expressive design language, giving the app a cleaner, more intuitive, and visually consistent interface that aligns with Android 16 and Wear OS 6.

The redesign, first spotted by users and detailed by 9to5Google, marks a significant upgrade to the in-call experience and general usability of the app. It’s part of Google’s broader mission to bring Material 3 Expressive aesthetics and functionality to all its core apps across devices, including smartphones, tablets, foldables, and now wearables.

A More Intuitive In-Call Experience.

The most noticeable improvements come to the in-call screen, where UI elements have been repositioned for clarity and ease of use. The iconic red “End Call” button, previously placed among other controls, has now been moved to a prominent location at the bottom of the screen, making it easier to tap quickly and confidently, especially on smaller smartwatch displays.

Embedded Reddit post by u/sesteele13 in r/PixelWatch: “Expressive UI showing up on pixel watch phone app after Beta 3”

Other controls, such as mute and the “more options” button, have also been moved upward, improving overall layout symmetry. In addition, the call duration timer is now centered horizontally on the display, offering a more balanced and visually appealing interface.

These changes may appear subtle at first glance, but they significantly enhance usability. They address one of the biggest pain points of wearable tech: the challenge of precise interactions on small screens.
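To make the described layout concrete, here is a minimal sketch of such an in-call screen written with Compose for Wear OS. It is not Google's actual implementation: the InCallScreen composable and its parameters are invented for illustration, and it assumes the stable androidx.wear.compose.material components rather than the new Material 3 Expressive ones.

import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.padding
import androidx.compose.foundation.layout.size
import androidx.compose.runtime.Composable
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.unit.dp
import androidx.wear.compose.material.Button
import androidx.wear.compose.material.ButtonDefaults
import androidx.wear.compose.material.Text

// Rough approximation of the layout described above: the call timer sits in the
// center of the round display and the red end-call button is pinned to the
// bottom edge, where it is easiest to hit on a small screen.
@Composable
fun InCallScreen(elapsed: String, onEndCall: () -> Unit) {
    Box(modifier = Modifier.fillMaxSize()) {
        // Horizontally centered call-duration timer.
        Text(
            text = elapsed,
            modifier = Modifier.align(Alignment.Center)
        )
        // Prominent red "End Call" button at the bottom of the screen.
        Button(
            onClick = onEndCall,
            colors = ButtonDefaults.primaryButtonColors(backgroundColor = Color.Red),
            modifier = Modifier
                .align(Alignment.BottomCenter)
                .padding(bottom = 8.dp)
                .size(48.dp)
        ) {
            Text("End")
        }
    }
}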

Dialer Improvements and Navigation Enhancements.

Beyond the in-call interface, the update modernizes other key parts of the app. The dialer screen now features updated button styling and improved spacing, allowing for easier tapping and reducing the likelihood of accidental inputs. The new design is not only more functional but also fits naturally within the broader aesthetic of Android 16.

Google has also updated the “More” menu, transitioning from a grid-based layout to a cleaner, scrollable list format, which mirrors the approach used in other Material 3-based apps. This provides a more consistent and familiar experience across platforms.
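On Wear OS, a scrollable list-style menu like this is typically built with a ScalingLazyColumn. The sketch below illustrates that pattern under the same assumptions as before (stable Wear Compose Material, not Google's own code); the MoreMenu composable and its options parameter are hypothetical.

import androidx.compose.runtime.Composable
import androidx.wear.compose.foundation.lazy.ScalingLazyColumn
import androidx.wear.compose.foundation.lazy.items
import androidx.wear.compose.material.Chip
import androidx.wear.compose.material.Text

// Hypothetical list-style "More" menu: ScalingLazyColumn scales and fades items
// toward the top and bottom of the round display, which is the idiomatic Wear OS
// replacement for a cramped grid.
@Composable
fun MoreMenu(options: List<String>, onSelect: (String) -> Unit) {
    ScalingLazyColumn {
        items(options) { option ->
            Chip(
                onClick = { onSelect(option) },
                label = { Text(option) }
            )
        }
    }
}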

Interestingly, incoming call screens now support both swipe gestures and button-based controls for accepting or rejecting calls. This dual-method approach gives users more control over how they interact with their devices, whether they're on the move or wearing gloves.

Material 3 Expressive: Unifying Design Across Devices

The updated Phone app is among several apps on Wear OS now embracing Material 3 Expressive, Google's latest evolution of its design system. Unlike the earlier Material You theme, which emphasized dynamic colors and personalization, M3 Expressive brings bolder visual elements, improved legibility, and smarter use of space, especially tailored for smaller, wearable screens.

Google has already started deploying this new design language across apps like Google Maps, Keep, and the revamped Tile system in Wear OS 6. These efforts are not just cosmetic. They aim to make Wear OS devices more consistent, accessible, and user-friendly—whether you're making a call, checking directions, or setting reminders.

What This Means for Users.

This refresh is rolling out gradually to users on the latest Wear OS builds, particularly those testing Wear OS 6 or using Pixel Watch and Samsung Galaxy Watch series. Users can expect a smoother, more modern interface that matches the visual tone of their smartphones and other Android devices.

As Google continues to invest in its wearable ecosystem—with Wear OS 6 on the horizon and Gemini AI integrations gaining momentum—the overhaul of core apps like Google Phone signals a deeper commitment to making smartwatches truly independent and intuitive companions.

For users, this means one thing: more power, more polish, and less friction in everyday smartwatch use.

Perplexity CEO Dares Google to Choose Between Ads and AI Innovation

Google vs. Perplexity

Key Takeaway:

  • Perplexity CEO Aravind Srinivas urges Google to choose between protecting ad revenue or embracing AI-driven browsing innovation.
  • As Perplexity’s Comet browser pushes AI-first features, a new browser war looms, challenging Google’s traditional business model.

In a candid Reddit AMA, Perplexity AI CEO Aravind Srinivas criticized Google's reluctance to fully embrace AI agents in web browsing. He believes Google faces a critical choice: either commit to autonomous AI features that reduce ad clicks, accepting short-term revenue losses to stay competitive, or protect its ad-driven model and risk falling behind.

Srinivas argues that Google’s deeply entrenched advertising structure and bureaucratic layers are impeding innovation, especially as Comet, a new browser from Perplexity, pushes AI agents that summarize content, automate workflows, and offer improved privacy. He described Google as a “giant bureaucratic organisation” constrained by its need to protect ad revenue.

Comet, currently in beta, integrates AI tools directly within a Chromium-based browser, allowing real-time browsing, summarization, and task automation via its “sidecar” assistant. Srinivas warned that large tech firms will likely imitate Comet’s features, but cautioned that Google must choose between innovation and preservation of its existing monetization model.

Industry experts are watching closely as a new "AI browser war" unfolds. While Google may eventually incorporate ideas from Comet, such as Project Mariner, Srinivas remains confident that Perplexity's nimble approach and user-first subscription model give it a competitive edge.

Google Sets Launch Date for Pixel 10 Series.

Google Pixel 9 Series phones

Key Takeaway:

  • Google will unveil the Pixel 10 series and new hardware at its “Made by Google” event on August 20 in New York City.
  • The event is expected to showcase the Pixel 10 Pro Fold, Pixel Watch 4, Pixel Buds 2a, and deeper Gemini AI integration.

Google has officially confirmed that its highly anticipated “Made by Google” event will take place on August 20, 2025, in New York City. The event, scheduled to begin at 1:00 PM ET / 10:00 AM PT, will also be streamed live on YouTube, giving fans around the globe front-row access to the launch of Google’s latest Pixel hardware lineup.

All eyes are on the upcoming Pixel 10 series, which is expected to include the Pixel 10, Pixel 10 Pro, Pixel 10 Pro XL, and a refreshed Pixel 10 Pro Fold. The new foldable model is rumored to come with IP-rated dust protection, a feature missing from earlier versions. Google is also set to debut its next-gen Tensor G5 chip, promising faster AI performance and power efficiency across all devices.

In addition to smartphones, Google is widely expected to unveil the Pixel Watch 4, which may come in two sizes with larger batteries and the latest Wear OS 6. Fans of audio accessories can also look forward to the new Pixel Buds 2a, along with a new range of Pixel Snap accessories, including wireless chargers, smart stands, and protective cases.

A major focus of the event will likely be the deeper integration of Gemini AI into Pixel devices. Google is expected to demonstrate how Gemini enhances real-time tasks, voice interactions, photo editing, and personal productivity across mobile, wearables, and smart accessories.

This year’s event marks Google’s return to hosting its Pixel showcase in New York City, which has historically been a prime location for its fall launches. With competition heating up from Samsung, Apple, and others, the August 20 reveal is strategically timed to grab attention ahead of the back-to-school and holiday seasons.

With major upgrades in hardware, AI capabilities, and ecosystem expansion, the Pixel 10 series launch could mark one of Google’s biggest hardware moments to date.

OpenAI Expands Infrastructure with Google Cloud to Power ChatGPT.

OpenAI Using Google Cloud

Key Takeaway:

  • OpenAI has partnered with Google Cloud to boost computing power for ChatGPT amid rising infrastructure demands. 
  • The move marks a shift to a multi-cloud strategy, reducing dependence on Microsoft Azure and enhancing global scalability.

OpenAI has entered into a major cloud partnership with Google Cloud to meet the rising computational demands of its AI models, including ChatGPT. This move, finalized in May 2025, reflects OpenAI’s ongoing strategy to diversify its cloud infrastructure and avoid overreliance on a single provider.

Historically, OpenAI has leaned heavily on Microsoft Azure, thanks to Microsoft’s multi-billion-dollar investment and deep integration with OpenAI’s services. However, with the explosive growth of generative AI and increasing demands for high-performance GPUs, OpenAI has been aggressively expanding its cloud partnerships. The addition of Google Cloud now places the company in a “multi-cloud” model, also involving Oracle and CoreWeave, which recently secured a $12 billion agreement with OpenAI.

By tapping into Google’s global data center network—spanning the U.S., Europe, and Asia—OpenAI gains greater flexibility to manage the heavy compute workloads needed for training and running its large language models. Google, for its part, strengthens its cloud business by onboarding one of the world’s leading AI developers as a client, which not only enhances its credibility but also diversifies its cloud clientele beyond traditional enterprise workloads.

This deal marks a significant step in the ongoing arms race among tech giants to dominate cloud-based AI infrastructure. OpenAI’s multi-cloud strategy ensures resilience, scalability, and availability for its services across different regions and use cases. It also allows the company to better respond to surges in demand for ChatGPT and its API-based offerings, which serve millions of users and enterprise clients daily.

The partnership underscores a broader shift in the tech industry, where high-performance computing for AI is becoming a core battleground. For OpenAI, spreading its workload across multiple providers could mitigate risks, lower costs, and boost its capacity to innovate and iterate at speed.

Google’s AI Can Now Make Phone Calls on Your Behalf

Google Advanced AI Search

Key Takeaway:

  • Google's Gemini AI can now call local businesses for users directly through Search to gather information or book services.
  • The feature uses Duplex technology and is available in the U.S., with opt-out options for businesses and premium access for AI Pro users.

Google has taken a major step forward in AI-powered assistance by rolling out a new feature in the U.S. that allows its Gemini AI to make phone calls to local businesses directly through Google Search. This tool, first tested earlier this year, lets users request information like pricing, hours of operation, and service availability without ever picking up the phone.

When someone searches for services such as pet grooming, auto repair, or dry cleaning, they may now see an option labeled “Ask for Me.” If selected, Gemini will use Google’s Duplex voice technology to place a call to the business. The AI introduces itself as calling on the user’s behalf, asks relevant questions, and then returns the response to the user via text or email.

This move transforms the search experience into a more active and intelligent assistant. Users can now delegate simple but time-consuming tasks like making inquiries or scheduling appointments. It’s part of Google’s broader strategy to make AI more agent-like, capable of taking real-world actions on behalf of users.

Making a call to a local business in Google Search
Credit: Google

Businesses that don’t want to participate in this feature can opt out using their Google Business Profile settings. For users, the functionality is available across the U.S., but those subscribed to Google’s AI Pro and AI Ultra plans benefit from more usage credits and access to advanced Gemini models like Gemini 2.5 Pro. These premium tiers also include features like Deep Search, which can generate in-depth research reports on complex topics using AI reasoning.

As AI integration deepens in everyday apps, this feature showcases a new phase of interaction, where digital tools not only inform but also act on our behalf. Google’s move reflects the future of AI as not just a search engine assistant, but a personal concierge for real-world tasks.

Google’s Discover Feed Gets AI Summaries, Alarming Publishers.

Google Discover Summary Feature

Google has quietly introduced AI-generated summaries into its Discover feed on the Google Search app, a move that’s already sparking alarm across the publishing industry. With this update, U.S. users on Android and iOS devices will now see short, three-line summaries instead of traditional article headlines and source names when browsing trending topics in categories like entertainment and sports.

How does it work?

The summaries are produced by Google’s in-house artificial intelligence models and combine information from multiple sources. These summaries appear at the top of the Discover cards, with a small overlay of icons that indicate how many sources were used to generate the content. Tapping on the icons opens a list of original sources, which users can then click on to read the full articles. A subtle “See more” link expands the summary, and each summary is accompanied by a disclaimer that states: “Generated with AI, which can make mistakes.”

Google Discover Now Uses AI Summaries

While this change might improve user convenience by offering quick insights without needing to click through, it is generating strong backlash from digital publishers. Industry experts warn that AI summaries could accelerate the existing trend of "zero-click" behavior, where users find their answers directly on Google's platform and no longer visit the actual news sites. Publishers argue that this could deal yet another blow to traffic volumes that are already declining due to previous algorithm changes and the growing prominence of AI-powered search features.

Why Publishers Are Concerned.

Recent data supports these concerns. According to analytics firm Similarweb, traffic to news sites from Google Search has sharply decreased, falling from over 2.3 billion visits in August 2024 to under 1.7 billion by May 2025. During that same period, the share of searches ending without a single click rose from 56% to 69%. Several digital media outlets, including BuzzFeed News, Laptop Mag, and Giant Freakin Robot, have either shut down or significantly downsized in recent months, with the loss of referral traffic cited as a contributing factor.

Independent publishers in Europe have gone a step further by filing an antitrust complaint with the European Commission, alleging that Google’s AI-driven tools undermine competition and fail to offer publishers meaningful options to opt out of AI training or display. Similar complaints are also under consideration by the UK’s Competition and Markets Authority, putting regulatory pressure on Google to reconsider how it integrates AI into search and content feeds.

Despite mounting criticism, Google maintains that AI summaries help users explore a broader range of content and that its platforms still drive billions of clicks to publishers every day. The company also introduced monetization tools like Offerwall, which allows publishers to generate revenue through subscriptions, micropayments, and newsletter signups. However, many industry voices argue that such measures are insufficient to counteract the loss of direct web traffic.

As the rollout continues, media companies are grappling with how to adapt. Some are testing their own AI tools to produce summary-style content in a format optimized for visibility within Google's evolving ecosystem. Outlets like Yahoo, Bloomberg, and The Wall Street Journal have started experimenting with article highlights and bullet-point takeaways to compete within the AI-influenced landscape. Still, concerns remain that even these efforts may not be enough to recover lost visibility and revenue.

In the coming months, Google is expected to expand the AI summary feature to cover additional content categories and possibly introduce it in international markets. Meanwhile, publishers and regulators alike are closely watching how this move will affect the future of digital journalism, news distribution, and the broader internet economy. The tension between technological advancement and fair access to digital audiences continues to intensify, with the stakes higher than ever.
