Gemini App Unveils Prompt Bar Redesign, Bringing Model Switching Mid-Conversation
I'm a full-time Software Developer with over 4 years of experience working at one of the world’s largest MNCs. Alongside my professional role, I run a news blog, WorkWithG.com, which focuses on Google tools, tutorials, and news. I'm passionate about breaking down complex topics and making learning accessible for everyone.
Gemini App's Major Visual Redesign to Feature Scrollable Feed.
Google appears to be preparing a significant visual revamp for its Gemini AI application, shifting away from the traditional, minimalist chatbot interface. Recent reports, stemming from an APK teardown, suggest the company is testing a bold, new home screen featuring a scrollable, Instagram-like feed.
This dramatic design departure signals Google's intent to position Gemini not just as a text assistant, but as a visual discovery engine for its powerful multimodal AI capabilities. The move comes as Google seeks to maintain its competitive edge against rivals like OpenAI.
From Blank Slate to Visual Inspiration.
The current Gemini app typically greets users with a blank chat screen and a simple text input box, similar to interfaces used by competitors such as ChatGPT. The proposed redesign flips this model entirely.
The new interface, discovered in an app teardown by Android Authority, introduces a vibrant, card-based scrollable feed. This feed is packed with suggested prompts and conversation starters, each accompanied by eye-catching visuals or photos.
Key shortcuts to core tools like "Create Image" and "Deep Research" will reportedly be shifted to the top of the screen. The feed then unfolds beneath them, showcasing what is possible with the AI rather than making users figure it out on their own.
Highlighting Multimodal Strengths.
The suggested prompts are designed to highlight Gemini's diverse functionalities, moving beyond simple text queries. Examples found in the testing code include requests like “Teleport me to deep space,” “Give me a vintage grunge look,” and “Quiz me on basic biology.”
Many of these suggestions leverage Google's latest imaging models, like the popular Nano Banana editor, to showcase image generation, photo editing, and creative styling. By proactively displaying these creative use cases, Google aims to drive deeper user engagement.
This focus on visual discovery makes sense for Gemini, which excels at generating images, analyzing photos, and handling complex, multimodal reasoning tasks. The redesign acts as a discovery layer, lowering the barrier to entry for many advanced features.
A Competitive Edge in the AI Space.
The timing of this potential rollout underscores the growing importance of user experience in the competitive AI market. While the Spartan interfaces of competitors can be intimidating, Google is betting on inspiration to drive adoption.
The visual, card-based design mimics successful social platforms like Pinterest and TikTok, prioritizing scrollable content discovery. If the change goes live, it could give Gemini a distinct visual advantage and redefine user expectations for how an AI assistant should look and function.
However, since these changes were uncovered via an APK teardown, Google has yet to officially confirm the new interface. It remains unclear exactly when—or if—this dramatically redesigned version of the Gemini app will be released to the public.
Xiaomi Launches 2025 Creativity Competition with Google Gemini to Explore Future of Mobile Design.
Xiaomi, a global leader in consumer electronics and smart manufacturing, has officially inaugurated its 2025 Creativity Competition. Titled "Your Screen, Your Story," the contest aims to push the boundaries of design and innovation, inviting creators worldwide to redefine the mobile user experience.
The competition is a significant collaborative effort, jointly organized by Xiaomi's International Internet Business (IIB) Department and Google Gemini. This partnership highlights a focus on integrating artificial intelligence into creative workflows, anticipating the next generation of personalized mobile interfaces.
Three Avenues for Digital Expression.
The 2025 competition is segmented into three distinct categories, each designed to capture a different facet of digital creativity. These segments encourage participants to submit works ranging from pure aesthetics to functional user design.
The categories are "One Shot, One Moment" (Wallpaper Photography), "Vision Through Intelligence" (AI Wallpapers), and "Redefining UX" (Theme Design). This structure allows photographers, AI artists, and interface designers to showcase their specific expertise.
Significantly, creators in the AI Wallpapers and Photography categories are strongly encouraged to leverage the capabilities of Google Gemini. The integration demonstrates how generative AI tools can supercharge creativity and design processes within the mobile ecosystem.
Recognition and Rewards for Top Talent.
To attract and reward the world's most innovative minds, the competition offers substantial incentives. The prestigious Gold Award winner stands to receive a prize of up to $10,000, along with broad industry exposure.
Winning works will also receive potential commercial opportunities, allowing creators to monetize their designs through Xiaomi's vast international content ecosystem. Furthermore, select exceptional entries may be featured in dedicated offline exhibitions, granting global visibility.
A distinguished panel of international design experts will evaluate the submissions. The competition also includes special awards reserved for Xiaomi Fans, fostering engagement within its dedicated user community.
Key Dates for Submissions and Awards.
The submission window opened on September 10, and user voting, which began on September 30, is currently underway. Creators still have time to finalize and submit their concepts.
Expert reviews are scheduled to take place throughout November, leading up to the final announcement of winners and the awarding of prizes in December. This timeline sets the stage for a dramatic conclusion to the year of design innovation.
Google AI Mode Live Search Officially Launches in US.
Google is officially rolling out "Search Live" to users across the United States, a major advancement in its AI-powered search capabilities. The feature, previously confined to the Google Labs opt-in program, brings a new, conversational, and multimodal way for users to interact with information, using both their voice and their phone's camera in real time.
Search Live is integrated directly into the main Google app and Google Lens. It can be accessed by tapping a new "Live" icon, allowing for a hands-free, back-and-forth dialogue. This Project Astra-powered experience is designed to be context-aware and provide on-the-spot assistance for a variety of tasks, from troubleshooting a complex electronics setup to getting a real-time tutorial on making matcha. The Gemini-powered AI can interpret what is on screen and offer both verbal guidance and a carousel of relevant web links.
Search Live: A New Way to Search.
The introduction of Search Live signals a significant shift in Google's approach to search. It moves beyond the traditional text-based query and presents a more natural, intuitive method of finding information. When a user points their camera at an object, the AI can instantly identify it and provide a conversational response, eliminating the need to type out long, descriptive queries. This integration of audio and visual input, with a waveform-based user interface, makes the experience feel less like a search and more like a collaboration with a knowledgeable assistant.
For instance, a user can point their phone at a home theater system and ask which cable goes where, and Google will provide step-by-step instructions. The AI can also understand context and respond to follow-up questions, making it an ideal tool for learning a new skill or fixing a broken item without ever leaving the Google app. This functionality could prove invaluable for a wide range of tasks where a text-based search would be inefficient.
SEO in the Age of Conversational AI.
The launch of Search Live presents a new challenge and opportunity for content creators and SEO professionals. As users get answers directly from a real-time AI, the traditional SEO model of driving clicks through ranked search snippets may begin to shift. Brands will need to adapt their strategies to ensure their content is still being surfaced and cited by the AI.
Experts suggest that visibility will now depend on how prominently and frequently a brand's content is surfaced in the AI's verbal responses or the accompanying carousel of web links. The focus may move from raw keyword rankings to optimizing for rich, factual, and helpful content that is easily digestible and can be used to "train" the AI. This means creating comprehensive, high-quality content that provides definitive answers to real-world problems.
With Search Live now available to all U.S. users, Google's vision for a more interactive and personalized search experience is becoming a reality, potentially reshaping the digital landscape for users and content creators alike.
Google's Gemini AI is Coming to TV for Smarter Show Suggestions.
Google is bringing its powerful Gemini AI assistant to Google TV, transforming the television viewing experience with more natural, conversational interactions. The new feature, which builds on the existing Google Assistant, is initially rolling out to a limited number of devices with a wider release planned for later this year.
First teased at CES in January, Gemini on Google TV allows users to move beyond simple voice commands. The AI assistant can now handle complex, multi-part queries to help users discover content, get show recaps, and even answer general knowledge questions.
Key Features of Gemini on Google TV:
- Smarter Recommendations: Users can find the perfect show or movie for a group with a single, conversational query. For example, you can say, "Find me something to watch with my wife. I like dramas, but she likes lighthearted comedies."
- Quick Recaps: Catch up on a show you've been away from by simply asking, "What happened in the last season of 'Outlander'?"
- Educational Tool: The AI can also answer a wide range of questions, leveraging YouTube videos and other information to provide a comprehensive response. You could ask it to "Explain why volcanoes erupt to a third grader," and it will provide an answer with relevant video suggestions.
- Natural Language: The new system is designed to understand free-flowing conversations, allowing for follow-up questions and more intuitive interaction with the TV.
Currently, the feature is available on the new TCL QM9K series. Google has confirmed that Gemini will expand to more devices later this year, including the Google TV Streamer, Walmart's onn 4K Pro streaming device, and select Hisense and TCL models.
To activate the new features, users can either say "Hey, Google" or press the microphone button on their TV remote, just as they would with Google Assistant. With this update, Google is positioning the television as a more central, interactive hub in the home.
Google Gemini Now Lets You Share Custom 'Gems'.
Google is giving its Gemini AI a major collaboration boost by introducing a highly anticipated feature: the ability to share custom "Gems." This update transforms Gems from a personal productivity tool into a powerful asset for teams and communities, allowing users to easily share their finely-tuned AI prompts and workflows.
What are Google Gemini Gems?
Google Gemini Gems are a feature that allows you to create your own custom AI expert. Instead of starting from scratch with a new prompt every time, you can give a Gem a specific set of instructions and a persona. This Gem will then remember these details, providing consistent, tailored responses for any task you ask.
The purpose of Gems is to streamline your workflow and save time. For example, you can create a Gem to act as a "creative writer" with a specific style, or a "meeting summarizer" that always focuses on action items. By saving these instructions once, you can reuse the Gem repeatedly for your specific needs.
Gems are not just for personal use. They can also be shared with other Gemini users, making them a powerful tool for collaboration. You can share your custom-built Gems with colleagues or friends, allowing a whole team to work with the same personalized AI assistant.
Share Your Custom Gems With Others Like a File.
The new sharing functionality is designed to be intuitive and familiar to anyone who uses Google Workspace products like Docs or Sheets. A user can now hit the Share icon in their Gem manager, add other users by name or group, and set permissions to either view or edit the Gem. There's also an option to generate a shareable link, making it simple to distribute a Gem across a team or an online community.
Here is a step-by-step guide to share Gemini Gems:
1. Open your Gem Manager: Go to the Gemini web app and open your Gem Manager. You can usually find this in the side panel or menu.
2. Find the Share Button: Look for the "Share" button next to the Gem you want to share. This button is typically located next to the edit icon.
3. Choose your sharing method: A pop-up will appear, similar to sharing a Google Doc. You can either enter the email addresses of the people you want to share with or copy a public link.
4. Set Permissions: You can control whether the recipient can simply view the Gem or has permission to edit its instructions.
Note: Once you share the Gem, it will be stored in a new "Gemini Gems" folder in your Google Drive, and its permissions will be protected by Drive's sharing settings.
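The view/edit distinction in the steps above works much like Drive's sharing roles. As a toy model (this is an illustration, not Google's API), the semantics look like this: viewers can use a Gem, while only editors can change its instructions.

```python
# Toy model of Gem sharing permissions (illustrative only, not Google's API):
# "view" lets a user run the Gem; "edit" also lets them change its instructions.
class Gem:
    def __init__(self, name: str, instructions: str, owner: str):
        self.name = name
        self.instructions = instructions
        self.permissions = {owner: "edit"}  # the owner can always edit

    def share(self, user: str, role: str) -> None:
        assert role in {"view", "edit"}
        self.permissions[user] = role

    def update_instructions(self, user: str, new_text: str) -> None:
        if self.permissions.get(user) != "edit":
            raise PermissionError(f"{user} may only view this Gem")
        self.instructions = new_text

gem = Gem("Meeting Summarizer", "Focus on action items.", owner="alice")
gem.share("bob", "view")  # bob can use the Gem but cannot edit it
```

The point of the model: a shared Gem behaves like a shared document, so a team edits one set of instructions rather than copying prompts around.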
This update positions Gemini with a clear advantage over some competitors, as it directly addresses a key need for professional and team-based work. By enabling easy, Google Docs-style collaboration on AI workflows, Google is making it easier for users to standardize tasks, share best practices, and work together on projects that leverage AI.
This enhanced shareability not only improves Gemini's utility for enterprise users but also empowers individuals to share their creativity and expertise with a wider audience, fostering a more connected and collaborative AI ecosystem.
Gemini Canvas Introduces 'Select and Ask' for Intuitive Visual Editing.
*Image Source: Google*
Google is empowering developers and designers with an incredibly intuitive new feature in Gemini Canvas: "Select and Ask." This update revolutionizes how users can visually edit web applications, allowing for direct, no-code modifications by simply clicking an element and describing the desired change in natural language.
"Select and Ask": The Future of Visual Web App Editing.
Gemini Canvas, Google's generative AI platform for web development, is designed to streamline the creation and modification of web apps. The "Select and Ask" feature takes this a giant leap forward by introducing a direct, conversational approach to UI (User Interface) adjustments.
Here's how this innovative feature works:
1. Click to Select: Users can now directly click on any element within their web application in Gemini Canvas's preview mode. This could be a button, a text box, an image, a navigation bar, or any other UI component.
2. Describe the Change: Once an element is selected, a prompt will appear, allowing the user to describe the desired modification using natural language. For example, you could say:
   - "Make this button green and slightly larger."
   - "Change the font of this headline to sans-serif and bold."
   - "Move this image to the left and add 20 pixels of padding."
   - "Change the color scheme of this section to a darker theme."
3. Instant Preview: The moment the description is entered, Gemini Canvas uses its underlying AI models to interpret the request and instantly apply the changes in the preview mode. This allows for real-time iteration and immediate visual feedback without writing a single line of code.
4. No Code Necessary: The power of "Select and Ask" lies in its ability to abstract away the complexity of code. Users don't need to understand CSS, HTML, or JavaScript to make visual adjustments. Gemini's AI handles the translation from natural language intent to code modifications behind the scenes.
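To make the natural-language-to-code translation concrete, here is a deliberately simplified sketch. It is not Gemini's implementation (which uses an LLM); a few keyword rules stand in for the model, just to show the shape of the mapping from a plain-English request to CSS declarations.

```python
# Toy illustration (not Google's implementation) of translating a
# natural-language edit request into CSS property/value pairs.
# In Gemini Canvas an LLM does this step; keyword rules stand in here.

def request_to_css(request: str) -> dict:
    """Map a plain-English edit request to CSS declarations."""
    rules = {
        "green": ("background-color", "green"),
        "larger": ("transform", "scale(1.1)"),
        "bold": ("font-weight", "bold"),
        "sans-serif": ("font-family", "sans-serif"),
    }
    css = {}
    for keyword, (prop, value) in rules.items():
        if keyword in request.lower():
            css[prop] = value
    return css

print(request_to_css("Make this button green and slightly larger"))
# → {'background-color': 'green', 'transform': 'scale(1.1)'}
```

The real system must also resolve *which* element "this button" refers to, which is exactly what the click-to-select step provides.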
Impact on Web Development and Design Workflows.
"Select and Ask" in Gemini Canvas is poised to significantly impact various roles within the web development and design ecosystem:
- For Designers: It allows designers to rapidly prototype and iterate on UI elements with unprecedented speed, directly seeing their vision come to life without needing developer intervention for minor tweaks.
- For Developers: While it doesn't eliminate coding, it frees up developers from mundane visual adjustments, allowing them to focus on more complex functionality and backend logic. It also simplifies client feedback loops, as changes can be demonstrated and implemented instantly.
- For Non-Technical Stakeholders: Business owners, product managers, and other non-technical stakeholders can now participate more directly in the design process, providing immediate feedback and seeing changes applied in real-time. This can bridge communication gaps and accelerate project timelines.
- Rapid Prototyping and A/B Testing: The ability to make instant visual changes makes rapid prototyping and A/B testing far more efficient. Different design variations can be generated and compared almost instantaneously.
This new feature in Gemini Canvas underscores Google's commitment to making generative AI a cornerstone of creative and technical workflows, democratizing web development by putting powerful visual editing capabilities directly into the hands of users with natural language.
Google AI Plus: Budget-Friendly AI Subscription Tier for Emerging Markets.
What's Included in Google AI Plus?
- Gemini 2.5 Pro: The plan includes access to Google's sophisticated Gemini 2.5 Pro model. While the usage limits are less than the premium tiers, it still provides a powerful AI assistant for writing, coding, brainstorming, and complex problem-solving.
- Gemini in Google Apps: A key feature of this plan is the integration of Gemini into popular Google apps. Users can get AI assistance directly within Gmail and Docs to help with drafting emails, writing documents, and organizing content.
- Creative Tools: The subscription provides access to new creative tools, including Veo 3 Fast, Google's AI model for video generation. This allows users to create short videos directly from text prompts.
- Google One Storage: AI Plus is not just about AI; it's also a value-added service. The subscription includes 200 GB of Google One storage, which can be used across Google Drive, Gmail, and Google Photos. This is a significant benefit for users in markets where storage space is at a premium.
- NotebookLM: Subscribers also get expanded access to NotebookLM, Google's AI-powered research and writing assistant, with more Audio Overviews, notebooks, and sources per notebook.
Google AI Plus vs. Google One AI Premium.
| Feature | Google AI Plus | Google One AI Premium |
|---|---|---|
| Primary Goal | Democratize AI in emerging markets | Cater to advanced users and professionals |
| Core AI Model | Access to Gemini 2.5 Pro (with lower usage limits) | Highest usage limits for Gemini 2.5 Pro |
| Context Window | Standard context window | 1 million tokens |
| Creative Tools | Veo 3 Fast (video generation) | Veo 3 Fast + access to Veo 3 and other models |
| Deep Research | Limited access to Deep Research | 20 reports per day (on Gemini 2.5 Pro) |
| Deep Think | Not included | 10 prompts per day (exclusive feature) |
| Google One | 200 GB storage | 2 TB storage + other premium benefits |
| Price | More affordable, regional pricing | Standard pricing ($19.99/month) |
The Bigger Picture: A Strategic Move.
Google Breaks Down Gemini's Usage Limits for Free and Paid Users.
For months, Google has been using vague language like “limited access” and “highest access” to describe the usage caps for its Gemini AI. This left many users, from casual testers to professional creators, in the dark about what they were truly getting. Now, Google has finally provided a clear and detailed breakdown of the daily and monthly limits for its different Gemini tiers.
This is a major update for anyone in the tech and AI space, as it allows users to make informed decisions about which plan is right for them. Here’s a detailed look at the new limits for prompts, image generation, and other advanced features.
The Free Tier: What Users Get with Gemini 2.5 Pro.
The free tier is designed for general use and for those who want to experience the power of Gemini without a subscription. While it provides a solid introduction, the limits are designed to prevent heavy, daily use.
- Prompts: You can use Gemini 2.5 Pro for up to 5 prompts per day, a significant clarification given the previously unstated limits.
- Image Generation: You can generate or edit up to 100 images per day, a generous allowance for artists, social media managers, or anyone who needs a steady stream of visual content.
- Context Window: The free tier includes a 32,000-token context window, which is ample for most standard conversations.
- Deep Research: For more in-depth queries, you are limited to 5 Deep Research reports per month. These reports are powered by the less advanced Gemini Flash model.
- Audio Overviews: All users, including free ones, can get up to 20 Audio Overviews per day.
Google AI Pro: The Sweet Spot for Creators.
- Image Generation: This plan offers a massive increase in creative capacity, allowing for up to 1,000 image generations per day.
- Context Window: The context window is expanded to a colossal 1 million tokens, enabling significantly longer conversations and the ability to process large documents in a single go.
- Deep Research: You get a substantial boost to 20 Deep Research reports per day, and they are powered by the more advanced Gemini 2.5 Pro model.
- Video Generation: The Pro tier introduces video creation with 3 Veo 3 Fast videos per day (currently in preview).
- Other Benefits: The plan also includes priority access to new features and other Google One benefits, like 2TB of storage.
Google AI Ultra: The Ultimate Plan for Power Users.
- Prompts: You get up to 500 prompts per day with Gemini 2.5 Pro, a staggering 100x increase compared to the free tier.
- Image Generation: Image generation remains at a high of 1,000 per day, matching the Pro plan.
- Context Window: The standard context window is also 1 million tokens, while the tier-exclusive Deep Think feature uses its own dedicated window.
- Deep Research: The daily limit is a massive 200 Deep Research reports per day, providing almost unlimited research capabilities.
- Deep Think: This plan provides exclusive access to the Deep Think reasoning model, with a limit of 10 prompts per day and a 192,000-token context window.
- Video Generation: You can generate up to 5 videos per day using the latest Veo 3 model (currently in preview).
- Other Benefits: AI Ultra subscribers get 30TB of Google One storage, among other premium benefits.
Final Thoughts: The Importance of Transparency.
Google's "AI Mode" Expands to More Languages.
Google has announced a major expansion of its "AI Mode" in Search, bringing the powerful, AI-driven experience to millions of new users worldwide. The feature, which was previously only available in English, now supports five new languages: Hindi, Indonesian, Japanese, Korean, and Brazilian Portuguese.
This update represents a significant step in Google's mission to make its most advanced AI capabilities globally accessible and locally relevant. According to Google, this goes beyond simple translation, as the company has leveraged a custom version of its Gemini 2.5 model to ensure a "nuanced understanding of local information."
What is AI Mode?
AI Mode is a new tab within Google Search designed to handle complex, multifaceted queries that would typically require multiple searches. It uses a "query fan-out" technique to issue multiple related searches concurrently across various subtopics and data sources. This method allows it to provide comprehensive, AI-based answers that offer greater breadth and depth of information than a traditional search.
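The fan-out pattern itself is straightforward to sketch: decompose one broad query into subqueries, run them concurrently, and merge the results. In the sketch below, `search_subtopic` is a hypothetical stand-in for a real search backend, not anything Google exposes.

```python
# Minimal sketch of the "query fan-out" pattern: one broad query is split
# into subqueries that run concurrently, and their results are merged.
# `search_subtopic` is a placeholder for a real search backend.
from concurrent.futures import ThreadPoolExecutor

def search_subtopic(subquery: str) -> list[str]:
    # Placeholder backend: returns a fake result for illustration.
    return [f"result for: {subquery}"]

def fan_out(query: str, subtopics: list[str]) -> list[str]:
    subqueries = [f"{query} {topic}" for topic in subtopics]
    with ThreadPoolExecutor(max_workers=8) as pool:
        result_lists = pool.map(search_subtopic, subqueries)
    # Flatten the per-subtopic result lists into one merged answer set.
    return [item for results in result_lists for item in results]

results = fan_out("trip to Kyoto", ["hotels", "food", "transit"])
```

The concurrency is the point: breadth across subtopics comes at roughly the latency of a single search, which is what lets one AI answer cover ground that would otherwise take multiple queries.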
The feature is particularly helpful for exploratory questions, such as planning a trip, finding local recommendations, or understanding complex topics. It also offers conversational follow-up questions, similar to what users have come to expect from Gemini and AI Overviews.
Impact and Future Outlook.
The expansion to these new languages comes shortly after Google made AI Mode available in over 180 countries and territories. This rapid rollout underscores the feature's importance to Google's future strategy. Google has claimed that AI Overviews and AI Mode are driving more queries and quality clicks to websites, despite some concerns from publishers about a potential drop in traffic.
With this expansion, Google is making it clear that AI-powered search is here to stay and will continue to evolve, reaching an ever-growing global audience.
Gemini App Now Allows Users to Upload Audio Files.
Google has rolled out a highly-requested "quality-of-life" improvement for its Gemini app, introducing the ability for users to upload audio files on the web, as well as on Android and iOS devices. This new feature significantly enhances Gemini's capabilities by allowing it to process and understand spoken content from external sources.
The process for uploading audio is straightforward and consistent across platforms. Users can access the feature by tapping the "plus" menu and selecting "Files" on their mobile device or "Upload files" on the web. The tool supports popular audio file formats, including MP3, M4A, and WAV, making it versatile for a wide range of uses.
Subscription Tiers and Audio Length.
Google has implemented a tiered system for audio length based on a user's subscription status:
- Free Users: Can upload audio files with a total length of up to 10 minutes. This is perfect for quick transcriptions, summarizing short meetings, or processing voice notes.
- Google AI Pro / Google AI Ultra Subscribers: Have a much larger capacity, allowing them to upload audio files up to 3 hours in length. This is ideal for professionals, students, and anyone needing to transcribe or analyze longer-form content like lectures, interviews, or podcasts.
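The format and length rules above are simple enough to express as a pre-upload check. This is an assumed client-side helper for illustration, not part of the Gemini app: the function name and tier labels are hypothetical, while the limits (MP3/M4A/WAV, 10 minutes free, 3 hours for AI Pro and AI Ultra) come from the article.

```python
# Hypothetical pre-upload check (not part of the Gemini app) encoding the
# limits described above: MP3/M4A/WAV formats, 10 minutes on the free tier,
# 3 hours (180 minutes) for AI Pro and AI Ultra subscribers.
SUPPORTED_FORMATS = {".mp3", ".m4a", ".wav"}
MAX_MINUTES = {"free": 10, "pro": 180, "ultra": 180}

def can_upload(filename: str, duration_minutes: float, tier: str) -> bool:
    ext = filename[filename.rfind("."):].lower()
    if ext not in SUPPORTED_FORMATS:
        return False  # unsupported audio format
    return duration_minutes <= MAX_MINUTES[tier]

print(can_upload("lecture.mp3", 90, "pro"))   # True: within the 3-hour Pro limit
print(can_upload("memo.wav", 25, "free"))     # False: exceeds the 10-minute free cap
```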
This feature opens up a world of possibilities, from easily transcribing lectures for students to summarizing long interviews for journalists, making Gemini an even more powerful tool for productivity.
Gboard's New AI-Powered Writing Tools for Better Messaging.
Google is fundamentally changing how we type on our phones by integrating advanced artificial intelligence directly into Gboard. This new feature, officially called "Writing Tools," leverages the on-device Gemini Nano model to provide a seamless, private, and powerful writing experience. As this update rolls out to more Android phones, here is a comprehensive look at what the feature does, how to use it, and what it means for mobile communication.
What Are Gboard's New AI Writing Tools?
The Gboard Writing Tools are a suite of generative AI features designed to assist with drafting and refining text. Unlike cloud-based tools, all processing happens securely on your device, ensuring that your data remains private. This feature is a significant leap beyond simple autocorrect, offering a range of capabilities that can be accessed with a single tap.
The primary functions include:
- Proofread: Instantly scan and correct your text for grammatical errors, spelling mistakes, and punctuation issues. This is a one-tap solution for cleaning up your messages.
- Rephrase: Offers alternative phrasing for your text. This can be used to improve clarity, make a message more concise, or simply find a better way to express an idea.
- Tone Adjustment: Adjust the tone of your writing to fit the context. Options include making your text more Professional, Friendly, or adding Emojis for a more expressive tone.
How to Use the New Gboard Feature: A Step-by-Step Guide
- Look for the Icon: Open any app where you can type, like Gmail or Messages. A new pencil with a star icon will appear in the Gboard toolbar, either next to the microphone icon or within the menu.
- Tap to Activate: After typing or selecting a paragraph of text, tap the Writing Tools icon.
- Choose a Function: A menu will pop up with options like "Proofread," "Rephrase," and various tone adjustments.
- Review and Apply: The AI will generate a suggestion. You can then review the new text and, if you like it, tap "Use this" to replace your original text.
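The review-and-apply flow above can be sketched in a few lines of Python. This is a toy, rule-based stand-in: the real feature runs the Gemini Nano model on-device, and every name here (`writing_tools`, `Suggestion`, `apply_suggestion`) is illustrative, not a Gboard API.

```python
# Toy sketch of the Writing Tools flow: pick a function, get a suggestion,
# then accept ("Use this") or keep the original text.
from dataclasses import dataclass

@dataclass
class Suggestion:
    action: str
    original: str
    revised: str

def _proofread(text: str) -> str:
    # Stand-in for the AI: capitalize the first letter, add end punctuation.
    text = text.strip()
    if text and not text[0].isupper():
        text = text[0].upper() + text[1:]
    if text and text[-1] not in ".!?":
        text += "."
    return text

def _rephrase(text: str) -> str:
    # Stand-in for the AI: a trivial "more concise" rewrite.
    return " ".join(text.split())

def writing_tools(text: str, action: str) -> Suggestion:
    """Generate a suggestion for the selected function."""
    handlers = {"Proofread": _proofread, "Rephrase": _rephrase}
    if action not in handlers:
        raise ValueError(f"unknown action: {action}")
    return Suggestion(action=action, original=text, revised=handlers[action](text))

def apply_suggestion(s: Suggestion, accept: bool) -> str:
    """Tapping "Use this" replaces the original; declining keeps it unchanged."""
    return s.revised if accept else s.original
```

The point of the structure is that the suggestion is always reviewable: nothing overwrites the user's text until they explicitly accept it.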
Gemini App Gets a Major Upgrade with the Nano-Banana AI Model.
Google has rolled out a significant update to its Gemini app, bringing a host of new features that enhance its creativity, privacy, and utility. The highlight of the update is the introduction of a powerful new image generation model, internally codenamed "Nano-Banana," which allows users to create and edit images with unprecedented consistency and control.
The Power of "Nano-Banana".
Officially known as Gemini 2.5 Flash Image, the new model is designed to solve one of the biggest challenges in AI image generation: maintaining a consistent subject. With this new feature, users can generate a series of images featuring the same person, pet, or object in different settings, outfits, and poses.
The model's intelligence also allows for prompt-based editing, enabling users to make precise, local changes to an image using simple, natural language commands. The model can even fuse elements from multiple photos into a single, cohesive scene, showcasing a powerful new level of AI-driven creativity.
More Than Just Images: New Privacy and Productivity Features
The "Nano-Banana" model is just one part of a broader update to the Gemini app. Google has also introduced several new features designed to improve the user experience:
- Temporary Chat: For enhanced privacy, a new Temporary Chat mode ensures that conversations are not used for AI training and are automatically deleted after 72 hours.
- Upgrades to Gemini Live: The live assistant feature is now more integrated, with on-screen guidance and the ability to connect to other Google apps like Calendar, Keep, and Tasks.
- Searchable Chat History: Users can now easily search through their past conversations with Gemini to quickly find information or revisit previous ideas.
These updates collectively transform the Gemini app into a more versatile and intelligent tool for a wide range of creative and productive tasks.
Google Teases Major Nest Cam and Gemini Updates for October.
Google has officially teased a new era for its smart home products, with a major announcement scheduled for October 1. The company's teaser confirms that new Nest Cam and Gemini updates are on the way, promising a significant evolution for the Google Home ecosystem.
The most anticipated reveal is a new generation of the Nest Cam. While Google's teaser image shows a redesign, with a new look for the camera sensor, prior leaks suggest more is coming. The new indoor Nest Cam is rumored to support 2K video, a significant jump in resolution that would improve image clarity and detail. Updates to the outdoor camera and doorbell are also expected to be part of the announcement.
In addition to hardware, Google is also focused on software. The teaser confirms that "Gemini is coming to Google Home," indicating that Google's advanced AI model will be integrated into the smart home experience. A new Gemini-powered speaker is also expected to be unveiled. This integration is likely to enhance the capabilities of Google Home devices, offering more intuitive and intelligent interactions.
"Is that you, Gemini? Come in and make yourself at Home 🏠 Sign up for updates: https://t.co/V85WgPJvQN" — Made by Google (@madebygoogle), September 2, 2025
This forthcoming event on October 1 marks the first official teaser for a new generation of Nest Cam products and signals Google's commitment to bringing its AI innovations to the smart home.
Google Gemini Rolls Out Temporary Chats Option.
As announced earlier this month, Google is significantly enhancing user privacy and control in its Gemini app by introducing new features, including a "Temporary Chat" mode and more transparent data settings. This move is part of Google's ongoing effort to make its AI assistant a more personal, proactive, and powerful tool while giving users greater command over their data.
The Introduction of Temporary Chats.
The most notable new feature is Temporary Chat, a mode designed for quick, one-off conversations that users do not want to be saved. This feature, which functions similarly to an incognito window in a web browser, is ideal for exploring sensitive or private questions or brainstorming ideas outside of a user's usual topics.
Chats conducted in this mode will not appear in a user's recent chats or Gemini Apps Activity. Crucially, they will not be used to personalize a user's Gemini experience or to train Google's AI models. For technical purposes, the chats are saved for up to 72 hours to allow for feedback processing, after which they are permanently deleted.
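The retention rule described above, kept briefly for feedback processing, then permanently deleted after 72 hours, can be sketched as a simple time-to-live check. The names (`TemporaryChat`, `purge_expired`) are illustrative, not Gemini internals.

```python
# Sketch of a 72-hour retention window: chats older than the window are purged.
from datetime import datetime, timedelta

RETENTION = timedelta(hours=72)

class TemporaryChat:
    def __init__(self, text: str, created_at: datetime):
        self.text = text
        self.created_at = created_at

def purge_expired(chats, now: datetime):
    """Return only the chats still inside the 72-hour retention window."""
    return [c for c in chats if now - c.created_at < RETENTION]
```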
Enhanced Data Controls and Personalization.
In addition to Temporary Chats, Google has also rolled out a new "Personal context" setting that allows Gemini to learn from past conversations. When this feature is enabled, Gemini remembers key details and preferences, leading to more relevant and natural responses. While this feature is on by default, users have full control and can easily turn it on or off at any time.
Furthermore, the "Gemini Apps Activity" setting has been renamed to a more straightforward "Keep Activity." This setting gives users granular control over whether a sample of their future uploads will be used to help improve Google's services. A new toggle has also been added to specifically control whether audio, video, and screen shares from Gemini Live are used for product improvement, with this setting off by default.
These changes collectively reflect a strategic balance between creating a more personalized AI experience and empowering users to make informed choices about their data.
Google Vids Adds AI Avatars and Launches Free Consumer Version.
Google is making waves in the world of video creation with significant updates to Google Vids. The platform, which has already surpassed one million monthly active users, is now rolling out AI avatars for seamless video production and introducing a basic, free version of its editor for all consumers.
Google Vids Ushers in a New Era of Video with AI Avatars.
In a move set to transform how teams communicate and collaborate, Google has officially launched AI avatars within its Vids video creation app. This highly anticipated feature, first announced at Google I/O, allows users to generate polished, narrated videos by simply writing a script and selecting a digital avatar to deliver the message.
The new AI avatars are designed to eliminate the common pain points of traditional video production, such as the hassle of coordinating with on-camera talent or managing multiple takes. This functionality is ideal for a wide range of corporate and educational content, including:
- Employee Training: Creating consistent and scalable training videos.
- Product Explanations: Delivering clear, concise demos and overviews.
- Company Announcements: Producing professional-looking messages from leadership or HR.
Users can choose from a selection of preset avatars, each with a distinct look and voice. The system automatically handles the delivery of the script, including appropriate pacing and tone, providing a fast and efficient way to create high-quality content without a camera or production crew.
Vids Now Free for Everyone.
While the advanced AI features remain part of Google Workspace and Google AI Pro/Ultra subscriptions, Google is now making the basic Vids editor available to all consumers at no cost. This move significantly broadens the platform's reach, making its user-friendly tools accessible to a wider audience.
The free version includes core editing capabilities, such as the timeline-based editor, and provides access to new templates for creating personal videos like tutorials, event invitations, and social media content. The free version integrates seamlessly with Google Drive, allowing users to easily import media and start creating.
Additional AI-Powered Enhancements
Beyond AI avatars, Google is rolling out several other generative AI features to enhance the Vids experience for its paid users:
- Image-to-Video: A new capability, powered by the Veo 3 model, allows users to transform static images into dynamic, eight-second video clips with sound using a simple text prompt.
- Transcript Trim: This smart editing tool uses AI to automatically detect and remove filler words and awkward pauses from a video’s transcript, significantly reducing editing time.
- Expanded Formats: Google confirmed that portrait, landscape, and square video formats are coming soon, ensuring content is optimized for various platforms like YouTube and social media.
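The Transcript Trim idea can be illustrated with a rule-based sketch: the real feature uses AI to detect filler words and awkward pauses, but the core transformation, dropping filler tokens from a transcript, looks like a simple word filter. `FILLERS` and `trim_transcript` are illustrative names, not the Vids API.

```python
# Rule-based stand-in for Transcript Trim: remove common filler words
# while preserving the order of the remaining words.
FILLERS = {"um", "uh"}

def trim_transcript(transcript: str) -> str:
    """Drop filler words (compared case-insensitively, ignoring punctuation)."""
    words = transcript.split()
    kept = [w for w in words if w.lower().strip(",.") not in FILLERS]
    return " ".join(kept)
```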
Google Play Store Expands "Ask Play About This App" Feature with Gemini AI.
Google is continuing to expand the rollout of its AI-powered "Ask Play about this app" feature in the Play Store. This innovative tool, which integrates the power of Gemini AI directly into app listings, is designed to provide users with instant, conversational answers to their questions about an application's features and functionality.
While the feature was first introduced to a limited number of users and a select group of apps earlier this year, its availability has been steadily increasing. Sources indicate that "Ask Play" is now live for a wide range of popular and new applications across the store, marking a significant step towards a more intelligent and user-friendly app discovery experience.
The tool works by allowing users to either type a custom query or choose from a list of suggested questions, such as "How do I use this app?" or "What are its key features?" The Gemini-powered AI then generates a helpful response directly on the app's detail page, saving users the time and effort of searching for answers on the web or sifting through reviews.
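The interaction described above, pick a suggested question or type your own, and get an answer drawn from the listing's own information, can be sketched as a lookup. This is a toy keyword match; the real feature is powered by Gemini, and all data and names here (`LISTING_FAQ`, `ask_play`) are hypothetical.

```python
# Toy sketch of the "Ask Play" flow: match a question against listing
# keywords and return the corresponding answer.
SUGGESTED_QUESTIONS = ["How do I use this app?", "What are its key features?"]

LISTING_FAQ = {
    "use": "Open the app and follow the on-screen setup to get started.",
    "features": "Key features include offline mode and cross-device sync.",
}

def ask_play(question: str) -> str:
    """Return the first listing answer whose keyword appears in the question."""
    q = question.lower()
    for keyword, answer in LISTING_FAQ.items():
        if keyword in q:
            return answer
    return "Sorry, no answer found for that question on this listing."
```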
This update reflects Google's strategic focus on infusing AI into its core services to improve the user experience. By providing a conversational layer of information, the company aims to reduce friction for users and help them make more informed decisions about which apps to download.
However, the rollout is still ongoing. The feature is not yet available for every single application on the Play Store, and in some cases, even major Google apps like YouTube and Google Search are still awaiting the update. As is typical with Google updates, this phased rollout allows the company to gather feedback and make adjustments before a full-scale launch. Google also introduced a feature to enable auto-opening the app instantly after installation.
For developers, the continued expansion of "Ask Play about this app" underscores the importance of a well-documented and informative app listing, as the AI draws its information from a variety of sources to provide its answers. As this tool becomes more widespread, it is poised to become a key part of the app discovery journey for millions of Android users.
Google Translate Introduces AI-Powered Live Translation and Language Learning.
Seamless Live Translation.
Personalized Language Practice.
How to Access the New Google Translate Features.
- Update your Google Translate app (available on both Android and iOS).
- Tap Live translate to begin real-time conversation translation.
- Tap Practice to begin personalized learning sessions.
- For Live translate, simply speak after selecting the languages.
- For Practice, choose your skill level and goals to receive custom exercises.
Google Search AI Mode Expands with Powerful Agentic and Personalized Features.
Google is taking a major leap forward in how users interact with its search engine, announcing a significant expansion of its 'AI Mode' with new agentic and personalized features. This update, detailed in a recent blog post, is designed to transform Google Search from an information retrieval tool into a powerful, AI-powered agent that can help users get things done in the real world.
Introducing Agentic Capabilities: Your Personal Assistant in Search
One of the most groundbreaking additions is the new suite of "agentic" features. Rolling out initially as a Labs experiment for Google AI Ultra subscribers in the U.S., these capabilities allow AI Mode to perform multi-step tasks for you.
A prime example is the ability to book restaurant reservations. Instead of just showing a list of restaurants, AI Mode can now handle complex requests with multiple constraints. For instance, you could ask, "Find me a quiet Italian restaurant for four people at 7 PM on Saturday that's good for a birthday dinner and has outdoor seating." The AI will then search across various platforms to find real-time availability and present a curated list of options, complete with direct links to booking pages. Google notes this functionality will soon expand to include local service appointments and event tickets.
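The multi-constraint matching at the heart of that example can be sketched in plain Python: given a structured request (party size, time, cuisine, required features), filter candidates for availability. All data and field names below are illustrative; AI Mode works across live booking platforms, not a hard-coded list.

```python
# Simplified sketch of agentic constraint matching for a reservation request.
def find_tables(restaurants, party_size, time, cuisine, required_features):
    """Return names of restaurants satisfying every constraint."""
    matches = []
    for r in restaurants:
        if r["cuisine"] != cuisine:
            continue  # wrong cuisine
        if not required_features.issubset(r["features"]):
            continue  # missing a requested feature (e.g. outdoor seating)
        if party_size not in r["availability"].get(time, []):
            continue  # no table for this party size at this time
        matches.append(r["name"])
    return matches
```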
Deeply Personalized Results Based on Your Preferences
In addition to agentic actions, the update brings a new layer of personalization. For users in the U.S. who have opted into the AI Mode experiment, Google Search can now use previous conversations and search history to provide recommendations that are more tailored to your personal tastes.
This means if you're looking for a new restaurant, the AI will factor in your past preferences for specific cuisines or dining environments to suggest places it thinks you'll genuinely like. This level of personalization moves Google Search beyond simple queries to an experience that feels uniquely your own.
Collaboration and Global Expansion
The update also includes a new link-sharing feature, making it easy to share AI Mode responses with friends and family. This is especially useful for collaborative tasks like planning a trip or a group event, where multiple people can view and discuss the same results.
Finally, in a major step to make these advanced features more widely available, Google is expanding AI Mode to over 180 new countries and territories in English. This global rollout will allow millions more users to experience a more complex and nuanced search experience, marking a new era for Google Search's evolution.
Also Read: Google Adds AI Mode Shortcut to Android Search Widget.
Google Flights Unveils AI-Powered "Flight Deals".
Gone are the days of endless tab-hopping and meticulous date adjustments to find the perfect flight deal. Google is revolutionizing travel planning with the launch of "Flight Deals," a new AI-powered search tool seamlessly integrated within Google Flights. Designed specifically for flexible travelers whose top priority is saving money, this innovative feature promises to simplify the quest for affordable airfare.
How AI Transforms Your Flight Search.
At its core, "Flight Deals" leverages Google's advanced AI to understand the nuances of your travel preferences through natural language queries. Instead of rigid date and destination inputs, you can now describe your ideal trip as if you're talking to a friend. For instance, you could search for:
- "Week-long trip this winter to a city with great food, nonstop."
- "10-day ski trip to a world-class resort with fresh powder."
- "Romantic weekend getaways."
- "See the cherry blossoms in Japan."
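Queries like the ones above boil down to mapping free-form text onto structured search filters. The sketch below does this with a few keyword rules; the real Flight Deals feature uses far more capable language models, and every rule here is illustrative only.

```python
# Toy sketch: extract structured flight-search filters from a
# natural-language trip description.
def parse_trip_query(query: str) -> dict:
    """Pull out stops, season, and trip length hints from the query text."""
    q = query.lower()
    filters = {}
    if "nonstop" in q or "non-stop" in q:
        filters["stops"] = 0
    for season in ("winter", "spring", "summer", "fall"):
        if season in q:
            filters["season"] = season
    if "week-long" in q:
        filters["duration_days"] = 7
    elif "weekend" in q:
        filters["duration_days"] = 2
    return filters
```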
Beyond Filters: A More Intuitive Planning Experience.
Rollout and Availability.