Google Chrome: Move the Address Bar to the Bottom on Android.

Chrome: Change the Position of the Address Bar on Android

Google has introduced a feature that lets you move Chrome's address bar to the bottom of the screen on Android devices. Announced in a blog post on August 3, 2025, this update improves browsing comfort for users who prefer one-handed operation or find the bottom of the screen easier to reach on larger devices.

The Chrome team stated, “We launched this feature because we heard your requests loud and clear. Now you can customize your browsing experience to suit your habits.” This update aligns with Google's broader efforts to offer more flexible and personalized experiences across its platforms.

Since I have a phone with a large screen, I am definitely going to use this feature to get better control while browsing in my favourite browser. What about you? If you use Chrome on an Android phone and haven't moved the address bar to the bottom yet, give it a try. Follow the steps below to enable it:

How To Move Chrome Address Bar To The Bottom?

Google has made it simple to switch the location of the address bar:

Method 1: Long-Press Option

  1. Open your Google Chrome app on your Android phone.
  2. Long-press the address bar; a menu appears with options to move the address bar or copy the link.
  3. Tap “Move address bar to bottom,” and the address bar will slide smoothly to the bottom of your screen.
Move Chrome Address Bar to Bottom

Method 2: From Settings

  1. Tap the three-dot menu in Google Chrome on your Android Device.
  2. Go to Settings > Address Bar.
  3. Choose Top or Bottom according to your preference.
Move Address Bar to Top or Bottom

You can move the address bar back to the top at any time using the same methods.

The repositioning of the address bar might seem like a small UI tweak, but it’s part of a larger design philosophy: making tools more ergonomic, accessible, and tailored to user behavior. One-handed usability is becoming increasingly important as smartphone screens grow.

Google Is Quietly Preserving Some Goo.gl Links Despite Shutting the Service Down.

Goo.gl Link Shortener

In a surprising twist, Google is preserving access to select goo.gl short links, even though the official Google URL Shortener service was discontinued years ago. According to a recent investigation by Android Authority, while the public-facing service is no longer functional, some old shortened links still redirect properly, raising questions about Google’s handling of legacy data and link preservation.

In an update to its developer blog, Google explained:

While we previously announced discontinuing support for all goo.gl URLs after August 25, 2025, we've adjusted our approach to preserve actively used links. We understand these links are embedded in countless documents, videos, posts, and more, and we appreciate the input received.

The End of Goo.gl.

The Google URL Shortener service, launched in 2009, was originally created to help users share links in a more compact format, especially for mobile users and platforms like Twitter. Over the years, it gained popularity among marketers, bloggers, and everyday users.

However, in March 2018, Google announced the gradual shutdown of the goo.gl service in favor of Firebase Dynamic Links (FDL), citing a shift in user behavior toward more dynamic mobile-first solutions. The URL shortener was fully shut down to the public in March 2019, and Google stated that existing links would continue to redirect, but the creation and management of new ones would no longer be supported.

In July 2024, Google announced plans to sunset the legacy goo.gl shortener entirely, citing declining usage: 99% of the links had seen no activity as of mid-2024. Those links were set to return errors, breaking outdated or embedded references across the web.

Fast forward to 2025, and some goo.gl links are surprisingly still active, redirecting users to the correct destination. However, not all goo.gl links behave the same: some now lead to a generic error page, while others continue to work perfectly.

This inconsistency suggests that Google might be selectively preserving certain high-traffic or important goo.gl links, possibly based on usage history or relevance. While there’s no official statement from Google on the exact criteria, it’s clear that some level of backend maintenance or archival logic is in place.
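If you want to check whether a particular goo.gl link still resolves, one quick way is to request it without following redirects and inspect the status code. Here is a minimal sketch using Python's requests library; the example short link is a hypothetical placeholder:

```python
import requests

def check_goo_gl(short_url: str) -> str:
    """Report whether a goo.gl short link still redirects."""
    # Don't follow the redirect; we only care about the first response.
    resp = requests.head(short_url, allow_redirects=False, timeout=10)
    if resp.status_code in (301, 302):
        return f"still active -> {resp.headers.get('Location')}"
    return f"no longer redirects (HTTP {resp.status_code})"

# Hypothetical example link; replace with one you need to verify.
print(check_goo_gl("https://goo.gl/example123"))
```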

Implications for Users and the Web.

This partial survival of goo.gl links offers both reassurance and a warning. On one hand, those with important legacy content tied to goo.gl links may be relieved that some of their URLs still work. On the other hand, the unpredictability means businesses and publishers relying on goo.gl for permanent redirects should consider migrating to a more stable URL shortening service.
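Before migrating, it helps to build an inventory of where goo.gl links still appear in your own content. A minimal sketch that scans a local directory of text, Markdown, or HTML files for them (the directory path is a hypothetical placeholder):

```python
import re
from pathlib import Path

GOO_GL = re.compile(r"https?://goo\.gl/[A-Za-z0-9]+")

def find_short_links(root: str) -> dict[str, list[str]]:
    """Map each file under root to the goo.gl links it contains."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if path.suffix not in {".html", ".md", ".txt"}:
            continue
        links = GOO_GL.findall(path.read_text(errors="ignore"))
        if links:
            hits[str(path)] = sorted(set(links))
    return hits

# Hypothetical content directory; point this at your own site or docs.
print(find_short_links("./site"))
```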

From a web archiving and digital preservation perspective, this raises interesting questions about Google’s long-term commitment to preserving parts of the internet’s past infrastructure. It also shows that tech giants, even when discontinuing a service, may quietly keep legacy features alive when they still provide value behind the scenes.

What is Google AI Mode in Search?

Google AI Mode

Google AI Mode is now officially available to all users, no longer limited to Google Pixel devices, and no Google Labs sign-in is required. You may have already tried it or seen someone using its full capabilities. If not, this is the perfect time to explore it.

This isn’t your traditional Google Search experience. AI Mode transforms how you interact with information, offering a completely new and immersive way to browse. Integrated directly into Google Search, it can answer almost anything you ask, not just through typing, but also using your voice, an image, or even a live video.

Yes, you read that right: you can ask questions live just by opening your camera. Amazing, isn’t it? It truly feels like we’re stepping into a whole new era of intelligent and interactive searching.

To better understand how AI Mode transforms your search experience, here’s a deep dive into what it is and how it works:

What is Google AI Mode?

Google AI Mode is a next-generation search experience built directly into Google Search, powered by the advanced Gemini 2.x language models. It transforms traditional searches by generating conversational, AI-generated responses instead of just listing links or snippets. The system can break down complex or multi-part queries into subtopics, conduct simultaneous searches, and synthesize findings into a clear, readable overview.
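Google has described this approach as a “query fan-out” technique. Conceptually it looks something like the sketch below; this is only an illustration of the idea, not Google's implementation, and the search_web and summarize helpers are hypothetical placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical placeholders standing in for a real search backend and an LLM.
def search_web(subquery: str) -> str:
    return f"results for {subquery!r}"

def summarize(query: str, findings: list[str]) -> str:
    return f"overview of {query!r} drawn from {len(findings)} searches"

def fan_out_search(query: str, subtopics: list[str]) -> str:
    """Run one search per subtopic in parallel, then synthesize an overview."""
    with ThreadPoolExecutor() as pool:
        findings = list(pool.map(search_web, subtopics))
    return summarize(query, findings)

print(fan_out_search(
    "best family laptop under $800",
    ["battery life", "durability", "parental controls"],
))
```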

What sets AI Mode apart is its multimodal capability: you can interact using text, voice, or images, and even use your phone’s camera for live video searching. Whether you’re snapping a photo, speaking a question aloud, or typing your query, AI Mode understands context and delivers helpful responses all within the familiar Google Search interface.

Launched experimentally in March 2025 through Search Labs, AI Mode has since rolled out more broadly in the U.S., India, and the U.K., but still operates as an opt-in experience for many users. You can enable it by selecting the dedicated AI Mode tab inside Google Search on mobile or desktop. As Google refines the feature with user feedback, it’s gradually expanding globally, offering richer, more intuitive search interactions.

How To Access Google AI Mode?

Google AI Mode is available directly through the Google Search bar via a glowing icon labeled "AI Mode". Initially launched through Search Labs, the feature was opt-in only. As of mid-2025, Google has started rolling it out more widely, especially in countries like the United States, India, and the United Kingdom. If you are in one of these supported regions, you will see the “AI Mode” tab in Google Search on Chrome or in the Google app for Android and iOS. If you use the Google app, you can also enable or disable AI Mode search from the custom widget shortcuts settings.

On mobile, this appears as a toggle or an extra card above regular search results. On desktop, it may show as a separate section at the top. On some devices, tapping the mic or camera icon also opens the multimodal AI features built into the mode. If you don't see the option, go to labs.google.com/search and enroll manually if it’s still available in your country.

Importantly, while Google AI Mode is part of the Search experience, it differs from Gemini chat. You don’t need to visit a separate site like gemini.google.com. Instead, AI Mode blends into your regular browsing and searching activities, offering instant answers, breakdowns, summaries, and follow-up suggestions all within the main Google interface. Over time, it is expected to become the default search experience for many users as Google continues its AI-first transformation.

Google AI Mode Search Result

How To Use Google AI Mode?

Google AI Mode is powered by Google's advanced Gemini models, which are designed to handle multiple types of input like text, images, audio, and video. Instead of simply matching keywords like traditional search, Gemini understands the context behind your query and responds with smart, conversational answers. This allows AI Mode to offer a more natural and interactive experience.

You can interact with AI Mode in several ways. Here are the three main modes of interaction available in Google AI Mode:

1. Text Input Mode

You can simply type your question or search query in the usual Google Search bar. With AI Mode enabled, instead of standard blue links, you'll receive AI-generated overviews with relevant insights, summaries, and suggested next steps. It makes your search more informative and contextual.

2. Voice Input Mode

Using your microphone, you can speak your queries just like talking to a voice assistant. AI Mode processes your speech in real time and returns results in the same AI-generated format. It’s great for hands-free use or when you're on the move.

3. Visual (Camera) Input Mode

This is one of the most futuristic features. You can point your camera at an object, document, or place and ask questions about it. For example, take a photo of a math problem or a plant, and AI Mode will try to answer or provide information based on what it sees, like Google Lens, but now powered by generative AI for smarter responses. 
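AI Mode itself has no public API, but the Gemini models behind it are available to developers through the Gemini API, which gives a feel for the same multimodal idea. Here is a minimal sketch using the google-genai Python SDK; the model name, image file, and API-key setup are assumptions, so check the current documentation:

```python
from google import genai
from PIL import Image

# Assumes the GEMINI_API_KEY environment variable is set.
client = genai.Client()

# Hypothetical photo, e.g. a houseplant you want identified.
image = Image.open("plant.jpg")

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumption: any current multimodal Gemini model
    contents=[image, "What plant is this, and how often should I water it?"],
)
print(response.text)
```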

This makes Google AI Mode feel less like a search engine and more like a helpful assistant that works across different inputs.

The underlying Gemini model is capable of drawing on the latest information from the web while simultaneously integrating learned user preferences to refine its output over time. This makes Google AI Mode not only faster and more convenient than older search methods, but also significantly more intelligent and capable. It represents a major leap forward in how users find, understand, and interact with information online.

How Is Google AI Mode Different from ChatGPT or Gemini?

As AI tools become more integrated into our daily digital lives, it’s natural to wonder how Google's new AI Mode stands apart from other popular tools like ChatGPT and Gemini. While all three leverage powerful AI models, their purpose, design, and experience vary greatly. Here's how AI Mode differs:

AI Mode vs ChatGPT:

ChatGPT is a conversational AI designed for open-ended dialogue, writing, learning, and creative tasks. You usually access it through a dedicated interface like the ChatGPT website or app. In contrast, Google AI Mode is embedded directly into Google Search. It enhances your search experience with live, AI-generated overviews and real-time web results. Plus, AI Mode supports multimodal input—you can interact using text, voice, or even your phone’s camera to ask about what you see.

AI Mode vs Gemini App:

Google Gemini is a standalone AI app that functions like a full digital assistant. It’s better suited for in-depth tasks like writing, brainstorming, or coding. While both Gemini and AI Mode are powered by Google’s Gemini models, AI Mode is focused on enriching the search experience, not replacing your assistant. It helps you get instant answers while browsing or searching, especially using visual or spoken input.

The Core Difference:

Google AI Mode is search-enhancing and visually interactive, while ChatGPT and the Gemini app are conversation-based and more general-purpose. AI Mode is ideal when you want quick, AI-powered context while browsing, especially when using your phone's camera or voice, making it feel like a smart layer over traditional Google Search.

Conclusion.

Google AI Mode represents a significant leap in how we interact with information online. Unlike traditional search experiences, it puts AI directly at your fingertips, allowing you to search and learn using text, voice, images, or even live video. Whether you’re looking for quick facts, exploring visual content, or asking complex questions in natural language, AI Mode simplifies and enhances the process with speed and context.

Its integration into everyday Google Search means you don’t need to switch to a different app or platform. The experience is seamless, intuitive, and designed to feel like you’re having a conversation with your browser. And with Google continuing to expand its multimodal capabilities, this is just the beginning of a new era of intelligent, interactive browsing.

If you haven’t tried it yet, now’s the perfect time to explore Google AI Mode and see how it can reshape your digital habits.

How To Transfer Ownership of a File in Google Drive.

Transfer Ownership of Google Docs

Last month, I was wrapping up a freelance project I had led for several months. All the project-related documents, reports, and shared resources were stored in my Google Drive. Since the project was about to end, I decided to hand all the related files over to my client so she could manage them going forward.

Simply giving her editing access wasn't enough; I needed to give her full ownership of the files. That's when I discovered how to transfer ownership of files in Google Drive.

Transfer Ownership of a File in Google Drive.

Transferring ownership ensures that the new person has the same full control over the file that you have. Here's how to do it step by step using your personal Google account.

Step 1: Open Google Drive and locate the file.

Go to drive.google.com in any browser where you're signed into your personal Google account. Navigate through your folders or use the search bar to find the specific file you want to transfer.

Step 2: Right-click the file and choose "Share."

Once you’ve found the file, right-click on it. From the context menu that appears, click on the “Share” option. This will open a sharing settings dialog where you manage access and permissions.

Share G-Drive File

Step 3: Add the person you want to make the new owner

If the intended new owner isn’t already listed, type their email address in the “Add people and groups” box at the top. Assign any access level you want (Viewer, Commenter, or Editor), add a short message in the message box if you like, then click “Send” to share the file. Make sure you enter a valid Google account email.

Share Google Drive Files

Step 4: Change their role to ‘Owner’

After the person has been added (or if they were already there), locate their name under the “People with access” section. Click the drop-down arrow next to their current role (e.g., Editor) and choose “Transfer ownership.”

Transfer Ownership

Step 5: Confirm the transfer

A pop-up will appear with a warning: "You'll be the owner until this person accepts ownership." Click the Send Invitation button in the pop-up, and the user will receive an email asking them to accept ownership of the file.

Send Invitation
Note: After ownership is transferred to the new user, you will still hold Editor access until the new owner removes you or changes your permissions. You won't be able to revert the ownership afterwards.

Step 6: Email Sent to New Owner.

The new owner will receive an email invitation to accept ownership of the shared file. They can accept or decline the request. Once accepted, complete ownership transfers to the new user.
Email Sent to New Owner
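If you have many files to hand over, the same invitation flow can be scripted with the Drive v3 API. Below is a minimal sketch using google-api-python-client; it assumes you already hold authorized OAuth credentials (creds) with a Drive scope, and on personal accounts the recipient still has to accept via the pendingOwner flow:

```python
from googleapiclient.discovery import build

def invite_new_owner(creds, file_id: str, new_owner_email: str) -> None:
    """Invite a user to take ownership of a Drive file (personal accounts)."""
    drive = build("drive", "v3", credentials=creds)

    # Find the recipient's existing permission on the file, if any.
    perms = drive.permissions().list(
        fileId=file_id, fields="permissions(id,emailAddress,role)"
    ).execute()["permissions"]
    perm = next((p for p in perms if p.get("emailAddress") == new_owner_email), None)

    if perm is None:
        # Share the file as Editor first; ownership needs an existing grant.
        perm = drive.permissions().create(
            fileId=file_id,
            body={"type": "user", "role": "writer", "emailAddress": new_owner_email},
        ).execute()

    # Flag the permission as pending owner; the recipient accepts by email.
    drive.permissions().update(
        fileId=file_id,
        permissionId=perm["id"],
        body={"role": "writer", "pendingOwner": True},
    ).execute()
```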

Note: In personal (free) Google accounts, you cannot directly transfer ownership of a folder. Google only allows the transfer of ownership of individual files, not folders. But there are many alternative ways to transfer ownership of an entire folder and its content that we are going to learn in the next part.

Transfer Ownership of Entire Folder in Google Drive.

If you are using a free personal Google account, you can transfer ownership of a folder to another user, but even after that, you will still own the subfolders and files within it. There is no direct way to transfer ownership of an entire folder along with its contents on a free account; that option is only available in paid Google Workspace editions.

Does that mean you need to pay for a premium plan to use this feature? Not really; there are a few alternative methods we can use to achieve the same goal. Let's look at a few of them here.

Method 1: Transfer Ownership Manually.

Step 1: Transfer Ownership of the Folder.

In this method, you transfer ownership to the new user in two passes. First, transfer ownership of the folder itself by following the same steps we performed above for a single file.

Step 2: Transfer Ownership of all SubFiles.

Once the folder ownership is transferred to the new user, open the folder and you will see that you are still the owner of everything inside it. Press Ctrl + A to select all the files at once, then repeat the first step to transfer ownership of all the selected files in one go.

Transfer Ownership of all Files at Once

Step 3: The New Owner receives an email.

The new owner will receive an email with a list of the files and a Respond button, which redirects them to Google Drive to review and accept ownership of the files.

Email Sent to New Owner

Step 4: Accept Ownership of all shared Files.

When the new owner opens the folder, they will still see your name as the owner of the files inside. They have two options: review and accept ownership of each file one by one, or select all the files at once and accept ownership of them together. Ask the new owner to follow the steps below:

To accept ownership, press Ctrl + A to select all the files, then click the Share+ icon. A pop-up will appear with an "Accept Ownership?" button next to your user ID.

Accept Ownership of Entire Folder

Note: If the folder contains subfolders with more files inside, you need to repeat steps 1 and 2 for each subfolder to transfer ownership of the files inside it. You also need to repeat these steps whenever new files are added to the folder. A script can take over this chore, as shown in the sketch below.
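Building on the invite_new_owner sketch above (same assumptions about credentials), the following walks a folder tree with the Drive API and files an ownership invitation for everything inside it, subfolders included:

```python
from googleapiclient.discovery import build

def transfer_folder_tree(creds, folder_id: str, new_owner_email: str) -> None:
    """Recursively invite a new owner for every file under a folder."""
    drive = build("drive", "v3", credentials=creds)
    query = f"'{folder_id}' in parents and trashed = false"
    page_token = None
    while True:
        result = drive.files().list(
            q=query,
            fields="nextPageToken, files(id, name, mimeType)",
            pageToken=page_token,
        ).execute()
        for item in result.get("files", []):
            if item["mimeType"] == "application/vnd.google-apps.folder":
                # Recurse into subfolders so nothing is missed.
                transfer_folder_tree(creds, item["id"], new_owner_email)
            else:
                invite_new_owner(creds, item["id"], new_owner_email)
        page_token = result.get("nextPageToken")
        if page_token is None:
            break
```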

If you use Google Workspace, you can instead move files into a Shared Drive, where ownership belongs to the team. The above methods are good if you are handling everything by yourself.

Reddit Aims to Become a Full-Fledged Search Engine as Q2 Profits Soar.

Reddit Logo
Key Takeaways.
  • Reddit plans to evolve into a full-fledged search engine powered by AI and real user discussions.
  • The platform reported its most profitable quarter ever, with a 78% year-over-year revenue increase.

Reddit is making a bold move toward becoming more than just a social platform: it now wants to be your go-to search engine. In its Q2 2025 shareholder letter, Reddit CEO Steve Huffman announced that the company is “concentrating its resources” on turning Reddit into a serious search competitor. With more than 70 million weekly active users engaging with its built-in search tools, Reddit is confident in the unique value it offers: real human answers from real community conversations.

One of the biggest drivers of this vision is Reddit Answers, an AI-powered search feature that delivers responses based on Reddit’s vast archive of posts and comments. Launched late last year, Reddit Answers has quickly grown from 1 million weekly users in Q1 to over 6 million in Q2. Huffman shared plans to integrate Reddit Answers more deeply into the platform's core search and roll it out globally. Unlike traditional search engines that offer AI summaries, Reddit’s model keeps the human element front and center, using trusted community content to generate results.

This ambitious shift comes on the heels of Reddit’s most profitable quarter yet. In Q2 2025, the company reported $500 million in revenue—a 78% increase from the previous year—and $89 million in net income, giving it a net margin of 18%. Its adjusted EBITDA also reached $167 million, with a 33% margin. These strong financials give Reddit the freedom to invest heavily in search innovation and expand its global footprint.

Huffman emphasized that Reddit’s unique advantage lies in its ability to deliver nuanced, people-driven insights. As tech giants like Google pivot to AI-driven answers, Reddit sees an opportunity to win over users who value a human perspective. The goal is clear: keep users on the platform by offering them deeper, more authentic answers powered by community knowledge.

With over 416 million weekly active users and a rising interest in alternative search experiences, Reddit’s transformation into a search engine could signal a major shift in how people look for information online.

Google Rolls Out ‘Deep Think’ Mode in Gemini 2.5 to AI Ultra Subscribers.

Deep Think
Key Takeaways.
  • Google launches Deep Think mode for Gemini 2.5, offering advanced reasoning and step-by-step problem solving to AI Ultra users.
  • Deep Think achieved gold-level performance at the International Mathematical Olympiad and scored 87.6% on LiveCodeBench.

Google has officially rolled out ‘Deep Think’, a powerful reasoning mode for Gemini 2.5 Pro, exclusively to AI Ultra subscribers. First teased during Google I/O 2025, this upgrade represents one of the most significant leaps in AI reasoning and structured problem-solving to date.

Now available on the web and mobile versions of Gemini, Deep Think allows the AI to take more time and apply deeper, multi-path reasoning to user prompts. The new feature comes with a dedicated button in the Gemini prompt bar and is aimed at users who need detailed answers to complex problems, especially in fields like mathematics, software development, and scientific research.

A New Way for Gemini to “Think”.

Unlike the traditional Gemini 2.5 response mechanism, Deep Think applies parallel hypothesis exploration, allowing it to simulate multiple reasoning paths before concluding with the most optimal answer. This mirrors a form of decision-making similar to how expert humans solve intricate challenges.

According to Google, this is enabled by what it calls a “higher thinking budget,” giving Gemini more processing power and internal resources to spend time analyzing, validating, and refining its outputs.

For advanced tasks, such as writing long code snippets, solving Olympiad-level math problems, or developing strategic plans, Deep Think now represents Gemini’s most powerful mode of cognition yet.

Parallel Thinking
Credit: Google

Performance of Deep Think.

Google’s Deep Think mode, available in Gemini 2.5 Pro, significantly raises the bar for AI reasoning, creativity, and problem-solving. By enabling the model to explore multiple reasoning paths in parallel and synthesize stronger final outputs, Deep Think showcases dramatic improvements in several high-stakes performance benchmarks, many of which are used to test advanced human intelligence.

Key Benchmark Results with Deep Think.

1. LiveCodeBench (Coding Reasoning)

In coding benchmarks, Deep Think delivers a remarkable 87.6% score on LiveCodeBench, a major jump from the standard Gemini 2.5 Pro’s 80.4%. This benchmark tests the model’s ability to solve competition-level programming problems under strict constraints. With this performance, Deep Think now surpasses all major AI models, including OpenAI’s GPT‑4, Anthropic’s Claude 3.5, and xAI’s Grok 4.

2. MMMU (Massive Multi-discipline Multimodal Understanding)

When it comes to complex multimodal reasoning, Deep Think achieves an impressive 84.0% on the MMMU benchmark. This test evaluates the model’s ability to handle cross-domain questions that involve interpreting text, images, tables, and other structured data. The high score demonstrates Gemini's growing strength in understanding and synthesizing diverse types of information.

3. International Mathematical Olympiad (IMO) Gold Medal Standard

An advanced version of Gemini with Deep Think achieved a breakthrough by solving 5 out of 6 problems from the International Mathematical Olympiad, earning a gold medal–level score. This is one of the most prestigious mathematics contests in the world, and Gemini’s performance was officially verified by IMO coordinators, making it the first time an AI has independently demonstrated such elite mathematical ability.

4. Creative Reasoning and Synthesis

Beyond raw accuracy, Deep Think is designed for deliberative, multi-path reasoning. The model takes more time to “think,” allowing it to simulate several solution paths, compare outcomes, and arrive at refined conclusions. This approach results in more structured, step-by-step responses, better self-verification, and increased reliability, especially for solving STEM problems, complex business logic, and academic tasks that require precision. These results position Gemini as one of the most academically capable AI systems ever deployed to the public.


Who can access Deep Think?

As of today, Deep Think is rolling out in phases to subscribers of the AI Ultra tier, priced at $249.99 per month in the US. AI Ultra access includes:

  • Daily usage limits to balance computing cost and performance.
  • Tool-enabled mode (when allowed) that lets Gemini use code execution, web search, and other APIs during its reasoning process.
  • Structured output formatting for step-by-step solutions, logic trees, and even visual representations of reasoning.

Developer Preview on Deep Think.

Google also confirmed that API access to Deep Think for both tool-enabled and tool-free variants will be offered to select developers and enterprise partners in the coming weeks. This move could reshape how businesses deploy autonomous agents, customer support bots, and research assistants.

Notably, Deep Think can be integrated into long-context workflows, with Gemini 2.5 already supporting 1 million tokens in its context window. Reports suggest Google may soon expand this further to 2 million tokens, making it suitable for full-document analysis, multi-step reasoning, and long-form content generation.
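Deep Think's API surface hasn't been published yet, but the underlying "thinking budget" idea is already exposed in the Gemini API for the 2.5 models, which hints at what the developer experience might look like. Here is a minimal sketch using the google-genai Python SDK; the model name and budget value are assumptions, and Deep Think itself may ship with a different interface:

```python
from google import genai
from google.genai import types

# Assumes the GEMINI_API_KEY environment variable is set.
client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-pro",  # assumption: Deep Think's model ID is not public
    contents="Plan a three-step strategy to reduce checkout latency by 30%.",
    config=types.GenerateContentConfig(
        # A larger budget lets the model spend more tokens reasoning internally.
        thinking_config=types.ThinkingConfig(thinking_budget=2048),
    ),
)
print(response.text)
```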
