How To Transfer Ownership of a File in Google Drive.

Transfer Ownership of Google Docs

Last month, I was wrapping up a freelance project I had led for several months. All the project-related documents, reports, and shared resources were stored in my Google Drive. Since the project was ending, I decided to hand all the related files over to my client so she could manage them going forward.

Simply giving her editing access wasn't enough; I needed to give her full ownership of the files. That's when I learned how to transfer ownership of files in Google Drive.

Transfer Ownership of a File in Google Drive.

Transferring ownership gives the new person the same full control over a file that you had. Here's how to do it step by step using your personal Google account.

Step 1: Open Google Drive and locate the file.

Go to drive.google.com in any browser where you're signed into your personal Google account. Navigate through your folders or use the search bar to find the specific file you want to transfer.

Step 2: Right-click the file and choose "Share."

Once you’ve found the file, right-click it. From the context menu that appears, click the “Share” option. This opens a sharing dialog where you can manage access and permissions.

Share G-Drive File

Step 3: Add the person you want to make the new owner

If the intended new owner isn’t already listed, type their email address in the “Add people and groups” box at the top. Set their role to Editor (ownership can only be transferred to someone who can edit the file), add a short message if you like, and then click “Send” to share the file with them. Make sure you’re entering a valid Google account email.

Share Google Drive Files
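
If you'd rather script the handover, the Drive API exposes the same sharing step. Below is a minimal Python sketch using the official google-api-python-client; the file ID, email address, and credentials.json path are placeholders you'd replace with your own.

```python
# pip install google-api-python-client google-auth-oauthlib
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive"]

# One-time OAuth flow in the browser; credentials.json comes from
# a Google Cloud project with the Drive API enabled.
flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
creds = flow.run_local_server(port=0)
service = build("drive", "v3", credentials=creds)

FILE_ID = "your-file-id"            # placeholder: the file to hand over
NEW_OWNER = "client@example.com"    # placeholder: the recipient's account

# Step 3 equivalent: give the future owner Editor access first.
permission = service.permissions().create(
    fileId=FILE_ID,
    body={"type": "user", "role": "writer", "emailAddress": NEW_OWNER},
    sendNotificationEmail=True,
    emailMessage="Sharing the project files with you.",
).execute()
print("Created permission:", permission["id"])
```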

Step 4: Change their role to ‘Owner’

After the person has been added (or if they were already there), locate their name under the “People with access” section. Click the drop-down arrow next to their current role (e.g., Editor) and choose “Transfer ownership.”

Transfer Ownership

Step 5: Confirm the transfer

A pop-up will appear with a warning: "You'll be the owner until this person accepts ownership." Click the "Send invitation" button in the pop-up, and the user will receive an email asking them to accept ownership of the file.

Send Invitation
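
The invitation step can be scripted too. Continuing the sketch above (reusing `service`, `FILE_ID`, `NEW_OWNER`, and the `permission` returned by the create call), the snippet below follows the Drive API v3 permissions documentation: on personal accounts the recipient must accept, so their permission is flagged as a pending-owner invitation rather than changed outright.

```python
# Step 5 equivalent: invite the editor to become the owner.
# On personal (consumer) accounts the recipient has to accept, so the
# permission is marked as a pending-owner invitation.
service.permissions().update(
    fileId=FILE_ID,
    permissionId=permission["id"],   # from the create() call above
    body={"role": "writer", "pendingOwner": True},
).execute()

# Google Workspace accounts transfer immediately instead:
# service.permissions().update(
#     fileId=FILE_ID,
#     permissionId=permission["id"],
#     body={"role": "owner"},
#     transferOwnership=True,
# ).execute()
```
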
Note: Once the ownership transfer is accepted, you keep Editor access until the new owner removes you or changes your permissions. You can't take ownership back yourself; only the new owner can transfer it back to you.

Step 6: Email Sent to New Owner.

The new owner will receive an email invitation to accept ownership of the shared file. They can accept or decline the request. Once accepted, complete ownership transfers to the new user.
Email Sent to New Owner

Note: In personal (free) Google accounts, transferring ownership of a folder does not cascade to its contents; Google treats ownership of each file individually. There are, however, ways to hand over an entire folder and its contents, which we'll cover in the next section.

Transfer Ownership of an Entire Folder in Google Drive.

If you're on a free personal Google account, you can transfer ownership of a folder to another user, but even after that, you still own the subfolders and files inside it. There is no direct way to transfer ownership of an entire folder together with its contents on a free account; that option exists only in paid Google Workspace editions.

So, does that mean you need to pay for a premium plan to use this feature? Well, not really. There are a few alternative approaches we can use to achieve the same goal. Let's go through them here.

Method 1: Transfer Ownership Manually.

Step 1: Transfer Ownership of the Folder.

In this method, you transfer ownership to the new user in two passes. First, transfer ownership of the folder itself by following the same steps we performed above for a single file.

Step 2: Transfer Ownership of All Subfiles.

Once the folder's ownership is transferred to the new user, open the folder and you'll see that you are still the owner of everything inside it. Press Ctrl + A to select all the files at once, then repeat the earlier steps to transfer ownership of all the selected files in one go.

Transfer Ownership of all Files at Once
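
If the folder holds many files, the same Ctrl + A step can be scripted. This sketch (reusing the `service` and `NEW_OWNER` setup from the earlier file example) lists every file directly inside the folder and sends each an ownership invitation; the folder ID is a placeholder, and pagination is omitted for brevity.

```python
FOLDER_ID = "your-folder-id"  # placeholder

# Find the files (not subfolders) directly inside the folder.
files = service.files().list(
    q=(f"'{FOLDER_ID}' in parents and "
       "mimeType != 'application/vnd.google-apps.folder'"),
    fields="files(id, name)",
).execute().get("files", [])

for f in files:
    # Share with Editor access, then flag the pending ownership transfer.
    perm = service.permissions().create(
        fileId=f["id"],
        body={"type": "user", "role": "writer", "emailAddress": NEW_OWNER},
    ).execute()
    service.permissions().update(
        fileId=f["id"],
        permissionId=perm["id"],
        body={"role": "writer", "pendingOwner": True},
    ).execute()
    print("Ownership invitation sent for:", f["name"])
```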

Step 3: The New Owner receives an email.

The new owner will receive an email listing all the shared files, along with a Respond button that redirects them to Google Drive to review and accept ownership of the files.

Email Sent to New Owner

Step 4: Accept Ownership of All Shared Files.

When the new owner opens the folder, they will still see your name as the owner of the files inside. They have two options: review and accept ownership of each file one by one, or select all the files and accept ownership in one go. Ask the new owner to follow the steps shown below:

To accept ownership, press Ctrl + A to select all the files, then click the Share+ icon. A pop-up will appear with an "Accept Ownership?" button next to your user ID.

Accept Ownership of Entire Folder

Note: If the folder contains subfolders with more files inside, you'll need to repeat steps 1 and 2 for the files in each subfolder. You'll also need to repeat them for any files added to the folder later.
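
Because of that, scripting pays off for deep folder trees. A small recursive variant of the earlier sketch walks every subfolder so you don't have to repeat the steps by hand (same placeholders and setup as before; pagination again omitted for brevity):

```python
def transfer_tree(folder_id: str) -> None:
    """Invite NEW_OWNER to take ownership of every file under folder_id,
    recursing into nested subfolders."""
    items = service.files().list(
        q=f"'{folder_id}' in parents",
        fields="files(id, name, mimeType)",
    ).execute().get("files", [])
    for item in items:
        if item["mimeType"] == "application/vnd.google-apps.folder":
            transfer_tree(item["id"])  # descend into the subfolder
        else:
            perm = service.permissions().create(
                fileId=item["id"],
                body={"type": "user", "role": "writer",
                      "emailAddress": NEW_OWNER},
            ).execute()
            service.permissions().update(
                fileId=item["id"],
                permissionId=perm["id"],
                body={"role": "writer", "pendingOwner": True},
            ).execute()

transfer_tree(FOLDER_ID)
```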

If you use Google Workspace, you can instead move files into a Shared Drive, where ownership belongs to the team rather than any individual. The manual method above works well if you're handling everything yourself.

Reddit Aims to Become a Full-Fledged Search Engine as Q2 Profits Soar.

Reddit Logo
Key Takeaway.
  • Reddit plans to evolve into a full-fledged search engine powered by AI and real user discussions.
  • The platform reported its most profitable quarter ever, with a 78% year-over-year revenue increase.

Reddit is making a bold move toward becoming more than just a social platform: it now wants to be your go-to search engine. In its Q2 2025 shareholder letter, Reddit CEO Steve Huffman announced that the company is “concentrating its resources” to turn Reddit into a serious search competitor. With more than 70 million weekly active users engaging with its built-in search tools, Reddit is confident in the unique value it offers: real human answers from real community conversations.

One of the biggest drivers of this vision is Reddit Answers, an AI-powered search feature that delivers responses based on Reddit’s vast archive of posts and comments. Launched late last year, Reddit Answers has quickly grown from 1 million weekly users in Q1 to over 6 million in Q2. Huffman shared plans to integrate Reddit Answers more deeply into the platform's core search and roll it out globally. Unlike traditional search engines that offer AI summaries, Reddit’s model keeps the human element front and center, using trusted community content to generate results.

This ambitious shift comes on the heels of Reddit’s most profitable quarter yet. In Q2 2025, the company reported $500 million in revenue—a 78% increase from the previous year—and $89 million in net income, giving it a net margin of 18%. Its adjusted EBITDA also reached $167 million, with a 33% margin. These strong financials give Reddit the freedom to invest heavily in search innovation and expand its global footprint.

Huffman emphasized that Reddit’s unique advantage lies in its ability to deliver nuanced, people-driven insights. As tech giants like Google pivot to AI-driven answers, Reddit sees an opportunity to win over users who value a human perspective. The goal is clear: keep users on the platform by offering them deeper, more authentic answers powered by community knowledge.

With over 416 million weekly active users and a rising interest in alternative search experiences, Reddit’s transformation into a search engine could signal a major shift in how people look for information online.

Google Rolls Out ‘Deep Think’ Mode in Gemini 2.5 to AI Ultra Subscribers.

Deep Think
Key Takeaway.
  • Google launches Deep Think mode for Gemini 2.5, offering advanced reasoning and step-by-step problem solving to AI Ultra users.
  • Deep Think achieved gold-level performance at the International Mathematical Olympiad and scored 87.6% on LiveCodeBench.

Google has officially rolled out ‘Deep Think’, a powerful reasoning mode for Gemini 2.5 Pro, exclusively to AI Ultra subscribers. First teased during Google I/O 2025, this upgrade represents one of the most significant leaps in AI reasoning and structured problem-solving to date.

Now available on the web and mobile versions of Gemini, Deep Think allows the AI to take more time and apply deeper, multi-path reasoning to user prompts. The new feature comes with a dedicated button in the Gemini prompt bar and is aimed at users who need detailed answers to complex problems, especially in fields like mathematics, software development, and scientific research.

A New Way for Gemini to “Think”.

Unlike Gemini 2.5's standard response mechanism, Deep Think applies parallel hypothesis exploration, simulating multiple reasoning paths before converging on the best answer. This mirrors how human experts work through intricate challenges.

According to Google, this is enabled by what it calls a “higher thinking budget,” giving Gemini more processing power and internal resources to spend time analyzing, validating, and refining its outputs.
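
Google hasn't published Deep Think's internals, but conceptually this kind of budgeted, parallel exploration resembles a best-of-N search: generate several candidate reasoning paths, score each one, and keep the winner. The toy Python sketch below is purely illustrative; `explore_path`, the random verifier score, and the `budget` parameter are stand-ins, not Gemini's actual machinery.

```python
import concurrent.futures
import random

def explore_path(problem: str, seed: int) -> tuple[float, str]:
    """Stand-in for one reasoning path. A real system would sample the
    model with different seeds; here we fake it with random scores."""
    rng = random.Random(seed)
    answer = f"candidate answer #{seed} for: {problem}"
    confidence = rng.random()  # stand-in for a learned verifier's score
    return confidence, answer

def deep_think(problem: str, budget: int = 8) -> str:
    # Explore several hypotheses in parallel, then keep the one the
    # verifier scores highest; `budget` plays the "thinking budget" role.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        paths = list(pool.map(lambda s: explore_path(problem, s),
                              range(budget)))
    return max(paths)[1]

print(deep_think("What is the optimal strategy?"))
```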

For advanced tasks, such as writing long code snippets, solving Olympiad-level math problems, or developing strategic plans, Deep Think now represents Gemini’s most powerful mode of cognition yet.

Parallel Thinking
Credit: Google

Performance of Deep Think.

Google’s Deep Think mode, available in Gemini 2.5 Pro, significantly raises the bar for AI reasoning, creativity, and problem-solving. By enabling the model to explore multiple reasoning paths in parallel and synthesize stronger final outputs, Deep Think showcases dramatic improvements in several high-stakes performance benchmarks, many of which are used to test advanced human intelligence.

Key Benchmark Results with Deep Think.

1. LiveCodeBench (Coding Reasoning)

In coding benchmarks, Deep Think delivers a remarkable 87.6% score on LiveCodeBench, a major jump from the standard Gemini 2.5 Pro’s 80.4%. This benchmark tests the model’s ability to solve competition-level programming problems under strict constraints. With this performance, Deep Think now surpasses all major AI models, including OpenAI’s GPT‑4, Anthropic’s Claude 3.5, and Elon Musk’s Grok 4.

2. MMMU (Massive Multidisciplinary Multimodal Understanding)

When it comes to complex multimodal reasoning, Deep Think achieves an impressive 84.0% on the MMMU benchmark. This test evaluates the model’s ability to handle cross-domain questions that involve interpreting text, images, tables, and other structured data. The high score demonstrates Gemini's growing strength in understanding and synthesizing diverse types of information.

3. International Mathematical Olympiad (IMO) Gold Medal Standard

An advanced version of Gemini with Deep Think achieved a breakthrough by solving 5 out of 6 problems from the International Mathematical Olympiad, earning a gold medal–level score. This is one of the most prestigious mathematics contests in the world, and Gemini’s performance was officially verified by IMO coordinators, making it the first time an AI has independently demonstrated such elite mathematical ability.

4. Creative Reasoning and Synthesis

Beyond raw accuracy, Deep Think is designed for deliberative, multi-path reasoning. The model takes more time to “think,” allowing it to simulate several solution paths, compare outcomes, and arrive at refined conclusions. This approach results in more structured, step-by-step responses, better self-verification, and increased reliability, especially for solving STEM problems, complex business logic, and academic tasks that require precision. These results position Gemini as one of the most academically capable AI systems ever deployed to the public.

Also Read: Google Launches Gemini Drops Feed to Centralize AI Tips and Updates.

Who can access Deep Think?

As of today, Deep Think is rolling out in phases to users subscribed to the AI Ultra tier, priced at $249.99 per month in the US. AI Ultra access includes:

  • Daily usage limits to balance computing cost and performance.
  • Tool-enabled mode (when allowed) that lets Gemini use code execution, web search, and other APIs during its reasoning process.
  • Structured output formatting for step-by-step solutions, logic trees, and even visual representations of reasoning.

Developer Preview on Deep Think.

Google also confirmed that API access to Deep Think for both tool-enabled and tool-free variants will be offered to select developers and enterprise partners in the coming weeks. This move could reshape how businesses deploy autonomous agents, customer support bots, and research assistants.
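
For developers who get access, calling Gemini from Python already follows a simple pattern with the google-generativeai client, sketched below. Whatever model identifier Google assigns to the Deep Think variant has not been published, so the model name here is a placeholder for the standard Gemini 2.5 Pro id.

```python
# pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Placeholder model id: the Deep Think variant's API name is not yet public.
model = genai.GenerativeModel("gemini-2.5-pro")
response = model.generate_content(
    "Solve step by step: how many trailing zeros does 100! have?"
)
print(response.text)
```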

Notably, Deep Think can be integrated into long-context workflows, with Gemini 2.5 already supporting 1 million tokens in its context window. Reports suggest Google may soon expand this further to 2 million tokens, making it suitable for full-document analysis, multi-step reasoning, and long-form content generation.

YouTube Updates Profanity Guidelines & Deploys AI to Identify Teen Users.

YouTube Update for Creators
Key Takeaway.
  • YouTube relaxes profanity rules, allowing limited strong language in monetized videos.
  • New AI tech will detect teen users and auto-enable safety features starting August 13, 2025.

YouTube has announced a dual rollout: updated profanity rules for creators under its Advertiser-Friendly Guidelines, alongside AI-powered age estimation technology to identify teen viewers and automatically enforce protections.

Looser Profanity Rules for Ad Monetization.

YouTube is loosening its stance on strong language in monetized videos. Under the revised guidelines, videos containing stronger profanity, such as the f-word, may still be eligible for ad revenue, depending on placement and frequency. This offers creators more flexibility, particularly when language is used for artistic or expressive purposes.

While still discouraged early in a video, profanity beyond the first 7 seconds may no longer automatically disqualify a video from monetization. This shift reflects YouTube’s increasingly nuanced approach to content that balances realistic dialogue with advertiser comfort.

AI-Driven Teen Identification & Automatic Safeguards.

Simultaneously, YouTube is implementing an AI age estimation system in the U.S., set to begin rolling out on August 13, 2025, to automatically detect users under 18—even if they misreport their birthdate. 

If the AI flags an account as underage, YouTube will activate existing protections for teens:

  • Disabling personalized ads
  • Enabling digital well-being tools: screen time reminders, bedtime alerts
  • Restricting repeated exposure to sensitive or body-image content
  • Blocking age-restricted videos unless the user verifies they are over 18 

Users mistakenly identified as teens can still contest the decision by verifying their age through government-issued ID, credit card, or selfie.

YouTube says it will initially test the system with a small group in the U.S. before a broader rollout, and closely monitor its performance.

Important Update for Creators.

These two updates together mark a shift in YouTube's efforts to balance creator freedom with safety and brand trust:

The relaxed profanity policy offers creators more flexibility while maintaining advertiser-friendly standards.

AI-based teen detection enables broader enforcement of protections without relying on user honesty or manual reporting.

Creators, especially those targeting younger audiences or using strong language, should understand these changes. Teen users are now subject to stricter content delivery protections regardless of what age they enter during sign-up.

Google Photos Perspective Correction Tool Goes Missing for Many Users.

Google Photos Logo Open on Android
Key Takeaway.
  • Google Photos removes the perspective correction tool.
  • Users report missing feature with no official response.

Recently, Google Photos has removed its long-standing perspective correction (crop/keystone) tool, frustrating photographers and everyday users who relied on it for straightening skewed shots and emulating scans. Reports across Reddit and Google’s own support forums confirm that the option has disappeared in recent app versions.

This feature, embedded within the Crop editing tools, was appreciated for correcting angular distortion in photos of documents, artwork, and real estate. Though not the most widely used tool, its removal is sparking surprise and complaints.

The vanished function was once easily accessible as a skew-adjustment overlay in Google Photos’ Crop tool. Now, affected users—across Android and the web interface—report the feature is simply gone from editing menus. 

Google Photos Editing

The change appears to have arrived abruptly, likely tied to a recent app update that removed the tool without warning. Notably, the tool is still visible in older app versions such as Google Photos v7.38, suggesting the removal is limited to newer releases.

Community reactions echo frustration: Reddit users on r/GooglePixel chimed in with comments like:

“I used it a lot to correct the perspective of photos I took too fast … Keystone correction is missing from the Crop tools.”

“At least a few times per week. ... I’m especially bummed that it's gone.” 

Some speculate the tool was removed due to low usage, or that user metrics excluded power users who disabled anonymous data sharing.

The functionality is still accessible via alternative apps such as Snapseed or Google Drive’s document scanner, but these are not ideal substitutes for seamless in‑Photos editing.
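
In the meantime, the effect itself is easy to reproduce: keystone correction is just a perspective (homography) warp. Here's a minimal OpenCV sketch, assuming opencv-python is installed; the image path and corner coordinates are illustrative placeholders.

```python
# pip install opencv-python numpy
import cv2
import numpy as np

img = cv2.imread("document.jpg")  # placeholder: a photo taken at an angle
h, w = img.shape[:2]

# Corners of the skewed document in the photo (picked by hand or by a
# detector); these coordinates are placeholders.
src = np.float32([[120, 80], [980, 60], [1020, 700], [90, 730]])
# Where those corners should land in the corrected image.
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# The homography that maps the skewed quad onto a rectangle.
matrix = cv2.getPerspectiveTransform(src, dst)
corrected = cv2.warpPerspective(img, matrix, (w, h))
cv2.imwrite("document_corrected.jpg", corrected)
```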

Affected users can try downgrading the Photos app or uninstalling and reinstalling via the Play Store to revert to an older build. However, this is a temporary fix and may become unavailable as Google continues updating the app.

As of now, Google has not officially addressed whether the removal is intentional, a bug, or part of a broader redesign. With the Pixel 10 launch drawing attention, some community members feel the timing suggests lower-priority users are being overlooked.

Google Messages Gets Full‑Screen “Details” Page.

Google Messages
Key Takeaway.
  • Google Messages has replaced the old pop-up with a full-screen message details view for better readability.
  • New visual icons now clearly indicate if a message is sent, delivered, or read.

The Google Messages app is receiving a significant UI enhancement to its message details page, switching from cramped pop-ups to a striking full‑screen redesign based on Material 3's Expressive layout elements.

Previously, long‑pressing a message and opening “View details” showed a small dialog overlaying part of the chat. Now, users see a clean full‑screen view that previews the selected message along with delivery metadata like sent, delivered, and read status. These indicators use new visual cues: a checkmark circle means sent, two checkmarks mean delivered, and a filled‑in circle after two checkmarks means read.

The new “Details” page also displays the sender’s name and phone number—but omits fields like message type (e.g. RCS with end‑to‑end encryption) and priority levels. This marks the first time Google Messages uses M3 Expressive containers, setting the stage for more such UI upgrades across the app.

What is Included in the Design Upgrade?

The redesign is rolling out broadly, including on both stable (build 20250713_01_RC04) and beta (20250725_02_RC00) versions of Google Messages. While full redesigns for Android phones are still in beta, Wear OS versions already display M3 Expressive styling, with tinted buttons, sleek bubbles, and refreshed icons.

Earlier this month, Google began blending camera and gallery access into a unified interface and now supports sending media in two quality levels: HD for optimized sharing and HD+ for original quality. The revamped message field limit now spans up to 14 lines, up from just four.

These UI refinements coincide with broader messaging improvements like group chat customization, spam and sensitive content warnings, and better support for RCS and MLS encryption across platforms.

Why Does This Google Messages Update Matter?

This update enhances both usability and presentation: no more cropping screenshots to hide irrelevant chat content, and the full‑screen preview delivers visual clarity. “View details” is now a functional hub, not just a modal box.

By adopting Material 3 Expressive design, Google is unifying the look and feel of Messages across devices, offering users a consistent experience whether on Android or Wear OS. Enhanced status indicators and clearer UX also improve message tracking and reliability.

Looking ahead, expect Google to extend M3 Expressive styling to other areas of the app—potentially conversation view, media viewer, and group settings. Additional message details like encryption status and priority labels may also be included. If you're using the beta or stable version cited above, look for updates via the Play Store.

Google’s NotebookLM Introduces AI‑Powered Video Overviews.

Google is rolling out significant upgrades to NotebookLM, expanding its AI-powered research tool with a new Video Overviews format and a revamped Studio panel for enhanced content creation and multitasking.

The newly launched Video Overviews feature transforms dense information into narrated slideshow-style presentations. These AI-generated visuals integrate diagrams, quotes, data points, and images extracted directly from user-uploaded documents, making complex ideas more intuitive to understand. Users can tailor the output by specifying learning goals, audience, and specific segments to focus on, such as chapter-specific content or expert-level theories.

Video Overviews act as a visual counterpart to NotebookLM’s existing Audio Overviews and are now available to all English-language users, with additional languages and styles expected in upcoming updates.

Studio Panel Upgrades: Smarter Creation & Multi‑Output Workflows

NotebookLM’s Studio panel is also receiving a major upgrade. Users can now create and store multiple versions of the same output type (e.g., several Audio Overviews or Video Overviews) within a single notebook. This flexibility supports various use cases:

  • Publish content in multiple languages or perspectives.
  • Tailor outputs for different roles or audiences (e.g., student vs. manager).
  • Segment study material by chapters or modules using separate overview videos or guides.

The updated Studio interface introduces a clean layout featuring four tiles—Audio Overview, Video Overview, Mind Map, and Report—for quick access. All generated content is indexed below the tiles, and users can multitask—for instance, listening to an Audio Overview while exploring a Mind Map or reviewing a Study Guide.

NotebookLM, first launched in July 2023 and powered by Google’s Gemini AI, is also known for its Audio Overviews, which present document insights in conversational, podcast-style formats. These new Video Overviews bring a visual dimension, essential for explaining data, workflows, diagrams, and abstract ideas more effectively.

According to internal disclosures, Google introduced Audio Overviews across more than 80 languages earlier this year, which doubled daily audio usage and significantly expanded user engagement. User feedback has driven numerous updates, including enhanced customization, in-app feedback tools, community-driven enhancements, and broader accessibility.

These additions cap a series of recent improvements, like “Featured Notebooks” (curated content from partners such as The Atlantic and The Economist) and automatic source discovery.
