Google has started rolling out advanced AI features for its Gemini Live assistant that support real-time interaction via smartphone screens and cameras. The new features, introduced as part of the company’s “Project Astra” initiative, let users get instantaneous responses to visual inputs, expanding the utility of mobile devices.
Live Video Mode for instant AI assistance
Among the most prominent new features is Gemini’s live video mode, which lets users aim their smartphone cameras at objects and receive immediate AI-generated insights. This functionality makes Gemini a more dynamic, context-aware assistant, capable of recognizing objects, reading signs, and even providing step-by-step instructions for tasks.
Live video mode is not just a visual search feature; it is an interactive experience that allows users to hold a continuous conversation with Gemini. This marks a major step toward making AI assistants intuitive and practical in real-life situations.
Screen Sharing for seamless user support
Google is also launching a screen-sharing feature, which allows users to share their smartphone screens with Gemini for live support. The feature is most useful for troubleshooting apps, navigating settings, or getting help with complicated processes without having to manually explain the problem.
Initially launching on select Android devices, the screen-sharing feature lets Gemini assist users across a wider spectrum of tasks. It should extend to more users in the coming weeks as Google continues to refine the technology.
Gradual rollout and future expansion
The new functionality is being rolled out in phases, with availability depending on region and device support. Google has not provided an exact timeline for the broader rollout but stated that it will gradually integrate these features more deeply into the Gemini platform.
These enhancements reflect Google’s ongoing efforts to compete with rival AI-powered assistants and strengthen Gemini’s role in the evolving landscape of AI-driven interactions. As the rollout continues, user feedback will play an important role in shaping future iterations of these features.
Implications for AI assistants
Google’s rollout of these AI features puts it at the forefront of the virtual assistant landscape. While Amazon’s Alexa Plus and Apple’s revamped Siri remain in development, Gemini’s sophisticated capabilities are already live, offering users a highly interactive and responsive experience. This innovation reflects Google’s commitment to leading in AI technology and demonstrates practical applications for everyday user interaction.