18494949. Multi-Screen Interaction Method and Apparatus, Terminal Device, and Vehicle simplified abstract (Huawei Technologies Co., Ltd.)


Multi-Screen Interaction Method and Apparatus, Terminal Device, and Vehicle

Organization Name

Huawei Technologies Co., Ltd.

Inventor(s)

Rui Ma of Shanghai (CN)

Ziheng Guo of Shenzhen (CN)

Jia Shi of Shenzhen (CN)

Multi-Screen Interaction Method and Apparatus, Terminal Device, and Vehicle - A simplified explanation of the abstract

This abstract first appeared for US patent application 18494949, titled 'Multi-Screen Interaction Method and Apparatus, Terminal Device, and Vehicle'.

Simplified Explanation

- Method and apparatus for multi-screen interaction in the field of intelligent vehicle technologies
- A specific gesture is detected, and the content displayed on a first display is converted into a sub-image
- The identifier of each other capable display and its orientation relative to the first display are shown on the first display
- The user can intuitively see the movement direction for the subsequent gesture
- A second display is determined based on the movement direction of the specific gesture
- The sub-image is controlled to move as the specific gesture moves on the first display
- The sub-image moves to the second display when the movement distance of the specific gesture exceeds a threshold
- Together, these steps implement a multi-screen interaction function (see the sketch below)
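The steps above can be read as a small state machine: gesture detected, content shrunk to a sub-image, candidate displays hinted, sub-image tracks the gesture, and the transfer commits once the movement distance crosses a threshold. The Python sketch below is a hypothetical illustration of that logic only; the class and display names, the layout vectors, and the 200-pixel threshold are assumptions for illustration, not details taken from the patent application.

```python
import math
from dataclasses import dataclass

# Hypothetical layout: candidate displays and their orientation relative to
# the first display (direction vectors); these names and values are assumed.
DISPLAY_LAYOUT = {
    "passenger_display": (1.0, 0.0),   # to the right of the first display
    "rear_left_display": (-0.5, 1.0),  # behind and to the left
}

MOVE_THRESHOLD = 200.0  # pixels; the patent's "specified threshold" is assumed here


@dataclass
class SubImage:
    """Snapshot of the first display's content that follows the gesture."""
    content_id: str
    x: float = 0.0
    y: float = 0.0


class MultiScreenInteraction:
    def __init__(self):
        self.sub_image = None
        self.start = None

    def on_specific_gesture_detected(self, content_id, x, y):
        # Steps 1-2: convert the displayed content into a sub-image and
        # return the identifiers of candidate displays to render as hints.
        self.sub_image = SubImage(content_id, x, y)
        self.start = (x, y)
        return list(DISPLAY_LAYOUT.keys())

    def on_gesture_moved(self, x, y):
        # Step 3: the sub-image follows the gesture on the first display.
        self.sub_image.x, self.sub_image.y = x, y
        dx, dy = x - self.start[0], y - self.start[1]
        target = self._pick_target(dx, dy)
        # Step 4: commit the transfer once the movement distance exceeds
        # the threshold and a target display has been determined.
        if target and math.hypot(dx, dy) > MOVE_THRESHOLD:
            return ("moved_to", target)
        return ("tracking", target)

    def _pick_target(self, dx, dy):
        # Choose the display whose orientation vector best matches the
        # gesture's movement direction (largest normalized dot product).
        if dx == 0 and dy == 0:
            return None
        best, best_score = None, 0.0
        for name, (ox, oy) in DISPLAY_LAYOUT.items():
            score = (dx * ox + dy * oy) / (math.hypot(dx, dy) * math.hypot(ox, oy))
            if score > best_score:
                best, best_score = name, score
        return best
```

Under these assumptions, a rightward swipe longer than 200 pixels would return ('moved_to', 'passenger_display'), i.e. the sub-image is handed off to the display lying in that direction.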

Potential Applications

- Intelligent vehicles
- Smart homes
- Gaming consoles

Problems Solved

- Enhances the user experience of interacting with multiple screens
- Provides intuitive control over where content is displayed
- Facilitates seamless transitions between displays

Benefits

- Improved user interface design
- Enhanced user engagement
- Efficient multi-screen interaction functionality


Original Abstract Submitted

A multi-screen interaction method and apparatus, a terminal device, and a vehicle are provided, and relate to the field of intelligent vehicle technologies. In this method, after a specific gesture is detected, content displayed on a first display is converted into a sub-image, and an identifier of another display capable of displaying the sub-image, together with its orientation information relative to the first display, is displayed on the first display, so that a user can intuitively see a movement direction for a subsequent gesture. Then, a second display is determined based on a movement direction of the specific gesture, and the sub-image is controlled to move as the specific gesture moves on the first display. When it is detected that a movement distance of the specific gesture is greater than a specified threshold, the sub-image moves to the second display. Therefore, a multi-screen interaction function is implemented.
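One detail the abstract emphasizes is that the first display shows both the identifier of each candidate display and its orientation relative to the first display, so the user knows which way to move. The sketch below is a minimal, hypothetical illustration of laying out such hints; the display names, layout vectors, and screen size are assumptions, not details from the patent.

```python
import math

# Assumed relative orientations of candidate displays, as seen from the
# first display (same hypothetical layout as the earlier sketch).
DISPLAY_LAYOUT = {
    "passenger_display": (1.0, 0.0),    # to the right
    "rear_left_display": (-0.5, 1.0),   # behind and to the left
}

def orientation_hints(screen_w, screen_h, margin=40):
    """For each candidate display, compute a direction-arrow angle and a
    point near the screen edge where its identifier label could be drawn."""
    cx, cy = screen_w / 2, screen_h / 2
    hints = {}
    for name, (ox, oy) in DISPLAY_LAYOUT.items():
        angle = math.degrees(math.atan2(oy, ox))
        # Push the label from the center toward the edge along (ox, oy).
        scale = min((cx - margin) / abs(ox) if ox else float("inf"),
                    (cy - margin) / abs(oy) if oy else float("inf"))
        hints[name] = {"angle_deg": angle,
                       "label_pos": (cx + ox * scale, cy + oy * scale)}
    return hints

# Example: hints for an assumed 1920x720 in-vehicle display.
for name, hint in orientation_hints(1920, 720).items():
    print(name, hint)
```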