Hyundai Motor Company (20240338934). OBJECT REGION SEGMENTATION DEVICE AND OBJECT REGION SEGMENTATION METHOD THEREOF simplified abstract


OBJECT REGION SEGMENTATION DEVICE AND OBJECT REGION SEGMENTATION METHOD THEREOF

Organization Name

Hyundai Motor Company

Inventor(s)

Jae Hoon Cho of Seoul (KR)

OBJECT REGION SEGMENTATION DEVICE AND OBJECT REGION SEGMENTATION METHOD THEREOF - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240338934 titled 'OBJECT REGION SEGMENTATION DEVICE AND OBJECT REGION SEGMENTATION METHOD THEREOF'.

Simplified Explanation: The patent application describes a device and method for segmenting object regions in images using a deep-learning network model.

  • The device includes a processor and storage for storing the deep-learning network model.
  • The network model consists of three parts: a first model for generating a pseudo label, a second model for creating a confidence map for the pseudo label, and a third model for segmenting the object region.
  • The processor inputs an unlabeled image to the first model to generate a pseudo label, then uses the pseudo label to create a confidence map with the second model.
  • The third model is trained using the pseudo label only at pixels whose confidence on the confidence map is greater than or equal to a threshold (see the pipeline sketch after this list).
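
The following is a minimal sketch of this three-model pipeline in PyTorch. The class name SegmentationDevice, the one-hot encoding of the pseudo label, and the sigmoid-activated single-channel confidence output are illustrative assumptions; the patent abstract does not specify architectures or data formats.

  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  class SegmentationDevice:
      """Holds the three network models stored on the device (assumed names)."""
      def __init__(self, first_model: nn.Module, second_model: nn.Module,
                   third_model: nn.Module):
          self.first_model = first_model    # generates a pseudo label
          self.second_model = second_model  # generates a confidence map for it
          self.third_model = third_model    # segments the object region

      @torch.no_grad()
      def make_pseudo_label(self, unlabeled_image: torch.Tensor):
          """Return (pseudo_label, confidence) for a batch of unlabeled images."""
          logits = self.first_model(unlabeled_image)               # (N, C, H, W)
          pseudo_label = logits.argmax(dim=1)                      # (N, H, W)
          # Feed the pseudo label (one-hot encoded here) to the second model
          # to obtain a per-pixel confidence map in [0, 1].
          one_hot = F.one_hot(pseudo_label, num_classes=logits.shape[1])
          one_hot = one_hot.permute(0, 3, 1, 2).float()            # (N, C, H, W)
          confidence = torch.sigmoid(self.second_model(one_hot))   # (N, 1, H, W)
          return pseudo_label, confidence.squeeze(1)               # (N, H, W)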

Key Features and Innovation:

  • Utilizes a deep-learning network model for object region segmentation.
  • Incorporates three network models for different stages of the segmentation process.
  • Trains the third model only on pseudo labels at pixels whose confidence meets the threshold (see the training sketch after this list).
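
Below is a hedged sketch of how training the third model on high-confidence pixels might look, reusing the SegmentationDevice sketch above. The cross-entropy loss, the masked averaging, and the 0.9 default threshold are assumptions for illustration; the abstract only states that pixels whose confidence is greater than or equal to a threshold contribute their pseudo labels to training.

  import torch
  import torch.nn.functional as F

  def train_step(device, optimizer, unlabeled_image, threshold=0.9):
      """One training step for the third (segmentation) model."""
      # Steps 1-2: pseudo label and confidence map from the first two models.
      pseudo_label, confidence = device.make_pseudo_label(unlabeled_image)

      # Step 3: supervise the third model only at pixels whose confidence
      # is greater than or equal to the threshold.
      logits = device.third_model(unlabeled_image)                        # (N, C, H, W)
      loss_map = F.cross_entropy(logits, pseudo_label, reduction="none")  # (N, H, W)
      mask = (confidence >= threshold).float()
      loss = (loss_map * mask).sum() / mask.sum().clamp(min=1.0)

      optimizer.zero_grad()
      loss.backward()
      optimizer.step()
      return loss.item()

A single view of each unlabeled image is used for both the pseudo-label pass and the training pass here; augmentation or other refinements are not specified in the abstract and are left out of the sketch.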

Potential Applications:

  • Image segmentation in various industries such as medical imaging, autonomous vehicles, and surveillance systems.
  • Object recognition and tracking in real-time applications.
  • Augmented reality and virtual reality for enhanced user experiences.

Problems Solved:

  • Efficient and accurate object region segmentation in images.
  • Automation of the segmentation process without manual intervention.
  • Improved performance compared to traditional segmentation methods.

Benefits:

  • Faster and more precise object region segmentation.
  • Enhanced image analysis capabilities.
  • Scalable and adaptable to different types of images and objects.

Commercial Applications: This technology can be used in industries such as healthcare, automotive, and security for image analysis, object recognition, and tracking. Market implications include improved efficiency, accuracy, and automation across these sectors.

Prior Art: Prior art on deep-learning image segmentation and pseudo-label-based (semi-supervised) training can be found in academic research papers, patents, and industry publications.

Frequently Updated Research: Research on deep-learning image segmentation, including semi-supervised and pseudo-labeling approaches, advances rapidly, with new techniques and algorithms published regularly to improve performance and accuracy.

Questions about Object Region Segmentation:

  1. How does the deep-learning network model improve object region segmentation compared to traditional methods?
  2. What are the potential limitations or challenges of using deep-learning models for object region segmentation?


Original Abstract Submitted

An object region segmentation device and an object region segmentation method thereof are provided. The object region segmentation device includes a processor and storage. The storage stores a deep-learning network model for segmenting an object region in an image. The deep-learning network model includes a first network model for generating a pseudo label, a second network model for generating a confidence map for the pseudo label, and a third network model for segmenting the object region in the image. The processor inputs an unlabeled image to the first network model to generate the pseudo label, inputs the pseudo label to the second network model to generate the confidence map, and trains the third network model using a pseudo label corresponding to at least one pixel, a confidence level of which is greater than or equal to a threshold, on the confidence map.