HON HAI PRECISION INDUSTRY CO., LTD. patent applications published on November 30th, 2023

From WikiPatents
Revision as of 05:42, 5 December 2023 by Wikipatents (Creating a new page)

Patent applications for HON HAI PRECISION INDUSTRY CO., LTD. on November 30th, 2023

METHOD FOR DETECTING PRODUCT FOR DEFECTS, ELECTRONIC DEVICE, AND STORAGE MEDIUM (18126804)

Main Inventor

CHUNG-YU WU


Brief explanation

The patent application describes a method for detecting defects in a product using an electronic device. 
  • The method involves obtaining an image of the product to be detected.
  • The image is then input into a pre-trained autoencoder to obtain a reconstructed image.
  • A difference image is generated by comparing the original image with the reconstructed image.
  • Clustering processing is performed on the difference image to obtain a number of feature absolute values.
  • A target image is generated based on the number of feature absolute values, the difference image, and a preset value.
  • The target image is then analyzed for defects to determine a defect detection result.
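
The claimed pipeline can be sketched in Python; the autoencoder, the clustering scheme (a tiny 1-D k-means here), and the preset value are stand-ins, since the application does not specify them:

```python
import numpy as np

def detect_defects(image, autoencoder, preset=0.5, k=2, iters=10):
    """Sketch: reconstruct, difference, cluster the absolute
    differences, threshold into a target image, report defects."""
    recon = autoencoder(image)
    diff = np.abs(image - recon)                    # difference image
    # 1-D k-means over the absolute difference values (a stand-in for
    # the unspecified clustering step)
    centers = np.linspace(diff.min(), diff.max(), k)
    flat = diff.ravel()
    for _ in range(iters):
        labels = np.argmin(np.abs(flat[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = flat[labels == j].mean()
    # target image: pixels whose difference exceeds both the largest
    # cluster centre and the preset value
    thresh = max(centers.max(), preset)
    target = (diff >= thresh).astype(np.uint8)
    return target, bool(target.any())               # defect detection result
```

Any autoencoder trained to reconstruct only defect-free products would plug in as `autoencoder`; defective regions then reconstruct poorly and survive the threshold.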

Abstract

A method for detecting a product for defects, implemented in an electronic device, obtains an image of a product to be detected; obtains a reconstructed image by inputting the image into a pre-trained autoencoder; generates a difference image from the image and the reconstructed image; obtains a number of feature absolute values by performing clustering processing on the difference image; generates a target image according to the number of feature absolute values, the difference image, and a preset value; and determines a defect detection result by detecting the target image for defects.

METHOD FOR INSPECTING PRODUCT DEFECTS, ELECTRONIC DEVICE, AND STORAGE MEDIUM (17990571)

Main Inventor

YIN-CHUNG LEUNG


Brief explanation

The patent application describes a method for inspecting product defects using an electronic device.
  • The method involves determining the category of a product and obtaining golden sample images of the product.
  • An inspection tool is selected based on the product category using a preset application.
  • Labeling information is created for the golden sample images based on the selected inspection tools.
  • Images of the product to be inspected are obtained and inspected using the labeling information from the golden sample images.
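
A minimal sketch of the selection and labeling steps; the tool table, its categories, and the label contents are hypothetical, since the preset application's contents are not disclosed:

```python
# Hypothetical category-to-tool table; the abstract only says a preset
# application selects at least one inspection tool per category.
TOOL_TABLE = {
    "pcb": ["solder_check", "trace_check"],
    "housing": ["scratch_check"],
}

def select_tools(category):
    """Select inspection tools for a product category."""
    return TOOL_TABLE.get(category, [])

def label_golden_samples(golden_images, tools):
    """Attach labeling information (here, just the tool names) to each
    golden sample image."""
    return [{"image": img, "labels": list(tools)} for img in golden_images]
```

Images of a product to be inspected would then be checked against the `labels` of the matching golden samples.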

Abstract

A method for inspecting product defects implemented in an electronic device includes determining a category of a product, and obtaining golden sample images of the product; selecting at least one inspection tool by a preset application according to the category of the product; creating labeling information of the golden sample image of the product according to the selected inspection tools; and obtaining at least one image of a product to be inspected, and inspecting the at least one image of the product to be inspected according to the labeling information of the golden sample images of the product.

METHOD FOR DETECTING MEDICAL IMAGES, ELECTRONIC DEVICE, AND STORAGE MEDIUM (17896829)

Main Inventor

TZU-CHEN LIN


Brief explanation

The patent application describes a method for detecting medical images using an electronic device. Here are the key points:
  • The method involves obtaining medical images to be detected.
  • A reconstructed image is generated by inputting the target image into a pre-trained variational autoencoder model.
  • The pixel values of the reconstructed image and the target image are used to determine a target area.
  • The target image is then inputted into a pre-trained convolutional neural network model to obtain a feature area and a lesion category.
  • If there is a feature area corresponding to the target area, a lesion area and corresponding lesion category are determined.
  • Finally, a detection result of the image to be detected is generated based on the determined lesion area and category.
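
The final overlap test can be sketched as follows; the reconstruction-error threshold and the masks are stand-ins for the unspecified pixel-value comparison and the CNN output:

```python
import numpy as np

def lesion_detection(target_img, recon_img, feature_mask, thresh=0.3):
    """Sketch of the claimed overlap test: the target area is where the
    VAE reconstruction error exceeds a threshold; a lesion area is
    reported where the CNN feature area overlaps the target area."""
    target_area = np.abs(target_img - recon_img) > thresh
    lesion_area = target_area & feature_mask.astype(bool)
    return lesion_area, bool(lesion_area.any())
```

In the claim, the lesion category attached to the overlapping feature area would accompany the lesion area in the detection result.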

Abstract

A method for detecting medical images implemented in an electronic device includes obtaining at least one image to be detected; obtaining a reconstructed image by inputting the at least one image to be detected as a target image into a pre-trained variational autoencoder model; determining a target area according to pixel values of pixels in the reconstructed image and the target image; obtaining a feature area and a lesion category of the feature area by inputting the target image into a pre-trained convolutional neural network model; when there is a feature area corresponding to the target area in the target image, determining a lesion area and a corresponding lesion category based on the target area and the feature area, and generating a detection result of the image to be detected.

IMAGE FEATURE MATCHING METHOD, COMPUTER DEVICE, AND STORAGE MEDIUM (17857098)

Main Inventor

WAN-JHEN LEE


Brief explanation

The present disclosure describes a method for matching image features using an edge detection algorithm.
  • The method involves identifying weak texture areas in two images and extracting feature points from these areas.
  • The feature points from the first image are matched with corresponding feature points from the second image by determining a target point for each first feature point.
  • The position difference between each first feature point and its corresponding target point is calculated to determine a matching point for each first feature point.
  • This method can be used to accurately match image features, which is useful in applications such as image recognition and object tracking.
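
The matching steps can be sketched as below; using descriptor distance as the target-point criterion and the `max_disp` threshold are assumptions, since the disclosure only names a target point and a position difference:

```python
import numpy as np

def match_features(pts1, desc1, pts2, desc2, max_disp=10.0):
    """Sketch of the matching step: for each first feature point, the
    target point is the second feature point with the closest
    descriptor; it is kept as the matching point only if the position
    difference stays within max_disp (a hypothetical threshold)."""
    pts1, pts2 = np.asarray(pts1, float), np.asarray(pts2, float)
    desc1, desc2 = np.asarray(desc1, float), np.asarray(desc2, float)
    matches = []
    for i, d in enumerate(desc1):
        j = int(np.argmin(np.linalg.norm(desc2 - d, axis=1)))  # target point
        if np.linalg.norm(pts1[i] - pts2[j]) <= max_disp:      # position difference
            matches.append((i, j))
    return matches
```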

Abstract

An image feature matching method is provided by the present disclosure. The method includes determining a first weak texture area of a first image and a second weak texture area of a second image based on an edge detection algorithm. First feature points of the first weak texture area and second feature points of the second weak texture area are extracted. The first feature points and the second feature points are matched by determining a target point for each of the first feature points from the second feature points. Once a position difference value between each first feature point and the corresponding target point is determined, a matching point for each first feature point is determined according to the position difference value between the each first feature point and the corresponding target point.

METHOD FOR TRAINING DEPTH ESTIMATION MODEL, METHOD FOR ESTIMATING DEPTH, AND ELECTRONIC DEVICE (17954535)

Main Inventor

YU-HSUAN CHIEN


Brief explanation

The patent application describes a method for training a depth estimation model in an electronic device. Here are the key points:
  • The method involves obtaining a pair of images from a training data set.
  • The first image is inputted into the depth estimation model to obtain a disparity map.
  • The disparity map is then added to the first image to generate a second image.
  • The pixel values of corresponding pixels in the first and second images are compared using mean square error and cosine similarity calculations.
  • Mean values of the mean square error and cosine similarity are calculated.
  • The first mean value represents the average error between the pixel values of the first and second images.
  • The second mean value represents the average similarity between the pixel values of the first and second images.
  • The first and second mean values are added together to obtain a loss value for the depth estimation model.
  • The depth estimation model is iteratively trained based on the loss value, improving its accuracy over time.
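
The loss computation above can be sketched directly from the claim; the grouping for the cosine similarity (per image row here) is an assumption, since the claim does not fix it:

```python
import numpy as np

def depth_loss(first_right, second_right):
    """Loss sketch following the claim: the first mean value is the
    mean square error over corresponding pixels; the second mean value
    is the mean cosine similarity; the loss is their sum."""
    first_mean = float(((first_right - second_right) ** 2).mean())
    a = np.atleast_2d(first_right.astype(float))
    b = np.atleast_2d(second_right.astype(float))
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12
    second_mean = float((num / den).mean())
    return first_mean + second_mean
```

Note that the claim sums the two mean values as stated; for identical images the error term is zero and the similarity term is one.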

Abstract

A method for training a depth estimation model implemented in an electronic device includes obtaining a first image pair from a training data set; inputting the first left image into the depth estimation model, and obtaining a disparity map; adding the first left image and the disparity map, and obtaining a second right image; calculating a mean square error and cosine similarity of pixel values of all corresponding pixels in the first right image and the second right image; calculating mean values of the mean square error and the cosine similarity, and obtaining a first mean value of the mean square error and a second mean value of the cosine similarity; adding the first mean value and the second mean value, and obtaining a loss value of the depth estimation model; and iteratively training the depth estimation model according to the loss value.

METHOD FOR GENERATING DEPTH IN IMAGES, ELECTRONIC DEVICE, AND NON-TRANSITORY STORAGE MEDIUM (18097080)

Main Inventor

JUNG-HAO YANG


Brief explanation

The patent application describes a method and system for generating depth in monocular images using binocular images and instance segmentation labels.
  • The method involves acquiring multiple sets of binocular images and building a dataset with instance segmentation labels for content.
  • A trained autoencoder network is obtained by training an autoencoder on the dataset with instance segmentation labels.
  • When a monocular image is input into the trained autoencoder network, a first disparity map is obtained.
  • The first disparity map is then converted to obtain a depth image corresponding to the monocular image.
  • Because binocular images and instance segmentation images are combined as training data, a monocular image can simply be input to the trained network to output the disparity map.
  • This allows for depth estimation in monocular images by converting the disparity map to a depth image.
  • The patent also discloses an electronic device and a non-transitory storage medium for implementing this method and system.
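
The final conversion step is the standard stereo relation; the focal length and baseline are calibration values not given in the abstract:

```python
def disparity_to_depth(disparity, focal_px, baseline_m):
    """Standard conversion assumed for the last step of the claim:
    depth = focal_length * baseline / disparity."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity
```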

Abstract

A method and system for generating depth in monocular images acquires multiple sets of binocular images to build a dataset containing instance segmentation labels as to content; an autoencoder network is trained using the dataset with instance segmentation labels to obtain a trained autoencoder network; a monocular image is acquired and input into the trained autoencoder network to obtain a first disparity map, and the first disparity map is converted to obtain a depth image corresponding to the monocular image. Because the method combines binocular images with instance segmentation images as training data for training the autoencoder network, monocular images can simply be input into the autoencoder network to output the disparity map. Depth estimation for monocular images is achieved by converting the disparity map to a depth image corresponding to the monocular image. An electronic device and a non-transitory storage medium are also disclosed.

REMOTE COLLABORATION METHOD, REMOTE DEVICE AND STORAGE MEDIUM (18232850)

Main Inventor

HAI-PING TANG


Brief explanation

This patent application describes a method for remote collaboration between a remote device and a wearable device. The method involves the remote device receiving an image from the wearable device and determining a reference area based on that image. The remote device then determines the position of a target object relative to the reference area and generates indication information based on this position. The indication information is then transmitted back to the wearable device. 
  • Remote collaboration method applied to a remote device and wearable device
  • Remote device receives an image from the wearable device
  • Remote device determines a reference area based on the received image
  • Remote device determines the position of a target object relative to the reference area
  • Indication information is generated based on the position of the target object
  • Indication information is transmitted back to the wearable device
  • Improves efficiency and accuracy in determining the position of the target object.
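
The indication-generation step can be sketched as a position classifier; the rectangle form of the reference area and the wording of the indication are assumptions, since the abstract fixes neither:

```python
def indication(reference_area, target):
    """Sketch: classify where the target lies relative to a reference
    rectangle (x, y, width, height) and return text the wearable
    device could display."""
    x, y, w, h = reference_area
    tx, ty = target
    horiz = "left" if tx < x else ("right" if tx > x + w else "inside")
    vert = "above" if ty < y else ("below" if ty > y + h else "inside")
    if horiz == "inside" and vert == "inside":
        return "target inside reference area"
    parts = [p for p in (vert, horiz) if p != "inside"]
    return "target " + " ".join(parts) + " of reference area"
```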

Abstract

A remote collaboration method applied to a remote device is provided. In the method, the remote device receives a first image transmitted by a wearable device and determines a reference area based on the first image. The remote device further determines a position of a target object relative to the reference area, generates indication information according to that position, and transmits the indication information to the wearable device. The method can provide a user with indication information about the position of the target object within the user's field of view through remote collaboration between the remote device and the wearable device, thereby improving the efficiency and accuracy of determining the position of the target object.

IMAGE RECOGNITION METHOD, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM (17854320)

Main Inventor

CHIEH LEE


Brief explanation

The patent application describes a method for image recognition on an electronic device.
  • The method involves constructing a semantic segmentation network.
  • If the initial labeled result of an image does not match the expected result, a target image and its labeled result are obtained.
  • A second semantic segmentation network is created by training the first network using multiple target images and their labeled results.
  • The image to be recognized is inputted into the second network to obtain its labeled result.
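
The retraining trigger can be sketched as a simple mismatch filter; the result representation is a stand-in, since the abstract does not specify how labeled results are encoded:

```python
def collect_retrain_targets(images, initial_results, preset_results):
    """Sketch of the trigger condition: keep a target image (paired
    with its expected target labeled result) whenever the initial
    labeled result does not match the preset labeled result; these
    pairs retrain the first network into the second."""
    return [(img, want)
            for img, got, want in zip(images, initial_results, preset_results)
            if got != want]
```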

Abstract

An image recognition method applied to an electronic device is provided. The method includes constructing a first semantic segmentation network. In response that an initial labeled result of one of a plurality of initial labeled images does not match a preset labeled result, a target image corresponding to the one of the plurality of initial labeled images and a target labeled result of the target image are obtained. A second semantic segmentation network is obtained by training the first semantic segmentation network based on a plurality of the target images and the target labeled result of each target image, and a labeled result of an image to be recognized is obtained by inputting the image to be recognized into the second semantic segmentation network.

METHOD FOR DETECTING ROAD CONDITIONS AND ELECTRONIC DEVICE (17846232)

Main Inventor

SHIH-CHAO CHIEN


Brief explanation

The patent application describes a method for detecting road conditions using an electronic device.
  • The device captures images of the scene in front of a vehicle and inputs them into a trained semantic segmentation model.
  • The device uses a backbone network for feature extraction and obtains multiple feature maps.
  • These feature maps are then processed by a first segmentation network and a second segmentation network within the head network.
  • The first segmentation network outputs a recognition result, and the second segmentation network outputs another recognition result.
  • The device then determines whether it is safe for the vehicle to continue driving based on these recognition results.

Abstract

A method for detecting road conditions applied in an electronic device obtains images of a scene in front of a vehicle, and inputs the images into a trained semantic segmentation model. The electronic device inputs the images into a backbone network for feature extraction and obtains a plurality of feature maps, inputs the feature maps into the head network, processes the feature maps by a first segmentation network of the head network, and outputs a first recognition result. The electronic device further processes the feature maps by a second segmentation network of the head network, and outputs a second recognition result, and determines whether the vehicle can continue to drive on safely according to the first recognition result and the second recognition result.

METHOD FOR DETECTING THREE-DIMENSIONAL OBJECTS IN ROADWAY AND ELECTRONIC DEVICE (17895517)

Main Inventor

CHIEH LEE


Brief explanation

The patent application describes a method for detecting 3D objects on roadways using an electronic device.
  • The device uses a semantic segmentation model to analyze training images and extract features.
  • Convolution and pooling operations are performed on the training images to obtain feature maps.
  • The feature maps are then up-sampled to generate first images.
  • The device classifies pixels on the first images and calculates a classification loss to optimize the model.
  • The trained semantic segmentation model is then used to analyze detection images.
  • The device determines object models, point cloud data, and distances from a depth camera to the object models.
  • Rotation angles of the object models are determined based on the point cloud data and object models.
  • Positions of the object models in 3D space are determined using the distances, rotation angles, and object positions.
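
One plausible reading of "rotation angles from point cloud data" is taking an object's yaw as the direction of the principal axis of its ground-plane points, sketched here (the claim does not name the estimator):

```python
import numpy as np

def yaw_from_point_cloud(points_xy):
    """Estimate an object's yaw as the principal-axis direction of its
    ground-plane point cloud (PCA on the 2-D covariance)."""
    pts = np.asarray(points_xy, float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]        # principal axis
    return float(np.arctan2(major[1], major[0]))  # radians
```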

Abstract

A method for detecting three-dimensional (3D) objects in roadway is applied in an electronic device. The device inputs training images into a semantic segmentation model, and performs convolution operations and pooling operations on the training images and obtains feature maps. The electronic device performs up-sampling operations on the feature maps to obtain first images, classifies pixels on the first images, calculates and optimizes a classification loss and obtains a trained semantic segmentation model. The device inputs the detection images into the trained semantic segmentation model, determines object models of the objects, point cloud data and distances from the depth camera to the object models, determines rotation angles of the object models according to the point cloud data and the object models, and determines positions of the object models in 3D space according to the distances, the rotation angles, and positions of the objects.

METHOD FOR DETECTION OF THREE-DIMENSIONAL OBJECTS AND ELECTRONIC DEVICE (17854301)

Main Inventor

CHIH-TE LU


Brief explanation

The patent application describes a method for using machine learning to detect 3D objects on or near a roadway using an electronic device. 
  • The method involves obtaining images of the road and inputting them into a trained object detection model.
  • The model then determines the categories of objects in the images, as well as their 2D bounding boxes and rotation angles.
  • The electronic device uses this information to determine object models and their 3D bounding boxes.
  • The distance from the camera to the object models is determined based on the size of the 2D bounding boxes, image information, and focal length of the camera.
  • The positions of the object models in a 3D space can be determined using the rotation angles, distance, and 3D bounding boxes.
  • These positions are considered as the positions of the objects in the 3D space.
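
The distance step follows pinhole-camera similar triangles; a known real-world object height is an assumption here, since the abstract only names bounding-box size, image information, and focal length:

```python
def distance_from_bbox(focal_px, real_height_m, bbox_height_px):
    """Similar-triangles sketch of the claimed distance step:
    distance = focal_length * real_height / bounding_box_height."""
    return focal_px * real_height_m / bbox_height_px
```

For example, a 1.5 m tall object spanning 100 px under a 1000 px focal length would be placed 15 m from the camera.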

Abstract

A method for detection of three-dimensional (3D) objects on or around a roadway by machine learning, applied in an electronic device, obtains images of a road, inputs the images into a trained object detection model, and determines categories of objects in the images, two-dimensional (2D) bounding boxes of the objects, and parallax (rotation) angles of the objects. The electronic device determines object models and 3D bounding boxes of the object models and determines the distance from the camera to the object models according to the size of the 2D bounding boxes, image information of the detection images, and the focal length of the camera. The positions of the object models in a 3D space can be determined according to the rotation angles, the distance, and the 3D bounding boxes, and the positions of the object models are taken as the positions of the objects in the 3D space.

METHOD FOR DETECTING THREE-DIMENSIONAL OBJECTS IN RELATION TO AUTONOMOUS DRIVING AND ELECTRONIC DEVICE (17895496)

Main Inventor

CHIEH LEE


Brief explanation

The patent application describes a method for detecting 3D objects in relation to autonomous driving using an electronic device.
  • The device obtains detection images and depth images as input.
  • It uses a trained object detection model to determine the categories of objects and their 2D bounding boxes in the detection images.
  • The device then determines object models and 3D bounding boxes of the objects based on their categories.
  • It calculates point cloud data and distances from the depth camera to the object models of the selected objects.
  • The device also determines the angles of rotation of the object models based on the object models and point cloud data.
  • It can determine the respective positions of the objects in 3D space using the distance, rotation angles, and 3D bounding boxes.
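
The depth-camera distance step can be sketched as a robust statistic over the object's depth pixels; using the median and a binary object mask are assumptions, since the abstract does not specify the aggregation:

```python
import numpy as np

def distance_to_object(depth_image, object_mask):
    """Sketch: take the distance from the depth camera to an object as
    the median of the depth values inside the object's mask."""
    vals = depth_image[object_mask.astype(bool)]
    return float(np.median(vals))
```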

Abstract

A method for detecting three-dimensional (3D) objects in relation to autonomous driving is applied in an electronic device. The device obtains detection images and depth images, and inputs the detection images into a trained object detection model to determine categories of objects in the detection images and two-dimensional (2D) bounding boxes of the objects. The device determines object models of the objects and 3D bounding boxes of the object models according to the object categories, and calculates point cloud data of the selected objects and distances from the depth camera to the object models. The device determines angles of rotation of the object models according to the object models and the point cloud data, and can determine respective positions of the objects in 3D space according to the distances from the depth camera to the object models, the rotation angles, and the 3D bounding boxes.

DETECTION SYSTEM USED IN PRODUCT QUALITY DETECTION (18120937)

Main Inventor

CHIA-EN CHANG


Brief explanation

The abstract describes a detection system that determines whether a product under test is qualified. The system consists of a first detection module, a sensing module, and a control module.
  • The first detection module includes a photographing apparatus and a light source.
  • The light source emits light and illuminates the product to be tested.
  • The photographing apparatus captures an image of the product.
  • The control module receives a sensing signal from the sensing module.
  • When the sensing signal is received, the control module activates the light source and instructs the photographing apparatus to capture an image of the product.
  • The control module then analyzes the captured image to determine if the product is qualified for testing or not.
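
The control flow above can be sketched as a small controller class; the light, camera, and analyzer objects are hypothetical stand-ins for the claimed hardware modules:

```python
class ControlModule:
    """Sketch of the claimed control flow: on a sensing signal, switch
    on the light source, capture an image, and analyze it."""
    def __init__(self, light, camera, analyze):
        self.light, self.camera, self.analyze = light, camera, analyze

    def on_sensing_signal(self):
        self.light.on()                  # illuminate the product
        image = self.camera.capture()    # capture the image
        return self.analyze(image)       # True if the product is qualified
```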

Abstract

A detection system determines whether to-be-tested products are qualified. The detection system includes a first detection module, a sensing module, and a control module. The first detection module includes a first photographing apparatus and a first light source. The first light source, being a plane light source, emits light and illuminates the to-be-tested product while the first photographing apparatus captures an image of the to-be-tested product. The control module controls the first light source to emit light and controls the first photographing apparatus to capture the image of the to-be-tested product when receiving the sensing signal generated by the sensing module. The control module further analyzes the captured image to determine whether the to-be-tested product is qualified.