Samsung Electronics Co., Ltd. (20240091951). SYNERGIES BETWEEN PICK AND PLACE: TASK-AWARE GRASP ESTIMATION simplified abstract
Contents
- 1 SYNERGIES BETWEEN PICK AND PLACE: TASK-AWARE GRASP ESTIMATION
- 1.1 Organization Name
- 1.2 Inventor(s)
- 1.3 SYNERGIES BETWEEN PICK AND PLACE: TASK-AWARE GRASP ESTIMATION - A simplified explanation of the abstract
- 1.4 Simplified Explanation
- 1.5 Potential Applications
- 1.6 Problems Solved
- 1.7 Benefits
- 1.8 Potential Commercial Applications
- 1.9 Possible Prior Art
- 1.10 Unanswered Questions
- 1.11 Original Abstract Submitted
SYNERGIES BETWEEN PICK AND PLACE: TASK-AWARE GRASP ESTIMATION
Organization Name
Samsung Electronics Co., Ltd.
Inventor(s)
Nikhil Narsingh Chavan Dafle of Jersey City NJ (US)
Vasileios Vasilopoulos of Woodbridge NJ (US)
Shubham Agrawal of Jersey City NJ (US)
Jinwook Huh of Millburn NJ (US)
Suveer Garg of New York NY (US)
Pedro Piacenza of Jersey City NJ (US)
Isaac Hisano Kasahara of Brooklyn NY (US)
Kazim Selim Engin of Weehawken NJ (US)
Zhanpeng He of New York NY (US)
Shuran Song of New York NY (US)
Ibrahim Volkan Isler of Saint Paul MN (US)
SYNERGIES BETWEEN PICK AND PLACE: TASK-AWARE GRASP ESTIMATION - A simplified explanation of the abstract
This abstract first appeared for US patent application 20240091951 titled 'SYNERGIES BETWEEN PICK AND PLACE: TASK-AWARE GRASP ESTIMATION'.
Simplified Explanation
The patent application describes systems, methods, and apparatuses for controlling a robot with a manipulator to grasp and place a target object in a scene, using 3D geometry information derived from images and affordance information produced by neural network models. The key steps are:
- Determining 3D geometry information about a target object and the scene where it will be placed based on images.
- Obtaining affordance information by providing the 3D geometry information to neural network models.
- Commanding the robot to grasp the target object with the manipulator based on the affordance information.
- Commanding the robot to position the manipulator to place the target object in the scene based on the affordance information.
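The four steps above can be sketched in Python. All names here (`estimate_geometry`, `affordance_model`, `pick_and_place`, and the `robot` interface) are illustrative stand-ins, not taken from the application:

```python
from dataclasses import dataclass

# Hypothetical sketch of the claimed pipeline. Geometry estimation and
# the affordance model are stubs standing in for real perception and
# neural network components.

@dataclass
class Affordance:
    grasp_orientation: tuple    # e.g. roll, pitch, yaw in radians
    placement_direction: tuple  # unit vector in the scene frame

def estimate_geometry(image):
    """Stand-in for a depth/shape estimator: returns a 3D point list."""
    # Here we simply pretend each 2D pixel maps to a 3D point at z = 0.
    return [(x, y, 0.0) for x, y in image]

def affordance_model(object_geometry, scene_geometry):
    """Stand-in for the claimed neural network model(s)."""
    # A real model would score candidate grasps and placements jointly;
    # this stub returns a fixed top-down grasp and downward placement.
    return Affordance(grasp_orientation=(0.0, 3.14159, 0.0),
                      placement_direction=(0.0, 0.0, -1.0))

def pick_and_place(robot, object_image, scene_images):
    # Step 1: 3D geometry of the target object from its image.
    obj_geom = estimate_geometry(object_image)
    # Step 1 (cont.): 3D geometry of the scene from one or more images.
    scene_geom = [p for img in scene_images for p in estimate_geometry(img)]
    # Step 2: affordance information from both geometries.
    aff = affordance_model(obj_geom, scene_geom)
    # Step 3: grasp according to the affordance's grasp orientation.
    robot.grasp(orientation=aff.grasp_orientation)
    # Step 4: place according to the affordance's placement direction.
    robot.place(direction=aff.placement_direction)
    return aff
```

The point of the sketch is the data flow: both geometries feed a single affordance query, so the grasp is chosen with the eventual placement already in view.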
Potential Applications
This technology could be applied in industries such as manufacturing, logistics, and healthcare where robots are used for tasks involving grasping and placing objects.
Problems Solved
This technology solves the problem of efficiently and accurately controlling a robot to manipulate objects in a given scene based on visual information and affordance cues.
Benefits
The benefits of this technology include improved efficiency, accuracy, and adaptability in robot manipulation tasks, leading to increased productivity and reduced errors.
Potential Commercial Applications
The potential commercial applications of this technology include automated assembly lines, warehouse operations, and medical procedures where precise object manipulation is required.
Possible Prior Art
One possible prior art for this technology could be robotic systems that use computer vision and neural networks for object recognition and manipulation tasks.
Unanswered Questions
How does the neural network model determine the affordance information for the robot manipulation tasks?
The neural network model processes the 3D geometry information of the target object and scene to predict the optimal grasp orientation and placement direction for the robot manipulator.
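One illustrative way to read "task-aware" (this is an interpretation, not the patent's disclosed method): candidate grasp orientations are scored jointly with the intended placement direction, so the selected grasp already suits the downstream place task. The names `score_candidate` and `select_grasp` are invented for this sketch:

```python
import math

# Toy task-aware grasp selection: score each candidate grasp yaw by how
# well it aligns with the placement direction, then pick the best one.
# In the real system a neural network would produce these scores.

def score_candidate(grasp_yaw, placement_dir):
    """Toy score: cosine-style alignment of the grasp with the
    placement direction, projected into the horizontal plane."""
    gx, gy = math.cos(grasp_yaw), math.sin(grasp_yaw)
    px, py, _ = placement_dir
    return gx * px + gy * py

def select_grasp(candidate_yaws, placement_dir):
    """Pick the candidate yaw with the highest task-aware score."""
    return max(candidate_yaws,
               key=lambda yaw: score_candidate(yaw, placement_dir))
```

For example, with a placement direction along +x, a grasp yaw of 0 scores highest among {0, π/2, π}, so the planner would grasp in a pose compatible with that placement.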
What types of objects and scenes can this technology effectively work with?
This technology can work effectively with a wide range of objects and scenes, provided they can be represented as 3D geometry information and affordance cues can be extracted from them.
Original Abstract Submitted
systems, methods, and apparatuses for controlling a robot including a manipulator, including: determining three-dimensional (3d) geometry information about a target object based on an image of the target object; determining 3d geometry information about a scene in which the target object is to be placed based on at least one image of the scene; obtaining affordance information by providing the 3d geometry information about the target object and the 3d geometry information about the scene to at least one neural network model; commanding the robot to grasp the target object using the manipulator according to a grasp orientation corresponding to the affordance information; and commanding the robot to position the manipulator according to a placement direction corresponding to the affordance information in order to place the target object at a location in the scene.
Classifications
- B25J9/16
- G06T7/60
- G06T7/73