Patent Applications by UiPath, Inc. on June 27th, 2024
UiPath, Inc.: 2 patent applications
UiPath, Inc. has applied for patents in the areas of B25J9/16 (1), G05B19/4155 (1), G06N20/00 (1), G06Q10/0637 (1), G06Q10/0631 (1), B25J9/163 (1), and G06Q10/06316 (1).
Keywords appearing in the patent application abstracts include: features, image, rpa, screen, based, learning, machine, task, preprocessed, and element.
Patent Applications by UiPath, Inc.
Inventor(s): Prabhdeep Singh of Bellevue WA (US) for UiPath, Inc.; Christian Berg of Seattle WA (US) for UiPath, Inc.
IPC Code(s): B25J9/16, G05B19/4155, G06N20/00, G06Q10/0637
CPC Code(s): B25J9/163
Abstract: Process evolution for robotic process automation (RPA) and RPA workflow micro-optimization are disclosed. Initially, an RPA implementation may be scientifically planned, potentially using artificial intelligence (AI). Embedded analytics may be used to measure, report, and align RPA operations with strategic business outcomes. RPA may then be implemented by deploying AI skills (e.g., in the form of machine learning (ML) models) through an AI fabric that seamlessly applies, scales, and manages AI for RPA workflows of robots. This cycle of planning, measuring, and reporting may be repeated, potentially guided by more and more AI, to iteratively improve the effectiveness of RPA for a business. RPA implementations may also be identified and implemented based on their estimated return on investment (ROI).
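To make the ROI-based prioritization mentioned at the end of the abstract concrete, here is a minimal sketch of ranking candidate automations by estimated return on investment. This is an illustration only, not the patent's method; the candidate names, cost, and savings figures are invented.

```python
# Rank candidate RPA automations by a simple ROI estimate (illustrative only).
from dataclasses import dataclass


@dataclass
class RpaCandidate:
    name: str
    implementation_cost: float   # one-time cost to build and deploy
    annual_savings: float        # estimated labor/time savings per year
    annual_maintenance: float    # recurring cost to keep the workflow running

    def estimated_roi(self, years: float = 1.0) -> float:
        """Net benefit over the period divided by the up-front investment."""
        net_benefit = (self.annual_savings - self.annual_maintenance) * years
        return (net_benefit - self.implementation_cost) / self.implementation_cost


candidates = [
    RpaCandidate("invoice processing", 20_000, 60_000, 5_000),
    RpaCandidate("employee onboarding", 35_000, 40_000, 8_000),
    RpaCandidate("report generation", 10_000, 15_000, 2_000),
]

# Implement the highest-ROI automations first.
for c in sorted(candidates, key=lambda c: c.estimated_roi(), reverse=True):
    print(f"{c.name}: estimated 1-year ROI = {c.estimated_roi():.2f}")
```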
Inventor(s): Gregory Allen Barello of Seattle WA (US) for UiPath, Inc.
IPC Code(s): G06Q10/0631, G06T3/40, G06T9/00, G06V10/44, G06V10/74, G06V10/762, G06V10/764, G06V10/82, G06V30/18, G06V30/412
CPC Code(s): G06Q10/06316
Abstract: Systems and methods for extracting features from screen images for performing a task mining task are provided. A screen image depicting a user interface of a computing system is received. The screen image is preprocessed to generate a preprocessed screen image and processing results. Image features are extracted from the preprocessed screen image using a first machine learning based network. Text features and control element features are extracted from the processing results using a second machine learning based network. The text features and the control element features are encoded using a third machine learning based network to generate representative features of the screen image. A task is performed on the screen image based on one or more of the image features, the text features, the control element features, or the representative features. Results of the task are output.
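The abstract describes a three-network pipeline: one network for image features, one for text and control-element features, and one that encodes those into representative screen features. Below is a minimal PyTorch sketch of that structure. All module names, dimensions, and the dummy inputs are assumptions made for illustration; they are not taken from the patent application.

```python
import torch
import torch.nn as nn


class ImageFeatureNet(nn.Module):
    """First network: extracts image features from a preprocessed screen image."""
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, out_dim)

    def forward(self, screen: torch.Tensor) -> torch.Tensor:
        return self.proj(self.backbone(screen).flatten(1))


class TextControlFeatureNet(nn.Module):
    """Second network: extracts text and control-element features from the
    preprocessing results (assumed here to be fixed-size numeric vectors)."""
    def __init__(self, text_dim: int, ctrl_dim: int, out_dim: int = 64):
        super().__init__()
        self.text_mlp = nn.Sequential(nn.Linear(text_dim, out_dim), nn.ReLU())
        self.ctrl_mlp = nn.Sequential(nn.Linear(ctrl_dim, out_dim), nn.ReLU())

    def forward(self, text_in, ctrl_in):
        return self.text_mlp(text_in), self.ctrl_mlp(ctrl_in)


class ScreenEncoder(nn.Module):
    """Third network: encodes text and control-element features into
    representative features of the screen image."""
    def __init__(self, in_dim: int = 128, out_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim)
        )

    def forward(self, text_feat, ctrl_feat):
        return self.encoder(torch.cat([text_feat, ctrl_feat], dim=-1))


if __name__ == "__main__":
    screen = torch.randn(1, 3, 224, 224)   # preprocessed screen image
    text_in = torch.randn(1, 300)          # e.g., pooled OCR text embedding
    ctrl_in = torch.randn(1, 50)           # e.g., detected control-element stats

    image_feat = ImageFeatureNet()(screen)
    text_feat, ctrl_feat = TextControlFeatureNet(300, 50)(text_in, ctrl_in)
    screen_feat = ScreenEncoder(in_dim=128)(text_feat, ctrl_feat)

    # A downstream task-mining task could consume any of these feature sets.
    print(image_feat.shape, text_feat.shape, ctrl_feat.shape, screen_feat.shape)
```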