Patent Applications by NVIDIA Corporation on April 3rd, 2025
NVIDIA Corporation: 18 patent applications
NVIDIA Corporation has applied for patents in the areas of G06N3/08 (4), G06V10/82 (3), G06V10/776 (2), G06V10/774 (2), G06T3/4046 (2), G01S7/417 (1), G06F30/27 (1), and G06F30/323 (1).
Keywords appearing frequently in the patent application abstracts include: neural, network, data, generate, parameters, include, training, techniques, embodiments, and present.
Patent Applications by NVIDIA Corporation
Inventor(s): Patrik GEBHARDT of Cupertino CA US for NVIDIA Corporation, Alexander POPOV of Kirkland WA US for NVIDIA Corporation, Shane MURRAY of San Jose CA US for NVIDIA Corporation
IPC Code(s): G01S7/41, G01S13/58, G01S13/931
CPC Code(s): G01S7/417
Abstract: Embodiments of the present disclosure relate to a system and method used to transfer image data via Ethernet. In some embodiments, the method may include determining, using a machine learning model, an estimated velocity corresponding to an object based at least on measured radar data, where the measured radar data may correspond to radar detections associated with the object. In some embodiments, the method may further include determining expected radar data corresponding to the object based at least on the estimated velocity. Some embodiments may additionally include updating one or more parameters of the machine learning model based on the difference between the measured radar data and the expected radar data.
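As a rough illustration of the self-supervised loop this abstract describes, the sketch below assumes a small PyTorch velocity estimator and a hypothetical render_expected_radar() measurement model that maps an estimated velocity back to expected Doppler values; neither the model nor the measurement function reflects the actual filing.

```python
# Sketch of the self-supervised update described in the abstract (not the filed claims).
# `render_expected_radar` is a hypothetical placeholder for a radar measurement model.
import torch
import torch.nn as nn

class VelocityEstimator(nn.Module):
    def __init__(self, num_features: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(num_features, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Predict a 2D velocity (vx, vy) for the object from aggregated detection features.
        return self.net(features)

def render_expected_radar(velocity: torch.Tensor, detections: torch.Tensor) -> torch.Tensor:
    # Hypothetical measurement model: expected Doppler is the radial component of velocity.
    directions = detections[..., :2] / (detections[..., :2].norm(dim=-1, keepdim=True) + 1e-8)
    return (directions * velocity.unsqueeze(-2)).sum(dim=-1)

model = VelocityEstimator(num_features=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(detections: torch.Tensor, measured_doppler: torch.Tensor) -> float:
    # detections: (batch, num_detections, 4); measured_doppler: (batch, num_detections)
    velocity = model(detections.mean(dim=1))              # estimated velocity per object
    expected = render_expected_radar(velocity, detections)
    loss = torch.nn.functional.mse_loss(expected, measured_doppler)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                      # update model parameters from the difference
    return loss.item()

print(training_step(torch.randn(8, 12, 4), torch.randn(8, 12)))   # toy batch
```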
Inventor(s): Jonah PHILION of Toronto CA for NVIDIA Corporation, Sanja FIDLER of Toronto CA for NVIDIA Corporation, Jason PENG of Vancouver CA for NVIDIA Corporation
IPC Code(s): G06F30/27
CPC Code(s): G06F30/27
Abstract: In various examples, systems and methods are disclosed relating to generating tokens for traffic modeling. One or more circuits can identify trajectories in a dataset, and generate actions from the identified trajectories. The one or more circuits can generate, based at least on the plurality of actions and at least one trajectory of the plurality of trajectories, a set of tokens representing actions to generate trajectories of one or more agents in a simulation. The one or more circuits may update a transformer model to generate simulated actions for simulated agents based at least on tokens generated from the trajectories in the dataset.
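To make the tokenization idea concrete, here is a loose sketch that quantizes per-step trajectory deltas into discrete action tokens; the bin count, step range, and quantization scheme are assumptions for illustration, not the tokenizer described in the application.

```python
# Loose sketch: turn continuous trajectories into discrete action tokens by
# quantizing per-step (dx, dy) deltas onto a small grid.
import numpy as np

NUM_BINS = 16          # bins per axis; assumed, not from the filing
MAX_DELTA = 2.0        # metres per step covered by the bins; assumed

def actions_from_trajectory(trajectory: np.ndarray) -> np.ndarray:
    """trajectory: (T, 2) positions -> (T-1, 2) per-step action deltas."""
    return np.diff(trajectory, axis=0)

def tokens_from_actions(actions: np.ndarray) -> np.ndarray:
    """Map each (dx, dy) action to a single integer token in [0, NUM_BINS**2)."""
    clipped = np.clip(actions, -MAX_DELTA, MAX_DELTA)
    bins = np.floor((clipped + MAX_DELTA) / (2 * MAX_DELTA) * (NUM_BINS - 1)).astype(int)
    return bins[:, 0] * NUM_BINS + bins[:, 1]

trajectory = np.cumsum(np.random.randn(20, 2) * 0.3, axis=0)   # toy agent path
tokens = tokens_from_actions(actions_from_trajectory(trajectory))
print(tokens)   # discrete sequence a transformer could be trained on
```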
Inventor(s): Chia-Tung HO of Santa Clara CA US for NVIDIA Corporation, Haoxing REN of Austin TX US for NVIDIA Corporation
IPC Code(s): G06F30/323
CPC Code(s): G06F30/323
Abstract: Various embodiments are directed towards techniques for automatically generating standard cell layouts. In various embodiments, those techniques include processing a netlist graph to generate a plurality of graph embeddings, processing the plurality of graph embeddings via a transformer model to generate a plurality of device component embeddings, generating a PageRank value for each device included in the netlist graph based on the plurality of device component embeddings, performing one or more clustering operations on the PageRank values to generate a plurality of device clusters, and performing one or more standard cell synthesis operations using labels for the plurality of device clusters to generate at least one standard cell layout for the netlist graph.
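A very rough sketch of the ranking-and-clustering stage follows, assuming networkx for PageRank and scikit-learn's KMeans; the transformer embedding step is omitted, and PageRank here runs directly on a toy netlist graph rather than on device component embeddings as the filing describes.

```python
# Rough sketch of the ranking-and-clustering stage only (simplified).
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans

# Toy netlist: devices are nodes, shared nets are edges (assumed structure).
netlist = nx.Graph()
netlist.add_edges_from([
    ("M1", "M2"), ("M2", "M3"), ("M3", "M4"),
    ("M4", "M1"), ("M1", "M3"),
])

# One scalar score per device (the filing derives this from device embeddings).
pagerank = nx.pagerank(netlist)

scores = np.array([[pagerank[d]] for d in netlist.nodes()])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)

clusters = dict(zip(netlist.nodes(), labels))
print(clusters)   # device clusters that a cell-synthesis step could consume
```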
Inventor(s): James Robert Lucas of Royston GB for NVIDIA Corporation, Derek Lim of Cambridge MA US for NVIDIA Corporation, Haggai Maron of Rehovot IL for NVIDIA Corporation, Marc Teva Law of Toronto CA for NVIDIA Corporation
IPC Code(s): G06N3/0455, G06N3/08
CPC Code(s): G06N3/0455
Abstract: Embodiments are disclosed for generating graph representations of neural networks to be used as input for one or more metanetworks. Architectural information can be extracted from a neural network and used to generate a graph representation. A subgraph can be generated for each layer of the neural network, where each subgraph includes nodes that correspond to neurons and connecting edges that correspond to weights. Each layer of the neural network can be associated with a bias node that is connected to individual nodes of that layer using edges representing bias weights. Various types of neural networks and layers of neural networks can be represented by such graphs, which are then used as inputs for metanetworks. The subgraphs can be combined into a comprehensive graph representation of the neural network, which can be provided as input to a metanetwork to generate network parameters or perform another such operation.
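The construction can be sketched as follows for a plain MLP, assuming networkx and treating the node naming and edge attributes as illustrative choices rather than the filing's exact encoding.

```python
# Sketch of the graph construction the abstract outlines: one node per neuron,
# one weighted edge per connection, and a bias node per layer.
import networkx as nx
import numpy as np

def mlp_to_graph(weights, biases):
    """weights[k]: (out_k, in_k) matrix; biases[k]: (out_k,) vector."""
    graph = nx.DiGraph()
    for i in range(weights[0].shape[1]):
        graph.add_node(f"L0/n{i}")                      # input-layer neurons
    for layer, (W, b) in enumerate(zip(weights, biases), start=1):
        bias_node = f"L{layer}/bias"
        graph.add_node(bias_node)                       # one bias node per layer
        for j in range(W.shape[0]):
            node = f"L{layer}/n{j}"
            graph.add_node(node)
            for i in range(W.shape[1]):
                graph.add_edge(f"L{layer - 1}/n{i}", node, weight=float(W[j, i]))
            graph.add_edge(bias_node, node, weight=float(b[j]))   # bias-weight edge
    return graph

# Toy 2-layer MLP: 3 -> 4 -> 2
weights = [np.random.randn(4, 3), np.random.randn(2, 4)]
biases = [np.random.randn(4), np.random.randn(2)]
g = mlp_to_graph(weights, biases)
print(g.number_of_nodes(), g.number_of_edges())          # graph a metanetwork could consume
```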
Inventor(s): Clement Farabet of Mill Valley CA US for NVIDIA Corporation, John Zedlewski of San Francisco CA US for NVIDIA Corporation, Zachary Taylor of Santa Cruz CA US for NVIDIA Corporation, Greg Heinrich of Nice FR for NVIDIA Corporation, Claire Delaunay of Menlo Park CA US for NVIDIA Corporation, Mark Daly of Eagle ID US for NVIDIA Corporation, Matthew Campbell of Surf City NC US for NVIDIA Corporation, Curtis Beeson of Irwin PA US for NVIDIA Corporation, Gary Hicok of Mesa CA US for NVIDIA Corporation, Michael Cox of Menlo Park CA US for NVIDIA Corporation, Rev Lebaredian of Los Gatos CA US for NVIDIA Corporation, Tony Tamasi of Portola Valley CA US for NVIDIA Corporation, David Auld of Saratoga CA US for NVIDIA Corporation
IPC Code(s): G06N3/063, G06F9/455, G06F18/2413, G06N3/045, G06N3/08, G06N20/00, G06V10/44, G06V10/764, G06V10/82, G06V20/56
CPC Code(s): G06N3/063
Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment, in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack, to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
Inventor(s): Zekun Hao of Santa Clara CA US for NVIDIA Corporation, Ming-Yu Liu of San Jose CA US for NVIDIA Corporation, Arun Mallya of Mountain View CA US for NVIDIA Corporation
IPC Code(s): G06N3/08, G06N3/04
CPC Code(s): G06N3/08
Abstract: Performance of a neural network is usually a function of the capacity, or complexity, of the neural network, including the depth of the neural network (i.e., the number of layers in the neural network) and/or the width of the neural network (i.e., the number of hidden channels). However, improving performance of a neural network by simply increasing its capacity has drawbacks, the most notable being the increased computational cost of a higher-capacity neural network. Since modern neural networks are configured such that the same neural network is evaluated regardless of the input, a higher-capacity neural network means a higher computational cost incurred per input processed. The present disclosure provides for a multi-layer neural network that allows for dynamic path selection through the neural network when processing an input, which in turn can allow for increased neural network capacity without incurring the typical increased computational cost associated therewith.
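A generic sketch of the idea follows, assuming a hard router that sends each input down exactly one branch (a simple mixture-of-paths pattern for illustration, not the claimed architecture).

```python
# Generic sketch of input-dependent path selection: a tiny router picks one
# branch per input, so only that branch's compute is spent on that input.
import torch
import torch.nn as nn

class DynamicPathBlock(nn.Module):
    def __init__(self, dim: int, num_paths: int = 3):
        super().__init__()
        self.router = nn.Linear(dim, num_paths)
        self.paths = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(num_paths)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hard routing: each sample runs exactly one path.
        choice = self.router(x).argmax(dim=-1)            # (batch,)
        out = torch.empty_like(x)
        for p, path in enumerate(self.paths):
            mask = choice == p
            if mask.any():
                out[mask] = path(x[mask])
        return out

block = DynamicPathBlock(dim=8)
y = block(torch.randn(5, 8))
print(y.shape)   # capacity grows with the number of paths, per-input cost does not
```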
Inventor(s): Tero Tapani Karras of Helsinki FI for NVIDIA Corporation, Miika Samuli Aittala of Helsinki FI for NVIDIA Corporation, Janne Johannes Hellsten of Helsinki FI for NVIDIA Corporation, Jaakko T. Lehtinen of Helsinki FI for NVIDIA Corporation, Timo Oskari Aila of Tuusula FI for NVIDIA Corporation, Samuli Matias Laine of Vantaa FI for NVIDIA Corporation
IPC Code(s): G06N3/08
CPC Code(s): G06N3/08
Abstract: Apparatuses, systems, and techniques to train neural networks and to use neural networks to perform inference. In at least one embodiment, a balanced concatenation layer performs a balanced concatenation operation during a forward pass of a training iteration during the training of a neural network. In at least one embodiment, a balanced concatenation layer performs a balanced concatenation operation during the use of a neural network to perform inference.
Inventor(s): Tero Tapani Karras of Helsinki FI for NVIDIA Corporation, Miika Samuli Aittala of Helsinki FI for NVIDIA Corporation, Janne Johannes Hellsten of Helsinki FI for NVIDIA Corporation, Jaakko T. Lehtinen of Helsinki FI for NVIDIA Corporation, Timo Oskari Aila of Tuusula FI for NVIDIA Corporation, Samuli Matias Laine of Vantaa FI for NVIDIA Corporation
IPC Code(s): G06N3/084
CPC Code(s): G06N3/084
Abstract: Apparatuses, systems, and techniques to train neural networks. In at least one embodiment, a first normalization of learned parameters of one or more learned layers is performed during a forward pass of a training iteration, and a second normalization of the learned parameters is performed during a parameter update phase of the training iteration. In at least one embodiment, the first normalization is performed using first scaling factors and the second normalization is performed using second scaling factors.
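One way to picture the two normalizations, assuming a simple norm constraint and placeholder scaling factors (these are illustrative choices, not the filing's definitions):

```python
# Sketch: normalize the learned weights on the forward pass, and normalize
# them again right after the parameter update.
import torch
import torch.nn as nn

class NormalizedLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.forward_scale = in_features ** 0.5      # first scaling factor (assumed)
        self.update_scale = 1.0                      # second scaling factor (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # First normalization: applied during the forward pass of training.
        w = self.weight / (self.weight.norm() + 1e-8) * self.forward_scale
        return x @ w.t()

    @torch.no_grad()
    def renormalize(self) -> None:
        # Second normalization: applied during the parameter-update phase.
        self.weight.mul_(self.update_scale / (self.weight.norm() + 1e-8))

layer = NormalizedLinear(16, 8)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
for _ in range(3):                                   # toy training iterations
    loss = layer(torch.randn(4, 16)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    layer.renormalize()                              # keep weights on a fixed norm
```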
Inventor(s): Samuli Matias Laine of Vantaa FI for NVIDIA Corporation, Miika Samuli Aittala of Helsinki FI for NVIDIA Corporation, Janne Johannes Hellsten of Helsinki FI for NVIDIA Corporation, Jaakko T. Lehtinen of Helsinki FI for NVIDIA Corporation, Timo Oskari Aila of Tuusula FI for NVIDIA Corporation, Tero Tapani Karras of Helsinki FI for NVIDIA Corporation
IPC Code(s): G06N3/0985
CPC Code(s): G06N3/0985
Abstract: Apparatuses, systems, and techniques to compute neural network parameters and to use a neural network to perform inference. In at least one embodiment, neural network parameters are computed, after training, by determining a weighted average of snapshots of averaged parameters that form a basis set of averaged parameter snapshots, each respective snapshot of averaged parameters including a plurality of network parameters averaged by a respective combination of an averaging function and one or more averaging parameters.
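A minimal sketch of the post-training combination, assuming the averaged snapshots are exponential moving averages with different decay rates and that the blend weights are chosen freely; both assumptions are illustrative rather than taken from the filing.

```python
# Sketch: keep several exponentially averaged copies of the weights during
# training (different decay rates), then blend those snapshots afterwards.
import numpy as np

decays = [0.9, 0.99, 0.999]                    # one averaging parameter per snapshot
params = np.zeros(4)                           # toy "network parameters"
snapshots = [params.copy() for _ in decays]    # basis set of averaged parameters

for step in range(1, 1001):                    # toy training loop
    params = params + 0.01 * np.random.randn(4)             # stand-in for an SGD update
    for i, d in enumerate(decays):
        snapshots[i] = d * snapshots[i] + (1 - d) * params   # EMA with decay d

blend = np.array([0.2, 0.3, 0.5])              # weighted average over the basis set
final_params = sum(w * s for w, s in zip(blend, snapshots))
print(final_params)                            # parameters that would be used for inference
```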
Inventor(s): Koki Nagano of Playa Vista CA US for NVIDIA Corporation, Alexander Trevithick of Mamaroneck NY US for NVIDIA Corporation, Matthew Aaron Wong Chan of Los Altos CA US for NVIDIA Corporation, Towaki Takikawa of Toronto CA for NVIDIA Corporation, Umar Iqbal of San Jose CA US for NVIDIA Corporation, Shalini De Mello of San Francisco CA US for NVIDIA Corporation
IPC Code(s): G06T3/4046, G06T5/60, G06T5/70, G06T15/08
CPC Code(s): G06T3/4046
Abstract: Systems and methods are disclosed that relate to synthesizing high-resolution 3D geometry and strictly view-consistent images that maintain image quality without relying on post-processing super resolution. For instance, embodiments of the present disclosure describe techniques, systems, and/or methods to scale neural volume rendering to the much higher resolution of native 2D images, thereby resolving fine-grained 3D geometry with unprecedented detail. Embodiments of the present disclosure employ learning-based samplers for accelerating neural rendering for 3D GAN training using up to five times fewer depth samples, which enables embodiments of the present disclosure to explicitly "render every pixel" of the full-resolution image during training and inference without post-processing super-resolution in 2D. Together with learning high-quality surface geometry, embodiments of the present disclosure synthesize high-resolution 3D geometry and strictly view-consistent images while maintaining image quality on par with baselines relying on post-processing super resolution.
Inventor(s): Benjamin David Eckart of Oakland CA US for NVIDIA Corporation, Anthea Li of San Jose CA US for NVIDIA Corporation, Chao Liu of Santa Clara CA US for NVIDIA Corporation, Kevin Shih of Cambridge MA US for NVIDIA Corporation, Jan Kautz of Lexington MA US for NVIDIA Corporation
IPC Code(s): G06T3/4046
CPC Code(s): G06T3/4046
Abstract: Parametric distributions of data are one type of data model that can be used for various purposes, such as for computer vision tasks that may include classification, segmentation, 3D reconstruction, etc. These parametric distributions of data may be computed from a given data set, which may be unstructured and/or which may include low-dimensional data. Current solutions for learning parametric distributions of data involve explicitly learning kernel parameters. However, this explicit learning approach is not only inefficient in that it requires a high computational cost (i.e., from a large number of floating point operations per second), but it also leaves room for improvement in terms of accuracy of the resulting learned model. The present disclosure provides a neural network architecture that implicitly learns a parametric distribution of data, which can reduce the computational cost while improving accuracy when compared with prior solutions that rely on the explicit learning design.
Inventor(s): Dae Jin KIM of San Jose CA US for NVIDIA Corporation, Chun-Wei CHEN of San Jose CA US for NVIDIA Corporation, Leon WANG of Richmond Hill CA for NVIDIA Corporation, Anshul JAIN of San Francisco CA US for NVIDIA Corporation
IPC Code(s): G06T7/80, H04N17/00, H04N23/90
CPC Code(s): G06T7/80
Abstract: In various examples, one or more interior or occupant monitoring sensors may be calibrated using one or more display units (e.g., a projector, LED panel, laser robot, heads-up display) that can actively project or display a unique visual pattern and change one or more attributes of the patterns (e.g., shape, color, brightness, size, frame rate, perspective, etc.) without moving the one or more display units. The present techniques may be utilized to iteratively calibrate a sensor using a dynamic visual pattern with one or more visual attributes that vary from iteration to iteration, and/or to interleave different visual patterns in a common region of an overlapping field of view shared by multiple sensors.
Inventor(s): Sihyun Yu of Gyeonggi-do KR for NVIDIA Corporation, Weili Nie of Sunnyvale CA US for NVIDIA Corporation, De-An Huang of Cupertino CA US for NVIDIA Corporation, Boyi Li of Berkeley CA US for NVIDIA Corporation, Animashree Anandkumar of Pasadena CA US for NVIDIA Corporation
IPC Code(s): G06T11/00, G06N3/0455, G06T9/00
CPC Code(s): G06T11/00
Abstract: Systems and methods are disclosed that train a content frame-motion latent diffusion model (CMD) and use the CMD to generate requested videos. The CMD may be a two-stage framework that first compresses videos to a succinct latent space and then learns the video distribution in this latent space. For instance, the CMD may include an autoencoder and two diffusion models. In a first stage, using the autoencoder, a low-dimensional latent decomposition into a content frame and a latent motion representation is learned. In the second stage, without adding any new parameters, the content frame distribution may be fine-tuned by using a pretrained image diffusion model, which allows the CMD to leverage the rich visual knowledge in pretrained image diffusion models. In addition, a new lightweight diffusion model may be used to generate motion latent representations that are conditioned on the given content frame.
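The decomposition idea can be pictured in a drastically simplified form, using a plain average for the content frame and a truncated SVD in place of the learned autoencoder and diffusion models; this only illustrates the content/motion split, not the CMD itself.

```python
# Very loose sketch of the content-frame / motion-latent decomposition only.
import numpy as np

video = np.random.rand(16, 32, 32)                 # (frames, height, width) toy video

content_frame = video.mean(axis=0)                 # shared appearance across frames
residuals = (video - content_frame).reshape(16, -1)

# Low-dimensional per-frame motion codes via truncated SVD (stand-in for the encoder).
u, s, vt = np.linalg.svd(residuals, full_matrices=False)
k = 8
motion_latents = u[:, :k] * s[:k]                  # (frames, k) motion representation

# Rough reconstruction from content frame + motion latents.
recon = content_frame + (motion_latents @ vt[:k]).reshape(16, 32, 32)
print(np.abs(video - recon).mean())                # reconstruction error of the truncated codes
```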
Inventor(s): Karsten Julian Kreis of Vancouver CA for NVIDIA Corporation, Maria Shugrina of Toronto CA for NVIDIA Corporation, Ming-Yu Liu of San Jose CA US for NVIDIA Corporation, Or Perel of Tel Aviv IL for NVIDIA Corporation, Sanja Fidler of Toronto CA for NVIDIA Corporation, Towaki Alan Takikawa of Toronto CA for NVIDIA Corporation, Tsung-Yi Lin of Sunnyvale CA US for NVIDIA Corporation, Xiaohui Zeng of Toronto CA for NVIDIA Corporation
IPC Code(s): G06T15/06, G06T15/00
CPC Code(s): G06T15/06
Abstract: Systems and methods of the present disclosure include interactive editing for generated three-dimensional (3D) models, such as those represented by neural radiance fields (NeRFs). A 3D model may be presented to a user, in which the user may identify one or more localized regions for editing and/or modification. The localized regions may be selected, and a corresponding 3D volume for that region may be provided to one or more generative networks, along with a prompt, to generate new content for the localized regions. Each of the original NeRF and the newly generated NeRF for the new content may then be combined into a single NeRF for a combined 3D representation with the original content and the localized modifications.
Inventor(s): Dejia Xu of San Jose CA US for NVIDIA Corporation, Morteza Mardani of Santa Clara CA US for NVIDIA Corporation, Jiaming Song of San Carlos CA US for NVIDIA Corporation, Sifei Liu of Santa Clara CA US for NVIDIA Corporation, Ye Yuan of Santa Clara CA US for NVIDIA Corporation, Arash Vahdat of San Mateo CA US for NVIDIA Corporation
IPC Code(s): G06T15/20, G06V10/774, G06V10/776, G06V10/82
CPC Code(s): G06T15/20
Abstract: Virtual reality and augmented reality bring increasing demand for 3D content creation. In an effort to automate the generation of 3D content, artificial intelligence-based processes have been developed. However, these processes are limited in terms of the quality of their output, because they typically involve either a model trained on limited 3D data, thereby resulting in a model that does not generalize well to unseen objects, or a model trained on 2D data, thereby resulting in a model that suffers from poor geometry due to ignorance of 3D information. The present disclosure jointly uses both 2D and 3D data to train a machine learning model to be able to generate 3D content from a single 2D image.
Inventor(s): Ali Hatamizadeh of Los Angeles CA US for NVIDIA Corporation, Michael Ranzinger of Park City UT US for NVIDIA Corporation, Jan Kautz of Lexington MA US for NVIDIA Corporation
IPC Code(s): G06V10/82, G06V10/26, G06V10/774, G06V10/776, G06V10/94
CPC Code(s): G06V10/82
Abstract: Transformers are neural networks that learn context, and thus meaning, by tracking relationships in sequential data. The main building block of transformers is self-attention, which allows all input sequence tokens to interact with each other. This scheme effectively captures short- and long-range spatial dependencies, which enables their use with natural language processing (NLP) and computer vision tasks, but it imposes quadratic time and space complexity in terms of the input sequence length. While the training parallelism of transformers allows for competitive performance, inference is unfortunately slow and expensive due to this computational complexity. The present disclosure provides a computer vision retention model that is configured for both parallel training and recurrent inference, which can enable competitive performance during training and fast, memory-efficient inference during deployment.
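For context, the dual parallel/recurrent formulation that makes this possible can be sketched as follows; this is the generic retention mechanism popularized by retentive networks, not the specific vision model in the filing.

```python
# Sketch of retention in its two equivalent forms: a parallel form convenient
# for training and a recurrent form with a constant-size state for inference.
import numpy as np

def retention_parallel(Q, K, V, gamma):
    T = Q.shape[0]
    idx = np.arange(T)
    decay = np.where(idx[:, None] >= idx[None, :],
                     gamma ** (idx[:, None] - idx[None, :]), 0.0)   # causal decay mask
    return (Q @ K.T * decay) @ V

def retention_recurrent(Q, K, V, gamma):
    state = np.zeros((Q.shape[1], V.shape[1]))
    outputs = []
    for t in range(Q.shape[0]):
        state = gamma * state + np.outer(K[t], V[t])   # constant-size running state
        outputs.append(Q[t] @ state)
    return np.stack(outputs)

rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 6, 4))
assert np.allclose(retention_parallel(Q, K, V, 0.9),
                   retention_recurrent(Q, K, V, 0.9))   # the two forms agree
```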
Inventor(s): Mohammad Mobin of Saratoga CA US for NVIDIA Corporation, Vishnu Balan of Murphy TX US for NVIDIA Corporation, Johan Jacob Mohr of Copenhagen DK for NVIDIA Corporation, Thorkild Franck of Roskilde DK for NVIDIA Corporation
IPC Code(s): H04L25/03, H04B17/21, H04B17/391
CPC Code(s): H04L25/03057
Abstract: Disclosed are apparatuses, systems, and techniques for deploying and training machine learning models for fast and efficient equalization of signals transmitted over communication channels. In one embodiment, the techniques include processing, using first model(s), a digital representation of a signal received (RX) via a communication channel to obtain channel loss metrics representative of a difference between the RX signal and a transmitted (TX) signal. The techniques further include obtaining a first set of equalization (EQ) parameter(s), and iteratively obtaining a second set of EQ parameter(s). The techniques further include configuring, using the second set of the EQ parameters, one or more EQ circuits to equalize at least one of the RX signal, the TX signal, or a channel signal.
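A minimal sketch of iteratively fitting EQ parameters from the RX/TX difference follows, using a plain LMS-adapted FIR equalizer and a known transmitted reference sequence in place of the filing's learned models.

```python
# Sketch: adapt FIR equalizer taps from the error between the equalized RX
# signal and the known TX reference (classic LMS, not the filing's ML models).
import numpy as np

rng = np.random.default_rng(1)
tx = rng.choice([-1.0, 1.0], size=2000)                  # transmitted symbols
channel = np.array([0.1, 0.8, 0.25])                     # toy dispersive channel
rx = np.convolve(tx, channel, mode="same") + 0.01 * rng.standard_normal(tx.size)

num_taps, mu = 9, 0.01                                   # EQ parameters and step size
taps = np.zeros(num_taps)
for n in range(num_taps, tx.size):
    window = rx[n - num_taps:n][::-1]                    # most recent RX samples first
    error = tx[n - num_taps // 2] - taps @ window        # loss metric vs. TX reference
    taps += mu * error * window                          # iterative EQ-parameter update

# Apply the trained equalizer and check symbol agreement against TX.
estimates = np.array([np.sign(taps @ rx[n - num_taps:n][::-1])
                      for n in range(num_taps, tx.size)])
targets = tx[num_taps - num_taps // 2 : tx.size - num_taps // 2]
print(np.mean(estimates == targets))                     # agreement after equalization
```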
Inventor(s): William Joseph ARMSTRONG of Rochester MN US for NVIDIA Corporation, Chao-Lin CHIU of New Taipei City TW for NVIDIA Corporation, Mihir JOSHI of Santa Clara CA US for NVIDIA Corporation, Nikesh OSWAL of Pune IN for NVIDIA Corporation, Mark Alan OVERBY of Snohomish WA US for NVIDIA Corporation, Hyung Taek RYOO of Pleasanton CA US for NVIDIA Corporation
IPC Code(s): H04L9/40, H04L12/40
CPC Code(s): H04L63/0876
Abstract: In various examples, a technique for securely transmitting CAN (Controller Area Network) messages is disclosed that includes receiving, using a cryptographic engine, a message from an application to be transmitted over a CAN bus, wherein the cryptographic engine executes secure firmware and is implemented on an on-die discrete processor. The technique further includes accessing, using the secure firmware, a key from a plurality of keys associated with an authentication process from a secure memory associated with the cryptographic engine. Additionally, the technique includes computing an authentication tag using the key and the message, and transmitting the message with the authentication tag over the CAN bus to a destination address.
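A minimal sketch of the tag-and-verify flow, assuming Python's standard-library HMAC as a stand-in for whatever MAC the secure firmware actually computes; key provisioning, secure memory, and the on-die cryptographic engine are outside this sketch.

```python
# Sketch: compute a keyed authentication tag over a CAN payload and verify it
# at the destination. HMAC-SHA256 here is an assumption, not the filed scheme.
import hmac
import hashlib

SECRET_KEY = b"\x00" * 16            # placeholder; real keys live in secure memory
TAG_LENGTH = 8                       # truncated tag to fit alongside small CAN payloads

def tag_message(payload: bytes, can_id: int) -> bytes:
    data = can_id.to_bytes(4, "big") + payload
    return hmac.new(SECRET_KEY, data, hashlib.sha256).digest()[:TAG_LENGTH]

def verify_message(payload: bytes, can_id: int, tag: bytes) -> bool:
    return hmac.compare_digest(tag, tag_message(payload, can_id))

payload = bytes([0x12, 0x34, 0x56, 0x78])
frame_tag = tag_message(payload, can_id=0x1A0)
print(verify_message(payload, 0x1A0, frame_tag))    # True: tag matches
print(verify_message(payload, 0x1A1, frame_tag))    # False: wrong identifier
```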