18528292. TEMPORALLY AMORTIZED SUPERSAMPLING USING A KERNEL SPLATTING NETWORK simplified abstract (Intel Corporation)

Organization Name

Intel Corporation

Inventor(s)

Dmitry Kozlov of Nizhny Novgorod (RU)

Aleksei Chernigin of Nizhny Novgorod (RU)

Dmitry Tarakanov of Nizhny Novgorod (RU)

TEMPORALLY AMORTIZED SUPERSAMPLING USING A KERNEL SPLATTING NETWORK - A simplified explanation of the abstract

This abstract first appeared for US patent application 18528292, titled 'TEMPORALLY AMORTIZED SUPERSAMPLING USING A KERNEL SPLATTING NETWORK'.

Simplified Explanation

The graphics processor described in the patent application uses a mixed precision convolutional neural network to perform supersampling anti-aliasing. The processor receives a set of data comprising previous frame data, current frame data, jitter offset data, and velocity data; pre-processes this data; passes it through a feature extraction network; and generates an anti-aliased output frame from the processed result (a code sketch follows the list below).

  • Processing resources configured for supersampling anti-aliasing operation
  • Utilizes a mixed precision convolutional neural network
  • Receives and preprocesses data including previous frame data, current frame data, jitter offset data, and velocity data
  • Passes data through a feature extraction network
  • Generates anti-aliased output frame based on processed data
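As a concrete illustration, here is a minimal PyTorch-style sketch of that data flow. The module names, channel counts, and tensor layout (for example, broadcasting the per-frame jitter offset into per-pixel planes) are assumptions made for readability, not details taken from the patent.

```python
import torch
import torch.nn as nn

class SupersamplingPipeline(nn.Module):
    """Hypothetical stand-in for the patented pipeline, not its actual design."""

    def __init__(self, feature_channels: int = 32):
        super().__init__()
        # Input block: fuses the stacked inputs into a feature tensor.
        # Channels: 3 (previous RGB) + 3 (current RGB) + 2 (jitter) + 2 (velocity).
        self.input_block = nn.Conv2d(10, feature_channels, 3, padding=1)
        # Feature extraction network (placeholder for the encoder/decoder stages).
        self.features = nn.Sequential(
            nn.Conv2d(feature_channels, feature_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feature_channels, feature_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Output block: combines the current frame with the extracted features.
        self.output_block = nn.Conv2d(feature_channels + 3, 3, 3, padding=1)

    def forward(self, prev_frame, curr_frame, jitter, velocity):
        # Pre-processing: broadcast the per-frame jitter offset into per-pixel
        # planes and stack all inputs along the channel dimension.
        n, _, h, w = curr_frame.shape
        jitter_planes = jitter.view(n, 2, 1, 1).expand(n, 2, h, w)
        x = torch.cat([prev_frame, curr_frame, jitter_planes, velocity], dim=1)
        feats = self.features(self.input_block(x))
        # The anti-aliased frame is generated from the current frame plus features.
        return self.output_block(torch.cat([feats, curr_frame], dim=1))
```

In this sketch the velocity data is simply concatenated as extra channels; an actual temporal supersampler would more plausibly use it to warp the previous frame into alignment with the current one before fusion.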

Potential Applications

The technology described in this patent application could be applied in the development of advanced graphics processors for gaming consoles, virtual reality systems, and high-performance computing applications.

Problems Solved

This technology addresses the issue of aliasing in computer graphics, which can result in jagged edges and visual artifacts in rendered images. By utilizing a mixed precision convolutional neural network for anti-aliasing, the processor is able to produce smoother, more realistic graphics.

Benefits

The use of a mixed precision convolutional neural network for anti-aliasing can improve the visual quality of rendered images, leading to a more immersive and realistic user experience. Mixed precision also lets much of the network run in lower-precision arithmetic, which reduces compute and memory-bandwidth cost and so improves graphics processor performance.
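The mixed precision aspect is straightforward to sketch: under an autocast context, most convolutions execute in FP16, roughly halving memory traffic, while numerically sensitive operations are kept in FP32. The snippet below reuses the hypothetical SupersamplingPipeline from the earlier sketch and assumes a CUDA device; it illustrates the general idea, not Intel's implementation.

```python
import torch

# Reuses the hypothetical SupersamplingPipeline from the sketch above;
# assumes a CUDA-capable GPU is available.
model = SupersamplingPipeline().cuda().eval()

prev_frame = torch.rand(1, 3, 1080, 1920, device="cuda")
curr_frame = torch.rand(1, 3, 1080, 1920, device="cuda")
jitter = torch.rand(1, 2, device="cuda")
velocity = torch.rand(1, 2, 1080, 1920, device="cuda")

with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    # Under autocast, convolutions run in FP16 (halving memory traffic),
    # while precision-sensitive ops are kept in FP32 automatically.
    output = model(prev_frame, curr_frame, jitter, velocity)
```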

Potential Commercial Applications

The technology described in this patent application has potential commercial applications in the gaming industry, virtual reality development, and high-performance computing markets. Companies developing graphics processors and rendering software could benefit from implementing this advanced anti-aliasing technology.

Possible Prior Art

One possible prior art in the field of graphics processing is the use of traditional anti-aliasing techniques such as multisampling (MSAA) and supersampling (SSAA). These methods reduce aliasing artifacts in rendered images, but applying a mixed precision convolutional neural network to anti-aliasing represents a novel approach to improving graphics quality.


Original Abstract Submitted

One embodiment provides a graphics processor comprising a set of processing resources configured to perform a supersampling anti-aliasing operation via a mixed precision convolutional neural network. The set of processing resources include circuitry configured to receive, at an input block of a neural network model, a set of data including previous frame data, current frame data, jitter offset data, and velocity data, pre-process the set of data to generate pre-processed data, provide pre-processed data to a feature extraction network of the neural network model and an output block of the neural network model, process the first pre-processed data at the feature extraction network via one or more encoder stages and one or more decoder stages, output tensor data from the feature extraction network to the output block, and generate an anti-aliased output frame via the output block based on the current frame data and the tensor data output from the feature extraction network.
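Read literally, the abstract routes the pre-processed data to two places: the feature extraction network (encoder stages followed by decoder stages) and the output block, which then combines the current frame data with the tensor data from feature extraction. Below is a minimal sketch of that routing, with a single encoder/decoder stage and placeholder channel widths standing in for the abstract's 'one or more' stages.

```python
import torch
import torch.nn as nn

C = 32  # assumed feature width

class FeatureExtraction(nn.Module):
    # One encoder stage and one decoder stage stand in for the abstract's
    # "one or more" encoder and decoder stages.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(C, 2 * C, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * C, C, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

features = FeatureExtraction()
# The output block receives the pre-processed data, the tensor data from the
# feature extraction network, and the current frame (C + C + 3 channels in).
output_block = nn.Conv2d(2 * C + 3, 3, 3, padding=1)

preprocessed = torch.rand(1, C, 128, 128)  # stand-in for the input block's output
curr_frame = torch.rand(1, 3, 128, 128)

tensor_data = features(preprocessed)       # encoder -> decoder -> tensor data
anti_aliased = output_block(
    torch.cat([preprocessed, tensor_data, curr_frame], dim=1))
```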