17805698. RESIDUAL NEURAL NETWORK MODELS FOR DIGITAL PRE-DISTORTION OF RADIO FREQUENCY POWER AMPLIFIERS simplified abstract (SAMSUNG ELECTRONICS CO., LTD.)

From WikiPatents

RESIDUAL NEURAL NETWORK MODELS FOR DIGITAL PRE-DISTORTION OF RADIO FREQUENCY POWER AMPLIFIERS

Organization Name

SAMSUNG ELECTRONICS CO., LTD.

Inventor(s)

Ziming He of Cambridge (GB)

Fei Tong of Cambridge (GB)

RESIDUAL NEURAL NETWORK MODELS FOR DIGITAL PRE-DISTORTION OF RADIO FREQUENCY POWER AMPLIFIERS - A simplified explanation of the abstract

This abstract first appeared for US patent application 17805698, titled 'RESIDUAL NEURAL NETWORK MODELS FOR DIGITAL PRE-DISTORTION OF RADIO FREQUENCY POWER AMPLIFIERS'.

Simplified Explanation

The patent application describes techniques that use bidirectional recurrent neural networks (BiRNNs) to improve digital pre-distortion (DPD) of radio frequency (RF) power amplifiers (PAs). The techniques combine residual learning with long short-term memory (LSTM) projection layers to reduce computational complexity and memory requirements.

  • Bidirectional recurrent neural networks (BiRNNs) improve DPD linearization for RF power amplifiers.
  • Residual learning and LSTM projection layers reduce computational complexity and memory requirements.
  • Applying residual learning within the BiLSTM and building the DPD structure around LSTM projection provide advantages over existing techniques.
  • Training and pre-distortion become less complex, fewer DPD neural network coefficients must be stored, and linearization performance matches or exceeds that of other LSTM models.
  • Training converges faster than with other LSTM models.
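The two structural ideas above — a residual (identity) path around the BiLSTM, and an LSTM projection that shrinks the recurrent state — can be sketched in PyTorch, whose `nn.LSTM` supports a `proj_size` argument. This is a minimal illustrative sketch, not the patent's actual architecture: the layer sizes, the 2-channel I/Q input representation, and all names are assumptions introduced here.

```python
import torch
import torch.nn as nn

class ResidualBiLSTMDPD(nn.Module):
    """Illustrative residual BiLSTM pre-distorter with LSTM projection."""

    def __init__(self, hidden_size=32, proj_size=8):
        super().__init__()
        # Complex baseband samples are carried as 2 real features (I, Q).
        # proj_size < hidden_size shrinks the recurrent state, and with it
        # the number of DPD coefficients that must be stored.
        self.bilstm = nn.LSTM(input_size=2, hidden_size=hidden_size,
                              proj_size=proj_size, bidirectional=True,
                              batch_first=True)
        self.out = nn.Linear(2 * proj_size, 2)  # 2x: forward + backward

    def forward(self, x):  # x: (batch, time, 2) I/Q sequence
        h, _ = self.bilstm(x)
        # Residual learning: the network only models the nonlinear
        # correction, which is added back onto the pass-through path.
        return x + self.out(h)

model = ResidualBiLSTMDPD()
iq = torch.randn(4, 64, 2)   # 4 sequences of 64 I/Q samples
y = model(iq)                # pre-distorted output, same shape as input
```

With the residual connection, the recurrent layers start from an identity mapping and only have to learn the PA's deviation from linear behavior, which is one intuition for the faster training convergence claimed above.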

Potential Applications

The techniques described in the patent application can improve DPD performance for RF power amplifiers across a range of wireless communication systems, including cellular networks, satellite communication systems, and wireless local area networks (WLANs).

Problems Solved

The techniques address the complexity and memory costs of neural-network-based DPD for RF power amplifiers. Residual learning and LSTM projection layers reduce the complexity of both training and pre-distortion, and significantly less memory is needed to store the DPD neural network coefficients. The techniques also converge faster in training than other LSTM models.
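The memory saving from projection can be made concrete by counting parameters: an LSTM with `proj_size` replaces the (4·hidden × hidden) recurrent weight with a much smaller (4·hidden × proj) matrix plus a (proj × hidden) projection. A small check, with illustrative sizes not taken from the patent:

```python
import torch.nn as nn

def n_params(m):
    """Total number of trainable coefficients in a module."""
    return sum(p.numel() for p in m.parameters())

# Illustrative sizes: 2 real input features (I/Q), hidden size 32.
plain = nn.LSTM(input_size=2, hidden_size=32, bidirectional=True)
projected = nn.LSTM(input_size=2, hidden_size=32, proj_size=8,
                    bidirectional=True)

print(n_params(plain))      # coefficients without projection
print(n_params(projected))  # coefficients with an 8-unit projection
```

For these sizes the projected BiLSTM stores roughly 60% fewer coefficients than the plain one, which illustrates (but does not reproduce) the memory reduction the application claims.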

Benefits

The described techniques offer several benefits over existing techniques for digital pre-distortion (DPD) in radio frequency power amplifiers (PAs):

  • Reduced complexity in training and pre-distortion.
  • Significantly less memory required to store DPD neural network coefficients.
  • Similar or better linearization performance compared to other LSTM models.
  • Faster training convergence speed compared to other LSTM models.


Original Abstract Submitted

One or more aspects of the techniques and models described herein provide for bidirectional recurrent neural network (BiRNN)-based digital pre-distortion techniques for radio frequency (RF) power amplifiers (PAs). As an example, a digital pre-distorter (DPD) system may implement residual learning and long short-term memory (LSTM) projection layer features to reduce computational complexity and memory requirements. Implementing the described unconventional techniques of applying residual learning in RNN (e.g., in BiLSTM), using LSTM projection to develop a DPD structure, or both, may provide several advantages over preexisting techniques. For instance, the complexity in training and pre-distortion may be reduced and significantly less memory may be required to store the DPD neural network coefficients (e.g., while achieving similar or better linearization performance compared to other LSTM models). Further, faster training convergence speed may be achieved (e.g., compared to other LSTM models).