17969591. REINFORCEMENT LEARNING-BASED ENHANCED DISTRIBUTED CHANNEL ACCESS simplified abstract (QUALCOMM Incorporated)


REINFORCEMENT LEARNING-BASED ENHANCED DISTRIBUTED CHANNEL ACCESS

Organization Name

QUALCOMM Incorporated

Inventor(s)

Gaurang Naik of San Diego CA (US)

George Cherian of San Diego CA (US)

Sai Yiu Duncan Ho of San Diego CA (US)

Yanjun Sun of San Diego CA (US)

Abhishek Pramod Patil of San Diego CA (US)

Alfred Asterjadhi of San Diego CA (US)

Abdel Karim Ajami of Lakeside CA (US)

REINFORCEMENT LEARNING-BASED ENHANCED DISTRIBUTED CHANNEL ACCESS - A simplified explanation of the abstract

This abstract first appeared for US patent application 17969591 titled 'REINFORCEMENT LEARNING-BASED ENHANCED DISTRIBUTED CHANNEL ACCESS'.

Simplified Explanation

The patent application describes a method for using a reinforcement learning (RL) model to determine parameters associated with a channel access procedure in wireless communication devices; a rough code sketch of this flow follows the list below.

  • A wireless communication device receives information associated with the RL model and transmits a protocol data unit (PDU) during a time slot that is based on the model's output.
  • The device uses the RL model to perform a distributed channel access procedure and transmits the PDU, during the designated slot, in accordance with that procedure.
  • The information associated with the RL model may indicate or configure the model, or may indicate whether the device is allowed to retrain it.
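A minimal sketch of the idea, assuming an epsilon-greedy learner and a simple listen-before-talk medium: the policy picks a backoff slot, the device transmits the PDU if the medium is idle in that slot, and the outcome reinforces the choice. All names (choose_slot, attempt_transmission, NUM_SLOTS, etc.) are illustrative assumptions, not terminology from the patent.

```python
import random

# Hypothetical sketch: an RL policy chooses a backoff slot for a
# distributed (listen-before-talk) channel access procedure.

NUM_SLOTS = 16          # candidate backoff slots the policy can choose from
EPSILON = 0.1           # exploration rate for epsilon-greedy action selection
ALPHA = 0.5             # learning rate for the value update

# q_values[s] = estimated value (e.g., success likelihood) of deferring s slots
q_values = [0.0] * NUM_SLOTS


def choose_slot() -> int:
    """Pick a backoff slot from the RL model's output (epsilon-greedy)."""
    if random.random() < EPSILON:
        return random.randrange(NUM_SLOTS)
    return max(range(NUM_SLOTS), key=lambda s: q_values[s])


def attempt_transmission(channel_idle_in_slot) -> bool:
    """Run one distributed channel access attempt.

    channel_idle_in_slot(slot) -> bool stands in for carrier sensing; the PDU
    is treated as successfully transmitted only if the medium is idle in the
    chosen slot.
    """
    slot = choose_slot()
    success = channel_idle_in_slot(slot)
    # Reinforce the chosen slot based on the outcome (1 = success, 0 = collision)
    q_values[slot] += ALPHA * ((1.0 if success else 0.0) - q_values[slot])
    return success


# Toy usage: early slots are busy more often, so the policy should learn
# to prefer later slots.
if __name__ == "__main__":
    for _ in range(1000):
        attempt_transmission(lambda s: random.random() > 0.3 / (s + 1))
    print("learned slot preference:", max(range(NUM_SLOTS), key=lambda s: q_values[s]))
```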

Potential Applications

This technology can be applied in various wireless communication systems where efficient channel access procedures are required, such as in IoT devices, smart grids, and industrial automation.

Problems Solved

1. Improved efficiency in channel access procedures in wireless communication systems.
2. Enhanced performance and reliability of communication networks by using reinforcement learning models.

Benefits

1. Increased throughput and reduced latency in wireless communication systems.
2. Adaptive and self-optimizing channel access procedures based on real-time data.
3. Potential for autonomous operation and decision-making in wireless networks.

Potential Commercial Applications

  • Optimizing spectrum utilization in 5G networks
  • Enhancing connectivity in smart city infrastructure
  • Improving network efficiency in autonomous vehicles

Possible Prior Art

One possible prior art could be the use of machine learning algorithms for optimizing channel access in wireless communication systems. However, the specific application of reinforcement learning models for this purpose may be novel and inventive.

Unanswered Questions

How does this technology compare to traditional methods of channel access in wireless communication systems?

This article does not provide a direct comparison between the proposed reinforcement learning-based approach and traditional methods of channel access. It would be beneficial to understand the performance metrics and efficiency gains of the new technology compared to existing solutions.

What are the potential limitations or challenges of implementing a reinforcement learning model for channel access procedures in wireless communication devices?

While the article highlights the benefits and applications of the technology, it does not address any potential limitations or challenges that may arise during implementation. It would be important to consider factors such as computational complexity, training data requirements, and scalability when deploying RL models in real-world communication systems.


Original Abstract Submitted

This disclosure provides methods, components, devices and systems for use of a reinforcement learning (RL) model to obtain one or more parameters associated with a channel access procedure. Some aspects more specifically relate to mechanisms according to which a wireless communication device may receive information associated with the RL model and transmit a protocol data unit (PDU) during a slot that is based on an output of the model. The wireless communication device may use the RL model to perform a distributed channel access procedure in accordance with the information and may further transmit the PDU, during the slot that is based on the output of the RL model, in accordance with the distributed channel access procedure. The information associated with the RL model may indicate or configure the RL model or may indicate whether the wireless communication device is allowed to retrain the RL model.
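As a rough illustration of the signaling described above, the sketch below models the "information associated with the RL model" as a simple configuration object carrying a model identifier, optional parameters, and a flag for whether local retraining is permitted. The field names (model_id, parameters, retraining_allowed) are assumptions for illustration, not the patent's actual signaling format.

```python
from dataclasses import dataclass


@dataclass
class RLModelInfo:
    """Hypothetical container for the information associated with the RL model."""
    model_id: int            # which RL model the device should use for channel access
    parameters: dict         # optional model configuration (e.g., hyperparameters)
    retraining_allowed: bool # whether this device may retrain the model locally


def apply_model_info(info: RLModelInfo, local_model: dict) -> dict:
    """Configure the local RL model from the received information."""
    local_model.update(info.parameters)
    local_model["id"] = info.model_id
    local_model["trainable"] = info.retraining_allowed
    return local_model


# Example: the device receives configuration and applies it before running
# the distributed channel access procedure.
info = RLModelInfo(model_id=1, parameters={"epsilon": 0.1}, retraining_allowed=False)
model = apply_model_info(info, local_model={})
print(model)
```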