Automated vehicles (AVs) will need frequent updates on driving conditions. Past studies envision roadside infrastructure transmitting such updates via concentrated beams of millimeter-wave radio signals. The challenges, though, are keeping those narrow beams aimed at vehicles that are moving rapidly, and forming beam patterns that deliver data to them reliably.
To help address both challenges, NIST researchers analyzed these roadside infrastructure studies and developed a method that uses "reinforcement learning," a form of artificial intelligence that rewards a system for performing as intended. The method is described in "Deep Reinforcement Learning Assisted Beam Tracking and Data Transmission for 5G V2X Networks" (V2X: vehicle-to-everything), published in IEEE Transactions on Intelligent Transportation Systems.
The method's reinforcement learning helps the roadside infrastructure sharpen its predictions of the rapidly moving AVs' locations based on their downlinks. It also helps the infrastructure form and adjust optimal beam patterns for transmitting data to the AVs.
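To give a sense of the tracking side of this idea, the sketch below (not drawn from the paper) shows how a hypothetical roadside unit could extrapolate a vehicle's next position from two recently reported positions, steer a beam toward the prediction, and score the result with a toy beam-gain model. The function names and the Gaussian beam pattern are illustrative assumptions.

```python
# Illustrative only: a hypothetical roadside unit predicts a vehicle's next
# position from its last two reported positions, then steers a narrow beam
# toward the prediction and evaluates how well the beam stays on target.
import numpy as np

def predict_next_position(reported_positions, dt=0.1):
    """Linearly extrapolate the next (x, y) position from the last two reports."""
    (x1, y1), (x2, y2) = reported_positions[-2], reported_positions[-1]
    vx, vy = (x2 - x1) / dt, (y2 - y1) / dt          # estimated velocity
    return x2 + vx * dt, y2 + vy * dt                # predicted next position

def steering_angle(rsu_xy, target_xy):
    """Angle (radians) from the roadside unit toward a target position."""
    dx, dy = target_xy[0] - rsu_xy[0], target_xy[1] - rsu_xy[1]
    return np.arctan2(dy, dx)

def beam_gain(pointing_angle, true_angle, beamwidth=np.radians(5)):
    """Toy Gaussian beam pattern: gain falls off with angular pointing error."""
    err = pointing_angle - true_angle
    return np.exp(-0.5 * (err / beamwidth) ** 2)

# Example: vehicle moving along a road past a roadside unit at the origin.
rsu = (0.0, 0.0)
reports = [(30.0, 10.0), (33.0, 10.0)]               # positions 0.1 s apart
predicted = predict_next_position(reports)
beam = steering_angle(rsu, predicted)
actual = steering_angle(rsu, (36.1, 10.0))           # where the vehicle really is
print(f"beam gain toward vehicle: {beam_gain(beam, actual):.3f}")
```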
To build the method, NIST researchers designed a reinforcement learning framework that maps the parameters influencing vehicle-to-infrastructure communication performance into states, actions, and rewards. They also found that beam-tracking accuracy and beam optimization could be improved by refining this framework.
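The sketch below illustrates that kind of state/action/reward mapping in miniature, using simple tabular Q-learning on a toy model rather than the deep reinforcement learning and channel models of the paper. The state, action, and reward definitions here (beam offset bins, left/hold/right steering, and a data-rate reward) are simplified assumptions for illustration only.

```python
# Illustrative state/action/reward mapping for beam tracking (not the paper's
# implementation): state = beam offset from the vehicle in discrete bins,
# action = steer left / hold / steer right, reward = achievable data rate.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 11                       # beam offset bins; the center bin means aligned
ACTIONS = [-1, 0, +1]               # steer beam left, hold, or steer right by one bin
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
Q = np.zeros((N_STATES, len(ACTIONS)))

def step(state, action):
    """Apply a beam adjustment; the vehicle also drifts by up to one bin at random."""
    drift = rng.integers(-1, 2)
    next_state = int(np.clip(state + ACTIONS[action] + drift, 0, N_STATES - 1))
    err = abs(next_state - N_STATES // 2)            # distance from perfect alignment
    snr = 10.0 * np.exp(-err)                        # toy SNR falls off with misalignment
    reward = np.log2(1.0 + snr)                      # reward: achievable data rate
    return next_state, reward

state = rng.integers(N_STATES)
for t in range(5000):
    a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(Q[state].argmax())
    nxt, r = step(state, a)
    Q[state, a] += ALPHA * (r + GAMMA * Q[nxt].max() - Q[state, a])  # Q-learning update
    state = nxt

print("learned greedy action per state:", [ACTIONS[int(a)] for a in Q.argmax(axis=1)])
```

In this toy version, the learned policy steers the beam back toward the aligned center bin from either side; the paper's deep reinforcement learning plays an analogous role over far richer states and beam patterns.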
NIST researchers assessed the method using simulations. The results showed that it performs well in tracking accuracy, data rate, and time efficiency, and that the chosen framework outperformed the other frameworks considered.