Optimality of Myopic Sensing in Multichannel Opportunistic Access
Top 1% of 2009 papers
Abstract
This paper considers opportunistic communication over multiple channels where the state ("good" or "bad") of each channel evolves as an independent and identically distributed (i.i.d.) Markov process. A user with limited channel sensing capability chooses one channel to sense in each time slot and decides, based on the sensing result, whether to use that channel. A reward is obtained whenever the user senses and accesses a "good" channel. The objective is to design a channel selection policy that maximizes the expected total (discounted or average) reward accrued over a finite or infinite horizon. This problem can be cast as a partially observed Markov decision process (POMDP) or a restless multiarmed bandit process, for which optimal solutions are often intractable. This paper shows that a myopic policy that maximizes the immediate one-step reward is optimal when the state transitions are positively correlated over time. When the state transitions are negatively correlated, we show that the same policy is optimal when the number of channels is limited to two or three, and we present a counterexample for the case of four channels. This result finds applications in opportunistic transmission scheduling in a fading environment, cognitive radio networks for spectrum overlay, and resource-constrained jamming and antijamming.
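The myopic policy described above can be sketched in a short simulation. The following is a minimal illustration, not the paper's own code: each channel is a two-state Markov chain with assumed transition probabilities `p01` (bad to good) and `p11` (good to good); the user tracks a belief (probability that each channel is currently "good"), senses the channel with the highest belief, and propagates beliefs through the Markov transitions. Positive correlation corresponds to `p11 >= p01`, the regime in which the paper proves the myopic policy optimal.

```python
import random

def myopic_sensing(num_channels=4, p01=0.2, p11=0.8, horizon=1000, seed=0):
    """Simulate myopic sensing over i.i.d. two-state Markov channels.

    p01 = Pr(bad -> good), p11 = Pr(good -> good). These parameter
    names and values are illustrative assumptions, not from the paper.
    Returns the total reward (number of slots a "good" channel was used).
    """
    rng = random.Random(seed)
    stationary = p01 / (p01 + 1.0 - p11)          # long-run Pr(good)
    states = [rng.random() < stationary for _ in range(num_channels)]
    beliefs = [stationary] * num_channels          # Pr(channel good now)
    reward = 0
    for _ in range(horizon):
        # Myopic rule: sense the channel with the highest current belief.
        a = max(range(num_channels), key=lambda i: beliefs[i])
        if states[a]:
            reward += 1                            # accessed a "good" channel
        # Belief update: the sensed channel's state is known exactly,
        # then every belief propagates one step through the Markov chain.
        for i in range(num_channels):
            w = (1.0 if states[a] else 0.0) if i == a else beliefs[i]
            beliefs[i] = w * p11 + (1.0 - w) * p01
        # All channels evolve independently of the user's actions.
        states = [rng.random() < (p11 if s else p01) for s in states]
    return reward

print(myopic_sensing())
```

Under positive correlation, the myopic policy exploits channel persistence and earns noticeably more than the stationary probability of a good channel would suggest for blind selection.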