Multi-Agent Reinforcement Learning for Energy Harvesting Two-Hop Communications with Full Cooperation

Andrea Ortiz, Hussein Al-Shatri, T. Weber, A. Klein

Published 2017 in arXiv.org

ABSTRACT

We focus on energy harvesting (EH) two-hop communications since they are the essential building blocks of more complicated multi-hop networks. The scenario consists of three nodes, where an EH transmitter wants to send data to a receiver through an EH relay. The harvested energy is used exclusively for data transmission and we address the problem of how to efficiently use it. As in practical scenarios, we assume only causal knowledge at the EH nodes, i.e., in each time interval, the transmitter and the relay know their own current and past amounts of incoming energy, battery levels, data buffer levels and channel coefficients for their own transmit channels. Our goal is to find transmission policies which aim at maximizing the throughput considering that the EH nodes fully cooperate with each other to exchange their causal knowledge during a signaling phase. We model the problem as a Markov game and propose a multi-agent reinforcement learning algorithm to find the transmission policies. Furthermore, we show the trade-off between the achievable throughput and the signaling required, and provide convergence guarantees for the proposed algorithm. Results show that even when the signaling overhead is taken into account, the proposed algorithm outperforms other approaches that do not consider cooperation among the nodes.
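The abstract describes modeling the two-hop scenario as a Markov game in which the EH transmitter and EH relay each learn a transmission policy, with a shared throughput objective under full cooperation. The paper's actual algorithm, state space, and signaling phase are not reproduced here; the following is a minimal illustrative sketch of independent Q-learning agents with a common reward, using an assumed toy environment (discretized battery levels, i.i.d. unit energy arrivals, throughput limited by the weaker hop). All names and parameters below are assumptions for illustration only.

```python
import random

random.seed(0)

LEVELS = 4            # assumed discretized battery levels per node
ACTIONS = [0, 1, 2]   # assumed transmit-energy choices (battery units per interval)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # illustrative learning parameters

def make_q():
    # Q[state][action]; here a node's state is just its own battery level
    return [[0.0] * len(ACTIONS) for _ in range(LEVELS)]

q_tx, q_relay = make_q(), make_q()

def choose(q, s):
    # epsilon-greedy action selection
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: q[s][a])

def step(battery, action):
    # spend at most the stored energy, then harvest a toy i.i.d. arrival
    spent = min(ACTIONS[action], battery)
    harvested = random.choice([0, 1])
    new_battery = min(LEVELS - 1, battery - spent + harvested)
    return spent, new_battery

b_tx, b_relay = LEVELS - 1, LEVELS - 1
total = 0.0
for _ in range(5000):
    a_tx, a_re = choose(q_tx, b_tx), choose(q_relay, b_relay)
    e_tx, nb_tx = step(b_tx, a_tx)
    e_re, nb_re = step(b_relay, a_re)
    # Cooperative reward: end-to-end throughput is limited by the weaker hop
    # and is shared by both agents (full cooperation).
    r = min(e_tx, e_re)
    total += r
    for q, s, a, ns in ((q_tx, b_tx, a_tx, nb_tx),
                        (q_relay, b_relay, a_re, nb_re)):
        q[s][a] += ALPHA * (r + GAMMA * max(q[ns]) - q[s][a])
    b_tx, b_relay = nb_tx, nb_re
```

The shared reward is what makes the game fully cooperative: both agents update toward the same end-to-end throughput signal, rather than a per-hop objective.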

PUBLICATION RECORD

  • Publication year

    2017

  • Venue

    arXiv.org

  • Publication date

    2017-02-08

  • Fields of study

    Mathematics, Computer Science, Engineering


  • Source metadata

    Semantic Scholar


