SDN Flow Entry Management Using Reinforcement Learning



Publisher
Association for Computing Machinery
Copyright
Copyright © 2018 ACM
ISSN
1556-4665
eISSN
1556-4703
DOI
10.1145/3281032

Abstract

Modern information technology services largely depend on cloud infrastructures, which are built on top of Datacenter Networks (DCNs) constructed with high-speed links, fast switching gear, and redundancy to offer flexibility and resiliency. In this environment, network traffic comprises long-lived (elephant) and short-lived (mice) flows with partition/aggregate traffic patterns. Although SDN-based approaches can efficiently allocate networking resources for such flows, the overhead due to network reconfiguration can be significant. Given the limited capacity of the Ternary Content-Addressable Memory (TCAM) deployed in an OpenFlow-enabled switch, it is crucial to determine which forwarding rules should remain in the flow table and which should be handled by the SDN controller upon a table-miss at the switch, so that the retained flow entries minimize the long-term control-plane overhead between the controller and the switches. To achieve this goal, we propose a machine learning technique that utilizes two variations of Reinforcement Learning (RL) algorithms: the first is a traditional RL-based algorithm, while the other is based on deep reinforcement learning. Emulation results using the RL algorithm show around a 60% reduction in long-term control-plane overhead and around a 14% improvement in the table-hit ratio compared to the Multiple Bloom Filters (MBF) method, given a fixed flow table size of 4KB.
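
For intuition only, below is a minimal Python sketch of how an RL-driven flow-entry eviction policy of this kind might look. It assumes a tabular Q-learning formulation with made-up state features (bucketed hit counts and entry age), a keep/evict action space, and a reward that penalizes table-misses; none of these details come from the paper itself, whose exact state, action, and reward design is not given in the abstract.

import random
from collections import defaultdict

# Illustrative sketch only: state = coarse (hit-count, age) buckets of a
# flow entry, actions = keep (1) or evict (0), and the reward penalizes
# table-misses (extra controller round-trips). All names are hypothetical.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def state_of(entry):
    # Bucket recent hit count and entry age into a small discrete state space.
    return (min(entry["hits"] // 10, 5), min(entry["age"] // 100, 5))

class EvictionAgent:
    # Tabular Q-learning over (state, action) pairs; on a table-miss it
    # chooses which resident entry to evict to make room for the new flow.
    def __init__(self):
        self.q = defaultdict(float)          # Q[(state, action)] -> value

    def keep_value(self, entry):
        # How much more valuable keeping this entry is than evicting it.
        s = state_of(entry)
        return self.q[(s, 1)] - self.q[(s, 0)]

    def choose_victim(self, table):
        # Epsilon-greedy: mostly evict the least-worth-keeping entry,
        # occasionally a random one to keep exploring.
        if random.random() < EPSILON:
            return random.choice(table)
        return min(table, key=self.keep_value)

    def update(self, entry, action, reward):
        # One-step Q-learning backup; the next state is approximated by the
        # entry's current state (a simplification for this sketch).
        s = state_of(entry)
        target = reward + GAMMA * max(self.q[(s, a)] for a in (0, 1))
        self.q[(s, action)] += ALPHA * (target - self.q[(s, action)])

# Usage: on each table-miss, evict one entry; penalize the agent later if a
# previously evicted flow returns and triggers another controller round-trip.
agent = EvictionAgent()
table = [{"hits": random.randint(0, 60), "age": random.randint(0, 600)} for _ in range(8)]
victim = agent.choose_victim(table)
agent.update(victim, action=0, reward=-1.0)

The deep-RL variant the abstract mentions would presumably replace the Q-table with a neural-network value approximator over richer flow features, which scales to state spaces too large to enumerate.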

Journal

ACM Transactions on Autonomous and Adaptive Systems (TAAS), Association for Computing Machinery

Published: Nov 26, 2018

Keywords: Flow entry
