|| Device | SoftWLC |
|| DocMainTitle | General information |
|| DocTitle3 | Wi-Fi network controller |
|| fwversion | 1.26 |


General information

SoftWLC is a software Wi-Fi controller comprising a set of modules that together provide a comprehensive solution for building centralized Wi-Fi networks with Portal and Enterprise authorization. Depending on project requirements, individual modules can be included in or excluded from the system.

Key features of SoftWLC:

Main modules of SoftWLC

Typical connection diagrams

The minimal diagram



In the minimum configuration, the solution is deployed on two servers, each hosting all modules, with Active-Active redundancy. The SoftWLC servers are included in the operator's data transmission network, and L3 connectivity is organized between them and the access points; this is enough to start using the product. Management interfaces on the access points are configured to interact with SoftWLC in a separate management VLAN, while client traffic travels from the APs in client VLANs (usually a separate VLAN per SSID) to the operator's network and then to the Internet.
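
For illustration, the sketch below models this VLAN separation in Python. The VLAN IDs, SSID names, and the validate_vlan_plan helper are assumptions made for the example, not values or tooling shipped with SoftWLC.

```python
# A minimal sketch of the VLAN separation described above: one management
# VLAN for AP <-> SoftWLC traffic, plus a dedicated client VLAN per SSID.
# All IDs and names here are illustrative assumptions, not SoftWLC defaults.

MANAGEMENT_VLAN = 100                      # hypothetical management VLAN
SSID_TO_VLAN = {                           # hypothetical per-SSID client VLANs
    "corp-wifi": 200,
    "guest-wifi": 201,
    "iot-wifi": 202,
}

def validate_vlan_plan(mgmt_vlan: int, ssid_to_vlan: dict[str, int]) -> None:
    """Reject plans where a client SSID collides with the management VLAN
    or two SSIDs share one VLAN (the 'separate VLAN per SSID' rule above)."""
    client_vlans = list(ssid_to_vlan.values())
    if mgmt_vlan in client_vlans:
        raise ValueError("client VLAN collides with the management VLAN")
    if len(client_vlans) != len(set(client_vlans)):
        raise ValueError("two SSIDs share the same client VLAN")

validate_vlan_plan(MANAGEMENT_VLAN, SSID_TO_VLAN)
print("VLAN plan is consistent")
```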

Global distributed diagram for large carriers


This diagram is typical for large distributed networks spanning several cities or regions. SoftWLC is deployed on several servers, depending on the planned load. In the maximum configuration, the solution is deployed on 10 servers: EMS, the WEB portal, and APB are installed on the first pair of servers; the Database on the second pair; RADIUS on the third; and DHCP on the fourth. It is recommended to deploy the Admin Panel on a separate redundant front-end server with hardened security settings. All servers are connected to a stacked switch pair, which in turn is connected to a redundant router pair in the operator's network. If the customer wishes to terminate subscriber sessions on their premises, one or more ESR-1000 routers can be relocated to the customer's network. Likewise, if the operator has a geographically distributed network united by trunk lines, ESR-1000 routers can be relocated to the network's branches.
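
To make the layout concrete, here is a small Python sketch of the maximum 10-server configuration described above; the PAIRS structure and pair names are illustrative assumptions, while the module placement follows the paragraph.

```python
# Module placement for the maximum configuration described above: four
# redundant server pairs plus a redundant Admin Panel front end = 10 servers.
# The dictionary layout itself is an illustrative sketch.

PAIRS = {
    "pair-1": ["EMS", "WEB portal", "APB"],
    "pair-2": ["Database"],
    "pair-3": ["RADIUS"],
    "pair-4": ["DHCP"],
    "front-end-pair": ["Admin Panel"],  # separate, security-hardened pair
}

total_servers = 2 * len(PAIRS)  # each pair is two redundant servers
print(f"{total_servers} servers host: "
      + "; ".join(f"{p}: {', '.join(m)}" for p, m in PAIRS.items()))
```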

Diagram using ESR-1000 service routers

ESR-1000 service routers are used in the network for several purposes:

Access points can be connected either to the operator's own access network or to the customer's access network. When switched on, the access points build Soft GRE tunnels to the ESR-1000 service routers in the carrier's network, through which traffic is then routed to SoftWLC or to the Internet.

Soft GRE tunnels are established between the ESR-1000 and the access points across the operator's L3 infrastructure. Two tunnels are built from each access point: a Management tunnel carrying management traffic and a Data tunnel carrying subscriber traffic.

Inside the Management tunnel, the access point's management traffic is carried in a separate management subnet; thanks to the GRE encapsulation, this subnet is invisible to the operator's L3 segment. The Data tunnel carries subscriber traffic, which is terminated on the ESR-1000 and routed onwards to the operator's network (towards its NAT).
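
One practical consequence of this encapsulation is a reduced MTU inside the tunnels. The sketch below estimates it under common assumptions: a 20-byte outer IPv4 header, a 4-byte base GRE header with an optional 4-byte key field, and an extra 14-byte inner Ethernet header if the Data tunnel carries Ethernet frames over GRE. The actual header sizes on a given deployment may differ.

```python
# Rough MTU arithmetic for Soft GRE tunnels, under stated assumptions:
# outer IPv4 = 20 B, base GRE header = 4 B (plus 4 B if the optional GRE
# key field is present). If the Data tunnel carries Ethernet frames
# (Ethernet over GRE), the inner Ethernet header adds another 14 B.

OUTER_IPV4 = 20
GRE_BASE = 4
GRE_KEY = 4        # optional; ignored when gre_key=False below
INNER_ETHERNET = 14

def inner_ip_mtu(link_mtu: int = 1500, ethernet_payload: bool = False,
                 gre_key: bool = True) -> int:
    """Largest inner IP packet that fits in one outer frame without
    fragmentation, for the given link MTU and encapsulation options."""
    overhead = OUTER_IPV4 + GRE_BASE + (GRE_KEY if gre_key else 0)
    if ethernet_payload:               # e.g. a Data tunnel carrying frames
        overhead += INNER_ETHERNET
    return link_mtu - overhead

print("Management tunnel inner MTU:", inner_ip_mtu())                 # 1472
print("Data tunnel inner MTU:", inner_ip_mtu(ethernet_payload=True))  # 1458
```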


System requirements for SoftWLC server

The SoftWLC software controller must be installed on a server running Ubuntu Server 16.04 LTS / Ubuntu Server 18.04 LTS / Astra Linux Common Edition 2.12.44 / Debian 9.

Support is provided only for Ubuntu Server 16.04 LTS / Ubuntu Server 18.04 LTS / Astra Linux Common Edition 2.12.44 / Debian 9.
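
A host can be checked against this list before installation, for example by reading the standard /etc/os-release file, as in the sketch below; the (ID, VERSION_ID) pairs, particularly the Astra Linux one, are assumptions that should be verified on the target images.

```python
# Sketch of a pre-install check against the supported OS list above.
# It parses the standard /etc/os-release KEY=VALUE file; the (ID, VERSION_ID)
# pairs below are assumptions and should be verified on the target systems.

SUPPORTED = {
    ("ubuntu", "16.04"),
    ("ubuntu", "18.04"),
    ("debian", "9"),
    ("astra", "2.12.44"),   # Astra Linux CE; ID string may differ per image
}

def os_release(path: str = "/etc/os-release") -> dict[str, str]:
    """Parse KEY=VALUE lines, stripping optional quotes around values."""
    info = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if "=" in line:
                key, _, value = line.partition("=")
                info[key] = value.strip('"')
    return info

info = os_release()
pair = (info.get("ID", ""), info.get("VERSION_ID", ""))
print("supported" if pair in SUPPORTED else f"unsupported OS: {pair}")
```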

When selecting a server, the following system requirements must be taken into account (requirements are provided for the VM without taking into account system redundancy):

|| Number of devices || VM name || CPU cores, Xeon || RAM, GB || HDD, GB ||
| 10 – 200 APs | SoftWLC | 4, 64-bit x86 CPUs | 8 | 200 |
| 200 – 500 APs | SoftWLC | 4, 64-bit x86 CPUs | 16 | 200 |
| 500 – 1000 APs | SoftWLC | 6, 64-bit x86 CPUs | 12 | 200 |
| | DataBase | 4, 64-bit x86 CPUs | 16 | 200 |
| 1000 – 2000 APs | EMS | 6, 64-bit x86 CPUs | 14 | 200 |
| | RADIUS | 4, 64-bit x86 CPUs | 6 | 100 |
| | WEB-Portal | 4, 64-bit x86 CPUs | 8 | 40 |
| | MySQL | 4, 64-bit x86 CPUs | 24 | 500 |
| | MongoDB | 4, 64-bit x86 CPUs | 10 | 200 |
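
The table lends itself to a small lookup helper. The sketch below encodes the rows above and returns the per-VM figures for a planned AP count; the vms_for function and data layout are illustrative, and the figures still exclude system redundancy, as in the table.

```python
# Sizing lookup built directly from the table above (per-VM figures,
# system redundancy excluded). Field order: (VM name, CPU cores, RAM GB, HDD GB).

SIZING = [
    (200,  [("SoftWLC", 4, 8, 200)]),                     # 10 - 200 APs
    (500,  [("SoftWLC", 4, 16, 200)]),                    # 200 - 500 APs
    (1000, [("SoftWLC", 6, 12, 200),
            ("DataBase", 4, 16, 200)]),                   # 500 - 1000 APs
    (2000, [("EMS", 6, 14, 200), ("RADIUS", 4, 6, 100),
            ("WEB-Portal", 4, 8, 40), ("MySQL", 4, 24, 500),
            ("MongoDB", 4, 10, 200)]),                    # 1000 - 2000 APs
]

def vms_for(ap_count: int):
    """Return the VM set for the smallest table row covering ap_count."""
    for limit, vms in SIZING:
        if ap_count <= limit:
            return vms
    raise ValueError("above 2000 APs, size the deployment individually")

for name, cpu, ram, hdd in vms_for(800):
    print(f"{name}: {cpu} cores, {ram} GB RAM, {hdd} GB HDD")
```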