General information
SoftWLC controller redundancy requires synchronization of system-critical files (settings, firmware files, uploaded data), MySQL databases, MongoDB databases, and DHCP server data. This model keeps data available and up to date on both controllers if one of them fails, becomes unreachable over the network, or loses power.
SoftWLC controller backup configuration includes the following steps:
- installing and configuring keepalived (performed according to the master-slave scheme)
- configuring rsync
- configuring MySQL replication (performed by counter replication on the master-master principle)
- configuring replicaSet MongoDB (replication is performed by combining 3 nodes into a Replica Set)
- configuring Eltex-PCRF operation in cluster mode
- changing configuration of modules for them to use virtual IP
In configuration examples of this section, IP addresses will be referred to as <ip_server1>, <ip_server2> and <virtual_ip>, where:
- <ip_server1> — real ip address of the first server
- <ip_server2> — real ip address of the second server
- <virtual_ip> — virtual ip address
For correct operation, it is required to provide L2 connectivity between two remote servers.
Installing and configuring keepalived
Package description
Keepalived package is an open source software used to perform high availability and load balancing functions. The first function is based on the implementation of the VRRP protocol, and the second is based on the Linux Virtual Server (IPVS) kernel module. Keepalived is not developed by Eltex and does not include adjustments other than configuration. Keepalived is used to provide backup for SoftWLC controllers with only VRRP functions.
Installing keepalived
In order to install the package, download it on the server and run the following command (installation must be done under the root superuser on both servers):
admin@ubuntu:/# sudo apt update
admin@ubuntu:/# sudo apt install keepalived
After installation, add the Keepalived daemon to the autostart and run it:
admin@ubuntu:/# sudo systemctl enable keepalived
admin@ubuntu:/# sudo systemctl start keepalived
Main configuration file
On both servers, in the /etc/keepalived/keepalived.conf file, change the following parameters:
<interface> — name of the network interface (different for each server), e.g. eth1;
<virtual_ip> — virtual ip address with prefix, e.g. 100.111.195.202/24;
<ip_address of another server> — ip address of the other server, e.g. 100.111.195.200.
Test script
The script pings the default gateway and returns the result code; successful execution thus confirms that SoftWLC is reachable by external clients.
In the current implementation on both servers, it is proposed to use the following as a test script:
where <default_gw_ip> is the default gateway of this server, e.g. 100.10.194.1.
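The script listing itself is not included above. A minimal sketch of such a check script (saved as /etc/keepalived/check_ping.sh, the path used later in this section; the exact ping options are an assumption) could be:

```shell
#!/bin/bash
# Sketch of a keepalived health-check script: ping the default gateway
# once and return its exit code (0 = reachable). Replace <default_gw_ip>.
ping -c 1 -W 1 <default_gw_ip> > /dev/null 2>&1
```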
Configuring role change
When the server state changes, the keep_notify.sh script is executed, where <mysql_user> and <mysql_password> are the MySQL login and password (root/root by default).
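The listing of keep_notify.sh is not reproduced here. As an illustration only (the actual script shipped with SoftWLC performs product-specific actions, including MySQL and MongoDB role switching), a keepalived notify script receives the new state as its third argument:

```shell
#!/bin/bash
# Illustrative skeleton only - not the actual SoftWLC keep_notify.sh.
# keepalived invokes the notify script as: <script> <type> <instance> <state>
TYPE=$1
NAME=$2
STATE=$3

case "$STATE" in
    MASTER)
        # This node has become active: the real script switches the
        # MongoDB primary and checks MySQL replication at this point.
        logger "keepalived: $NAME entered MASTER state"
        ;;
    BACKUP|FAULT)
        logger "keepalived: $NAME entered $STATE state"
        ;;
esac
```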
replicaSet MongoDB master changing script
For scripts to work correctly, assign rights for their execution:
admin@swlc01-server:/# sudo chmod +x /etc/keepalived/check_ping.sh
admin@swlc01-server:/# sudo chmod +x /etc/keepalived/keep_notify.sh
admin@swlc01-server:/# sudo chmod +x /etc/keepalived/mongo_switch.js
Saving log to separate file
By default, keepalived writes its log to /var/log/syslog. For convenient debugging, monitoring and management of keepalived, logging to a separate file can be configured.
rsyslog configuration example is introduced below:
nano -w /etc/rsyslog.d/10-keepalived.conf

if $programname contains 'Keepalived' then /var/log/keepalived.log
if $programname contains 'Keepalived' then ~
Then restart rsyslog using the command:
admin@swlc01-server:/# sudo service rsyslog restart
Now messages from the keepalived daemon will only get into the log file /var/log/keepalived.log and will not get into /var/log/syslog.
Starting/stopping keepalived procedure
To start the service, run the following command:
admin@master:/# sudo service keepalived start
To stop the service:
admin@master:/# sudo service keepalived stop
To check the service status, run the command:
admin@master:/# sudo service keepalived status
If configured correctly, the interface with the virtual ip should be displayed on one of the servers.
To check keepalived failover, shut down the server that currently holds virtual_ip on its interfaces. Virtual_ip should then appear on the second server.
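Which node currently holds the address can be verified with, for example:

```shell
# Shows the virtual ip only on the node that currently owns it
ip addr show <interface> | grep '<virtual_ip>'
```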
Configuring rsync
In the redundancy scheme, rsync is responsible for synchronizing the service files of the Eltex-EMS and Eltex-APB services, as well as firmware files, configuration templates and access point configuration uploads. Rsync is client-server software: the master server acts as the client and synchronizes the slave server's directories with its local ones.
Configuring rsync server
To enable the rsync server, it is necessary to set the value RSYNC_ENABLE=true on each server in the /etc/default/rsync file:
Create the /etc/rsyncd.conf file. The file listing is given below.
hosts allow = <another_server_ip> <virtual ip>
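Only the hosts allow line is shown above; the rest of the /etc/rsyncd.conf listing is not reproduced. A sketch consistent with the directories synchronized later in this section is given below; the module names (ems_conf, tftpboot, wp_data) are illustrative:

```ini
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log

[ems_conf]
    path = /usr/lib/eltex-ems/conf
    read only = no
    auth users = backup
    secrets file = /etc/rsyncd.secrets
    hosts allow = <another_server_ip> <virtual_ip>

[tftpboot]
    path = /tftpboot
    read only = no
    auth users = backup
    secrets file = /etc/rsyncd.secrets
    hosts allow = <another_server_ip> <virtual_ip>

[wp_data]
    path = /var/ems-data/WP
    read only = no
    auth users = backup
    secrets file = /etc/rsyncd.secrets
    hosts allow = <another_server_ip> <virtual_ip>
```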
For authentication, configure the rsync user. To do this, create the /etc/rsyncd.secrets file on each server and specify the password in it.
backup:rspasswd
Assign file access rights by running on both servers:
admin@swlc01-server:/# sudo chmod 600 /etc/rsyncd.secrets
Configuring synchronization start
Create the /etc/rsync_client.secrets file on each server and specify the password in it:
admin@swlc01-server:/# echo "rspasswd" > /etc/rsync_client.secrets && chmod 600 /etc/rsync_client.secrets
File synchronization is performed by a cron task that executes the script /usr/lib/eltex-ems/scripts/rsync_ems_backup.sh. The script starts the rsync client and synchronizes local directories with the directories of the second (backup) server. Synchronization starts only if the server is in the master state.
In line 6 of the script, replace HOST with the ip address of the other server (for example, HOST=100.111.195.201).
Create cron tasks on both servers to start synchronization every minute:
root@swlc01-server:/# crontab -l | { cat; echo "*/1 * * * * /usr/lib/eltex-ems/scripts/rsync_ems_backup.sh"; } | crontab -
Checking the task list:
root@swlc01-server:/# crontab -l
*/1 * * * * /usr/lib/eltex-ems/scripts/rsync_ems_backup.sh
If the task was not added or was accidentally added several times, then edit the list manually:
root@swlc01-server:/# crontab -e
Select an editor. To change later, run 'select-editor'.
  1. /bin/nano        <---- easiest
  2. /usr/bin/vim.tiny
  3. /usr/bin/code
  4. /bin/ed
Choose 1-4 [1]: 1    # choose an editor
Starting/stopping procedure
To start the service, use the command:
admin@swlc01-server:/# sudo service rsync start
To stop the service, use the command:
admin@swlc01-server:/# sudo service rsync stop
To check the service status, use the command:
admin@swlc01-server:/# sudo service rsync status
Rsync operation can be checked in EMS: on the "Information" tab, check the "State of backup system - Rsync service" section.
Both servers should have an entry like:
OK. Successful synchronization of files from directory: /usr/lib/eltex-ems/conf/*
OK. Successful synchronization of files from directory: /tftpboot/*
OK. Successful synchronization of files from directory: /var/ems-data/WP/*
Configuring MySQL replication
Data stored in the MySQL database is backed up by master-master replication: each server is both master and slave at the same time. Every database update on the first server is written to a special binary log; the second server reads the log and applies the changes, and vice versa. This keeps an up-to-date copy of the database on both hosts simultaneously. If the connection fails, changes accumulate and are synchronized after reconnection.
Transferring a data dump to the second server
When configuring backup during operation (i.e. if the current server's MySQL already has data in it), it is necessary to replicate data to the second server. This can be done using the mysqldump utility.
To do this, lock the tables on the first server, take the dump, unlock the tables and copy the resulting file to the second server:
root@swlc01-server:/# mysql -uroot -proot -e "FLUSH TABLES WITH READ LOCK;"
root@swlc01-server:/# mysqldump -uroot -proot --databases ELTEX_PORTAL eltex_alert eltex_auth_service eltex_ems radius wireless > mysqldump_master.sql
root@swlc01-server:/# mysql -uroot -proot -e "UNLOCK TABLES;"
root@swlc01-server:/# scp mysqldump_master.sql <username>@<ip_server2>:/home/<username>/
Then restore the dump on the second server:
root@swlc01-server:/# mysql -uroot -proot < /home/<username>/mysqldump_master.sql
Configuring MySQL
Configuring the mysql daemon consists in specifying the binary log parameters. The terms "first server" and "second server" below are conditional and only denote differences in server configuration:
In the /etc/mysql/my.cnf file, make sure the include path for /etc/mysql/mysql.conf.d/ is present.
Comment out or delete the following line on both servers:
bind-address = 127.0.0.1
Specify server-id. Set a unique identifier for each server, for example, for the first:
server-id = 1
for the second:
server-id = 2
Enable binary logs on both servers:
log_bin = /var/log/mysql/mysql-bin.log
Specify the auto_increment_increment (increment step) and auto_increment_offset (starting point) parameters.
For the first server:
auto_increment_increment = 2
auto_increment_offset = 1
For the second server:
auto_increment_increment = 2
auto_increment_offset = 2
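These two settings make the masters generate non-overlapping AUTO_INCREMENT values (server 1: 1, 3, 5, …; server 2: 2, 4, 6, …), so rows inserted independently on both masters never collide on auto-generated keys. The arithmetic can be illustrated in the shell:

```shell
# Each master produces ids of the form: offset + n * increment
echo "server1: $(seq 1 2 9 | tr '\n' ' ')"   # offset 1, step 2
echo "server2: $(seq 2 2 10 | tr '\n' ' ')"  # offset 2, step 2
```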
On both servers: specify the databases for which logs will be written:
binlog-do-db = eltex_alert
binlog-do-db = eltex_ems
binlog-do-db = wireless
binlog-do-db = radius
binlog-do-db = eltex_auth_service
binlog-do-db = ELTEX_PORTAL
binlog-do-db = eltex_doors
binlog-do-db = eltex_ngw
Specify the databases for which logs will not be generated:
binlog-ignore-db = mysql
binlog-ignore-db = Syslog
binlog-ignore-db = performance_schema
binlog-ignore-db = information_schema
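Taken together, the settings above form a fragment like the following in the [mysqld] section on the first server (on the second server: server-id = 2 and auto_increment_offset = 2); this is a summary sketch of the steps already listed:

```ini
[mysqld]
# bind-address = 127.0.0.1   # commented out to allow external connections
server-id                = 1
log_bin                  = /var/log/mysql/mysql-bin.log
auto_increment_increment = 2
auto_increment_offset    = 1
binlog-do-db = eltex_alert
binlog-do-db = eltex_ems
binlog-do-db = wireless
binlog-do-db = radius
binlog-do-db = eltex_auth_service
binlog-do-db = ELTEX_PORTAL
binlog-do-db = eltex_doors
binlog-do-db = eltex_ngw
binlog-ignore-db = mysql
binlog-ignore-db = Syslog
binlog-ignore-db = performance_schema
binlog-ignore-db = information_schema
```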
Restart mysql on each server and then create the accounts for replication:
admin@swlc01-server:/# sudo service mysql restart
Creating user accounts
For replication to work, a service account must be created on both servers. Each server uses this account to connect to the other (master) server and fetch data changes.
Create an account in the MySQL console for replication on the first server:
GRANT SELECT, SUPER, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'replication'@'<ip_server2>' IDENTIFIED BY 'password';
GRANT SELECT, SUPER, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'replication'@'<ip_server1>' IDENTIFIED BY 'password'; # required to check the replication status from EMS
FLUSH PRIVILEGES;
Create an account in the MySQL console for replication on the second server:
GRANT SELECT, SUPER, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'replication'@'<ip_server1>' IDENTIFIED BY 'password';
GRANT SELECT, SUPER, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'replication'@'<ip_server2>' IDENTIFIED BY 'password'; # required to check the replication status from EMS
FLUSH PRIVILEGES;
The SELECT privilege is required to check replication status from the EMS GUI.
Granting rights to service users
Open /usr/lib/eltex-ems/conf/config.txt and check which username/password are used (by default, javauser/javapassword).
Grant them external access rights on both servers:
GRANT ALL PRIVILEGES ON *.* TO 'javauser'@'%' IDENTIFIED BY 'javapassword';
GRANT ALL PRIVILEGES ON eltex_auth_service.* TO 'javauser'@'%';
GRANT ALL PRIVILEGES ON `radius`.* TO 'javauser'@'%';
GRANT ALL PRIVILEGES ON `wireless`.* TO 'javauser'@'%';
GRANT ALL PRIVILEGES ON `Syslog`.* TO 'javauser'@'%';
GRANT ALL PRIVILEGES ON `eltex_doors`.* TO 'javauser'@'%';
GRANT ALL PRIVILEGES ON `eltex_ngw`.* TO 'javauser'@'%';
GRANT ALL PRIVILEGES ON `ELTEX_PORTAL`.* TO 'javauser'@'%';
GRANT ALL PRIVILEGES ON `eltex_ems`.* TO 'javauser'@'%';
GRANT ALL PRIVILEGES ON `eltex_alert`.* TO 'javauser'@'%';
GRANT ALL PRIVILEGES ON `eltex_auth_service`.* TO 'javauser'@'%';
FLUSH PRIVILEGES;
Starting replication
Starting replication on the second server
Run the show master status command in MySQL console on the first server and analyze the values obtained:
mysql> show master status \G
*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 00000107
     Binlog_Do_DB: eltex_alert,eltex_ems,wireless,radius,eltex_auth_service,ELTEX_PORTAL,eltex_doors,eltex_ngw
 Binlog_Ignore_DB: mysql,Syslog,performance_schema,information_schema
1 row in set (0.00 sec)
Remember the File and Position parameters.
Configure and start replication of the second server from the first one (perform actions on the second server):
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='<ip_server1>', MASTER_USER='replication', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
START SLAVE;
where
MASTER_LOG_FILE='mysql-bin.000001' — specify the File value received on the first server;
MASTER_LOG_POS=107 – specify the Position value received on the first server.
Check the replication status on the second server:
mysql> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: <ip_server1>
                  Master_User: replication
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 107
               Relay_Log_File: mysqld-relay-bin.000001
                Relay_Log_Pos: 107
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 107
              Relay_Log_Space: 107
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 2
1 row in set (0.00 sec)
If the Slave_IO_Running and Slave_SQL_Running parameters are set to "Yes", replication has started successfully.
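The same check can be run non-interactively, for example (a sketch; adjust the credentials to your installation):

```shell
# Extract only the two running flags from the slave status output
mysql -uroot -proot -e "SHOW SLAVE STATUS \G" | grep -E 'Slave_(IO|SQL)_Running:'
# both lines should report: Yes
```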
Starting replication on the first server
On the second server, run:
mysql> show master status \G
*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 00000107
     Binlog_Do_DB: eltex_alert,eltex_ems,eltex_ont,radius,wireless,eltex_auth_service,payments,ELTEX_PORTAL
 Binlog_Ignore_DB: mysql,Syslog,performance_schema,information_schema
1 row in set (0.00 sec)
Configure and start replication of the first server from the second (perform actions on the first server):
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='<ip_server2>', MASTER_USER='replication', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
START SLAVE;
Check the replication status on the first server:
mysql> show slave status \G
*************************** 1. row ***************************
        Slave_IO_State: Waiting for master to send event
           Master_Host: <ip_server2>
           Master_User: replication
           Master_Port: 3306
         Connect_Retry: 60
       Master_Log_File: mysql-bin.000001
   Read_Master_Log_Pos: 107
        Relay_Log_File: mysqld-relay-bin.000001
         Relay_Log_Pos: 107
 Relay_Master_Log_File: mysql-bin.000001
      Slave_IO_Running: Yes
     Slave_SQL_Running: Yes
...
If the Slave_IO_Running and Slave_SQL_Running parameters are set to "Yes", replication has started successfully and data is replicated in both directions; Master_Log_File and Read_Master_Log_Pos show the current replication position.
Checking replication from EMS-GUI
MySQL replication state can be controlled from GUI EMS. To do this, edit the configuration file /etc/eltex-ems/check-ems-replication.conf. Changes must be made on both servers.
# Enable("Yes") / Disable("No") replication check
ENABLE_REPLICATION="Yes"
# The first replication host's address
HOST1=<ip_server1>
# The second replication host's address
HOST2=<ip_server2>
# mysql server access parameters
# mysql user
USER="replication"
# mysql password
PASSWORD="password"
where
ENABLE_REPLICATION — whether replication verification is enabled (set to "Yes");
HOST1, HOST2 — ip addresses of servers;
USER, PASSWORD — login/password of the account for replication to work.
Configuring MongoDB
In MongoDB, replication is performed by grouping several nodes (3 in the standard configuration) into a Replica Set. A Replica Set consists of one primary node and several secondary nodes. The roles are as follows:
- Primary — the primary server of MongoDB.
- Secondary — exact copies of database(s) with real-time synchronization.
- Arbiter — a server responsible only for electing a new primary; it cannot become primary itself. It is therefore recommended to give the arbiter minimal resources; SoftWLC does not need to be installed on the arbiter.
Minimum requirements for mongo-db arbiter:
- vCore: 1, 64-bit x86 CPUs
- vRAM: 2 GB
- vHDD: 20 GB
All data modification operations are performed only on the primary. MongoDB automatically performs failover and promotes an operating node to Primary if the current Primary fails; this requires at least 3 nodes in the Replica Set.
Installing mongodb on arbiter
For replication to work correctly, MongoDB versions must match on all hosts. A standard SoftWLC installation uses MongoDB 3.6.3; the example below uses version 4.
To install the required version of mongodb, perform the following steps:
Create the /etc/apt/sources.list.d/mongodb-org-4.0.list file and add the mongo repository to it:
deb [ arch=amd64 ] http://mirror.yandex.ru/mirrors/repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse
On the server, run the following command:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
Install mongodb-org:
sudo apt-get update
sudo apt install mongodb-org
Make sure that mongodb version 4.0.28 or higher is installed on the server:
mongo --version
or
dpkg -l | grep mongo
Run the following commands:
sudo systemctl enable mongod.service
sudo systemctl start mongod.service
replicaSet configuration
In /etc/mongod.conf on all nodes:
Add/uncomment a block:
replication:
  replSetName: "<replica_set_name>"
where <replica_set_name> is the name of the replica set. The name is chosen arbitrarily, but must be the same on all servers.
Allow external connections by specifying the address 0.0.0.0 in the bindIp parameter (bind_ip in the old version of mongo) (0.0.0.0 — allows connections from any ip addresses):
bindIp: 0.0.0.0
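Combined, the relevant fragment of /etc/mongod.conf on every node looks like this (a sketch; other sections of the file keep their defaults, port 27017 as used elsewhere in this section):

```yaml
net:
  port: 27017
  bindIp: 0.0.0.0                     # allow connections from any address

replication:
  replSetName: "<replica_set_name>"   # must match on all nodes
```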
Restart MongoDB:
root@swlc01-server:/# service mongod restart
On the first node, open MongoDB console:
root@swlc01-server:/# mongo
Create replica set configuration:
If you want to use hostnames instead of ip addresses in the configuration, specify the node names in /etc/hostname on all nodes, and add all nodes to /etc/hosts on all nodes as <IP_address> <hostname> entries.
rs.initiate(
  {
    _id: "replica_set_name",
    version: 1,
    members: [
      { _id: 0, host: "ip_mongo_primary:27017" },
      { _id: 1, host: "ip_mongo_secondary:27017" }
    ]
  }
)
Add Arbiter node in Replica Set (on PRIMARY):
rs.add("<ip_server>:27017",true)
After a while, shell prompt should be changed to:
replica_set_name:PRIMARY>
To view the Replica Set configuration, run the following command:
replica_set_name:PRIMARY> rs.config()
To check the Replica Set status, run the rs.status() command in the MongoDB console.
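To list each member's role without an interactive session, something like the following can be used (a sketch; assumes the mongo shell is installed on the node):

```shell
# Print "<host:port> <role>" for every Replica Set member
mongo --quiet --eval 'rs.status().members.forEach(function(m) { print(m.name + " " + m.stateStr); })'
```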
Adding/deleting/changing nodes in Replica Set
Configuration of nodes in Replica Set can be performed only on PRIMARY.
Add Secondary node in Replica Set (on PRIMARY):
rs.add("<ip_server>:27017")
If MongoDB responds to this command with an error, there may be no connection to the second node (or bindIp: 127.0.0.1 is set there), or the replication block is not configured on it.
On the second node, MongoDB management console prompt should be changed to:
root@swlc01-server:/# mongo replica_set_name:SECONDARY>
Add Arbiter node in Replica Set:
replica_set_name:PRIMARY> rs.add("<ip_server>:27017",true)
Delete a node from Replica Set (run on PRIMARY):
replica_set_name:PRIMARY> rs.remove("<ip_server>:27017")
To update the server's address, run the following commands:
replica_set_name:PRIMARY> cfg = rs.conf()
replica_set_name:PRIMARY> cfg.members[<index>].host = "<ip_server>:27017"
replica_set_name:PRIMARY> rs.reconfig(cfg)
To check replication, shut down the PRIMARY server: the server that was SECONDARY will switch to the PRIMARY state.
Eltex-PCRF operation in cluster mode
Configuring PCRF cluster
Open 5701 tcp and 5801 tcp ports between PCRF servers.
On servers, in /etc/eltex-pcrf/hazelcast-cluster-network.xml configuration files, specify the addresses of the network interfaces (lines 5 and 22 of the example contain the server's address, and lines 14 and 15 contain the list of all cluster members).
Configuration example:
<network>
    <!-- Write here public address of the node -->
    <!-- specify the server's own address here -->
    <public-address>ip_server1</public-address>
    <port auto-increment="false" port-count="100">5701</port>
    <outbound-ports>
        <ports>0</ports>
    </outbound-ports>
    <join>
        <multicast enabled="false"/>
        <tcp-ip enabled="true">
            <!-- List IP addresses of all cluster members (including this one) -->
            <member>ip_server1</member>
            <member>ip_server2</member>
        </tcp-ip>
        <discovery-strategies>
        </discovery-strategies>
    </join>
    <interfaces enabled="true">
        <!-- specify the server's own address here -->
        <interface>ip_server1</interface>
    </interfaces>
Permit cluster start in the /etc/eltex-pcrf/eltex-pcrf.json configuration file:
"cluster.enable" : true,
Restart Eltex-PCRF using the command:
admin@swlc01-server:/# sudo service eltex-pcrf restart
Cluster state check
{
  "data" : {
    "enabled" : true,
    "state" : "ACTIVE",
    "members" : [ {
      "address" : "ip_server1",
      "local" : true,
      "active" : true
    }, {
      "address" : "ip_server2",
      "local" : false,
      "active" : true
    } ],
    "messagesStats" : {
      "received" : 45157,
      "sent" : 45144
    },
    "mongo" : {
      "available" : false,
      "error" : "not running with --replSet"
    }
  },
  "key" : "PcrfErrorCode.success",
  "message" : "Success",
  "code" : 0,
  "args" : [ ]
}
Specifics of ESR configuration for interaction with PCRF cluster
When using a PCRF cluster, configure the ESR to interact with all nodes of the cluster using their real addresses.
Configuring SoftWLC modules
SoftWLC modules on both servers must be configured to interact with the controller via the virtual ip. The following configuration files should be modified.
When changing the MySQL and MongoDB connection settings, be extremely careful: configuration errors such as wrong separator characters (for example, "?" instead of "&"), extra characters, etc. will cause hard-to-diagnose DB connection errors!
After making changes to the configuration files, restart the corresponding service:
root@swlc01-server:/# service eltex-<service_name> restart
If each SoftWLC server runs the whole system on a single host, replacing localhost or 127.0.0.1 with <virtual_ip> is not required in the configuration files of the services that access the MySQL database.
- Change localhost to <virtual_ip> in line 24.
- Change mongodb://localhost to mongodb://ip_mongo_primary,ip_mongo_secondary in all lines and specify the ReplicaSet that you configured in /etc/mongod.conf. So the line will look like the following:
mongodb://192.168.10.3:27017,192.168.10.4:27017/pcrf?replicaSet=Cluster&waitQueueMultiple=500&connectTimeoutMS=10000&socketTimeoutMS=0&readPreference=secondaryPreferred mongodb://192.168.10.3:27017,192.168.10.4:27017/ott?replicaSet=Cluster&waitQueueMultiple=500&connectTimeoutMS=10000&socketTimeoutMS=0&readPreference=secondaryPreferred
- Change localhost to <virtual_ip> in all lines except line 14.
- Change 127.0.0.1 to <virtual_ip> in all lines except line 14.
- Change localhost to <virtual_ip> in lines 15, 48, 54, 64, 95.
- Change localhost to <virtual_ip> in line 27.
- Change mongodb://localhost to mongodb://ip_mongo_primary,ip_mongo_secondary in all lines and specify the ReplicaSet that you configured in /etc/mongod.conf. So the line will look like the following:
pcrf.mongodb.uri=mongodb://192.168.10.3:27017,192.168.10.4:27017/pcrf?replicaSet=Cluster wificab.mongodb.uri=mongodb://192.168.10.3:27017,192.168.10.4:27017/wifi-customer-cab?replicaSet=Cluster sorm2.mongodb.uri=mongodb://192.168.10.3:27017,192.168.10.4:27017/sorm2?replicaSet=Cluster ott.mongodb.uri=mongodb://192.168.10.3:27017,192.168.10.4:27017/ott?replicaSet=Cluster
- Change localhost to <virtual_ip> in all lines.
- Change 127.0.0.1 to <virtual_ip> in all lines.
- Change localhost to <virtual_ip> in line 44.
- Change localhost to <virtual_ip> in all lines.
- Change 127.0.0.1 to <virtual_ip> in all lines.
- Change mongodb://localhost to mongodb://ip_mongo_primary,ip_mongo_secondary in line 4 and specify the ReplicaSet that you configured in /etc/mongod.conf. So the line will look like the following:
<entry key="mongoaddress">mongodb://192.168.10.3:27017,192.168.10.4:27017/wifi-customer-cab?replicaSet=Cluster</entry>
- Change localhost to <virtual_ip> in all lines.
- Change localhost to <virtual_ip> in lines 4, 17, 26, 35, 48, 57, 66, 75, 84, 98.
Adding user to NAS table
To access your Admin Panel, you need to add the appropriate entries to the NAS table.
The table is stored in the eltex_auth_service database. It contains the addresses of clients that are allowed to send user authorization requests. Requests from clients not listed in the table are ignored.
To do this, in your Admin Panel, in the Settings → Server addresses section, add:
- <ip_server_1> — Server-1 IP address
- <ip_server_2> — Server-2 IP address
- <virtual_ip> — Virtual IP address
RADIUS key — eltex
Changing configuration via GUI
SoftWLC modules should also be configured via graphical interface.
Admin Panel
In the section Settings → Integration in the parameters PCRF url, NGW Client url and Portal constructor URL, change localhost to a virtual ip address:
Portal Constructor
Replace localhost with a virtual IP address in the following sections:
System settings → Portal Constructor
System settings → NBI access
System settings → NGW access
System settings → PCRF access
System settings → Mercury access
EMS-GUI
In EMS GUI, replace localhost (or 127.0.0.1) with a virtual IP address in the following sections:
Administration → EMS server configuration → System modules settings → pcrf
Administration → EMS server configuration → System modules settings → radius
Administration → EMS server configuration → System modules settings → softwlc.nbi
Administration → EMS server configuration → System modules settings → system
Administration → EMS server configuration → System modules settings → tftpserver
Administration → EMS server configuration → System modules →wirelessCommon
This key must match the /etc/eltex-wifi-cab/local_secret file on each host where eltex-wifi-cab is installed.
If you use the netconf module, then it is also necessary to update information there.
Administration → EMS server configuration → System modules settings → netconf