This section describes steps of the initial ECSS configuration before checking basic functionality.
Before starting configuration, make sure of the following:
First of all, perform general settings for the entire system:
Next, declare and configure the following services:
It is recommended to perform the initial setup in the order given below.
Classification of clusters by roles:
This stage includes configuration of all clusters of the system (core, ds, mediator, pa_sip, pa_megaco). Each cluster can include one or more nodes of the same type. For example, SIP adapter cluster (pa_sip) may consist of several SIP adapter nodes.
When installing the license, the standard subsystem topology is automatically set:
To manage clusters of the system, the following can be used:
These commands are the main ones for setting any cluster.
To configure individual settings of a particular cluster, use the command:
/cluster/<SOME_ROLE>/<NAME_CLUSTER>/<GROUP>/set <PROPERTY> [<NAME_NODE>|add|remove] <VALUE>
To view individual settings of a particular cluster, use the command:
/cluster/<SOME_ROLE>/<NAME_CLUSTER>/<GROUP>/info [<PROPERTY>]
where:
The storage cluster performs a function of a distributed configuration data storage of the entire system. Also, a telephone call routing module is implemented within this subsystem.
To change individual cluster settings, use the command:
/cluster/storage/<NAME_CLUSTER>/<GROUP>/set <PROPERTY> <VALUE>
To view set values of cluster parameters, use the command:
/cluster/storage/<NAME_CLUSTER>/<GROUP>/info [<PROPERTY>]
where:
At the initial stage at the level of the data storage cluster, it is enough to install only services (ss) that will be used in the domains in the future. Other settings do not need to be changed without a specific need.
Example:
admin@mycelium1@ecss1:/$ cluster/storage/ds1/ss/install ds1@ecss1 ss_fax_receiver.xml
Successfully installed: /var/lib/ecss/ss/ss_fax_receiver.xml
To view and change cluster properties, use the Clusters application and select the cluster with the storage role.
To manage services in the web configurator, the Service Management (SS install) application is used.
The mediator cluster is designed to collect and export warnings and statistical information.
To change individual cluster settings, use the command:
/cluster/mediator/<NAME_CLUSTER>/<GROUP>/set <PROPERTY> <VALUE>
To view the set values of cluster parameters, use the command:
/cluster/mediator/<NAME_CLUSTER>/<GROUP>/info [<PROPERTY>]
where:
The path to CLI commands for managing clusters with the mediator role is /cluster/mediator:
admin@ds1@ecss1:/$ ls -t /cluster/mediator/
|-/md1
|-/alarms
| |-*clear
| |-*delete
| |-*export
| |-*generate-alarm
| |-*list
| |-*maskadd
| |-*maskdel
| |-*masklist
| |-*maskmod
| |-*masktrace
| |-*res-cleanup
| |-/notifiers
| |-/email
| | |-*clean
| | |-*info
| | |-*send_test_email
| | |-*set
| |-/jabber
| |-*clean
| |-*info
| |-*send_test_jabber
| |-*set
|-/ap
| |-*speaker-off
| |-*status
|-/properties
| |-/cocon_http_terminal
| | |-*clean
| | |-*info
| | |-*set
| |-/rpss
| |-*clean
| |-*info
| |-*set
|-/sip
| |-/isup-cause-messages
| | |-*clean
| | |-*info
| | |-*set
| |-/sip-error-messages
| | |-*clean
| | |-*info
| | |-*set
| |-/sip-internal-messages
| | |-*clean
| | |-*info
| | |-*set
| |-/sip-status-messages
| |-*clean
| |-*info
| |-*set
|-/snmp
| |-/agent
| |-/properties
| |-*clean
| |-*info
| |-*set
|-/statistics
|-*add
|-*addcolmap
|-*delcolmap
|-*delete
|-*list
|-*statmodinfo
To view and change properties, use the Clusters application and select the cluster with the mediator role. To configure the masking of warnings, use the Alarm list application.
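As an illustration based on the command tree above, the alarm email notifier can be inspected and exercised from CoCon (md1 is the cluster name shown in the listing; this sketch assumes an email notifier has already been configured with the corresponding set command, and the command output is omitted):

admin@ds1@ecss1:/$ cluster/mediator/md1/alarms/notifiers/email/info
admin@ds1@ecss1:/$ cluster/mediator/md1/alarms/notifiers/email/send_test_email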
The core cluster implements the logic of telephone call processing (Call Control functions), service provision, and billing functionality.
To change individual cluster settings, use the command:
/cluster/core/<NAME_CLUSTER>/<GROUP>/set <PROPERTY> <VALUE>
To view the set values of cluster parameters, use the command:
/cluster/core/<NAME_CLUSTER>/<GROUP>/info [<PROPERTY>]
where:
The path to CLI commands for managing clusters with the core role is /cluster/core. For clusters with the core role, all the parameter values necessary for operation are set by default and are already ready to handle the load.
If it is provided by the project, then the following are configured at the initial stage:
To view and change properties, use the Clusters application and select the cluster with the core role. The CDR is configured in the CDR Manager application.
To change individual cluster settings, use the command:
/cluster/core/<NAME_CLUSTER>/<GROUP>/set <PROPERTY> <VALUE>
To view the set values of cluster parameters, use the command:
/cluster/core/<NAME_CLUSTER>/<GROUP>/info [<PROPERTY>]
where:
The path to CLI commands for managing pa_sip clusters is /cluster/adapter/<PA_SIP>/.
At the initial stage, only SIP transport (ipset) needs to be configured on the adapter in accordance with the project:
Configuration of the remaining SIP parameters has been moved to the virtual PBX level.
SIP transport parameters can also be configured using the web configurator. To configure the interface, open the Clusters application, select the adapter and enter the transport parameters.
The procedure for configuring the software media server (MSR):
This step includes creating virtual PBXs and setting up call routing rules, trunks, subscribers, and subscriber service rules.
It is necessary to create a domain for each of the domains planned in the project. When creating a domain, add the ECSS-10 administrator to the Administrators group, as well as to the Users group of this domain.
The entire infrastructure for providing telephone services based on ECSS-10 — namely the configuration of connected gateways, subscriber data, the numbering plan, the routing rules, and the access rights to operational management and support functions — is described within a specific domain.
Thus, the domain can be represented as a logical part of a flexible switch that implements the functionality of a separate PBX.
There may be several such entities on a flexible switch. In the ECSS-10 system, domain and virtual PBX are synonyms.
In fact, the deployment of several domains and links between them makes it possible to implement a segment or the entire NGN network within a single installation.
The domain system and the flexible differentiation of access rights allow the telecommunications carrier to provide PBX hosting for third-party customers.
A customer of the telecommunications carrier can place their corporate PBX or communication node on the capacities of the ECSS-10 system deployed by the carrier. The functions of operational management of this PBX can be transferred to the customer fully or partially (a scheme of responsibility differentiation for the operation of this PBX is used).
Each virtual PBX contains the following set of parameters:
Virtual PBX configuration algorithm:
Domain creation is performed in one of two ways: in the web configurator, or using the CLI command /domain/declare. After creating the domains, the necessary domain-level directories are automatically created in the file system.
Routing of telephone calls is the process of determining the destination interface for a specific call based on information about the call source interface, the caller's and called subscriber's phone numbers, the caller category, the time of day, and the day of the week. A routing context is a set of routing rules, unique within the routing domain, within which the interface of the called subscriber is determined.
Description of the call routing process in the ECSS-10 system is given in the section "Virtual PBX. Telephone call routing v2".
In accordance with the project, it is necessary to create call routing contexts that will be used in the domain settings in the future.
There are several ways to do this:
Further, if it is defined in the project, prepare modification and adaptation contexts for each domain as well.
Routing contexts must be assigned for the system:ivr and system:teleconference interfaces. Settings are made using CLI commands. The path of the commands is /domain/<DOMAIN>/system-iface/.
Users are subjects who work with the system via CoCon or the web configurator.
Each user has the following set of parameters:
By default, an admin user with system administrator rights is created in the system. The default password is password.
To differentiate rights, create the required number of users and assign rights and roles to each. For more information, see /cocon/ commands description. For convenience's sake, each user can configure the parameters of the CoCon shell using the global command /shell-options.
To apply different subscriber level limitations, configure them for each domain.
The following types of restrictions for subscribers are distinguished:
Configuration is performed:
In accordance with the project, create the required number of subscribers in the domains. It can be done using the CLI command /domain/<DOMAIN>/sip/user/declare or using web configurator application Subscriber card.
When creating a subscriber, assign a number, a group, a routing context, a CDR group, an authorization method and authorization data. You can also immediately assign a set of necessary services for each subscriber and configure the necessary restrictions.
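A minimal sketch of a declare call following the command path named above (the domain name d.test and the shown parameter set are illustrative assumptions, not taken from this document; consult the command's built-in help for the actual argument order):

/domain/d.test/sip/user/declare <ROUTING_CONTEXT> <GROUP> <NUMBER> <LOGIN> <PASSWORD>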
Having subscribers in a domain is optional: the system may contain purely transit domains, on which the appropriate rules for passing calls are configured.
Trunks are declared and configured:
Then, if necessary, supplementary services are configured on the trunks.
A bridge is a virtual trunk that connects two virtual PBXs within one ECSS-10 system.
If there is more than one domain in the system, then communication between them is carried out using bridges.
Bridges are created and configured:
For each domain, it is possible to set different kinds of restrictions within the limits of the license.
Table 1. List of domain level limits
| Property name | Default value | Description |
|---|---|---|
| alias_limit | infinity (limited by license) | Total number of subscribers (including virtual ones) in this virtual PBX. |
| call_limit | infinity (limited by license) | Total number of simultaneously active calls for this virtual PBX. |
| virtual_alias_limit | infinity (limited by license) | Total number of virtual subscribers in this virtual PBX. |
| digitmap | | List of the set masks against which aliases are validated when created; the parameters are described on the page /domain/ — commands for managing virtual PBX. |
| failover | true | Whether calls on this virtual PBX are reserved (backed up). This parameter is used only in redundant systems. Since backup increases the consumption of system resources (processor, RAM, etc.), excluding a virtual PBX from the redundancy scheme saves some resources and directs them to call processing. In regular operation of the system, this increases productivity at the expense of reliability. |
| callcenter\enabled | true | Access to the contact center for this virtual PBX. |
| callcenter\active_agents | infinity (limited by license) | Maximum number of connected Call center agents for a domain. |
| callcenter\active_supervisors | infinity (limited by license) | Maximum number of connected Call center supervisors for a domain. |
| tc\active_conferences | infinity (limited by license) | Maximum number of active conferences for a domain. |
| tc_count_active_channels | infinity (limited by license) | Maximum number of connected subscribers to the conference of the Teleconference service for a domain. |
| ivr\enabled | true | Access to the IVR and dialer functions for this virtual PBX. |
| ivr\incoming_script\enabled | true | Use the default_incoming_call script for incoming trunks as the IVR routing context. |
| teleconference\enabled | true | Access to the Selector communication service for this virtual PBX. |
| tsmn\concurrent_calls | 0 | Total number of simultaneously active calls for the TSMN system on the main trunk. |
| tsmn\concurrent_calls\redundancy | 0 | Total number of simultaneously active calls for the TSMN system on the backup trunk. |
| add_on_conferences_limit | infinity (limited by license) | Total number of simultaneously active conferences for this virtual PBX. |
| meet_me_limit | infinity (limited by license) | Total number of active users of "meet me" rooms for this virtual PBX. |
| chat_room_limit | infinity (limited by license) | Total number of active conference rooms for this virtual PBX. |
| dialer\channels | 0 (limited by license) | Number of simultaneous calls for calling campaigns. |
| recorder\voice\channels | 0 (limited by license) | Number of simultaneous channels for recording conversations. |
| ss_package | 0 (limited by license) | Number of licensed service packages. |
Restrictions are configured in accordance with the project:
Additional IVR scenarios can be configured for each virtual PBX, if they are immediately provided by the project. Scripts are created in the web configurator application IVR editor. Scripts can also be managed using CLI commands. See /domain/<DOMAIN>/ivr/ — IVR script management commands.
If necessary, restrictions for IVR operation can be configured for each domain:
Supplementary service applications that extend the functionality are available as part of the ECSS-10 ecosystem:
Contact technical support for consultation on configuring.
For productive systems, ECSS-10 configuring consists of the following steps:
In order to isolate the MSR media server from the rest of the system, it is necessary to allocate separate processor cores for it. To allocate individual processor cores, perform the following steps:
Open the file /etc/default/grub and bring the GRUB_CMDLINE_LINUX="" parameter to the following form:
GRUB_CMDLINE_LINUX="isolcpus=8-11"
This example isolates cores 8 to 11. It is also possible to list cores individually or in ranges, e.g. 1, 2, 4-6.
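The isolcpus value accepts both single cores and ranges. As a hedged illustration of how such a list expands into individual cores (expand_cpulist is a helper written for this example, not part of the ECSS or kernel tooling):

```shell
# Expand a kernel cpu-list spec such as "1,2,4-6" into the individual
# core numbers it covers (POSIX shell; illustrative helper only).
expand_cpulist() {
    echo "$1" | tr ',' '\n' | while IFS=- read -r a b; do
        seq "$a" "${b:-$a}"
    done | tr '\n' ' ' | sed 's/ $//'
}

expand_cpulist "8-11"      # 8 9 10 11
expand_cpulist "1,2,4-6"   # 1 2 4 5 6
```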
Update the grub configuration. To do this, run the command:
sudo update-grub
Restart the system.
If everything is done correctly, then after a reboot htop will show zero load on isolated cores.
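The check can also be done without htop. Both paths below are standard Linux interfaces (on a machine booted without the isolcpus parameter the isolated list is simply empty):

```shell
# After reboot, the kernel publishes the isolated core set here:
cat /sys/devices/system/cpu/isolated 2>/dev/null || echo "isolated list not available"
# Confirm the parameter actually reached the kernel command line:
grep -o 'isolcpus=[0-9,-]*' /proc/cmdline || echo "isolcpus not set"
```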
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
conservative ondemand userspace powersave performance
By default, Ubuntu has five processor profiles:
Profile description:
The system can withstand a heavy load in performance mode. To enable this mode by default, bring the /etc/rc.local file to the following format:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor >/dev/null
exit 0
The installation must be done by creating an additional rule in /etc/udev/rules.d/.
For MSR to run on separate processor cores, bring the /etc/systemd/system/ecss-media-server.service.d/override.conf file to the following format:
[Service]
CPUAffinity=8-11
CPUSchedulingPolicy=rr
Before that, enable the MSR instance:
systemctl enable ecss-media-server@msr.service
systemctl edit ecss-media-server@msr.service
To see which cores the service is running on, use htop with the PROCESSOR column added.
In this example, MSR is running on cores 8, 9, 10, 11. CPUSchedulingPolicy is required only if isolcpus is specified.
MSR configuring is described in more detail on the page Configuring the software media server.
For processor cores to be used correctly on productive systems, it is necessary to adjust the startup parameters of the Erlang nodes.
To do this, develop a scheme for placing nodes on cores. The scheme is developed according to the following rules:
The distribution of cores on the example of a dual-processor HP BL660 server with two Intel Xeon E5-4657L processors (12 cores each, with hyperthreading support), which together form 48 virtual cores:
myc  0-3
ds   4-7
core 8-23
sip  24-31
md   32-35
rest 36-39
sp   40-43
msr  44-47
To implement this distribution, enable the mode of using only the required number of cores on each Erlang node.
To do this, edit the vm.args file of each node, located at /usr/lib/ecss/ECSS-SERVICE-NAME/releases/VERSION/.
For example, for ecss-core, edit the file:
sudo mcedit /usr/lib/ecss/ecss-core/releases/3.14.10.210/vm.args
Add options to this file that specify the use of the required number of logical processor cores and the number of active schedulers.
For 16 cores, this is:
+sct L0-15c0-15 +sbt db +S16:16
For 8 cores:
+sct L0-7c0-7 +sbt db +S8:8
For 4 cores:
+sct L0-3c0-3 +sbt db +S4:4
For 2 cores:
+sct L0-1c0-1 +sbt db +S2:2
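The three flags scale together with the core count: +sct binds logical cores to scheduler threads, +sbt db selects the binding type, and +SN:N sets the number of schedulers. A small helper (written for this example, not shipped with ECSS) makes the pattern explicit:

```shell
# Build the vm.args scheduler line for a node limited to N cores,
# matching the examples above (illustrative helper only).
vm_args_line() {
    n=$1
    echo "+sct L0-$((n-1))c0-$((n-1)) +sbt db +S$n:$n"
}

vm_args_line 16   # +sct L0-15c0-15 +sbt db +S16:16
vm_args_line 4    # +sct L0-3c0-3 +sbt db +S4:4
```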
The next task is to install the service on the selected cores. This is done in the same way as described for MSR.
Run the command:
sudo systemctl edit ECSS-SERVICE-NAME
Add the parameters:
[Service]
CPUAffinity=0-3
where CPUAffinity value specifies the cores on which the service processes should be started.
Example of ecss-core configuration according to the above scheme:
> sudo systemctl edit ecss-core.service
[Service]
CPUAffinity=8-23
After configuring the CPUAffinity parameters for all services, reload the systemd configuration with the command:
sudo systemctl daemon-reload
Restart services:
sudo systemctl restart ecss.slice
To make sure that the services are correctly linked to the cores, use the htop utility by enabling the display of the PROCESSOR column.
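If htop is not available, the same information can be read from procfs (Cpus_allowed_list is a standard field of /proc/<pid>/status; the service's main PID can be obtained with systemctl show -p MainPID ecss-core.service):

```shell
# Affinity of the current shell, shown as an example; substitute the
# service's main PID for $$ to check ecss-core after the restart.
grep Cpus_allowed_list /proc/$$/status
```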