
This section describes steps of the initial ECSS configuration before checking basic functionality.

Terms and definitions

  • Alias — set of data about the subscriber;
  • Bridge — virtual gateway that connects virtual PBXs. The term "bridge" was introduced to provide a means of controlling connections between virtual PBXs. Calls between virtual PBXs of the same ECSS-10 system are routed within this system via a bridge; inter-station connecting lines are not involved. A bridge is represented as two interfaces connected to each other, each of which is declared in its own virtual PBX. For a bridge, as for a classic trunk, various types of restrictions can be set, for example, the number of channels, which limits the number of simultaneously established connections between the virtual PBXs and allows the load to be normalized;
  • Domain (virtual PBX) — collection consisting of a set of routing contexts, interfaces, and aliases. The closest equivalent is the description of the numbering and routing plan within a classical telephone exchange in traditional networks;
  • Cluster — complex of elements of the same type that perform, from the system point of view, a single function. With their help, the computational topology of the system is described. In this system, the cluster element is a node. A cluster exists as long as it includes at least one node;
  • Media resources — description of the media server parameters required to work with it;
  • Media server (MSR) — component of the ECSS-10 system designed for speech and video information processing over the RTP protocol, organizing conferences, recording conversations, playing media files and various combinations of these modes. Media server resources are managed using the control channel mechanism (RFC 6230 Media Control Channel Framework, RFC 6231 IVR Control Package, RFC 6505 Mixer Control Package);
  • Node — an Erlang virtual machine and an element of the ECSS-10 computing cluster. Nodes in ECSS-10 are typed according to the functionality they perform. Nodes of the same type are combined into clusters of the corresponding type. Example: the Core cluster consists of nodes that perform the function of the switching system core;
  • IVR (Interactive Voice Response) — system of prerecorded voice messages that performs a routing function in a call center or PBX, using information entered by the client on the telephone keypad via tone dialing;
  • LDAP (Lightweight Directory Access Protocol) — application-level protocol for accessing the directory service;
  • RADIUS — protocol providing a centralized method of authenticating users by accessing an external server. The RADIUS protocol is used for authentication, authorization, and accounting. The RADIUS server operates with a user database that contains authentication data for each user. Thus, the use of RADIUS protocol provides centralized management and additional protection when accessing network resources.

Prerequisites

Before starting the configuration, make sure of the following:

Initial configuration order

First of all, perform general settings for the entire system:

  • clusters;
  • media server;
  • routing, modification and adaptation;
  • ECSS users;
  • optionally:
    • Radius;
    • LDAP.

Next, declare and configure the following services:

  • domains;
  • trunks;
  • bridges;
  • IVR;
  • subscribers;
  • services on trunks and subscribers.

It is recommended to perform the initial setup in the order given below.

Configuring clusters

Classification of clusters by roles:

  • BUS — integration bus cluster that provides reliable message transmission;
  • CORE — cluster that performs the functions of routing telephone calls and processing services;
  • STORAGE — cluster for storing long-term data;
  • MEDIATOR — cluster that provides complex management functions and the provision of statistical and alarm information;
  • ADAPTER — cluster that performs functions of interacting with gateways operating under one of the protocols: H.248/Megaco, SIP and SIP-T, PA MGCP, PA Sigtran.

This stage includes configuration of all clusters of the system (core, ds, mediator, pa_sip, pa_megaco). Each cluster can include one or more nodes of the same type. For example, SIP adapter cluster (pa_sip) may consist of several SIP adapter nodes.

When installing the license, the standard subsystem topology is automatically set:

  • a certain set of clusters with standard names is installed;
  • a specific set of nodes with standard names in clusters is set.

To manage clusters of the system, the following can be used:

  • CLI — Command Line Interface (for a description of the commands for managing clusters, see /cluster/);
  • Clusters application in the web configurator.

General commands for configuring cluster properties

These commands are the main ones for setting any cluster.

To configure individual settings of a particular cluster, use the command:

/cluster/<SOME_ROLE>/<NAME_CLUSTER>/<GROUP>/set <PROPERTY> [<NAME_NODE>|add|remove] <VALUE>

To view individual settings of a particular cluster, use the command:

/cluster/<SOME_ROLE>/<NAME_CLUSTER>/<GROUP>/info [<PROPERTY>]

where:

  • <SOME_ROLE> — cluster role: adapter, bus, core, mediator, storage;
  • <NAME_CLUSTER> — cluster name;
  • <GROUP> — group of parameters;
  • <NAME_NODE> — node name;
  • <PROPERTY> — property name;
  • <VALUE> — property value.
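
As an illustration of this syntax, a read-only query against the mediator cluster could look like the following (the cluster name md1 and the parameter group properties/rpss are taken from the mediator command tree shown later in this section; the actual set of properties depends on the installation):

admin@mycelium1@ecss1:/$ cluster/mediator/md1/properties/rpss/info

Calling info without arguments is expected to display the whole group, while passing a specific <PROPERTY> limits the output to that property.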

STORAGE cluster settings

The storage cluster performs the function of distributed storage of the configuration data of the entire system. A telephone call routing module is also implemented within this subsystem.

Configuring via CLI (CoCon)

To change individual cluster settings, use the command:

/cluster/storage/<NAME_CLUSTER>/<GROUP>/set <PROPERTY> <VALUE>

To view set values of cluster parameters, use the command:

/cluster/storage/<NAME_CLUSTER>/<GROUP>/info [<PROPERTY>]

where:

  • <NAME_CLUSTER> — cluster name, default is ds1;
  • <GROUP> — group of parameters;
  • <PROPERTY> — property name;
  • <VALUE> — property value.

At the initial stage, at the data storage cluster level, it is enough to install only the services (ss) that will later be used in the domains. Other settings should not be changed unless there is a specific need.

Example:

admin@mycelium1@ecss1:/$ cluster/storage/ds1/ss/install ds1@ecss1 ss_fax_receiver.xml               
Successfully installed: /var/lib/ecss/ss/ss_fax_receiver.xml

Configuring cluster parameters via web interface

The Clusters application is used to view and change cluster properties, where a cluster with storage role should be selected.

To manage services in the web configurator, the Service Management (SS install) application is used.

Configuring MEDIATOR cluster parameters

The mediator cluster is designed to collect and export warnings and statistical information.

Configuring via CLI (CoCon)

To change individual cluster settings, use the command:

/cluster/mediator/<NAME_CLUSTER>/<GROUP>/set <PROPERTY> <VALUE>

To view the set values of cluster parameters, use the command:

/cluster/mediator/<NAME_CLUSTER>/<GROUP>/info [<PROPERTY>]

where:

  • <NAME_CLUSTER> — cluster name, by default is md1;
  • <GROUP> — group of parameters;
  • <PROPERTY> — property name;
  • <VALUE> — property value.

The path to CLI commands for managing clusters with the mediator role is /cluster/mediator:

admin@ds1@ecss1:/$ ls -t /cluster/mediator/
|-/md1
  |-/alarms
  | |-*clear
  | |-*delete
  | |-*export
  | |-*generate-alarm
  | |-*list
  | |-*maskadd
  | |-*maskdel
  | |-*masklist
  | |-*maskmod
  | |-*masktrace
  | |-*res-cleanup
  | |-/notifiers
  |   |-/email
  |   | |-*clean
  |   | |-*info
  |   | |-*send_test_email
  |   | |-*set
  |   |-/jabber
  |     |-*clean
  |     |-*info
  |     |-*send_test_jabber
  |     |-*set
  |-/ap
  | |-*speaker-off
  | |-*status
  |-/properties
  | |-/cocon_http_terminal
  | | |-*clean
  | | |-*info
  | | |-*set
  | |-/rpss
  |   |-*clean
  |   |-*info
  |   |-*set
  |-/sip
  | |-/isup-cause-messages
  | | |-*clean
  | | |-*info
  | | |-*set
  | |-/sip-error-messages
  | | |-*clean
  | | |-*info
  | | |-*set
  | |-/sip-internal-messages
  | | |-*clean
  | | |-*info
  | | |-*set
  | |-/sip-status-messages
  |   |-*clean
  |   |-*info
  |   |-*set
  |-/snmp
  | |-/agent
  |   |-/properties
  |     |-*clean
  |     |-*info
  |     |-*set
  |-/statistics
    |-*add
    |-*addcolmap
    |-*delcolmap
    |-*delete
    |-*list
    |-*statmodinfo
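
For example, based on the command tree above, the current e-mail notifier settings can be viewed with info, and send_test_email can then be used to check delivery (shown as a sketch; the output format depends on the installation):

admin@ds1@ecss1:/$ cluster/mediator/md1/alarms/notifiers/email/info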

Configuring cluster parameters via web interface

The Clusters application is used to view and change properties, where a cluster with mediator role should be selected. To configure the masking of warnings, the Alarm list application is used.

Configuring CORE cluster parameters

The core cluster implements the logic of telephone call processing management (Call Control functions), the provision of services, and the billing functionality.

Configuring via CLI (CoCon)

To change individual cluster settings, use the command:

/cluster/core/<NAME_CLUSTER>/<GROUP>/set <PROPERTY> <VALUE>

To view the set values of cluster parameters, use the command:

/cluster/core/<NAME_CLUSTER>/<GROUP>/info [<PROPERTY>]

where:

  • <NAME_CLUSTER> — cluster name, by default is core1;
  • <GROUP> — group of parameters;
  • <PROPERTY> — property name;
  • <VALUE> — property value.

The path to CLI commands for managing clusters with the core role is /cluster/core. For clusters with the core role, all the default parameter values necessary for operation are already set and ready to handle the load.

If it is provided by the project, then the following are configured at the initial stage:

  • call notification service parameters;
  • TTS subsystem for CDR collection.

Configuring cluster parameters via web interface

The Clusters application is used to view and change properties, where cluster with the core role should be selected. The CDR is configured in the CDR Manager application. 

Configuring PA_SIP cluster parameters

Configuring via CLI (CoCon)

To change individual cluster settings, use the command:

/cluster/adapter/<NAME_CLUSTER>/<GROUP>/set <PROPERTY> <VALUE>

To view the set values of cluster parameters, use the command:

/cluster/adapter/<NAME_CLUSTER>/<GROUP>/info [<PROPERTY>]

where:

  • <NAME_CLUSTER> — cluster name, by default is sip1;
  • <GROUP> — group of parameters;
  • <PROPERTY> — property name;
  • <VALUE> — property value.

The path to CLI commands for managing pa_sip clusters is /cluster/adapter/<PA_SIP>/.

At the initial stage, only the SIP transport (ipset) needs to be configured on the adapter in accordance with the project (a syntax sketch is given after this list):

  • select interfaces (node-ip) for adapter operation. If the system is redundant, the virtual IP configured in /etc/keepalived.conf is set;
  • add transport ports for receiving SIP protocol messages;
  • set the required QoS DSCP value.
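
The exact group and property names for the transport settings are defined in the SIP adapter command reference; as a sketch only, the generic syntax from the beginning of this section applies (everything in angle brackets is a placeholder, and sip1 is the default cluster name):

admin@mycelium1@ecss1:/$ cluster/adapter/sip1/<GROUP>/set <PROPERTY> <VALUE>
admin@mycelium1@ecss1:/$ cluster/adapter/sip1/<GROUP>/info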

Configuration of the remaining SIP parameters has been moved to the virtual PBX level.

Configuring cluster parameters via web interface

SIP transport parameters can also be configured using the web configurator. To configure the interface, open the Clusters application, select the adapter and enter the transport parameters. 

Configuring a software media server

The procedure for configuring the software media server (MSR):

  • configuring the media server file;
  • starting the media server;
  • adding media resources to ECSS-10:
    • by CLI commands (/system/media/resource/);
    • via web configurator application MSR registrars.
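
A sketch of starting the media server as a systemd service, assuming the default ecss-media-server unit name; installations that use a named instance, such as ecss-media-server@msr.service shown later in this document, should substitute that name:

sudo systemctl enable ecss-media-server.service
sudo systemctl start ecss-media-server.service
sudo systemctl status ecss-media-server.service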

Creating and configuring domains (virtual PBX)

This step includes the process of creating virtual PBX, setting up call routing rules, trunks, subscribers, subscriber service rules.

Domain declaration 

A domain must be created for each of the domains envisaged by the project. When creating a domain, add the ECSS-10 administrator to the Administrators group, as well as to the Users group of that domain.

Virtual PBX configuration procedure (domain)

The entire infrastructure for providing telephone services based on ECSS-10, namely the configuration of connected gateways, subscriber data, the numbering plan and routing rules, as well as access rights to operational management and support functions, is described within a specific domain.

Thus, the domain can be represented as a logical part of a flexible switch that implements the functionality of a separate PBX.

There may be several such entities on a flexible switch. In the ECSS-10 system, domain and virtual PBX are synonyms.

In fact, the deployment of several domains and links between them makes it possible to implement a segment or the entire NGN network within a single installation.

The domain system and the flexible differentiation of access rights allow the telecommunications carrier to provide PBX hosting for third-party customers.

A customer of the telecommunications carrier can place their corporate PBX or communication node on the capacities of the ECSS-10 system deployed by the carrier. Operational management functions for this PBX can be transferred to the customer fully or partially (a scheme of shared responsibility for the operation of this PBX is used).

Each virtual PBX contains the following set of parameters:

  • the list of routing contexts of the virtual PBX;
  • the list of aliases contained in this virtual PBX;
  • the list of services set in virtual PBX.

Virtual PBX configuration algorithm:

  1. create a virtual PBX; when creating it, add the ECSS-10 administrator to the VPBX administrators group and the VPBX users group;
  2. set limits on the number of aliases, simultaneous calls (optional, performed by the ECSS-10 administrator);
  3. add and configure routing contexts;
  4. add and configure subscribers;
  5. add and configure services.

A domain is created in one of two ways:

  • via the Command Line Interface (/domain/declare);
  • via web configurator Domains application.
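
A minimal sketch of declaring a domain via CLI; the domain name mydomain is hypothetical:

admin@mycelium1@ecss1:/$ domain/declare mydomain

Additional options of /domain/declare may allow the ECSS-10 administrator to be added to the domain groups at creation time; see the /domain/ command reference for the exact arguments.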

After creating domains, the necessary domain-level directories will be automatically created in the file system.

Configuring routing

Routing of telephone calls — process of determining the destination interface for a specific call based on information about the interface of the call source, information about the caller's and called subscriber's phone number, caller category, time of day and day of the week.

The routing context — set of routing rules unique in the routing domain, within which the interface of the called subscriber is defined.

Description of the call routing process in the ECSS-10 system is given in the section "Virtual PBX. Telephone call routing v2".

Creating routing contexts

In accordance with the project, it is necessary to create call routing contexts that will be used in the domain settings in the future.

There are several ways to do this:

  • create and import contexts manually in the format specified in the documentation. XML-formatted contexts are created in the /var/lib/ecss/routing/ctx/src/<DOMAIN> directory;
  • create and configure contexts using the web configurator application Routing manager for each domain.

Further, if it is defined in the project, it is necessary to prepare modification and adaptation contexts also for each domain. 

Applying routing contexts to system interfaces

Routing contexts must be assigned for the system:ivr and system:teleconference interfaces. Settings are made using CLI commands. The path of the commands is /domain/<DOMAIN>/system-iface/.

Adding and configuring user rights

Users are subjects who work with the system via CoCon or the web configurator.

Each user has the following set of parameters:

  • Name;
  • Password;
  • User group(s);
  • Role.

By default, an admin user with system administrator rights is created in the system. The default password is password.

To differentiate rights, create the required number of users and assign rights and roles to each of them. For more information, see the /cocon/ commands description. For convenience, each user can configure the parameters of the CoCon shell using the global command /shell-options.

Configuring subscriber service limiting rules

To apply different subscriber level limitations, configure them for each domain.

The following types of restrictions for subscribers are distinguished:

  • long-term restrictions that are introduced when the subscriber connects and are prescribed in the contract with the subscriber are called the access type (access_type);
  • grouping of subscribers to allow subscribers of one group to access subscribers of another group is called an access group (access_group);
  • temporary restrictions associated with non-payment of bills by the subscriber are called the service mode (regime);
  • restrictions that the subscriber sets themselves are called barring.

Configuration is performed:

  • via the Command Line Interface, see commands:
    • /domain/<DOMAIN>/access-group — access group management commands;
    • /domain/<DOMAIN>/access-type — access type management commands;
    • /domain/<DOMAIN>/regime/ — subscriber service mode management commands;
  • via web configurator application Access manager.

Creating and configuring subscribers

In accordance with the project, create the required number of subscribers in the domains. This can be done using the CLI command /domain/<DOMAIN>/sip/user/declare or using the web configurator application Subscriber card.

When creating a subscriber, assign a number, a group, a routing context, a CDR group, an authorization method and authorization data. You can also immediately assign a set of necessary services for each subscriber and configure the necessary restrictions.
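
A sketch of subscriber declaration via CLI; the domain name mydomain is hypothetical, the placeholders in angle brackets stand for the routing context and the number, and the remaining arguments (group, authorization method and data, etc.) are omitted here and listed in the /domain/<DOMAIN>/sip/user/declare command reference:

admin@mycelium1@ecss1:/$ domain/mydomain/sip/user/declare <ROUTING_CONTEXT> <NUMBER> ...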

Having subscribers in a domain is optional: the system may contain purely transit domains on which only the appropriate call transit rules are configured.

Creating and configuring trunks

  • A trunk is a collection of resources for servicing telephone calls in a given direction;
  • SIP trunk is a direction that operates over the SIP/SIP-T/SIP-I protocols;
  • Dynamic trunk is a trunk with mandatory registration support. To make a call via a dynamic trunk, the interacting gateway must be registered on this trunk in the ECSS-10 system.

The declaration and configuration of trunks is carried out:

  • via the Command Line Interface, see section /domain/<DOMAIN>/trunk/sip/  — SIP trunk management commands;
  • via web configurator application Trunk Manager.

Then, if necessary, supplementary services are configured on the trunks.

Creating and configuring bridge interfaces

Bridge is a virtual trunk that allows connecting two virtual PBXs within one ECSS-10 system.

If there is more than one domain in the system, then communication between them is carried out using bridges.

Bridges are created and configured:

  • via Command Line Interface, see /bridge/ — bridge interfaces management commands;
  • via web configurator application Bridge manager.

Setting restrictions

For each domain, it is possible to set different kinds of restrictions within the limits of the license.

Table 1. List of domain level limits

Property name — default value — description:

  • alias_limit — default: infinity (limited by license). Total number of subscribers (including virtual ones) in this virtual PBX.
  • call_limit — default: infinity (limited by license). Total number of simultaneously active calls for this virtual PBX.
  • virtual_alias_limit — default: infinity (limited by license). Total number of virtual subscribers in this virtual PBX.
  • digitmap — list of the set masks against which aliases are validated when created; the parameters are described on the page /domain/ — commands for managing virtual PBX.
  • failover — default: true. The need to reserve calls on this virtual PBX. This parameter is used only in redundant systems. Since the use of backup increases the consumption of system resources (processor, RAM, etc.), excluding a virtual PBX from the redundancy scheme saves some resources and directs them to call processing. During normal system operation this allows increasing productivity at the expense of reliability.
  • callcenter\enabled — default: true. Access to the contact center for this virtual PBX.
  • callcenter\active_agents — default: infinity (limited by license). Maximum number of connected Call center agents for a domain.
  • callcenter\active_supervisors — default: infinity (limited by license). Maximum number of connected Call center supervisors for a domain.
  • tc\active_conferences — default: infinity (limited by license). Maximum number of active conferences for a domain.
  • tc_count_active_channels — default: infinity (limited by license). Maximum number of subscribers connected to a conference of the Teleconference service for a domain.
  • ivr\enabled — default: true. Access to the IVR and dialer functions for this virtual PBX.
  • ivr\incoming_script\enabled — default: true. Use the default_incoming_call script for incoming trunks as the IVR routing context.
  • teleconference\enabled — default: true. Access to the Selector communication service for this virtual PBX.
  • tsmn\concurrent_calls — default: 0. Total number of simultaneously active calls for the TSMN system on the main trunk.
  • tsmn\concurrent_calls\redundancy — default: 0. Total number of simultaneously active calls for the TSMN system on the backup trunk.
  • add_on_conferences_limit — default: infinity (limited by license). Total number of simultaneously active conferences for this virtual PBX.
  • meet_me_limit — default: infinity (limited by license). Total number of active users of "meet me" rooms for this virtual PBX.
  • chat_room_limit — default: infinity (limited by license). Total number of active conference rooms for this virtual PBX.
  • dialer\channels — default: 0 (limited by license). Number of simultaneous calls for calling campaigns.
  • recorder\voice\channels — default: 0 (limited by license). Number of simultaneous channels for recording conversations.
  • ss_package — default: 0 (limited by license). Number of licensed service packages.

Restrictions are configured in accordance with the project:

  • via command line interface, see /domain/<DOMAIN>/properties/restrictions/ — commands for managing virtual PBX restrictions;
  • via web configurator application Domains (Domain Properties → System Properties → Restrictions).
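
For example, a domain-level call limit could be set via the generic set syntax (the domain name mydomain and the value 100 are hypothetical; call_limit is one of the properties from the list above):

admin@mycelium1@ecss1:/$ domain/mydomain/properties/restrictions/set call_limit 100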

IVR Scenarios

Additional IVR scenarios can be configured for each virtual PBX if they are provided for by the project. Scripts are created in the web configurator application IVR editor. Scripts can also be managed using CLI commands; see /domain/<DOMAIN>/ivr/ — IVR script management commands.

If necessary, restrictions for IVR operation can be configured for each domain:

  • via Command Line Interface, see section /system/ivr/script/restrictions/ — management commands for IVR script restriction settings;
  • via web configurator application IVR restrictions manager.

Configuring supplementary services

As part of the ECSS-10 ecosystem, the following supplementary applications that expand the system functionality are available:

  • Call center;
  • tools for conducting conference calls;
  • Auto call service;
  • Automatic Speech Recognition (ASR) service;
  • integration with Desktop assistant, CRM, Skype for business;
  • Auto Secretary service;
  • visualization of statistical data in the Grafana monitoring system;
  • Subscriber Portal application;
  • Autoprovision (AUP) system for automatic configuration and software updates of telephone sets.

Contact technical support for consultation on their configuration.

Configuring ECSS-10 for production systems

For production systems, ECSS-10 configuration consists of the following steps:

Allocation of individual processor cores for MSR

In order to isolate the MSR media server from the rest of the system, it is necessary to allocate separate processor cores for it. To allocate individual processor cores, perform the following steps:

  1. Open the file:

    /etc/default/grub

    bring the GRUB_CMDLINE_LINUX="" parameter to the following form:

    GRUB_CMDLINE_LINUX="isolcpus=8-11"

    This example isolates cores from 8 to 11. It is also possible to list cores as 1, 2, 4-6, etc.

  2. Update the grub configuration. To do this, run the command:

    sudo update-grub
  3. Restart the system. 

    If everything is done correctly, then after a reboot htop will show zero load on isolated cores.
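
In addition to htop, the list of isolated cores can be checked directly in sysfs; the output below assumes the isolcpus=8-11 example above:

cat /sys/devices/system/cpu/isolated
8-11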

Setting scaling_governor to performance mode

By default, Ubuntu has five processor profiles (governors). The available profiles can be listed as follows:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
conservative ondemand userspace powersave performance

Profile description:

  • conservative — slowly increases the processor frequency depending on the load on the system and abruptly resets the frequency to the minimum at idle;
  • ondemand — quickly increases the processor frequency with increasing load and slowly resets the frequency to a minimum when idle;
  • userspace — allows specifying the frequency manually;
  • powersave — corresponds to the minimum allowable CPU frequency;
  • performance — corresponds to the maximum CPU frequency.

The system can withstand a heavy load in performance mode. To enable this mode by default, bring the /etc/rc.local file to the following format:

#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor >/dev/null
exit 0
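
After a reboot, the active governor can be verified per core; in this mode the expected output is performance:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
performance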


Alternatively, the governor can be set by creating an additional rule in /etc/udev/rules.d/.
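
A sketch of such a udev rule, assuming a hypothetical file name 50-scaling-governor.rules; the match and attribute names below follow the common pattern for CPU devices and should be verified on the target system:

# /etc/udev/rules.d/50-scaling-governor.rules (hypothetical file name)
# Set the performance governor whenever a CPU device appears
SUBSYSTEM=="cpu", ACTION=="add", KERNEL=="cpu[0-9]*", ATTR{cpufreq/scaling_governor}="performance"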

Running MSR on isolated CPU cores

For MSR to run on separate processor cores, it is necessary to bring the /etc/systemd/system/ecss-media-server.service.d/override.conf file to the following format:

[Service] 
CPUAffinity=8-11 
CPUSchedulingPolicy=rr

Before that, enable the MSR instance:

systemctl enable ecss-media-server@msr.service
systemctl edit ecss-media-server@msr.service

To check which cores the service is running on, you can use htop with the Processor column added to its display.

In this example, MSR is running on cores 8, 9, 10, 11. CPUSchedulingPolicy is required only if isolcpus is specified.

MSR configuring is described in more detail on the page Configuring the software media server.

Configuring use of specific processor cores for erlang-based services

In order for processor cores to be used correctly, it is necessary to adjust the startup parameters of the Erlang nodes on production systems.

To do this, develop a scheme for placing nodes on cores according to the following rules:

  • use more than two cores;
  • a single node must not use cores located on different physical processors;
  • for heavily loaded nodes, such as core and sip, allocate individual cores;
  • nodes that are not loaded can be placed on a single core;
  • for core, it is necessary to allocate a larger number of cores.

The distribution of cores is shown using the example of a dual-processor HP BL660 server with two Intel Xeon E5-4657L processors, each with 12 cores and hyperthreading support, which gives 48 logical (virtual) cores:

myc  0-3
ds   4-7
core 8-23
sip  24-31
md   32-35
rest 36-39
sp   40-43
msr  44-47

To implement this distribution, it is necessary to enable a mode of using only the required number of cores on the erlang node.

To do this, edit the vm.args file of each node, located at the path /usr/lib/ecss/ECSS-SERVICE-NAME/releases/VERSION/.

For example, for ecss-core, edit the file:

sudo mcedit /usr/lib/ecss/ecss-core/releases/3.14.10.210/vm.args

Add options to this file that specify the use of the required number of logical processor cores and the number of active schedulers.

For 16 cores, this is:

+sct L0-15c0-15
+sbt db
+S16:16

For 8 cores:

+sct L0-7c0-7
+sbt db
+S8:8

For 4 cores:

+sct L0-3c0-3
+sbt db
+S4:4

For 2 cores:

+sct L0-1c0-1
+sbt db
+S2:2

The next task is to bind the service to the selected cores. This is done in the same way as described above for MSR.

Run the command:

sudo systemctl edit ECSS-SERVICE-NAME

Add the parameters:

[Service]
CPUAffinity=0-3

where the CPUAffinity value specifies the cores on which the service processes should be started.

Example of ecss-core configuration according to the above scheme:

> sudo systemctl edit ecss-core.service
[Service]
CPUAffinity=8-23

After configuring the CPUAffinity parameters for all services, reload the systemd configuration with the command:

sudo systemctl daemon-reload

Restart services:

sudo systemctl restart ecss.slice

To make sure that the services are correctly bound to the cores, use the htop utility with the display of the PROCESSOR column enabled.
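
As an alternative to htop, the affinity of a running service can be checked with taskset; the example below assumes the ecss-core.service unit from the configuration above:

taskset -cp $(systemctl show -p MainPID --value ecss-core.service)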
