- RestFS — a component that provides an HTTP API for working with files.
- GlusterFS — a distributed file system that provides reliable storage (an optional component; it is not required for systems without redundancy).
At the system level, there is a registry of RestFS clusters that the system can work with.
By default, the RestFS cluster registry contains a default cluster, which is available at the URL: http://system.restfs.ecss:9990
A RestFS cluster can have any name except "system", and the name also cannot start with:
- "http://";
- "https://";
- "ftp://";
- "file://".
The RestFS cluster named "default" cannot be removed.
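The naming rules above can be expressed as a small shell check (a hypothetical helper for illustration only; it is not part of ecss-restfs):

```shell
# Hypothetical helper: succeeds (exit 0) only for names that satisfy
# the rules above: not "system" and not starting with a URL scheme.
valid_restfs_name() {
  local name="$1"
  if [ "$name" = "system" ]; then
    return 1
  fi
  case "$name" in
    http://*|https://*|ftp://*|file://*) return 1 ;;
  esac
  return 0
}

valid_restfs_name "storage1" && echo "storage1: accepted"
valid_restfs_name "system"   || echo "system: rejected"
valid_restfs_name "http://a" || echo "http://a: rejected"
```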
At the system level, RestFS management/monitoring commands are located under the /restfs path.
At the virtual PBX level, they are located under the /domain/<DOMAIN>/restfs/ path.
Configuring RestFS for a single node
To run RestFS in single-node mode, it is enough to install the ecss-restfs package. During the installation, you will be asked a number of questions. Then start the restfs service and check its status.
sudo apt install ecss-restfs
sudo systemctl start ecss-restfs.service
sudo systemctl status ecss-restfs.service
By default, the service listens for HTTP requests on port 9990 and works with the /var/lib/ecss/restfs directory.
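Beyond systemctl status, a quick way to confirm that the service is actually accepting connections is to probe the port directly (assuming the default port 9990; this sketch uses bash's built-in /dev/tcp, so no extra tools are required):

```shell
# Probe TCP port 9990 on localhost; prints a status line either way.
if (exec 3<>/dev/tcp/localhost/9990) 2>/dev/null; then
  echo "RestFS port 9990: reachable"
else
  echo "RestFS port 9990: not reachable"
fi
```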
RestFS settings for a cluster based on the glusterfs server
In order to ensure data replication between cluster servers, configure glusterfs-server.
As an example, consider an ECSS-10 system running in a cluster with the following settings:
The IP address of ecss1 is 192.168.118.222;
The IP address of ecss2 is 192.168.118.224.
Install the glusterfs-server and attr packages on both hosts:
sudo aptitude install glusterfs-server attr
To add a server to the file storage pool, run the command on ecss1:
sudo gluster peer probe 192.168.118.224
After that, information about ecss1 should appear on ecss2 when executing the command:
sudo gluster peer status
Expected output:
Number of Peers: 1

Hostname: 192.168.118.222
Uuid: 569c4730-a3a7-4d29-a132-b1bcdad792d8
State: Peer in Cluster (Connected)
To create a cluster on ecss1, run the command:
sudo gluster volume create ecss_volume replica 2 transport tcp 192.168.118.222:/var/lib/ecss/glusterfs 192.168.118.224:/var/lib/ecss/glusterfs force
To start the created cluster, run the following command on ecss1:
sudo gluster volume start ecss_volume
To check the cluster status on ecss1, run the command:
sudo gluster volume info
Pay attention to the "Status" and "Bricks" fields; they should look as follows:
Volume Name: ecss_volume
Type: Replicate
Volume ID: 60774e49-d2f1-4b06-bb4a-3f39ccf1ea73
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.118.222:/restfs
Brick2: 192.168.118.224:/restfs
To mount the glusterfs partition, perform the following actions on both the ecss1 and ecss2 hosts:
Create a new systemd unit:
/etc/systemd/system/ecss-glusterfs-mount.service
and add the following parameters there:
[Unit]
Description=mount glusterfs
After=network.target
Requires=network.target

[Service]
RemainAfterExit=no
Type=forking
RestartSec=10s
Restart=always
ExecStart=/sbin/mount.glusterfs localhost:/ecss_volume /var/lib/ecss/restfs -o fetch-attempts=10
ExecStop=/bin/umount /var/lib/ecss/restfs

[Install]
WantedBy=multi-user.target
Enable the unit at startup with the following command:
sudo systemctl enable ecss-glusterfs-mount.service
Reboot the host:
sudo reboot
If the host cannot be rebooted, then the following commands can be executed:
sudo systemctl daemon-reload
sudo systemctl restart ecss-glusterfs-mount.service
After mounting on both hosts, run the command:
df -h
A mounted partition should appear in the output:
/dev/sda10 19G 6,5G 11G 38% /var/lib/mysql
/dev/sda8 4,5G 213M 4,1G 5% /var/log
/dev/sda5 37G 48M 35G 1% /var/lib/ecss/ecss-media-server/records
/dev/sda6 19G 44M 18G 1% /var/lib/ecss/cdr
/dev/sda7 19G 44M 18G 1% /var/lib/ecss/statistics
/dev/sda9 19G 7,6G 9,7G 44% /var/log/ecss
localhost:/ecss_volume 46G 59M 44G 1% /var/lib/ecss/restfs
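Instead of scanning the df output by eye, the mount can also be checked directly (the path matches the unit above; adjust it if your mount point differs — mountpoint is part of util-linux):

```shell
# Print whether the glusterfs volume is mounted at the RestFS data directory.
if mountpoint -q /var/lib/ecss/restfs 2>/dev/null; then
  echo "/var/lib/ecss/restfs: mounted"
else
  echo "/var/lib/ecss/restfs: not mounted"
fi
```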
Running RestFS in cluster mode
To run RestFS in cluster mode, it is enough for the ecss-restfs package to be installed and running on both nodes at once. The command to start the ecss-restfs service:
sudo systemctl start ecss-restfs.service
Starting RestFS when other cluster members are unavailable
In the glusterfs concept applied here, all servers are equivalent. However, the volume is not activated if there is no quorum (a cluster has a quorum if there are enough voting cluster members with the same consistent view of the cluster and its activity). This is a defense mechanism, typical of all distributed fault-tolerant systems, designed to protect the system from split-brain, a situation in which each node believes that the other node has failed.
This situation may occur when only one of the servers is up while the second one is turned off or unavailable. To rule out data discrepancies, the volume will not be activated automatically until the other server reappears.
If starting the second server is impossible or delayed for a long time, the volume can be manually switched to operating mode by running the command:
sudo gluster volume start ecss_volume force
Problems associated with split-brain
If one of the cluster nodes is unavailable, problems with files may occur. After the node is restored, such files will be in a split-brain state, and synchronization between the nodes would have to be started manually.
To solve this problem, use the cluster.favorite-child-policy option. When it is enabled, all files in the split-brain state are automatically healed according to the specified rule.
Enabling this parameter (here with the size policy, which resolves the conflict in favor of the largest copy of the file) is performed by the command:
sudo gluster volume set ecss_volume cluster.favorite-child-policy size
Editing the settings of the glusterfs-server.service unit
To configure glusterfs service management via systemctl, run the command:
sudo systemctl edit glusterfs-server.service
A text editor window should open. Enter the following parameters there:
[Service]
KillMode=control-group
RemainAfterExit=no
Save the changes and run the command:
sudo systemctl daemon-reload