...

At the virtual PBX level: under the /domain/<DOMAIN>/restfs/ path.

Configuring RestFS for a single node

To run RestFS in single-node mode, it is enough to install the ecss-restfs package. During the installation, a number of questions will be asked. Next, start the ecss-restfs service and check its status.
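
For example, the installation and startup might look as follows. This is a minimal sketch that uses aptitude, as elsewhere in this section, and the ecss-restfs.service unit name shown below in the cluster-mode section:

No formatting
sudo aptitude install ecss-restfs
sudo systemctl start ecss-restfs.service
sudo systemctl status ecss-restfs.service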

...

By default, the service listens for HTTP requests on port 9990 and works with the /var/lib/ecss/restfs directory.
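
To verify that the service is actually listening on the default port, a standard socket check can be used, for example:

No formatting
sudo ss -tlnp | grep 9990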

RestFS settings for a cluster based on glusterfs server


In order to ensure data replication between cluster servers, configure glusterfs-server.

As an example, consider an ECSS-10 system running in a cluster with the following settings:

  • The IP address of ecss1 is 192.168.118.222;

  • The IP address of ecss2 is 192.168.118.224.

  1. Install the glusterfs-server and attr packages on both hosts:

    No formatting
    sudo aptitude install glusterfs-server attr


  2. To add a server to the file storage pool, run the command on ecss1:

    No formatting
    sudo gluster peer probe 192.168.118.224

    After that, information about ecss1 should appear on ecss2 when executing the sudo gluster peer status command:

    No formatting
    Number of Peers: 1
    Hostname: 192.168.118.222
    Uuid: 569c4730-a3a7-4d29-a132-b1bcdad792d8
    State: Peer in Cluster (Connected)


  3. To create a replicated volume (two copies of the data, one on each server) on ecss1, run the command:

    No formatting
    sudo gluster volume create ecss_volume replica 2 transport tcp 192.168.118.222:/var/lib/ecss/glusterfs 192.168.118.224:/var/lib/ecss/glusterfs force


  4. To start the created cluster, run the following command on ecss1:

    No formatting
    sudo gluster volume start ecss_volume


  5. To check the cluster status on ecss1, run the command:

    No formatting
    sudo gluster volume info

    Pay attention to the "Status" and "Bricks" fields; they should look as follows:

    No formatting
    Volume Name: ecss_volume
    Type: Replicate
    Volume ID: 60774e49-d2f1-4b06-bb4a-3f39ccf1ea73
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: 192.168.118.222:/var/lib/ecss/glusterfs
    Brick2: 192.168.118.224:/var/lib/ecss/glusterfs


  6. To mount the glusterfs partition, perform the following actions on both the ecss1 and ecss2 hosts:

    • Create a new systemd unit:

      No formatting
      /etc/systemd/system/ecss-glusterfs-mount.service

      and add the following parameters there:

      No formatting
      [Unit]
      Description=mount glusterfs
      After=network.target
      Requires=network.target

      [Service]
      RemainAfterExit=no
      Type=forking
      RestartSec=10s
      Restart=always
      # Mount the glusterfs volume from the local host into the RestFS working directory
      ExecStart=/sbin/mount.glusterfs localhost:/ecss_volume /var/lib/ecss/restfs -o fetch-attempts=10
      ExecStop=/bin/umount /var/lib/ecss/restfs

      [Install]
      WantedBy=multi-user.target


    • Add the unit to the startup with the following command:

      No formatting
      sudo systemctl enable ecss-glusterfs-mount.service


    • Reboot the host:

      No formatting
      sudo reboot

      If the host cannot be rebooted, then the following commands can be executed:

      No formatting
      sudo systemctl daemon-reload
      sudo systemctl restart ecss-glusterfs-mount.service

      After mounting on both hosts, run the command:

      No formatting
      df -h

      The mounted partition should appear in the output:

      No formatting
      /dev/sda10                     19G  6,5G   11G  38% /var/lib/mysql
      /dev/sda8                     4,5G  213M  4,1G   5% /var/log
      /dev/sda5                      37G   48M   35G   1% /var/lib/ecss/ecss-media-server/records
      /dev/sda6                      19G   44M   18G   1% /var/lib/ecss/cdr
      /dev/sda7                      19G   44M   18G   1% /var/lib/ecss/statistics
      /dev/sda9                      19G  7,6G  9,7G  44% /var/log/ecss
      localhost:/ecss_volume         46G   59M   44G   1% /var/lib/ecss/restfs


Running RestFS in cluster mode

To run RestFS in cluster mode, it is enough to have the ecss-restfs package installed and running on both nodes at once. The command to start the ecss-restfs service:

No formatting
sudo systemctl start ecss-restfs.service
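
Since both nodes work with the same glusterfs-backed /var/lib/ecss/restfs directory, replication can also be checked by hand; in this sketch, test_file is just a hypothetical name:

No formatting
# on ecss1
sudo touch /var/lib/ecss/restfs/test_file
# on ecss2: the file should appear after replication
ls /var/lib/ecss/restfs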

Starting RestFS when other cluster members are unavailable

In the glusterfs concept applied here, all servers are equal. However, the volume is not activated if there is no quorum (a cluster has a quorum when enough voting cluster members share the same consistent view of the cluster and its activity). This is a protection mechanism typical of all distributed fault-tolerant systems, designed to protect the system from split-brain, a situation in which each node believes that the other one has failed.

This situation may occur when only one of the servers is up while the second one is turned off or unavailable. To rule out data discrepancies, the volume is not activated automatically until the other server appears.

If starting the second server is impossible or will be delayed for a long time, the volume can be switched to operating mode manually by running the command:

No formatting
sudo gluster volume start ecss_volume force
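
After a forced start, it is worth making sure that the volume has actually switched to the Started state, for example:

No formatting
sudo gluster volume info ecss_volume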

Problems associated with split-brain

If one of the cluster nodes is unavailable, problems with files may occur. After operation is restored, these files will be in a split-brain state, and synchronization between the nodes will have to be started manually.

To solve this problem, use the cluster.favorite-child-policy option. When it is enabled, all files in the split-brain state are automatically synchronized according to the specified rule.

This parameter is enabled with the command:

No formatting
sudo gluster volume set ecss_volume cluster.favorite-child-policy size
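
The list of files currently in the split-brain state can be viewed with a standard glusterfs command, for example:

No formatting
sudo gluster volume heal ecss_volume info split-brain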

Editing the settings of the glusterfs-server.service unit

To configure glusterfs service management via systemctl, run the command:

No formatting
sudo systemctl edit glusterfs-server.service

A text editor window should open. Enter the following parameters there:

No formatting
[Service] 
KillMode=control-group 
RemainAfterExit=no

Save the changes and run the command:

No formatting
sudo systemctl daemon-reload
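
For the new parameters to take effect, the glusterfs service can then be restarted (using the same glusterfs-server.service unit name as above):

No formatting
sudo systemctl restart glusterfs-server.service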


...