EVI Analytics is a video analytics module that detects, recognizes, counts, and tracks objects using the configured models.
It is recommended to update the Linux kernel before installing the video driver.
System requirements for video analytics
Before starting, review the requirements for the Video analytics module in the EVI Platform system requirements section.
Requirements for the Video analytics module
Using virtualization
The video analytics system requires a video card.
To operate in a virtual environment, the hypervisor must support hardware passthrough of PCI/PCI-E devices, including the video card.
Compatibility with virtualization does not ensure stable operation of the video analytics module.
A discrete graphics card is required for the module to function. The minimum configuration for operation is listed below:
| OS | GPU | SERIES | VRAM | DRIVER |
|---|---|---|---|---|
| Ubuntu 22.04 | NVIDIA | Quadro or RTX | 8 GB | 545 |
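Before installing anything, you can confirm that the OS (on bare metal, or inside the VM when PCI passthrough is used) actually sees the NVIDIA card. This is a generic check, not an EVI-specific command:

```shell
# List PCI devices and keep only NVIDIA entries; empty output means the
# GPU is not visible to this OS (e.g. passthrough is not configured).
lspci | grep -i nvidia
```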
Example of installed core and driver versions
Driver installation may fail on some Linux kernel versions. Known working kernel/driver combinations are shown below:
| OS | Kernel | Driver |
|---|---|---|
| Ubuntu 22.04.4 LTS | GNU/Linux 6.5.0-44-generic x86_64 | NVIDIA-SMI 545.29.06 |
| Ubuntu 22.04.4 LTS | GNU/Linux 6.5.0-45-generic x86_64 | NVIDIA-SMI 575.64.05 |
Manual installation of docker and drivers
Driver version
It is recommended to use the latest driver available in the OS repository.
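On Ubuntu you can list the driver packages available for the detected hardware before choosing one (this assumes the ubuntu-drivers-common package is installed; the entry marked "recommended" is normally the latest suitable driver):

```shell
# Show NVIDIA driver packages that match the installed GPU
ubuntu-drivers devices
```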
To work with the video card, it is necessary to install the driver:
sudo apt install -y nvidia-driver-575
The next step is to install Docker, which runs the analytics container, and nvidia-docker2, which gives containers access to the graphics card:
sudo apt-get update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl https://get.docker.com | sh \
  && sudo systemctl --now enable docker
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
  && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
  && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install docker-ce docker-compose nvidia-docker2
sudo systemctl restart docker
Add a user to the docker group:
sudo usermod -aG docker $(whoami)
A server reboot is required.
sudo reboot
After installation, check that the driver is running in the system:
nvidia-smi
An error when executing the nvidia-smi command indicates that the video driver failed to start and/or there is no discrete graphics card. Video analytics cannot be launched without a working NVIDIA driver.
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 575.144.03 Driver Version: 575.144.03 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3080 Off | 00000000:01:00.0 Off | N/A |
| 0% 41C P2 113W / 370W | 3902MiB / 10240MiB | 5% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 2144100 C tritonserver 1296MiB |
| 0 N/A N/A 2144155 C tritonserver 1296MiB |
| 0 N/A N/A 2144163 C tritonserver 1296MiB |
+-----------------------------------------------------------------------------------------+
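To confirm that containers can also see the GPU through nvidia-docker2, you can run nvidia-smi inside a disposable CUDA container (the image tag below is only an illustrative example, not something EVI requires):

```shell
# Should print the same nvidia-smi table as on the host; an error here
# means the NVIDIA container runtime is not wired up correctly.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```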
Configuring database access for container connection
Before starting the configuration, stop the EVI services:
sudo systemctl stop evi-core.service evi-scud.service evi-live.service evi-archive.service evi-analyzer.service
It is necessary to open access for connecting the analytics container to the database.
/etc/postgresql/17/main/pg_hba.conf
Add the following line to the IPv4 local connections section and save the file.
Since containers get IP addresses from the 172.16.0.0/12 subnet by default, it is necessary to specify it.
# IPv4 local connections:
host    all             all             127.0.0.1/32            scram-sha-256
host    all             all             172.16.0.0/12           md5
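As a sanity check, you can print the subnet your Docker bridge network actually uses (by default it is 172.17.0.0/16, which lies inside the 172.16.0.0/12 range allowed above):

```shell
# Print the subnet of the default Docker bridge network
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```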
Open the database configuration file. Remove the hash symbol (#) to activate the listen_addresses parameter and specify the server's IP address in it. Then save the changes to the file.
/etc/postgresql/17/main/postgresql.conf
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
#listen_addresses = 'localhost' # what IP address(es) to listen on;
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = 'localhost, 192.168.1.10' # what IP address(es) to listen on;
To apply the changes, restart the postgresql database service:
sudo systemctl restart postgresql
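After the restart you can verify that PostgreSQL is listening on the server address and not only on localhost (192.168.1.10 is the example address from the configuration above):

```shell
# The 5432 listener should now appear on the server IP as well as 127.0.0.1
sudo ss -tlnp | grep 5432
```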
Then start the EVI services again:
sudo systemctl start evi-core.service evi-scud.service evi-live.service evi-archive.service evi-analyzer.service
Loading the image and starting the evi-analytics container
Analytics images are large, so downloading them may take some time.
After configuring access to the database, download the evi-analytics_1.4.0.sh script to the analytics server:
wget --no-check-certificate https://archive.eltex-co.ru/evi-raw/evi-1.4.0/evi-analytics_1.4.0.sh
Run the script, specifying the IP address of your database server in the EFNRS_DB_HOST parameter:
bash evi-analytics_1.4.0.sh EFNRS_DB_HOST="IP ADDRESS OF YOUR DATABASE"
Example:
bash evi-analytics_1.4.0.sh EFNRS_DB_HOST="192.168.1.10"
Verifying operation
Check the status of the launched containers:
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Status}}"
Example of successfully launched containers
CONTAINER ID NAMES STATUS
f01deee7d6a5 user-triton1-1 Up 23 hours
d70d9e7004ac user-triton3-1 Up 23 hours
c011105320e9 user-triton2-1 Up 23 hours
d17d6261648f user-evi-analytics-haproxy-1 Up 23 hours
fa2db6812f3b user-evi-analytics-1 Up 23 hours
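If a container is missing from the list or keeps restarting, its log usually shows why. The container name below is taken from the example output above; substitute the name shown by docker ps on your server:

```shell
# Scan recent analytics log output for errors (prints a note when none are found)
docker logs --tail 200 user-evi-analytics-1 2>&1 | grep -iE 'error|fail' || echo "no errors in recent log output"
```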