Because the eltex-portal service has been moved into a separate servlet container, single-host installations may run into problems related to reconfiguring the network so that the portal authorization service keeps working. One possible solution is to run an nginx-based proxy server so that clients can keep using the old port 8080 "as before".
This article provides instructions for installing and configuring nginx and describes the required changes to the tomcat configuration.
Installing and configuring nginx
nginx version 1.12.2 or later is required. Detailed installation instructions are available on the official website: https://nginx.ru/en/linux_packages.html#stable
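For example, on a Debian/Ubuntu host with the official nginx repository already added as described on that page (a sketch; the package manager commands depend on the distribution):
sudo apt-get update
sudo apt-get install nginx
# check that the installed version is 1.12.2 or later
nginx -v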
After installation, add the softwlc.conf configuration file to the /etc/nginx/conf.d/ directory.
#
# Softwlc server configuration
#
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=eltex_cache:32m max_size=5g inactive=60m use_temp_path=off;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 8080;
listen [::]:8080;
# SSL configuration
# listen 8443 ssl;
# listen [::]:8443 ssl;
# Enables or disables gzipping of responses.
gzip on;
# Enables or disables decompression of gzipped responses for clients that lack gzip support.
# If enabled, the following directives are also taken into account when determining if clients support gzip: gzip_http_version, gzip_proxied, and gzip_disable.
# See also the gzip_vary directive.
gunzip on;
# The ngx_http_gzip_static_module module allows sending precompressed files with the “.gz” filename extension instead of regular files.
# This module is not built by default, it should be enabled with the --with-http_gzip_static_module configuration parameter.
# Enables (“on”) or disables (“off”) checking the existence of precompressed files. The following directives are also taken into account: gzip_http_version, gzip_proxied, gzip_disable, and gzip_vary.
# With the “always” value (1.3.6), gzipped file is used in all cases, without checking if the client supports it.
# It is useful if there are no uncompressed files on the disk anyway or the ngx_http_gunzip_module is used.
gzip_static on;
# Enables gzipping of responses for the specified MIME types in addition to “text/html”.
# The special value “*” matches any MIME type (0.8.29). Responses with the “text/html” type are always compressed
gzip_types text/html text/plain text/css image/x-icon image/bmp image/png image/gif image/jpeg image/jpg application/json application/x-javascript application/javascript text/javascript;
gzip_comp_level 9;
# Sets the size of the buffer used for reading the first part of the response received from the proxied server.
# This part usually contains a small response header. By default, the buffer size is equal to one memory page.
# This is either 4K or 8K, depending on a platform. It can be made smaller, however.
proxy_buffer_size 128k;
# Sets the number and size of the buffers used for reading a response from the proxied server, for a single connection.
# By default, the buffer size is equal to one memory page. This is either 4K or 8K, depending on a platform.
proxy_buffers 4 256k;
# When buffering of responses from the proxied server is enabled, limits the total size of buffers that can be busy sending
# a response to the client while the response is not yet fully read.
# In the meantime, the rest of the buffers can be used for reading the response and, if needed, buffering part of the response to a temporary file.
# By default, size is limited by the size of two buffers set by the proxy_buffer_size and proxy_buffers directives.
proxy_busy_buffers_size 256k;
# Read up on ssl_ciphers to ensure a secure configuration.
# See: https://bugs.debian.org/765782
#
# Self signed certs generated by the ssl-cert package
# Don't use them in a production server!
#
# include snippets/snakeoil.conf;
# ssl_certificate "path_to_certificate";
# ssl_certificate_key "path_to_private_key";
# server_name your_server_name;
# When enabled, only one request at a time will be allowed to populate a new cache element
# identified according to the proxy_cache_key directive by passing a request to a proxied server.
# Other requests of the same cache element will either wait for a response to appear in the cache
# or the cache lock for this element to be released, up to the time set by the proxy_cache_lock_timeout directive.
proxy_cache_lock on;
# Allows starting a background subrequest to update an expired cache item, while a stale cached response is returned to the client.
# Note that it is necessary to allow the usage of a stale cached response when it is being updated (proxy_cache_use_stale updating)
# Be aware of the fact that these options appeared in the newer nginx versions
proxy_cache_background_update on;
# The updating parameter permits using a stale cached response if it is currently being updated.
# This allows minimizing the number of accesses to proxied servers when updating cached data.
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
# Enables revalidation of expired cache items using conditional requests with the “If-Modified-Since” and “If-None-Match” header fields.
proxy_cache_revalidate on;
# Sets the number of requests after which the response will be cached.
proxy_cache_min_uses 2;
# Enables byte-range support for both cached and uncached responses from the proxied server regardless of the “Accept-Ranges” field in these responses.
proxy_force_ranges on;
# Sets the maximum allowed size of the client request body, specified in the “Content-Length” request header field.
# If the size in a request exceeds the configured value, the 413 (Request Entity Too Large) error is returned to the client.
# Please be aware that browsers cannot correctly display this error. Setting size to 0 disables checking of client request body size.
client_max_body_size 25m;
# Allows redefining or appending fields to the request header passed to the proxied server.
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
# Eltex Portal section
location ~* /eltex_portal/(gif|jpeg|jpg|png|svg|js|css|fonts)/ {
proxy_cache eltex_cache;
proxy_ignore_headers Cache-Control Expires;
proxy_cache_valid any 1h;
# eltex portal address
proxy_pass http://127.0.0.1:9000;
}
# caching shared resources that are downloaded by id for a day (these are images and everything except text)
# and totally static resources (/img/) are to stay for a while too
# you can actually keep these forever, just be aware that they consume some space on your disk (don't forget to change the size of your cache!)
location ~* /eltex_portal/(portal/download/shared/[0-9]+|img/) {
proxy_cache eltex_cache;
proxy_ignore_headers Cache-Control Expires;
proxy_cache_valid 404 302 304 5m;
proxy_cache_valid any 24h;
# eltex portal address
proxy_pass http://127.0.0.1:9000;
}
# caching generated portal styles
location ~* /eltex_portal/portal/download/private/.+/generated-style {
proxy_cache eltex_cache;
proxy_set_header If-Modified-Since $http_if_modified_since;
proxy_cache_valid 404 302 304 5m;
proxy_cache_valid any 1h;
proxy_pass http://127.0.0.1:9000;
}
# caching private resources (that are text usually)
location ~* /eltex_portal/(portal/download/private|portal)/ {
proxy_cache eltex_cache;
proxy_ignore_headers Cache-Control Expires;
proxy_cache_valid 404 302 304 5m;
proxy_cache_valid any 1h;
# eltex portal address
proxy_pass http://127.0.0.1:9000;
}
location /eltex_portal/ {
# eltex portal address
proxy_pass http://127.0.0.1:9000;
}
# Eltex Portal Constructor section
# caching shared resources and static resources
location ~* /epadmin/(portal/download/[0-9]+|portal/download/shared/[0-9]+|font-awesome|img|js|css) {
proxy_cache eltex_cache;
proxy_cache_valid any 24h;
# eltex portal constructor address
proxy_pass http://127.0.0.1:9001;
}
location /epadmin/ {
# eltex portal constructor address
proxy_pass http://127.0.0.1:9001;
}
location /ep-demo {
# eltex portal constructor address
proxy_pass http://127.0.0.1:9001;
}
# wifi-cab
location /wifi-cab {
# wifi-cab address
proxy_pass http://127.0.0.1:8083;
}
location /wifi-cab/PUSH {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_buffering off;
proxy_ignore_client_abort off;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
# wifi-cab address
proxy_pass http://127.0.0.1:8083/wifi-cab/PUSH;
}
# nbi
location /axis2 {
# tomcat address
proxy_pass http://127.0.0.1:8081;
proxy_set_header Host $http_host;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
location /northbound {
proxy_pass http://127.0.0.1:8087;
proxy_set_header Host $http_host;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
location /eltex-radius-nbi {
proxy_pass http://127.0.0.1:8081;
proxy_set_header Host $http_host;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
# ems
location /ems {
# tomcat address
proxy_pass http://127.0.0.1:8087;
}
location /ems_files {
proxy_pass http://127.0.0.1:8087;
}
# deny root
# optionally deny access to the root location
location / {
deny all;
}
}
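Before the services are restarted, the new configuration can be checked for syntax errors:
# validate the nginx configuration, including softwlc.conf
sudo nginx -t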
Configuring tomcat
In the /etc/tomcat7/server.xml configuration file, change the listening port from 8080 to 8081 in the Service section:
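A sketch of what the changed connector may look like; attribute values other than port are illustrative and should be taken from the existing server.xml:
<!-- the HTTP connector now listens on 8081, leaving 8080 free for nginx -->
<Connector port="8081" protocol="HTTP/1.1"
           connectionTimeout="20000"
           URIEncoding="UTF-8"
           redirectPort="8443" />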
and add handling of the proxied headers in the Host section:
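A common way to make tomcat pick up the real client address from the headers set by nginx is the RemoteIpValve; the snippet below is a sketch, with the header name matching the proxy_set_header directives above:
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
    <!-- take the real client address from the X-Forwarded-For header set by nginx -->
    <Valve className="org.apache.catalina.valves.RemoteIpValve"
           remoteIpHeader="X-Forwarded-For"
           internalProxies="127\.0\.0\.1" />
    <!-- other Host content stays unchanged -->
</Host>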
An example of the resulting configuration (with all commented-out sections removed): server.xml
After all configuration files have been changed, restart tomcat first, to free port 8080, and then nginx, so that proxying starts working.
service tomcat7 restart
service nginx restart
All web services will then work exactly as before.
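Optionally, you can verify that the ports are distributed as expected, assuming the ss utility from iproute2 is available:
# nginx should now be listening on 8080 and tomcat on 8081
sudo ss -tlnp | grep -E ':8080|:8081'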
Running the service in Docker
The nginx service can also be run in a Docker container. To do so, create docker-compose.yml and .env files with the following content:
docker-compose.yml:
version: "3"
services:
  eltex-nginx:
    container_name: eltex-nginx
    image: ${ELTEX_HUB}/eltex-nginx:${SWLC_VERSION}
    restart: unless-stopped
    ports:
      - "8080:8080/tcp"
    environment:
      # Time zone setting
      - TZ=${TZ}
    volumes:
      # Path to the softwlc.conf configuration file
      - ./softwlc.conf:/etc/nginx/conf.d/default.conf:ro
      # Path to the directory for nginx logs
      - ./volumes/logs/nginx:/var/log/nginx
.env:
ELTEX_HUB=hub.eltex-co.ru/softwlc
SWLC_VERSION=1.29
TZ=Asia/Novosibirsk
In the volumes section, specify the path to the softwlc.conf configuration file and the path to the directory that will store the logs.
The container can then be started with docker-compose.
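A minimal invocation, assuming docker-compose is installed and is run from the directory that contains docker-compose.yml and .env:
# start the eltex-nginx container in the background
docker-compose up -d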
Environment variables
- ELTEX_HUB - the Eltex Docker registry
- SWLC_VERSION - the SoftWLC controller version; the NGINX service is available as a Docker container starting from version 1.29
- TZ - the time zone in the Asia/Novosibirsk format (the list of available time zones can be printed with the timedatectl list-timezones command).
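To make sure the container is running and to inspect its logs, the standard Docker commands can be used:
# container status and the latest nginx log entries
docker ps --filter name=eltex-nginx
docker logs --tail 50 eltex-nginx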