241 Commits

Author SHA1 Message Date
bunkerity
b0ca85ff75 v1.2.5 - performance improvement 2021-05-14 16:42:08 +02:00
Bunkerity
2f115c444d Merge pull request #131 from bunkerity/issue-templates
Update issue templates
2021-05-14 16:37:37 +02:00
Bunkerity
7f15741ea2 Update issue templates 2021-05-14 16:33:01 +02:00
bunkerity
288b8eb851 docs improvement + road to v1.2.5 2021-05-14 15:41:15 +02:00
bunkerity
61c08fb97b docs - troubleshooting 2021-05-14 12:12:33 +02:00
bunkerity
01ef47a669 docs - security tuning improvement 2021-05-14 11:15:00 +02:00
florian
71515a9101 doc - volumes list 2021-05-13 20:34:41 +02:00
bunkerity
a33d0658c6 docs - road to a beautiful documentation 2021-05-13 17:46:31 +02:00
bunkerity
0b3ff6a9f4 bad behavior - move from fail2ban to pure lua 2021-05-13 16:21:51 +02:00
bunkerity
eb2d0d330d performance - rsyslog and fail2ban removing 2021-05-13 11:14:39 +02:00
bunkerity
5bcbb38638 doc - official document started 2021-05-12 17:35:32 +02:00
bunkerity
ca660b2501 init work on official doc 2021-05-12 12:28:01 +02:00
bunkerity
3a34436cd8 add AquaeAtrae example for ROOT_SITE_SUBFOLDER 2021-05-12 12:07:29 +02:00
bunkerity
b1d03cd11c performance - move bad user-agents and referrers checks from nginx to LUA with caching 2021-05-11 15:30:16 +02:00
bunkerity
42c3fb8740 add sandbox allow-downloads to the default value of CONTENT_SECURITY_POLICY 2021-05-11 08:57:23 +02:00
bunkerity
f1c043604a add missing backslash in the quickstart guide and update autoconf examples with the depends_on directive 2021-05-11 08:54:34 +02:00
bunkerity
fd61df205f performance - move external blacklists checks from nginx to LUA 2021-05-10 17:51:07 +02:00
bunkerity
009d6fb5ae choose connection and nofile numbers, increase error_log level to get modsecurity rules, add MODSECURITY_SEC_AUDIT_ENGINE var 2021-05-05 17:38:22 +02:00
bunkerity
ba4185a42e jobs - fix automatic reload 2021-05-03 14:18:10 +02:00
bunkerity
70976d0fbc fix user-agent not blocking and add documentation on bundle when USE_CUSTOM_HTTPS=yes 2021-05-03 13:59:55 +02:00
bunkerity
062a39c63a integrate AquaeAtrae work - add ROOT_SITE_SUBFOLDER 2021-05-03 10:31:37 +02:00
bunkerity
83841b290a jobs - edit adren work on external blacklists 2021-05-02 16:14:13 +02:00
Bunkerity
10dc58cb6d Merge pull request #126 from adren/patch-6
deduplicate list of user-agents
2021-05-02 15:14:10 +02:00
Bunkerity
668754686c Merge pull request #125 from adren/patch-5
more optimized way to generate map referrer file
2021-05-02 15:13:17 +02:00
Bunkerity
84b1933f63 Merge pull request #124 from adren/patch-4
improve the generation of blocking file (abusers)
2021-05-02 15:12:19 +02:00
Bunkerity
15f6d0a32a Merge pull request #123 from adren/patch-3
improve generation of block file (Tor exit nodes)
2021-05-02 15:11:27 +02:00
Bunkerity
e628361a89 Merge pull request #122 from adren/patch-1
huge improvement to generate blocking file
2021-05-02 15:10:52 +02:00
Cyril Chaboisseau
f8d71e067e improved way to generate user-agent file 2021-05-01 19:04:18 +02:00
Cyril Chaboisseau
02ae3b6bd3 change IFS before subshell
There needs to be a change in IFS before the 2 curl commands in order to keep line by line formatting
2021-05-01 15:48:33 +02:00
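A minimal sketch of the behaviour this commit addresses, assuming a hypothetical list URL and output path (the real scripts differ): with the default IFS, an unquoted expansion of the curl output is split on spaces as well as newlines, so entries containing spaces lose their line-by-line formatting.
#!/bin/bash
# hypothetical example: restrict word splitting to newlines before looping over curl output
OLD_IFS="$IFS"
IFS=$'\n'
for ua in $(curl -s https://example.com/bad-user-agents.list) ; do
    # each $ua is now a whole line, even if the user-agent contains spaces
    echo "\"~*${ua}\" yes;" >> /tmp/map-user-agents.conf
done
IFS="$OLD_IFS"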
Cyril Chaboisseau
2fb0e7c473 deduplicate list of user-agents 2021-05-01 15:08:52 +02:00
Cyril Chaboisseau
9adcc2f1a7 more optimized way to generate map referrer file 2021-05-01 14:51:28 +02:00
Cyril Chaboisseau
7b98db4d14 improve the generation of blocking file (abusers) 2021-05-01 12:29:15 +02:00
Cyril Chaboisseau
ddb2b85916 improve generation of block file (Tor exit nodes) 2021-05-01 12:25:43 +02:00
Cyril Chaboisseau
da1a460a64 huge improvement to generate blocking file
process the file in 2 commands (grep + sed) instead of a loop running on each line
the time to generate the file takes 0.235 seconds instead of one hour, making it roughly 15,000 times quicker
the output file is exactly the same as with the former method
2021-05-01 11:42:07 +02:00
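A minimal sketch of the single-pass technique described above, assuming a hypothetical abusers.list input (one IP per line, # comments) and an nginx-style deny output; the real script and file formats may differ.
#!/bin/bash
# old approach (slow): one shell iteration per line
# while read -r ip ; do echo "deny $ip;" >> /tmp/block-abusers.conf ; done < /tmp/abusers.list
# single-pass approach: grep drops comments and blank lines, sed rewrites every remaining line at once
grep -vE '^(#|$)' /tmp/abusers.list | sed 's/^\(.*\)$/deny \1;/' > /tmp/block-abusers.conf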
bunkerity
07be626842 hotfix - fix API in autoconf swarm mode 2021-04-28 17:40:54 +02:00
bunkerity
3bb164395e hotfix - move API_WHITELIST_IP edit to lua.sh 2021-04-28 17:00:50 +02:00
bunkerity
bc2568a172 v1.2.4 - nginx 1.20.0 support 2021-04-27 17:43:38 +02:00
Bunkerity
5ec74880d8 update README for v1.2.4 2021-04-27 17:40:33 +02:00
bunkerity
f84fd7c9a2 fix permissions issues for autoconf and fix volume for ghost example 2021-04-27 16:49:45 +02:00
bunkerity
6521d7a27a fix client cache so it works in combination with reverse proxy and examples update 2021-04-27 15:31:56 +02:00
bunkerity
813607fbc3 improve crowdsec example and disable modsec logging when not necessary 2021-04-27 11:21:30 +02:00
bunkerity
843644f806 log - replace some WARN tags from LUA logs with NOTICE to avoid confusion 2021-04-27 09:57:07 +02:00
bunkerity
19fa0eb25f log - print modsec_audit.log to make debugging easier 2021-04-27 09:46:40 +02:00
bunkerity
b4df287228 log - send logs to remote syslog server 2021-04-27 09:30:10 +02:00
florian
5ce41edc03 api - whitelist IP/network for API 2021-04-26 22:22:34 +02:00
florian
a3cfb50b4d example - fix certbot wildcard 2021-04-26 21:34:18 +02:00
bunkerity
25494acace example - wildcard certificate with certbot 2021-04-26 17:44:48 +02:00
bunkerity
a98dae1fb6 fix CVE-2021-20205 and examples update 2021-04-26 17:00:23 +02:00
bunkerity
1a7abab570 nginx 1.20.0 support 2021-04-26 14:59:12 +02:00
florian
42b7a57f01 fix autoconf bug when removing config with multiple server name and increase default LIMIT_CONN_MAX for average website with HTTP2 2021-04-26 11:39:12 +02:00
bunkerity
02f9fbe5fc autoconf - fix certbot bug when multiple server_name for one service 2021-04-20 11:46:53 +02:00
bunkerity
69fe066777 autoconf - fix bug when multiple server_name for one service 2021-04-20 10:00:25 +02:00
bunkerity
74417abc9c fixing bugs - run as GID 101 instead of 0, different permissions checks in swarm mode and disable including server confs in swarm mode 2021-04-16 16:56:45 +02:00
bunkerity
ba7524a419 fixed LUA bug 2021-04-13 17:27:52 +02:00
bunkerity
b55aafb997 finding the LUA bug 2021-04-13 17:01:27 +02:00
Bunkerity
deeb7a76a2 Merge pull request #117 from thelittlefireman/patch-9
Fix lua mistake
2021-04-13 16:49:45 +02:00
thelittlefireman
ee8aaa4e7e fix lua crash 2 2021-04-11 15:45:46 +02:00
thelittlefireman
605d59a45c Fix lua mistake
#116
2021-04-11 15:33:31 +02:00
bunkerity
b85c991b6e bug fixes - /usr/local/lib/lua rights and syntax error in site-config 2021-04-09 17:40:19 +02:00
bunkerity
0d3658adf0 REVERSE_PROXY_HEADERS - use proxy_set_header instead of more_set_headers 2021-04-09 17:27:22 +02:00
bunkerity
0b22209c96 documentation - userns remap feature 2021-04-09 16:22:31 +02:00
bunkerity
e44a1f3e14 added the uri to limit_req_zone key to limit bruteforce attack on a specific resource instead of the whole service 2021-04-09 15:54:26 +02:00
bunkerity
aa614f82f9 print error when permissions are wrong on common volumes 2021-04-09 14:54:15 +02:00
bunkerity
c03d410b0a refactored whitelisting of user-agents 2021-04-09 14:23:52 +02:00
bunkerity
e190167bfc CIDR support with whitelist/blacklist IP 2021-04-09 14:10:17 +02:00
bunkerity
31e72dce1c fix /usr/local/lib/lua rights and multiple server_name support with autoconf 2021-04-09 11:37:13 +02:00
bunkerity
b8105fc558 feature - whitelist URI 2021-04-09 10:31:00 +02:00
bunkerity
e73c10fd80 crowdsec - fix permissions on /usr/local/lib/lua and on /var/log files 2021-04-09 10:01:23 +02:00
bunkerity
a122a259c0 minor fix on AutoConf logs and auto disable etag with reverse proxy 2021-04-09 09:51:17 +02:00
bunkerity
7c4894d3b8 autoconf - fix remove event, generate config from nginx vars, more logs 2021-03-26 15:18:35 +01:00
bunkerity
533c2a1034 fix sed script when writing site env 2021-03-22 09:38:36 +01:00
bunkerity
5611d544d6 remove reference to USE_PHP 2021-03-19 09:38:44 +01:00
bunkerity
397182f18d add link to twitter account 2021-03-18 18:11:52 +01:00
bunkerity
c5c5fb17b5 v1.2.3 - swarm support 2021-03-18 18:08:42 +01:00
bunkerity
017a7780fb README update, default cron update and new parameters to ui 2021-03-18 17:11:58 +01:00
bunkerity
34d9db7a8b web ui - bug fixes 2021-03-18 12:34:46 +01:00
bunkerity
361c66ca61 fixed bugs with MULTISITE variables and swarm example 2021-03-18 10:29:37 +01:00
bunkerity
afc6678855 road to v1.2.3 - fixing bugs 2021-03-17 17:55:56 +01:00
bunkerity
c40fb33175 road to swarm - automatic reload after jobs 2021-03-17 12:16:56 +01:00
bunkerity
93ad3c0b51 road to swarm - let's encrypt fix 2021-03-17 10:37:20 +01:00
bunkerity
ceed904882 road to swarm - still some mess to fix 2021-03-16 17:56:24 +01:00
Bunkerity
b8027d2bac Merge pull request #102 from thelittlefireman/proxy_custom_headers
[NEED TESTING] Enhancement add custom proxy headers #97
2021-03-16 10:08:05 +01:00
Bunkerity
8d03a14a6a Merge pull request #103 from thelittlefireman/fix_truncated_3
Fix truncated 3
2021-03-16 10:06:05 +01:00
thelittlefireman
d16f4517a4 Enhancement add custom proxy headers #97 2021-03-15 23:24:58 +01:00
thelittlefireman
89ca91b3ff Fix truncated variables (last commit) 2021-03-15 22:54:30 +01:00
bunkerity
6a714e2ece road to swarm - fix race condition on initial configuration 2021-03-14 16:50:08 +01:00
bunkerity
0d3da03534 prepare /www directory, fix log socket path and whitelist acme challenges path 2021-03-14 12:33:59 +01:00
bunkerity
33163f65b3 init work on disabling root processes 2021-03-13 22:52:23 +01:00
bunkerity
a2543384cd road to swarm - add openssl to autoconf, fix api_uri in LUA, fix file rights 2021-03-13 15:28:15 +01:00
bunkerity
3591715f21 road to swarm - fixing things 2021-03-12 17:31:26 +01:00
bunkerity
95f7ca5b2d road to swarm support - needs a lot of testing 2021-03-12 15:17:45 +01:00
bunkerity
816fa47cbb introducing SWARM_MODE env var 2021-03-12 12:40:52 +01:00
Bunkerity
7756c2df3c Merge pull request #98 from mromanelli9/fix/readme
Fix README
2021-03-12 10:44:06 +01:00
bunkerity
7509ec2f2c basic API to be used in swarm mode 2021-03-12 10:42:31 +01:00
Marco Romanelli
6e93575e16 remove ALLOWALL from X_FRAME_OPTIONS options 2021-03-11 14:41:23 +01:00
Marco Romanelli
ba4c977550 remove old anchor 2021-03-11 11:49:46 +01:00
bunkerity
781e4c8cbb autoconf little work on swarm support 2021-03-10 17:24:02 +01:00
bunkerity
e04c783d1e autoconf - init work on swarm mode 2021-03-09 17:33:22 +01:00
bunkerity
e12b656bd5 Merge branch 'patch-7' of https://github.com/thelittlefireman/bunkerized-nginx into dev 2021-03-08 14:06:54 +01:00
bunkerity
cae05447d3 custom crontab values 2021-03-08 13:58:14 +01:00
bunkerity
4b58e22657 Merge branch 'patch-5' of https://github.com/thelittlefireman/bunkerized-nginx into dev 2021-03-08 13:52:35 +01:00
bunkerity
6b56e21a09 Merge branch 'whitelist_ua' of https://github.com/thelittlefireman/bunkerized-nginx into dev 2021-03-08 13:46:28 +01:00
bunkerity
544a09e8da Update lua-cs-bouncer 2021-03-08 12:02:56 +01:00
bunkerity
8386dd4a2a custom config outside server block 2021-03-08 11:53:11 +01:00
root
f052a25168 Merge branch 'pre_server_confs' of https://github.com/thelittlefireman/bunkerized-nginx into dev 2021-03-08 11:47:45 +01:00
Bunkerity
43750f5536 Merge pull request #73 from thelittlefireman/patch-4
Add missing reverse proxy header (X-Forwarded-Host)
2021-03-08 10:16:31 +01:00
Bunkerity
9142afdb54 Merge pull request #72 from thelittlefireman/patch-3
Fix #71 - limit connection by IP
2021-03-08 10:15:14 +01:00
thelittlefireman
66c4fed791 Fix env variable with space are truncated 2
Fix #82
2021-03-05 23:59:38 +01:00
thelittlefireman
f41846e9d6 Fix env variable with space are truncated
Fix #82
2021-03-05 23:56:19 +01:00
thelittlefireman
92cc705b92 Reduce memory usage : set cron tasks at different hours. 2021-03-03 13:02:56 +01:00
thelittlefireman
47fb3a05b3 Upgrade crowdsecurity/lua-cs-bouncer
Upgrade crowdsecurity/lua-cs-bouncer to latest version to include commit dcfba46ccd
2021-03-03 10:01:04 +01:00
thelittlefireman
5940f402c7 improve default tls security 2021-02-28 23:59:22 +01:00
thelittlefireman
d9ca275d53 Add before server {} config. 2021-02-03 14:17:20 +01:00
thelittlefireman
8353bd9c85 Allow to add a whitelist by site on user-agent 2021-02-03 13:51:15 +01:00
thelittlefireman
d902e2f297 Add last missing reverse proxy header
X-Forwarded-Host (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-Host)
2021-01-03 16:28:51 +01:00
thelittlefireman
1a8b8043c8 Add LIMIT_CONN var to server.conf 2021-01-02 14:26:52 +01:00
thelittlefireman
65120a7e97 Add USE_CONN_LIMIT info to Readme.md
and fix small typo
2021-01-02 14:25:34 +01:00
thelittlefireman
b093a47554 Add default values for LIMIT_CONN 2021-01-02 14:18:26 +01:00
thelittlefireman
73dbf03c9a add USE_LIMIT_CONN zone to global config 2021-01-02 14:15:18 +01:00
thelittlefireman
6ee746236a Add USE_LIMIT_CONN to site-config 2021-01-02 14:11:36 +01:00
thelittlefireman
fa935eb6e3 edit nginx.conf to add limit_conn 2021-01-02 14:04:34 +01:00
thelittlefireman
cf231e13cb Add limit-conn.conf 2021-01-02 13:35:18 +01:00
bunkerity
d5d699252c v1.2.2 - web UI (beta) 2020-12-30 21:22:18 +01:00
bunkerity
50f95420b5 README update - road to v1.2.2 2020-12-30 17:57:00 +01:00
bunkerity
dc382c3e04 various fixes - autoconf process order, multisite config and examples 2020-12-30 16:22:10 +01:00
bunkerity
0026328f25 edit default FAIL2BAN_IGNOREIP subnets 2020-12-30 14:31:16 +01:00
Bunkerity
9023ab5aed Merge pull request #67 from thelittlefireman/patch-2
Fix #66
2020-12-30 14:28:34 +01:00
thelittlefireman
124474ad66 Edit README.md to add FAIL2BAN_IGNOREIP 2020-12-29 03:47:41 +01:00
thelittlefireman
eac9c8f513 Prepare FAIL2BAN_IGNOREIP to avoid self blocking 2020-12-29 03:43:38 +01:00
thelittlefireman
1ee490de6d Prepare FAIL2BAN_IGNOREIP to avoid self blocking 2020-12-29 03:41:27 +01:00
bunkerity
825e6a747e crowdsec v1 integrated 2020-12-28 21:41:30 +01:00
bunkerity
09a984c86b started crowdsec v1 integration 2020-12-28 18:42:20 +01:00
bunkerity
fd7afa17b3 fix missing ';' in include 2020-12-28 11:43:43 +01:00
Bunkerity
b9b7fdfcc4 Merge pull request #63 from thelittlefireman/patch-1
Fix missing proxy headers
2020-12-28 11:38:07 +01:00
bunkerity
58e1d66bc7 UI - minor alert css fix 2020-12-28 11:37:13 +01:00
bunkerity
7026643f8a UI - fix missing MULTISITE env var when managing services 2020-12-28 10:20:29 +01:00
bunkerity
06f688fe97 fixed stop and reload operations 2020-12-27 22:25:07 +01:00
bunkerity
c65b78b1cc UI - instances/services backend update (needs testing) 2020-12-27 17:04:59 +01:00
bunkerity
f9b9b9546f UI - introduced multiple config parameters (like reverse proxy) in frontend 2020-12-27 14:42:52 +01:00
bunkerity
b5fe6335c7 UI - instances backend started 2020-12-24 14:55:11 +01:00
bunkerity
951f3957fd UI - default service values 2020-12-24 11:36:19 +01:00
bunkerity
0f520b8914 UI - services backend started 2020-12-23 22:29:50 +01:00
bunkerity
569ad75c42 UI - config.json refactoring 2020-12-23 11:31:37 +01:00
bunkerity
bd7b6af668 UI - load config template from json 2020-12-22 22:35:15 +01:00
bunkerity
459bb8ea1c UI services modals and default CSP update (fix new tab links) 2020-12-22 11:42:49 +01:00
bunkerity
208b5acb30 UI - minor services list improvement 2020-12-21 17:34:45 +01:00
bunkerity
59b2fed416 UI - basic services list 2020-12-21 15:32:15 +01:00
thelittlefireman
a4871a915e Add missing proxy headers 2020-12-20 16:21:01 +01:00
thelittlefireman
026783f018 Fix missing reverse proxy headers 2020-12-20 16:19:27 +01:00
thelittlefireman
8115853453 Fix missing proxy headers on site-config.sh 2020-12-20 16:16:26 +01:00
bunkerity
c5f283b00e UI - minor front update 2020-12-18 17:23:23 +01:00
bunkerity
03ce7a6483 fix modsec double inclusion when MULTISITE=yes 2020-12-18 10:40:05 +01:00
bunkerity
3f7e2c54b3 JOBS - fixed some job script and right temp nginx reload 2020-12-16 18:56:11 +01:00
bunkerity
bb0f46d8af JOBS - fix job_log 2020-12-16 16:06:36 +01:00
bunkerity
c5b32dfc4c fix CVE-2020-1971 again 2020-12-16 15:47:02 +01:00
bunkerity
9a4f96ad18 fix CVE-2020-1971 2020-12-16 15:40:38 +01:00
bunkerity
f258426f55 JOBS - fallback to old conf in case reload failed 2020-12-16 15:22:49 +01:00
bunkerity
119e963612 JOBS - be more verbose about jobs failure/success 2020-12-16 11:43:41 +01:00
Bunkerity
373988670a Merge pull request #54 from thelittlefireman/patch-4
Fix #52
2020-12-16 10:04:21 +01:00
thelittlefireman
2a956f2cd3 Fix #52
Fix #52
2020-12-13 12:39:46 +01:00
bunkerity
15a37a8682 UI - minor UI improvement 2020-12-12 17:28:45 +01:00
bunkerity
3a3d527907 UI - basic read fixes 2020-12-11 17:03:43 +01:00
bunkerity
e6b5f460c9 UI - basic read from docker API 2020-12-11 15:17:18 +01:00
bunkerity
002e3ed2ba security tests for autoconf and ui 2020-12-11 11:49:22 +01:00
bunkerity
7b55acbe8b web UI example and CVE-2020-8231 fix again 2020-12-11 11:44:45 +01:00
bunkerity
559b7835d4 ui - automated build 2020-12-11 10:52:44 +01:00
bunkerity
4ea01bd93f print some logs when blocking bots 2020-12-10 22:36:32 +01:00
bunkerity
a73891a3b8 fix CVE-2020-8231 2020-12-10 21:42:24 +01:00
bunkerity
26199f52c8 remove additional / in modsecurity include 2020-12-10 21:32:44 +01:00
bunkerity
5c3f94a84f edit reverse proxy var name in README 2020-12-10 21:25:39 +01:00
bunkerity
043fcdc136 autoconf - automated build 2020-12-09 18:30:12 +01:00
bunkerity
b86ded3d1c autoconf - multi arch Dockerfile 2020-12-09 17:36:39 +01:00
bunkerity
92569679b6 dynamic reload of nginx by sending SIGHUP 2020-12-09 17:00:09 +01:00
bunkerity
15e74e4860 more work on standalone autoconf 2020-12-09 12:00:54 +01:00
bunkerity
fd0a6412d0 init work on standalone autoconf 2020-12-08 23:27:23 +01:00
bunkerity
419fdfc86e fix auth basic when MULTISITE=yes 2020-12-08 11:29:43 +01:00
bunkerity
0bc1f652b4 v1.2.1 - autoconf feature (beta) 2020-12-07 21:20:13 +01:00
bunkerity
6c7461e298 integrate thelittlefireman work 2020-12-07 17:09:31 +01:00
bunkerity
d01bc5e014 Merge branch 'patch-1' of https://github.com/thelittlefireman/bunkerized-nginx into dev 2020-12-07 17:08:12 +01:00
bunkerity
75c69c8105 last fixes before next release ? 2020-12-07 16:53:00 +01:00
thelittlefireman
e26b8482aa Add missing EMAIL_LETS_ENCRYPT parameter 2020-12-07 11:31:23 +01:00
bunkerity
f618c73e6c road to v1.2.1 2020-12-06 22:22:58 +01:00
bunkerity
78c1e5c676 examples - same domains for internal tests 2020-12-05 21:39:48 +01:00
bunkerity
481e10d3ef reverse proxy - websocket example 2020-12-05 16:43:14 +01:00
bunkerity
aae2a71983 autoconf - php example 2020-12-05 16:30:50 +01:00
bunkerity
f3bf04e390 dirty fix to disable default server when MULTISITE=yes 2020-12-05 16:07:40 +01:00
bunkerity
36cbb927c0 autoconf - various fixes 2020-12-05 11:06:38 +01:00
bunkerity
95153dbc5d moved UA, referrer and country check after whitelist and blacklist check 2020-12-04 22:58:48 +01:00
bunkerity
26947179a4 moved UA and referrer check to LUA 2020-12-04 22:21:38 +01:00
bunkerity
88f27bfeb8 autoconf - reverse proxy example and pass default vars 2020-12-04 22:06:15 +01:00
bunkerity
3cc1615c4d fix user-agent script 2020-12-04 21:29:04 +01:00
bunkerity
8bacf722a6 Merge branch 'fix/variable-naming' of https://github.com/mromanelli9/bunkerized-nginx into dev 2020-12-04 17:02:23 +01:00
bunkerity
2bfc4b41fa first work on automatic configuration 2020-12-04 16:55:09 +01:00
Marco Romanelli
587d4a92eb incorrect variable naming 2020-12-04 16:38:48 +01:00
bunkerity
c311d0c825 add crawler-detecter bad UA 2020-12-04 10:09:05 +01:00
bunkerity
0d03f49ebc websocket support with reverse proxy 2020-12-04 09:53:40 +01:00
bunkerity
2112c306a8 custom log format 2020-12-02 16:46:54 +01:00
bunkerity
8f9dcc5ab8 last fix ? 2020-12-02 14:47:08 +01:00
bunkerity
2fe05d3fd3 fixing scripts again and again 2020-12-02 14:31:54 +01:00
bunkerity
db04c0345c fix referrers again 2020-12-02 13:49:48 +01:00
bunkerity
ed8bd902b1 fix referrers script 2020-12-02 11:09:38 +01:00
bunkerity
3a7aa5d9c0 block bad referrers 2020-12-02 10:41:50 +01:00
bunkerity
9ec9de6ca2 multiple lets encrypt certificates when MULTISITE=yes 2020-12-02 10:17:55 +01:00
bunkerity
791342cbe6 fix LUA DNS code when answers is nil 2020-12-02 10:00:16 +01:00
bunkerity
2f23671c3b fail2ban fix when MULTISITE=yes 2020-12-02 09:36:56 +01:00
bunkerity
e350a717ff fix default DNS_RESOLVERS 2020-12-02 09:32:32 +01:00
bunkerity
e818acb0d1 prestashop example 2020-11-29 16:50:53 +01:00
bunkerity
b92f74ed98 dirty fix for CVE-2020-28928 2020-11-29 15:30:12 +01:00
bunkerity
9688e66508 check all vulnerabilities with trivy 2020-11-29 15:10:08 +01:00
bunkerity
700dfc0184 v1.2.0 release 2020-11-23 00:05:22 +01:00
bunkerity
42e4298b5c readme update - v1.2.0 changes 2020-11-22 23:39:01 +01:00
bunkerity
813b42cfa9 php and nextcloud examples fix 2020-11-22 17:38:07 +01:00
bunkerity
58fcf0a725 added Permissions-Policy header 2020-11-21 16:41:27 +01:00
bunkerity
5879183802 custom headers to remove 2020-11-21 16:21:54 +01:00
bunkerity
2032596880 automatic trivy scan 2020-11-21 15:54:52 +01:00
bunkerity
eaf817d57a php config and examples fixes 2020-11-18 15:21:08 +01:00
bunkerity
dd7768c856 whitelist/blacklist country at LUA level to avoid SEO issues 2020-11-18 11:37:42 +01:00
bunkerity
fe1d724c9f country whitelist/blacklist 2020-11-18 11:21:25 +01:00
bunkerity
0635eb368b various bug fixes 2020-11-15 20:49:43 +01:00
bunkerity
fbf81c94be cached blacklists data 2020-11-15 15:43:41 +01:00
bunkerity
ed451877ae examples update and multiple REVERSE_PROXY_* on single site 2020-11-15 14:55:48 +01:00
bunkerity
0f18e9c552 reverse proxy support via env vars 2020-11-14 17:30:38 +01:00
bunkerity
8f7cb5318e proxy caching support 2020-11-14 16:58:52 +01:00
bunkerity
60fbbc1013 move some http directives to server 2020-11-14 14:19:27 +01:00
bunkerity
0f0593456c various fixes 2020-11-13 17:57:39 +01:00
bunkerity
8cdc155ac0 multisite examples and certbot renew fix 2020-11-13 15:10:29 +01:00
bunkerity
1abe1da89e brotli support 2020-11-12 15:03:45 +01:00
bunkerity
f18c054b42 gzip support 2020-11-12 14:37:01 +01:00
bunkerity
4dea1975e2 client caching 2020-11-12 14:02:48 +01:00
bunkerity
c2b05c463c fix BLOCK_COUNTRY bug and add support for ModSecurity custom confs when multisite=yes 2020-11-11 22:36:22 +01:00
bunkerity
2da51d92a6 multisite - bug fixes 2020-11-11 16:54:27 +01:00
bunkerity
bd7997497b autotest through github actions 2020-11-10 15:25:49 +01:00
bunkerity
e89e34a84f auto test fix 2020-11-08 22:08:50 +01:00
bunkerity
ff02878dd8 auto test setup 2020-11-08 21:59:19 +01:00
bunkerity
44b016be93 road to multi server block support 2020-11-08 17:37:48 +01:00
bunkerity
36c4f3e065 v1.1.2 - CrowdSec integration and custom ports 2020-11-06 22:49:18 +01:00
bunkerity
798f6c726d examples - nextcloud fix and tomcat 2020-11-06 22:24:34 +01:00
bunkerity
761c14a0b8 custom HTTP and HTTPS ports 2020-11-06 17:11:27 +01:00
bunkerity
4a07eca696 crowdsec integration 2020-11-06 16:56:16 +01:00
bunkerity
e1274a6082 passbolt example 2020-11-04 11:16:26 +01:00
Luka TK
3ec81cd849 Fix broken line in README 2020-11-01 22:52:55 +01:00
255 changed files with 18546 additions and 1805 deletions

17
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,17 @@
---
name: Bug report
about: Something is not working as expected
title: "[BUG]"
labels: bug
assignees: ''
---
**Description**
Concise description of what you're trying to do, the expected behavior and the current bug.
**How to reproduce**
Give steps on how to reproduce the bug (e.g. : commands, configs, tests, environment, version, ...).
**Logs**
The logs generated by bunkerized-nginx. **DON'T FORGET TO REMOVE PRIVATE DATA LIKE IP ADDRESSES !**


@@ -0,0 +1,26 @@
name: Automatic test on autoconf
on:
push:
branches: [dev, master]
pull_request:
branches: [dev, master]
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout source code
uses: actions/checkout@v2
- name: Build the image
run: docker build -t autotest-autoconf -f autoconf/Dockerfile .
- name: Run Trivy security scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: 'autotest-autoconf'
format: 'table'
exit-code: '1'
ignore-unfixed: true
severity: 'UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL'


@@ -0,0 +1,26 @@
name: Automatic test on ui
on:
push:
branches: [dev, master]
pull_request:
branches: [dev, master]
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout source code
uses: actions/checkout@v2
- name: Build the image
run: docker build -t autotest-ui -f ui/Dockerfile .
- name: Run Trivy security scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: 'autotest-ui'
format: 'table'
exit-code: '1'
ignore-unfixed: true
severity: 'UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL'


@@ -0,0 +1,28 @@
name: Automatic test
on:
push:
branches: [dev, master]
pull_request:
branches: [dev, master]
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout source code
uses: actions/checkout@v2
- name: Build the image
run: docker build -t autotest .
- name: Run autotest
run: docker run autotest test
- name: Run Trivy security scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: 'autotest'
format: 'table'
exit-code: '1'
ignore-unfixed: true
severity: 'UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL'
skip-dirs: '/usr/lib/go'

2
.gitignore vendored Normal file

@@ -0,0 +1,2 @@
.idea/
docs/_build/


@@ -1,4 +1,4 @@
FROM nginx:stable-alpine
FROM nginx:1.20.0-alpine
COPY nginx-keys/ /tmp/nginx-keys
COPY compile.sh /tmp/compile.sh
@@ -6,24 +6,28 @@ RUN chmod +x /tmp/compile.sh && \
/tmp/compile.sh && \
rm -rf /tmp/*
COPY entrypoint.sh /opt/entrypoint.sh
COPY dependencies.sh /tmp/dependencies.sh
RUN chmod +x /tmp/dependencies.sh && \
/tmp/dependencies.sh && \
rm -rf /tmp/dependencies.sh
COPY entrypoint/ /opt/entrypoint
COPY confs/ /opt/confs
COPY scripts/ /opt/scripts
COPY fail2ban/ /opt/fail2ban
COPY logs/ /opt/logs
COPY lua/ /opt/lua
RUN apk --no-cache add certbot libstdc++ libmaxminddb geoip pcre yajl fail2ban clamav apache2-utils rsyslog openssl lua libgd && \
chmod +x /opt/entrypoint.sh /opt/scripts/* && \
mkdir /opt/entrypoint.d && \
rm -f /var/log/nginx/* && \
chown root:nginx /var/log/nginx && \
chmod 750 /var/log/nginx && \
touch /var/log/nginx/error.log /var/log/nginx/modsec_audit.log && \
chown nginx:nginx /var/log/nginx/*.log
COPY prepare.sh /tmp/prepare.sh
RUN chmod +x /tmp/prepare.sh && \
/tmp/prepare.sh && \
rm -f /tmp/prepare.sh
VOLUME /www /http-confs /server-confs /modsec-confs /modsec-crs-confs
# fix CVE-2021-20205
RUN apk add "libjpeg-turbo>=2.1.0-r0"
VOLUME /www /http-confs /server-confs /modsec-confs /modsec-crs-confs /cache /pre-server-confs /acme-challenge
EXPOSE 8080/tcp 8443/tcp
ENTRYPOINT ["/opt/entrypoint.sh"]
USER nginx:nginx
ENTRYPOINT ["/opt/entrypoint/entrypoint.sh"]


@@ -1,4 +1,4 @@
FROM amd64/nginx:stable-alpine
FROM amd64/nginx:1.20.0-alpine
COPY nginx-keys/ /tmp/nginx-keys
COPY compile.sh /tmp/compile.sh
@@ -6,24 +6,28 @@ RUN chmod +x /tmp/compile.sh && \
/tmp/compile.sh && \
rm -rf /tmp/*
COPY entrypoint.sh /opt/entrypoint.sh
COPY dependencies.sh /tmp/dependencies.sh
RUN chmod +x /tmp/dependencies.sh && \
/tmp/dependencies.sh && \
rm -rf /tmp/dependencies.sh
COPY entrypoint/ /opt/entrypoint
COPY confs/ /opt/confs
COPY scripts/ /opt/scripts
COPY fail2ban/ /opt/fail2ban
COPY logs/ /opt/logs
COPY lua/ /opt/lua
RUN apk --no-cache add certbot libstdc++ libmaxminddb geoip pcre yajl fail2ban clamav apache2-utils rsyslog openssl lua libgd && \
chmod +x /opt/entrypoint.sh /opt/scripts/* && \
mkdir /opt/entrypoint.d && \
rm -f /var/log/nginx/* && \
chown root:nginx /var/log/nginx && \
chmod 750 /var/log/nginx && \
touch /var/log/nginx/error.log /var/log/nginx/modsec_audit.log && \
chown nginx:nginx /var/log/nginx/*.log
COPY prepare.sh /tmp/prepare.sh
RUN chmod +x /tmp/prepare.sh && \
/tmp/prepare.sh && \
rm -f /tmp/prepare.sh
VOLUME /www /http-confs /server-confs /modsec-confs /modsec-crs-confs
# fix CVE-2021-20205
RUN apk add "libjpeg-turbo>=2.1.0-r0"
VOLUME /www /http-confs /server-confs /modsec-confs /modsec-crs-confs /cache /pre-server-confs /acme-challenge
EXPOSE 8080/tcp 8443/tcp
ENTRYPOINT ["/opt/entrypoint.sh"]
USER nginx:nginx
ENTRYPOINT ["/opt/entrypoint/entrypoint.sh"]


@@ -3,7 +3,7 @@ FROM alpine AS builder
ENV QEMU_URL https://github.com/balena-io/qemu/releases/download/v4.0.0%2Bbalena2/qemu-4.0.0.balena2-arm.tar.gz
RUN apk add curl && curl -L ${QEMU_URL} | tar zxvf - -C . --strip-components 1
FROM arm32v7/nginx:stable-alpine
FROM arm32v7/nginx:1.20.0-alpine
COPY --from=builder qemu-arm-static /usr/bin
@@ -13,24 +13,28 @@ RUN chmod +x /tmp/compile.sh && \
/tmp/compile.sh && \
rm -rf /tmp/*
COPY entrypoint.sh /opt/entrypoint.sh
COPY dependencies.sh /tmp/dependencies.sh
RUN chmod +x /tmp/dependencies.sh && \
/tmp/dependencies.sh && \
rm -rf /tmp/dependencies.sh
COPY entrypoint/ /opt/entrypoint
COPY confs/ /opt/confs
COPY scripts/ /opt/scripts
COPY fail2ban/ /opt/fail2ban
COPY logs/ /opt/logs
COPY lua/ /opt/lua
RUN apk --no-cache add certbot libstdc++ libmaxminddb geoip pcre yajl fail2ban clamav apache2-utils rsyslog openssl lua libgd && \
chmod +x /opt/entrypoint.sh /opt/scripts/* && \
mkdir /opt/entrypoint.d && \
rm -f /var/log/nginx/* && \
chown root:nginx /var/log/nginx && \
chmod 750 /var/log/nginx && \
touch /var/log/nginx/error.log /var/log/nginx/modsec_audit.log && \
chown nginx:nginx /var/log/nginx/*.log
COPY prepare.sh /tmp/prepare.sh
RUN chmod +x /tmp/prepare.sh && \
/tmp/prepare.sh && \
rm -f /tmp/prepare.sh
VOLUME /www /http-confs /server-confs /modsec-confs /modsec-crs-confs
# fix CVE-2021-20205
RUN apk add "libjpeg-turbo>=2.1.0-r0"
VOLUME /www /http-confs /server-confs /modsec-confs /modsec-crs-confs /cache /pre-server-confs /acme-challenge
EXPOSE 8080/tcp 8443/tcp
ENTRYPOINT ["/opt/entrypoint.sh"]
USER nginx:nginx
ENTRYPOINT ["/opt/entrypoint/entrypoint.sh"]


@@ -3,7 +3,7 @@ FROM alpine AS builder
ENV QEMU_URL https://github.com/balena-io/qemu/releases/download/v4.0.0%2Bbalena2/qemu-4.0.0.balena2-aarch64.tar.gz
RUN apk add curl && curl -L ${QEMU_URL} | tar zxvf - -C . --strip-components 1
FROM arm64v8/nginx:stable-alpine
FROM arm64v8/nginx:1.20.0-alpine
COPY --from=builder qemu-aarch64-static /usr/bin
@@ -13,24 +13,28 @@ RUN chmod +x /tmp/compile.sh && \
/tmp/compile.sh && \
rm -rf /tmp/*
COPY entrypoint.sh /opt/entrypoint.sh
COPY dependencies.sh /tmp/dependencies.sh
RUN chmod +x /tmp/dependencies.sh && \
/tmp/dependencies.sh && \
rm -rf /tmp/dependencies.sh
COPY entrypoint/ /opt/entrypoint
COPY confs/ /opt/confs
COPY scripts/ /opt/scripts
COPY fail2ban/ /opt/fail2ban
COPY logs/ /opt/logs
COPY lua/ /opt/lua
RUN apk --no-cache add certbot libstdc++ libmaxminddb geoip pcre yajl fail2ban clamav apache2-utils rsyslog openssl lua libgd && \
chmod +x /opt/entrypoint.sh /opt/scripts/* && \
mkdir /opt/entrypoint.d && \
rm -f /var/log/nginx/* && \
chown root:nginx /var/log/nginx && \
chmod 750 /var/log/nginx && \
touch /var/log/nginx/error.log /var/log/nginx/modsec_audit.log && \
chown nginx:nginx /var/log/nginx/*.log
COPY prepare.sh /tmp/prepare.sh
RUN chmod +x /tmp/prepare.sh && \
/tmp/prepare.sh && \
rm -f /tmp/prepare.sh
VOLUME /www /http-confs /server-confs /modsec-confs /modsec-crs-confs
# fix CVE-2021-20205
RUN apk add "libjpeg-turbo>=2.1.0-r0"
VOLUME /www /http-confs /server-confs /modsec-confs /modsec-crs-confs /cache /pre-server-confs /acme-challenge
EXPOSE 8080/tcp 8443/tcp
ENTRYPOINT ["/opt/entrypoint.sh"]
USER nginx:nginx
ENTRYPOINT ["/opt/entrypoint/entrypoint.sh"]


@@ -1,4 +1,4 @@
FROM i386/nginx:stable-alpine
FROM i386/nginx:1.20.0-alpine
COPY nginx-keys/ /tmp/nginx-keys
COPY compile.sh /tmp/compile.sh
@@ -6,24 +6,28 @@ RUN chmod +x /tmp/compile.sh && \
/tmp/compile.sh && \
rm -rf /tmp/*
COPY entrypoint.sh /opt/entrypoint.sh
COPY dependencies.sh /tmp/dependencies.sh
RUN chmod +x /tmp/dependencies.sh && \
/tmp/dependencies.sh && \
rm -rf /tmp/dependencies.sh
COPY entrypoint/ /opt/entrypoint
COPY confs/ /opt/confs
COPY scripts/ /opt/scripts
COPY fail2ban/ /opt/fail2ban
COPY logs/ /opt/logs
COPY lua/ /opt/lua
RUN apk --no-cache add certbot libstdc++ libmaxminddb geoip pcre yajl fail2ban clamav apache2-utils rsyslog openssl lua libgd && \
chmod +x /opt/entrypoint.sh /opt/scripts/* && \
mkdir /opt/entrypoint.d && \
rm -f /var/log/nginx/* && \
chown root:nginx /var/log/nginx && \
chmod 750 /var/log/nginx && \
touch /var/log/nginx/error.log /var/log/nginx/modsec_audit.log && \
chown nginx:nginx /var/log/nginx/*.log
COPY prepare.sh /tmp/prepare.sh
RUN chmod +x /tmp/prepare.sh && \
/tmp/prepare.sh && \
rm -f /tmp/prepare.sh
VOLUME /www /http-confs /server-confs /modsec-confs /modsec-crs-confs
# fix CVE-2021-20205
RUN apk add "libjpeg-turbo>=2.1.0-r0"
VOLUME /www /http-confs /server-confs /modsec-confs /modsec-crs-confs /cache /pre-server-confs /acme-challenge
EXPOSE 8080/tcp 8443/tcp
ENTRYPOINT ["/opt/entrypoint.sh"]
USER nginx:nginx
ENTRYPOINT ["/opt/entrypoint/entrypoint.sh"]

959
README.md

File diff suppressed because it is too large


@@ -1 +1 @@
1.1.1
1.2.5

131
autoconf/AutoConf.py Normal file

@@ -0,0 +1,131 @@
from Config import Config
import utils
import os
class AutoConf :
def __init__(self, swarm, api) :
self.__swarm = swarm
self.__servers = {}
self.__instances = {}
self.__sites = {}
self.__config = Config(self.__swarm, api)
def get_server(self, id) :
if id in self.__servers :
return self.__servers[id]
return False
def reload(self) :
return self.__config.reload(self.__instances)
def pre_process(self, objs) :
for instance in objs :
(id, name, labels) = self.__get_infos(instance)
if "bunkerized-nginx.AUTOCONF" in labels :
if self.__swarm :
self.__process_instance(instance, "create", id, name, labels)
else :
if instance.status in ("restarting", "running", "created", "exited") :
self.__process_instance(instance, "create", id, name, labels)
if instance.status == "running" :
self.__process_instance(instance, "start", id, name, labels)
for server in objs :
(id, name, labels) = self.__get_infos(server)
if "bunkerized-nginx.SERVER_NAME" in labels :
if self.__swarm :
self.__process_server(server, "create", id, name, labels)
else :
if server.status in ("restarting", "running", "created", "exited") :
self.__process_server(server, "create", id, name, labels)
if server.status == "running" :
self.__process_server(server, "start", id, name, labels)
def process(self, obj, event) :
(id, name, labels) = self.__get_infos(obj)
if "bunkerized-nginx.AUTOCONF" in labels :
self.__process_instance(obj, event, id, name, labels)
elif "bunkerized-nginx.SERVER_NAME" in labels :
self.__process_server(obj, event, id, name, labels)
def __get_infos(self, obj) :
if self.__swarm :
id = obj.id
name = obj.name
labels = obj.attrs["Spec"]["Labels"]
else :
id = obj.id
name = obj.name
labels = obj.labels
return (id, name, labels)
def __process_instance(self, instance, event, id, name, labels) :
if event == "create" :
self.__instances[id] = instance
if self.__swarm and len(self.__instances) == 1 :
if self.__config.initconf(self.__instances) :
utils.log("[*] Initial config succeeded")
else :
utils.log("[!] Initial config failed")
utils.log("[*] bunkerized-nginx instance created : " + name + " / " + id)
elif event == "start" :
self.__instances[id].reload()
utils.log("[*] bunkerized-nginx instance started : " + name + " / " + id)
elif event == "die" :
self.__instances[id].reload()
utils.log("[*] bunkerized-nginx instance stopped : " + name + " / " + id)
elif event == "destroy" or event == "remove" :
del self.__instances[id]
if self.__swarm and len(self.__instances) == 0 :
with open("/etc/crontabs/nginx", "w") as f :
f.write("")
if os.path.exists("/etc/nginx/autoconf") :
os.remove("/etc/nginx/autoconf")
utils.log("[*] bunkerized-nginx instance removed : " + name + " / " + id)
def __process_server(self, instance, event, id, name, labels) :
vars = { k.replace("bunkerized-nginx.", "", 1) : v for k, v in labels.items() if k.startswith("bunkerized-nginx.")}
if event == "create" :
utils.log("[*] Generating config for " + vars["SERVER_NAME"] + " ...")
if self.__config.generate(self.__instances, vars) :
utils.log("[*] Generated config for " + vars["SERVER_NAME"])
self.__servers[id] = instance
if self.__swarm :
utils.log("[*] Activating config for " + vars["SERVER_NAME"] + " ...")
if self.__config.activate(self.__instances, vars) :
utils.log("[*] Activated config for " + vars["SERVER_NAME"])
else :
utils.log("[!] Can't activate config for " + vars["SERVER_NAME"])
else :
utils.log("[!] Can't generate config for " + vars["SERVER_NAME"])
elif event == "start" :
if id in self.__servers :
self.__servers[id].reload()
utils.log("[*] Activating config for " + vars["SERVER_NAME"] + " ...")
if self.__config.activate(self.__instances, vars) :
utils.log("[*] Activated config for " + vars["SERVER_NAME"])
else :
utils.log("[!] Can't activate config for " + vars["SERVER_NAME"])
elif event == "die" :
if id in self.__servers :
self.__servers[id].reload()
utils.log("[*] Deactivating config for " + vars["SERVER_NAME"])
if self.__config.deactivate(self.__instances, vars) :
utils.log("[*] Deactivated config for " + vars["SERVER_NAME"])
else :
utils.log("[!] Can't deactivate config for " + vars["SERVER_NAME"])
elif event == "destroy" or event == "remove" :
if id in self.__servers :
if self.__swarm :
utils.log("[*] Deactivating config for " + vars["SERVER_NAME"])
if self.__config.deactivate(self.__instances, vars) :
utils.log("[*] Deactivated config for " + vars["SERVER_NAME"])
else :
utils.log("[!] Can't deactivate config for " + vars["SERVER_NAME"])
del self.__servers[id]
utils.log("[*] Removing config for " + vars["SERVER_NAME"])
if self.__config.remove(vars) :
utils.log("[*] Removed config for " + vars["SERVER_NAME"])
else :
utils.log("[!] Can't remove config for " + vars["SERVER_NAME"])

209
autoconf/Config.py Normal file

@@ -0,0 +1,209 @@
#!/usr/bin/python3
import utils
import subprocess, shutil, os, traceback, requests, time
class Config :
def __init__(self, swarm, api) :
self.__swarm = swarm
self.__api = api
def initconf(self, instances) :
try :
for instance_id, instance in instances.items() :
env = instance.attrs["Spec"]["TaskTemplate"]["ContainerSpec"]["Env"]
break
vars = {}
for var_value in env :
var = var_value.split("=")[0]
value = var_value.replace(var + "=", "", 1)
vars[var] = value
utils.log("[*] Generating global config ...")
if not self.globalconf(instances) :
utils.log("[!] Can't generate global config")
return False
utils.log("[*] Generated global config")
if "SERVER_NAME" in vars and vars["SERVER_NAME"] != "" :
for server in vars["SERVER_NAME"].split(" ") :
vars_site = vars.copy()
vars_site["SERVER_NAME"] = server
utils.log("[*] Generating config for " + vars["SERVER_NAME"] + " ...")
if not self.generate(instances, vars_site) or not self.activate(instances, vars_site, reload=False) :
utils.log("[!] Can't generate/activate site config for " + server)
return False
utils.log("[*] Generated config for " + vars["SERVER_NAME"])
with open("/etc/nginx/autoconf", "w") as f :
f.write("ok")
utils.log("[*] Waiting for bunkerized-nginx tasks ...")
i = 1
started = False
while i <= 10 :
time.sleep(i)
if self.__ping(instances) :
started = True
break
i = i + 1
utils.log("[!] Waiting " + str(i) + " seconds before retrying to contact bunkerized-nginx tasks")
if started :
utils.log("[*] bunkerized-nginx tasks started")
proc = subprocess.run(["/bin/su", "-s", "/opt/entrypoint/jobs.sh", "nginx"], env=vars, capture_output=True)
return proc.returncode == 0
else :
utils.log("[!] bunkerized-nginx tasks are not started")
except Exception as e :
utils.log("[!] Error while initializing config : " + str(e))
return False
def globalconf(self, instances) :
try :
for instance_id, instance in instances.items() :
env = instance.attrs["Spec"]["TaskTemplate"]["ContainerSpec"]["Env"]
break
vars = {}
for var_value in env :
var = var_value.split("=")[0]
value = var_value.replace(var + "=", "", 1)
vars[var] = value
proc = subprocess.run(["/bin/su", "-s", "/opt/entrypoint/global-config.sh", "nginx"], env=vars, capture_output=True)
if proc.returncode == 0 :
return True
else :
utils.log("[*] Error while generating global config : return code = " + str(proc.returncode))
except Exception as e :
utils.log("[!] Exception while generating global config : " + str(e))
return False
def generate(self, instances, vars) :
try :
# Get env vars from bunkerized-nginx instances
vars_instances = {}
for instance_id, instance in instances.items() :
if self.__swarm :
env = instance.attrs["Spec"]["TaskTemplate"]["ContainerSpec"]["Env"]
else :
env = instance.attrs["Config"]["Env"]
for var_value in env :
var = var_value.split("=")[0]
value = var_value.replace(var + "=", "", 1)
vars_instances[var] = value
vars_defaults = vars.copy()
vars_defaults.update(vars_instances)
vars_defaults.update(vars)
# Call site-config.sh to generate the config
proc = subprocess.run(["/bin/su", "-s", "/bin/sh", "-c", "/opt/entrypoint/site-config.sh" + " \"" + vars["SERVER_NAME"] + "\"", "nginx"], env=vars_defaults, capture_output=True)
if proc.returncode == 0 and vars_defaults["MULTISITE"] == "yes" and self.__swarm :
proc = subprocess.run(["/bin/su", "-s", "/opt/entrypoint/multisite-config.sh", "nginx"], env=vars_defaults, capture_output=True)
if proc.returncode == 0 :
return True
utils.log("[!] Error while generating site config for " + vars["SERVER_NAME"] + " : return code = " + str(proc.returncode))
except Exception as e :
utils.log("[!] Exception while generating site config : " + str(e))
return False
def activate(self, instances, vars, reload=True) :
try :
# Get first server name
first_server_name = vars["SERVER_NAME"].split(" ")[0]
# Check if file exists
if not os.path.isfile("/etc/nginx/" + first_server_name + "/server.conf") :
utils.log("[!] /etc/nginx/" + first_server_name + "/server.conf doesn't exist")
return False
# Include the server conf
utils.replace_in_file("/etc/nginx/nginx.conf", "}", "include /etc/nginx/" + first_server_name + "/server.conf;\n}")
# Reload
if not reload or self.reload(instances) :
return True
except Exception as e :
utils.log("[!] Exception while activating config : " + str(e))
return False
def deactivate(self, instances, vars) :
try :
# Get first server name
first_server_name = vars["SERVER_NAME"].split(" ")[0]
# Check if file exists
if not os.path.isfile("/etc/nginx/" + first_server_name + "/server.conf") :
utils.log("[!] /etc/nginx/" + first_server_name + "/server.conf doesn't exist")
return False
# Remove the include
utils.replace_in_file("/etc/nginx/nginx.conf", "include /etc/nginx/" + first_server_name + "/server.conf;\n", "")
# Reload
if self.reload(instances) :
return True
except Exception as e :
utils.log("[!] Exception while deactivating config : " + str(e))
return False
def remove(self, vars) :
try :
# Get first server name
first_server_name = vars["SERVER_NAME"].split(" ")[0]
# Check if file exists
if not os.path.isfile("/etc/nginx/" + first_server_name + "/server.conf") :
utils.log("[!] /etc/nginx/" + first_server_name + "/server.conf doesn't exist")
return False
# Remove the folder
shutil.rmtree("/etc/nginx/" + first_server_name)
return True
except Exception as e :
utils.log("[!] Error while deactivating config : " + str(e))
return False
def reload(self, instances) :
return self.__api_call(instances, "/reload")
def __ping(self, instances) :
return self.__api_call(instances, "/ping")
def __api_call(self, instances, path) :
ret = True
for instance_id, instance in instances.items() :
# Reload the instance object just in case
instance.reload()
# Reload via API
if self.__swarm :
# Send POST request on http://serviceName.NodeID.TaskID:8000/action
name = instance.name
for task in instance.tasks() :
if task["Status"]["State"] != "running" :
continue
nodeID = task["NodeID"]
taskID = task["ID"]
fqdn = name + "." + nodeID + "." + taskID
req = False
try :
req = requests.post("http://" + fqdn + ":8080" + self.__api + path)
except :
pass
if req and req.status_code == 200 :
utils.log("[*] Sent API order " + path + " to instance " + fqdn + " (service.node.task)")
else :
utils.log("[!] Can't send API order " + path + " to instance " + fqdn + " (service.node.task)")
ret = False
# Send SIGHUP to running instance
elif instance.status == "running" :
try :
instance.kill("SIGHUP")
utils.log("[*] Sent SIGHUP signal to bunkerized-nginx instance " + instance.name + " / " + instance.id)
except docker.errors.APIError as e :
utils.log("[!] Docker error while sending SIGHUP signal : " + str(e))
ret = False
return ret

45
autoconf/Dockerfile Normal file

@@ -0,0 +1,45 @@
FROM nginx:stable-alpine AS builder
FROM alpine
COPY --from=builder /etc/nginx/ /opt/confs/nginx
RUN apk add py3-pip apache2-utils bash certbot curl logrotate openssl && \
pip3 install docker requests && \
mkdir /opt/entrypoint && \
mkdir -p /opt/confs/site && \
mkdir -p /opt/confs/global && \
mkdir /opt/scripts && \
addgroup -g 101 nginx && \
adduser -h /var/cache/nginx -g nginx -s /sbin/nologin -G nginx -D -H -u 101 nginx && \
mkdir /etc/letsencrypt && \
chown root:nginx /etc/letsencrypt && \
chmod 770 /etc/letsencrypt && \
mkdir /var/log/letsencrypt && \
chown root:nginx /var/log/letsencrypt && \
chmod 770 /var/log/letsencrypt && \
mkdir /var/lib/letsencrypt && \
chown root:nginx /var/lib/letsencrypt && \
chmod 770 /var/lib/letsencrypt && \
mkdir /cache && \
chown root:nginx /cache && \
chmod 770 /cache && \
touch /var/log/jobs.log && \
chown root:nginx /var/log/jobs.log && \
chmod 770 /var/log/jobs.log && \
chown -R root:nginx /opt/confs/nginx && \
chmod -R 770 /opt/confs/nginx && \
mkdir /acme-challenge && \
chown root:nginx /acme-challenge && \
chmod 770 /acme-challenge
COPY autoconf/misc/logrotate.conf /etc/logrotate.conf
COPY scripts/* /opt/scripts/
COPY confs/site/ /opt/confs/site
COPY confs/global/ /opt/confs/global
COPY entrypoint/* /opt/entrypoint/
COPY autoconf/* /opt/entrypoint/
RUN chmod +x /opt/entrypoint/*.py /opt/entrypoint/*.sh /opt/scripts/*.sh
ENTRYPOINT ["/opt/entrypoint/entrypoint.sh"]

44
autoconf/Dockerfile-amd64 Normal file

@@ -0,0 +1,44 @@
FROM nginx:stable-alpine AS builder
FROM amd64/alpine
COPY --from=builder /etc/nginx/ /opt/confs/nginx
RUN apk add py3-pip apache2-utils bash certbot curl logrotate openssl && \
pip3 install docker requests && \
mkdir /opt/entrypoint && \
mkdir -p /opt/confs/site && \
mkdir -p /opt/confs/global && \
mkdir /opt/scripts && \
addgroup -g 101 nginx && \
adduser -h /var/cache/nginx -g nginx -s /sbin/nologin -G nginx -D -H -u 101 nginx && \
mkdir /etc/letsencrypt && \
chown root:nginx /etc/letsencrypt && \
chmod 770 /etc/letsencrypt && \
mkdir /var/log/letsencrypt && \
chown root:nginx /var/log/letsencrypt && \
chmod 770 /var/log/letsencrypt && \
mkdir /var/lib/letsencrypt && \
chown root:nginx /var/lib/letsencrypt && \
chmod 770 /var/lib/letsencrypt && \
mkdir /cache && \
chown root:nginx /cache && \
chmod 770 /cache && \
touch /var/log/jobs.log && \
chown root:nginx /var/log/jobs.log && \
chmod 770 /var/log/jobs.log && \
chown -R root:nginx /opt/confs/nginx && \
chmod -R 770 /opt/confs/nginx && \
mkdir /acme-challenge && \
chown root:nginx /acme-challenge && \
chmod 770 /acme-challenge
COPY autoconf/misc/logrotate.conf /etc/logrotate.conf
COPY scripts/* /opt/scripts/
COPY confs/global/ /opt/confs/global
COPY confs/site/ /opt/confs/site
COPY entrypoint/* /opt/entrypoint/
COPY autoconf/* /opt/entrypoint/
RUN chmod +x /opt/entrypoint/*.py /opt/entrypoint/*.sh /opt/scripts/*.sh
ENTRYPOINT ["/opt/entrypoint/entrypoint.sh"]


@@ -0,0 +1,50 @@
FROM alpine AS builder
ENV QEMU_URL https://github.com/balena-io/qemu/releases/download/v4.0.0%2Bbalena2/qemu-4.0.0.balena2-arm.tar.gz
RUN apk add curl && curl -L ${QEMU_URL} | tar zxvf - -C . --strip-components 1
FROM nginx:stable-alpine AS builder2
FROM arm32v7/alpine
COPY --from=builder qemu-arm-static /usr/bin
COPY --from=builder2 /etc/nginx/ /opt/confs/nginx
RUN apk add py3-pip apache2-utils bash certbot curl logrotate openssl && \
pip3 install docker requests && \
mkdir /opt/entrypoint && \
mkdir -p /opt/confs/site && \
mkdir -p /opt/confs/global && \
mkdir /opt/scripts && \
addgroup -g 101 nginx && \
adduser -h /var/cache/nginx -g nginx -s /sbin/nologin -G nginx -D -H -u 101 nginx && \
mkdir /etc/letsencrypt && \
chown root:nginx /etc/letsencrypt && \
chmod 770 /etc/letsencrypt && \
mkdir /var/log/letsencrypt && \
chown root:nginx /var/log/letsencrypt && \
chmod 770 /var/log/letsencrypt && \
mkdir /var/lib/letsencrypt && \
chown root:nginx /var/lib/letsencrypt && \
chmod 770 /var/lib/letsencrypt && \
mkdir /cache && \
chown root:nginx /cache && \
chmod 770 /cache && \
touch /var/log/jobs.log && \
chown root:nginx /var/log/jobs.log && \
chmod 770 /var/log/jobs.log && \
chown -R root:nginx /opt/confs/nginx && \
chmod -R 770 /opt/confs/nginx && \
mkdir /acme-challenge && \
chown root:nginx /acme-challenge && \
chmod 770 /acme-challenge
COPY autoconf/misc/logrotate.conf /etc/logrotate.conf
COPY scripts/* /opt/scripts/
COPY confs/global/ /opt/confs/global
COPY confs/site/ /opt/confs/site
COPY entrypoint/* /opt/entrypoint/
COPY autoconf/* /opt/entrypoint/
RUN chmod +x /opt/entrypoint/*.py /opt/entrypoint/*.sh /opt/scripts/*.sh
ENTRYPOINT ["/opt/entrypoint/entrypoint.sh"]


@@ -0,0 +1,50 @@
FROM alpine AS builder
ENV QEMU_URL https://github.com/balena-io/qemu/releases/download/v4.0.0%2Bbalena2/qemu-4.0.0.balena2-aarch64.tar.gz
RUN apk add curl && curl -L ${QEMU_URL} | tar zxvf - -C . --strip-components 1
FROM nginx:stable-alpine AS builder2
FROM arm64v8/alpine
COPY --from=builder qemu-aarch64-static /usr/bin
COPY --from=builder2 /etc/nginx/ /opt/confs/nginx
RUN apk add py3-pip apache2-utils bash certbot curl logrotate openssl && \
pip3 install docker requests && \
mkdir /opt/entrypoint && \
mkdir -p /opt/confs/site && \
mkdir -p /opt/confs/global && \
mkdir /opt/scripts && \
addgroup -g 101 nginx && \
adduser -h /var/cache/nginx -g nginx -s /sbin/nologin -G nginx -D -H -u 101 nginx && \
mkdir /etc/letsencrypt && \
chown root:nginx /etc/letsencrypt && \
chmod 770 /etc/letsencrypt && \
mkdir /var/log/letsencrypt && \
chown root:nginx /var/log/letsencrypt && \
chmod 770 /var/log/letsencrypt && \
mkdir /var/lib/letsencrypt && \
chown root:nginx /var/lib/letsencrypt && \
chmod 770 /var/lib/letsencrypt && \
mkdir /cache && \
chown root:nginx /cache && \
chmod 770 /cache && \
touch /var/log/jobs.log && \
chown root:nginx /var/log/jobs.log && \
chmod 770 /var/log/jobs.log && \
chown -R root:nginx /opt/confs/nginx && \
chmod -R 770 /opt/confs/nginx && \
mkdir /acme-challenge && \
chown root:nginx /acme-challenge && \
chmod 770 /acme-challenge
COPY autoconf/misc/logrotate.conf /etc/logrotate.conf
COPY scripts/* /opt/scripts/
COPY confs/global/ /opt/confs/global
COPY confs/site/ /opt/confs/site
COPY entrypoint/* /opt/entrypoint/
COPY autoconf/* /opt/entrypoint/
RUN chmod +x /opt/entrypoint/*.py /opt/entrypoint/*.sh /opt/scripts/*.sh
ENTRYPOINT ["/opt/entrypoint/entrypoint.sh"]

44
autoconf/Dockerfile-i386 Normal file

@@ -0,0 +1,44 @@
FROM nginx:stable-alpine AS builder
FROM i386/alpine
COPY --from=builder /etc/nginx/ /opt/confs/nginx
RUN apk add py3-pip apache2-utils bash certbot curl logrotate openssl && \
pip3 install docker requests && \
mkdir /opt/entrypoint && \
mkdir -p /opt/confs/site && \
mkdir -p /opt/confs/global && \
mkdir /opt/scripts && \
addgroup -g 101 nginx && \
adduser -h /var/cache/nginx -g nginx -s /sbin/nologin -G nginx -D -H -u 101 nginx && \
mkdir /etc/letsencrypt && \
chown root:nginx /etc/letsencrypt && \
chmod 770 /etc/letsencrypt && \
mkdir /var/log/letsencrypt && \
chown root:nginx /var/log/letsencrypt && \
chmod 770 /var/log/letsencrypt && \
mkdir /var/lib/letsencrypt && \
chown root:nginx /var/lib/letsencrypt && \
chmod 770 /var/lib/letsencrypt && \
mkdir /cache && \
chown root:nginx /cache && \
chmod 770 /cache && \
touch /var/log/jobs.log && \
chown root:nginx /var/log/jobs.log && \
chmod 770 /var/log/jobs.log && \
chown -R root:nginx /opt/confs/nginx && \
chmod -R 770 /opt/confs/nginx && \
mkdir /acme-challenge && \
chown root:nginx /acme-challenge && \
chmod 770 /acme-challenge
COPY autoconf/misc/logrotate.conf /etc/logrotate.conf
COPY scripts/* /opt/scripts/
COPY confs/global/ /opt/confs/global
COPY confs/site/ /opt/confs/site
COPY entrypoint/* /opt/entrypoint/
COPY autoconf/* /opt/entrypoint/
RUN chmod +x /opt/entrypoint/*.py /opt/entrypoint/*.sh /opt/scripts/*.sh
ENTRYPOINT ["/opt/entrypoint/entrypoint.sh"]

28
autoconf/ReloadServer.py Normal file

@@ -0,0 +1,28 @@
import socketserver, threading, utils, os, stat
class ReloadServerHandler(socketserver.StreamRequestHandler):
def handle(self) :
try :
data = self.request.recv(512)
if not data :
return
with self.server.lock :
ret = self.server.autoconf.reload()
if ret :
self.request.sendall("ok".encode("utf-8"))
else :
self.request.sendall("ko".encode("utf-8"))
except Exception as e :
utils.log("Exception " + str(e))
def run_reload_server(autoconf, lock) :
server = socketserver.UnixStreamServer("/tmp/autoconf.sock", ReloadServerHandler)
os.chown("/tmp/autoconf.sock", 0, 101)
os.chmod("/tmp/autoconf.sock", 0o770)
server.autoconf = autoconf
server.lock = lock
thread = threading.Thread(target=server.serve_forever)
thread.daemon = True
thread.start()
return (server, thread)

73
autoconf/app.py Normal file

@@ -0,0 +1,73 @@
#!/usr/bin/python3
from AutoConf import AutoConf
from ReloadServer import run_reload_server
import utils
import docker, os, stat, sys, select, threading
# Connect to the endpoint
endpoint = "/var/run/docker.sock"
if not os.path.exists(endpoint) or not stat.S_ISSOCK(os.stat(endpoint).st_mode) :
utils.log("[!] /var/run/docker.sock not found (is it mounted ?)")
sys.exit(1)
try :
client = docker.DockerClient(base_url='unix:///var/run/docker.sock')
except Exception as e :
utils.log("[!] Can't instantiate DockerClient : " + str(e))
sys.exit(2)
# Check if we are in Swarm mode
swarm = os.getenv("SWARM_MODE") == "yes"
# Our object to process events
api = ""
if swarm :
api = os.getenv("API_URI")
autoconf = AutoConf(swarm, api)
lock = threading.Lock()
if swarm :
(server, thread) = run_reload_server(autoconf, lock)
# Get all bunkerized-nginx instances and web services created before
try :
if swarm :
before = client.services.list(filters={"label" : "bunkerized-nginx.AUTOCONF"}) + client.services.list(filters={"label" : "bunkerized-nginx.SERVER_NAME"})
else :
before = client.containers.list(all=True, filters={"label" : "bunkerized-nginx.AUTOCONF"}) + client.containers.list(filters={"label" : "bunkerized-nginx.SERVER_NAME"})
except docker.errors.APIError as e :
utils.log("[!] Docker API error " + str(e))
sys.exit(3)
# Process them before events
with lock :
autoconf.pre_process(before)
# Process events received from Docker
try :
utils.log("[*] Listening for Docker events ...")
for event in client.events(decode=True) :
# Process only container/service events
if (swarm and event["Type"] != "service") or (not swarm and event["Type"] != "container") :
continue
# Get Container/Service object
try :
if swarm :
id = service_id=event["Actor"]["ID"]
server = client.services.get(service_id=id)
else :
id = event["id"]
server = client.containers.get(id)
except docker.errors.NotFound as e :
server = autoconf.get_server(id)
if not server :
continue
# Process the event
with lock :
autoconf.process(server, event["Action"])
except docker.errors.APIError as e :
utils.log("[!] Docker API error " + str(e))
sys.exit(4)

48
autoconf/entrypoint.sh Normal file

@@ -0,0 +1,48 @@
#!/bin/bash
echo "[*] Starting autoconf ..."
# check permissions
su -s "/opt/entrypoint/permissions.sh" nginx
if [ "$?" -ne 0 ] ; then
exit 1
fi
if [ "$SWARM_MODE" = "yes" ] ; then
cp -r /opt/confs/nginx/* /etc/nginx
chown -R root:nginx /etc/nginx
chmod -R 770 /etc/nginx
fi
# trap SIGTERM and SIGINT
function trap_exit() {
echo "[*] Catched stop operation"
echo "[*] Stopping crond ..."
pkill -TERM crond
echo "[*] Stopping python3 ..."
pkill -TERM python3
pkill -TERM tail
}
trap "trap_exit" TERM INT QUIT
# remove old crontabs
echo "" > /etc/crontabs/root
# setup logrotate
touch /var/log/jobs.log
echo "0 0 * * * /usr/sbin/logrotate -f /etc/logrotate.conf > /dev/null 2>&1" >> /etc/crontabs/root
# start cron
crond
# run autoconf app
/opt/entrypoint/app.py &
# display logs
tail -F /var/log/jobs.log &
pid="$!"
wait "$pid"
# stop
echo "[*] autoconf stopped"
exit 0

12
autoconf/hooks/post_push Normal file

@@ -0,0 +1,12 @@
#!/bin/bash
curl -Lo manifest-tool https://github.com/estesp/manifest-tool/releases/download/v1.0.3/manifest-tool-linux-amd64
chmod +x manifest-tool
VERSION=$(cat VERSION | tr -d '\n')
if [ "$SOURCE_BRANCH" = "dev" ] ; then
./manifest-tool push from-args --ignore-missing --platforms linux/amd64,linux/386,linux/arm/v7,linux/arm64/v8 --template bunkerity/bunkerized-nginx-autoconf:dev-ARCHVARIANT --target bunkerity/bunkerized-nginx-autoconf:dev
elif [ "$SOURCE_BRANCH" = "master" ] ; then
./manifest-tool push from-args --ignore-missing --platforms linux/amd64,linux/386,linux/arm/v7,linux/arm64/v8 --template bunkerity/bunkerized-nginx-autoconf:ARCHVARIANT --target bunkerity/bunkerized-nginx-autoconf:${VERSION}
./manifest-tool push from-args --ignore-missing --platforms linux/amd64,linux/386,linux/arm/v7,linux/arm64/v8 --template bunkerity/bunkerized-nginx-autoconf:ARCHVARIANT --target bunkerity/bunkerized-nginx-autoconf:latest
fi

5
autoconf/hooks/pre_build Normal file
View File

@@ -0,0 +1,5 @@
#!/bin/bash
# Register qemu-*-static for all supported processors except the
# current one, but also remove all registered binfmt_misc before
docker run --rm --privileged multiarch/qemu-user-static:register --reset

View File

@@ -1,4 +1,4 @@
/var/log/*.log /var/log/clamav/*.log /var/log/nginx/*.log {
/var/log/*.log /var/log/letsencrypt/*.log {
# compress old files using gzip
compress
@@ -6,8 +6,8 @@
daily
# remove old logs after X days
maxage %LOGROTATE_MAXAGE%
rotate %LOGROTATE_MAXAGE%
maxage 7
rotate 7
# no errors if a file is missing
missingok
@@ -16,7 +16,7 @@
nomail
# minimum size of a logfile before rotating
minsize %LOGROTATE_MINSIZE%
minsize 10M
# make a copy and truncate the files
copytruncate

19
autoconf/reload.py Normal file
View File

@@ -0,0 +1,19 @@
#!/usr/bin/python3

import sys, socket, os

if not os.path.exists("/tmp/autoconf.sock") :
    sys.exit(1)
try :
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect("/tmp/autoconf.sock")
    client.send("reload".encode("utf-8"))
    data = client.recv(512)
    client.close()
    if not data or data.decode("utf-8") != "ok" :
        sys.exit(3)
except Exception as e :
    sys.exit(2)
sys.exit(0)

24
autoconf/utils.py Normal file
View File

@@ -0,0 +1,24 @@
#!/usr/bin/python3

import datetime

def log(event) :
    print("[" + str(datetime.datetime.now().replace(microsecond=0)) + "] " + event, flush=True)

def replace_in_file(file, old_str, new_str) :
    with open(file) as f :
        data = f.read()
    data = data[::-1].replace(old_str[::-1], new_str[::-1], 1)[::-1]
    with open(file, "w") as f :
        f.write(data)

def install_cron(service, vars, crons) :
    for var in vars :
        if var in crons :
            with open("/etc/crontabs/root", "a+") as f :
                f.write(vars[var] + " /opt/cron/" + crons[var] + ".py " + service["Actor"]["ID"])

def uninstall_cron(service, vars, crons) :
    for var in vars :
        if var in crons :
            replace_in_file("/etc/crontabs/root", vars[var] + " /opt/cron/" + crons[var] + ".py " + service["Actor"]["ID"] + "\n", "")

View File

@@ -30,7 +30,7 @@ function git_secure_clone() {
NTASK=$(nproc)
# install build dependencies
apk add --no-cache --virtual build autoconf libtool automake git geoip-dev yajl-dev g++ curl-dev libxml2-dev pcre-dev make linux-headers libmaxminddb-dev musl-dev lua-dev gd-dev gnupg
apk add --no-cache --virtual build autoconf libtool automake git geoip-dev yajl-dev g++ gcc curl-dev libxml2-dev pcre-dev make linux-headers libmaxminddb-dev musl-dev lua-dev gd-dev gnupg brotli-dev openssl-dev
# compile and install ModSecurity library
cd /tmp
@@ -50,8 +50,9 @@ make install-strip
cd /tmp
git_secure_clone https://github.com/coreruleset/coreruleset.git 7776fe23f127fd2315bad0e400bdceb2cabb97dc
cd coreruleset
cp -r rules /etc/nginx/owasp-crs
cp crs-setup.conf.example /etc/nginx/owasp-crs.conf
mkdir /opt/owasp
cp -r rules /opt/owasp/crs
cp crs-setup.conf.example /opt/owasp/crs.conf
# get nginx modules
cd /tmp
@@ -63,6 +64,8 @@ git_secure_clone https://github.com/openresty/headers-more-nginx-module.git d6d7
git_secure_clone https://github.com/leev/ngx_http_geoip2_module.git 1cabd8a1f68ea3998f94e9f3504431970f848fbf
# cookie
git_secure_clone https://github.com/AirisX/nginx_cookie_flag_module.git c4ff449318474fbbb4ba5f40cb67ccd54dc595d4
# brotli
git_secure_clone https://github.com/google/ngx_brotli.git 9aec15e2aa6feea2113119ba06460af70ab3ea62
# LUA requirements
git_secure_clone https://github.com/openresty/luajit2.git fe32831adcb3f5fe9259a9ce404fc54e1399bba3
@@ -109,6 +112,34 @@ git_secure_clone https://github.com/ledgetech/lua-resty-http.git 984fdc260543763
cd lua-resty-http
make install
cd /tmp
git_secure_clone https://github.com/Neopallium/lualogging.git cadc4e8fd652be07a65b121a3e024838db330c15
cd lualogging
cp -r src/* /usr/local/lib/lua
cd /tmp
git_secure_clone https://github.com/diegonehab/luasocket.git 5b18e475f38fcf28429b1cc4b17baee3b9793a62
cd luasocket
make -j $NTASK
make CDIR_linux=lib/lua/5.1 LDIR_linux=lib/lua install
cd /tmp
git_secure_clone https://github.com/brunoos/luasec.git c6704919bdc85f3324340bdb35c2795a02f7d625
cd luasec
make linux -j $NTASK
make LUACPATH=/usr/local/lib/lua/5.1 LUAPATH=/usr/local/lib/lua install
cd /tmp
git_secure_clone https://github.com/crowdsecurity/lua-cs-bouncer.git 3c235c813fc453dcf51a391bc9e9a36ca77958b0
cd lua-cs-bouncer
mkdir /usr/local/lib/lua/crowdsec
cp lib/*.lua /usr/local/lib/lua/crowdsec
cp template.conf /usr/local/lib/lua/crowdsec/crowdsec.conf
sed -i 's/^API_URL=.*/API_URL=%CROWDSEC_HOST%/' /usr/local/lib/lua/crowdsec/crowdsec.conf
sed -i 's/^API_KEY=.*/API_KEY=%CROWDSEC_KEY%/' /usr/local/lib/lua/crowdsec/crowdsec.conf
sed -i 's/require "lrucache"/require "resty.lrucache"/' /usr/local/lib/lua/crowdsec/CrowdSec.lua
sed -i 's/require "config"/require "crowdsec.config"/' /usr/local/lib/lua/crowdsec/CrowdSec.lua
cd /tmp
git_secure_clone https://github.com/hamishforbes/lua-resty-iputils.git 3151d6485e830421266eee5c0f386c32c835dba4
cd lua-resty-iputils
make LUA_LIB_DIR=/usr/local/lib/lua install
cd /tmp
git_secure_clone https://github.com/openresty/lua-nginx-module.git 2d23bc4f0a29ed79aaaa754c11bffb1080aa44ba
export LUAJIT_LIB=/usr/local/lib
export LUAJIT_INC=/usr/local/include/luajit-2.1
@@ -126,8 +157,8 @@ fi
tar -xvzf nginx-${NGINX_VERSION}.tar.gz
cd nginx-$NGINX_VERSION
CONFARGS=$(nginx -V 2>&1 | sed -n -e 's/^.*arguments: //p')
CONFARGS=${CONFARGS/-Os -fomit-frame-pointer/-Os}
./configure $CONFARGS --add-dynamic-module=/tmp/ModSecurity-nginx --add-dynamic-module=/tmp/headers-more-nginx-module --add-dynamic-module=/tmp/ngx_http_geoip2_module --add-dynamic-module=/tmp/nginx_cookie_flag_module --add-dynamic-module=/tmp/lua-nginx-module
CONFARGS=${CONFARGS/-Os -fomit-frame-pointer -g/-Os}
./configure $CONFARGS --add-dynamic-module=/tmp/ModSecurity-nginx --add-dynamic-module=/tmp/headers-more-nginx-module --add-dynamic-module=/tmp/ngx_http_geoip2_module --add-dynamic-module=/tmp/nginx_cookie_flag_module --add-dynamic-module=/tmp/lua-nginx-module --add-dynamic-module=/tmp/ngx_brotli
make -j $NTASK modules
cp ./objs/*.so /usr/lib/nginx/modules

View File

@@ -1,2 +0,0 @@
auth_basic "%AUTH_BASIC_TEXT%";
auth_basic_user_file /etc/nginx/.htpasswd;

View File

@@ -1,3 +0,0 @@
if ($bad_user_agent = yes) {
return 444;
}

View File

@@ -1,3 +0,0 @@
if ($allowed_country = no) {
return 444;
}

View File

@@ -0,0 +1,30 @@
location ~ ^%API_URI%/ping {
return 444;
}
location ~ ^%API_URI% {
rewrite_by_lua_block {
local api = require "api"
local api_uri = "%API_URI%"
if api.is_api_call(api_uri) then
ngx.header.content_type = 'text/plain'
if api.do_api_call(api_uri) then
ngx.log(ngx.NOTICE, "[API] API call " .. ngx.var.request_uri .. " successfull from " .. ngx.var.remote_addr)
ngx.say("ok")
else
ngx.log(ngx.WARN, "[API] API call " .. ngx.var.request_uri .. " failed from " .. ngx.var.remote_addr)
ngx.say("ko")
end
ngx.exit(ngx.HTTP_OK)
end
ngx.exit(ngx.OK)
}
}

21
confs/global/api.conf Normal file
View File

@@ -0,0 +1,21 @@
rewrite_by_lua_block {
local api = require "api"
local api_uri = "%API_URI%"
if api.is_api_call(api_uri) then
ngx.header.content_type = 'text/plain'
if api.do_api_call(api_uri) then
ngx.log(ngx.NOTICE, "[API] API call " .. ngx.var.request_uri .. " successfull from " .. ngx.var.remote_addr)
ngx.say("ok")
else
ngx.log(ngx.WARN, "[API] API call " .. ngx.var.request_uri .. " failed from " .. ngx.var.remote_addr)
ngx.say("ko")
end
ngx.exit(ngx.HTTP_OK)
end
ngx.exit(ngx.OK)
}

View File

@@ -0,0 +1,9 @@
init_by_lua_block {
local cs = require "crowdsec.CrowdSec"
local ok, err = cs.init("/usr/local/lib/lua/crowdsec/crowdsec.conf")
if ok == nil then
ngx.log(ngx.ERR, "[Crowdsec] " .. err)
error()
end
ngx.log(ngx.NOTICE, "[Crowdsec] Initialisation done")
}

View File

@@ -5,6 +5,6 @@ geoip2 /etc/nginx/geoip.mmdb {
}
map $geoip2_data_country_code $allowed_country {
default yes;
%BLOCK_COUNTRY%
default %DEFAULT%;
%COUNTRY%
}

View File

@@ -0,0 +1,31 @@
init_by_lua_block {
local dataloader = require "dataloader"
local use_proxies = %USE_PROXIES%
local use_abusers = %USE_ABUSERS%
local use_tor_exit_nodes = %USE_TOR_EXIT_NODES%
local use_user_agents = %USE_USER_AGENTS%
local use_referrers = %USE_REFERRERS%
if use_proxies then
dataloader.load_ip("/etc/nginx/proxies.list", ngx.shared.proxies_data)
end
if use_abusers then
dataloader.load_ip("/etc/nginx/abusers.list", ngx.shared.abusers_data)
end
if use_tor_exit_nodes then
dataloader.load_ip("/etc/nginx/tor-exit-nodes.list", ngx.shared.tor_exit_nodes_data)
end
if use_user_agents then
dataloader.load_raw("/etc/nginx/user-agents.list", ngx.shared.user_agents_data)
end
if use_referrers then
dataloader.load_raw("/etc/nginx/referrers.list", ngx.shared.referrers_data)
end
}

View File

@@ -0,0 +1,11 @@
listen 0.0.0.0:%HTTPS_PORT% default_server ssl %HTTP2%;
ssl_certificate /etc/nginx/default-cert.pem;
ssl_certificate_key /etc/nginx/default-key.pem;
ssl_protocols %HTTPS_PROTOCOLS%;
ssl_prefer_server_ciphers off;
ssl_session_tickets off;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m;
%SSL_DHPARAM%
%SSL_CIPHERS%
%LETS_ENCRYPT_WEBROOT%

View File

@@ -0,0 +1,3 @@
location ~ ^/.well-known/acme-challenge/ {
root /acme-challenge;
}

View File

@@ -0,0 +1,6 @@
server {
%LISTEN_HTTP%
server_name _;
%USE_HTTPS%
%MULTISITE_DISABLE_DEFAULT_SERVER%
}

View File

@@ -0,0 +1,3 @@
location / {
return 444;
}

View File

@@ -0,0 +1,30 @@
load_module /usr/lib/nginx/modules/ngx_http_lua_module.so;
daemon on;
pid /tmp/nginx-temp.pid;
events {
worker_connections 1024;
use epoll;
}
http {
proxy_temp_path /tmp/proxy_temp;
client_body_temp_path /tmp/client_temp;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
lua_package_path "/usr/local/lib/lua/?.lua;;";
server {
listen 0.0.0.0:%HTTP_PORT% default_server;
server_name _;
location ~ ^/.well-known/acme-challenge/ {
root /acme-challenge;
}
%USE_API%
location / {
return 444;
}
}
}

View File

@@ -7,9 +7,11 @@ load_module /usr/lib/nginx/modules/ngx_http_headers_more_filter_module.so;
load_module /usr/lib/nginx/modules/ngx_http_lua_module.so;
load_module /usr/lib/nginx/modules/ngx_http_modsecurity_module.so;
load_module /usr/lib/nginx/modules/ngx_stream_geoip2_module.so;
load_module /usr/lib/nginx/modules/ngx_http_brotli_filter_module.so;
load_module /usr/lib/nginx/modules/ngx_http_brotli_static_module.so;
# run as daemon
daemon on;
# run in foreground
daemon off;
# PID file
pid /tmp/nginx.pid;
@@ -23,9 +25,12 @@ pcre_jit on;
# config files for dynamic modules
include /etc/nginx/modules/*.conf;
# max open files for each worker
worker_rlimit_nofile %WORKER_RLIMIT_NOFILE%;
events {
# max connections per worker
worker_connections 1024;
worker_connections %WORKER_CONNECTIONS%;
# epoll seems to be the best on Linux
use epoll;
@@ -45,15 +50,10 @@ http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
# load gzip custom config
include /etc/nginx/gzip.conf;
# maximum request body size
client_max_body_size %MAX_CLIENT_SIZE%;
# write logs to local syslog
access_log syslog:server=unix:/dev/log,nohostname,facility=local0,severity=notice combined;
error_log syslog:server=unix:/dev/log,nohostname,facility=local0 warn;
log_format logf '%LOG_FORMAT%';
access_log /var/log/access.log logf;
error_log /var/log/error.log info;
# temp paths
proxy_temp_path /tmp/proxy_temp;
@@ -62,26 +62,20 @@ http {
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
# load caching custom config
include /etc/nginx/cache.conf;
# close connections in FIN_WAIT1 state
reset_timedout_connection on;
# timeouts
client_body_timeout 12;
client_header_timeout 12;
client_body_timeout 10;
client_header_timeout 10;
keepalive_timeout 15;
send_timeout 10;
# enable/disable sending nginx version
server_tokens %SERVER_TOKENS%;
# resolvers to use
resolver %DNS_RESOLVERS% ipv6=off;
# get real IP address if behind a reverse proxy
%PROXY_REAL_IP%
# remove ports when sending redirects
port_in_redirect off;
# lua path and dicts
lua_package_path "/usr/local/lib/lua/?.lua;;";
@@ -90,22 +84,40 @@ http {
%BLACKLIST_IP_CACHE%
%BLACKLIST_REVERSE_CACHE%
%DNSBL_CACHE%
%BLOCK_PROXIES%
%BLOCK_ABUSERS%
%BLOCK_TOR_EXIT_NODES%
%BLOCK_USER_AGENTS%
%BLOCK_REFERRERS%
%BAD_BEHAVIOR%
# crowdsec init
%USE_CROWDSEC%
# shared memory zone for limit_req
%LIMIT_REQ_ZONE%
# server config
include /etc/nginx/server.conf;
# shared memory zone for limit_conn
%LIMIT_CONN_ZONE%
# list of blocked countries
%BLOCK_COUNTRY%
# whitelist or blacklist countries
%USE_COUNTRY%
# list of blocked user agents
%BLOCK_USER_AGENT%
# enable/disable ModSecurity
%USE_MODSECURITY%
# zone for proxy_cache
%PROXY_CACHE_PATH%
# custom http confs
include /http-confs/*.conf;
# LUA init block
include /etc/nginx/init-lua.conf;
# default server when MULTISITE=yes
%MULTISITE_DEFAULT_SERVER%
# server config(s)
%INCLUDE_SERVER%
# API
%USE_API%
}

View File

@@ -1,9 +0,0 @@
# /etc/nginx/gzip.conf
# enable/disable gzip compression
gzip %USE_GZIP%;
gzip_comp_level %GZIP_COMP_LEVEL%;
gzip_disable msie6;
gzip_min_length %GZIP_MIN_LENGTH%;
gzip_proxied any;
gzip_types %GZIP_TYPES%;

View File

@@ -1,137 +0,0 @@
set $session_secret %ANTIBOT_SESSION_SECRET%;
set $session_check_addr on;
access_by_lua_block {
local use_whitelist_ip = %USE_WHITELIST_IP%
local use_whitelist_reverse = %USE_WHITELIST_REVERSE%
local use_blacklist_ip = %USE_BLACKLIST_IP%
local use_blacklist_reverse = %USE_BLACKLIST_REVERSE%
local use_dnsbl = %USE_DNSBL%
local use_antibot_cookie = %USE_ANTIBOT_COOKIE%
local use_antibot_javascript = %USE_ANTIBOT_JAVASCRIPT%
local use_antibot_captcha = %USE_ANTIBOT_CAPTCHA%
local use_antibot_recaptcha = %USE_ANTIBOT_RECAPTCHA%
-- include LUA code
local whitelist = require "whitelist"
local blacklist = require "blacklist"
local dnsbl = require "dnsbl"
local cookie = require "cookie"
local javascript = require "javascript"
local captcha = require "captcha"
local recaptcha = require "recaptcha"
-- antibot
local antibot_uri = "%ANTIBOT_URI%"
-- check if already in whitelist cache
if use_whitelist_ip and whitelist.ip_cached_ok() then
ngx.exit(ngx.OK)
end
if use_whitelist_reverse and whitelist.reverse_cached_ok() then
ngx.exit(ngx.OK)
end
-- check if already in blacklist cache
if use_blacklist_ip and blacklist.ip_cached_ko() then
ngx.exit(ngx.HTTP_FORBIDDEN)
end
if use_blacklist_reverse and blacklist.reverse_cached_ko() then
ngx.exit(ngx.HTTP_FORBIDDEN)
end
-- check if already in dnsbl cache
if use_dnsbl and dnsbl.cached_ko() then
ngx.exit(ngx.HTTP_FORBIDDEN)
end
-- check if IP is whitelisted (only if not in cache)
if use_whitelist_ip and not whitelist.ip_cached() then
if whitelist.check_ip() then
ngx.exit(ngx.OK)
end
end
-- check if reverse is whitelisted (only if not in cache)
if use_whitelist_reverse and not whitelist.reverse_cached() then
if whitelist.check_reverse() then
ngx.exit(ngx.OK)
end
end
-- check if IP is blacklisted (only if not in cache)
if use_blacklist_ip and not blacklist.ip_cached() then
if blacklist.check_ip() then
ngx.exit(ngx.HTTP_FORBIDDEN)
end
end
-- check if reverse is blacklisted (only if not in cache)
if use_blacklist_reverse and not blacklist.reverse_cached() then
if blacklist.check_reverse() then
ngx.exit(ngx.HTTP_FORBIDDEN)
end
end
-- check if IP is in DNSBLs (only if not in cache)
if use_dnsbl and not dnsbl.cached() then
if dnsbl.check() then
ngx.exit(ngx.HTTP_FORBIDDEN)
end
end
-- cookie check
if use_antibot_cookie then
if not cookie.is_set("uri") then
if ngx.var.request_uri ~= antibot_uri then
cookie.set({uri = ngx.var.request_uri})
return ngx.redirect(antibot_uri)
end
return ngx.exit(ngx.HTTP_FORBIDDEN)
else
if ngx.var.request_uri == antibot_uri then
return ngx.redirect(cookie.get("uri"))
end
end
end
-- javascript check
if use_antibot_javascript then
if not cookie.is_set("javascript") then
if ngx.var.request_uri ~= antibot_uri then
cookie.set({uri = ngx.var.request_uri, challenge = javascript.get_challenge()})
return ngx.redirect(antibot_uri)
end
end
end
-- captcha check
if use_antibot_captcha then
if not cookie.is_set("captcha") then
if ngx.var.request_uri ~= antibot_uri and ngx.var.request_uri ~= "/favicon.ico" then
cookie.set({uri = ngx.var.request_uri})
return ngx.redirect(antibot_uri)
end
end
end
-- recaptcha check
if use_antibot_recaptcha then
if not cookie.is_set("recaptcha") then
if ngx.var.request_uri ~= antibot_uri and ngx.var.request_uri ~= "/favicon.ico" then
cookie.set({uri = ngx.var.request_uri})
return ngx.redirect(antibot_uri)
end
end
end
ngx.exit(ngx.OK)
}
%INCLUDE_ANTIBOT_JAVASCRIPT%
%INCLUDE_ANTIBOT_CAPTCHA%
%INCLUDE_ANTIBOT_RECAPTCHA%

View File

@@ -1,4 +0,0 @@
map $http_user_agent $bad_user_agent {
default no;
%BLOCK_USER_AGENT%
}

View File

@@ -1,2 +0,0 @@
modsecurity on;
modsecurity_rules_file /etc/nginx/modsecurity-rules.conf;

View File

@@ -7,6 +7,7 @@ location = %ANTIBOT_URI% {
local cookie = require "cookie"
local captcha = require "captcha"
if not cookie.is_set("uri") then
ngx.log(ngx.NOTICE, "[ANTIBOT] captcha fail (1) for " .. ngx.var.remote_addr)
return ngx.exit(ngx.HTTP_FORBIDDEN)
end
local img, res = captcha.get_challenge()
@@ -21,16 +22,19 @@ location = %ANTIBOT_URI% {
local cookie = require "cookie"
local captcha = require "captcha"
if not cookie.is_set("captchares") then
ngx.log(ngx.NOTICE, "[ANTIBOT] captcha fail (2) for " .. ngx.var.remote_addr)
return ngx.exit(ngx.HTTP_FORBIDDEN)
end
ngx.req.read_body()
local args, err = ngx.req.get_post_args(1)
if err == "truncated" or not args or not args["captcha"] then
ngx.log(ngx.NOTICE, "[ANTIBOT] captcha fail (3) for " .. ngx.var.remote_addr)
return ngx.exit(ngx.HTTP_FORBIDDEN)
end
local captcha_user = args["captcha"]
local check = captcha.check(captcha_user, cookie.get("captchares"))
if not check then
ngx.log(ngx.NOTICE, "[ANTIBOT] captcha fail (4) for " .. ngx.var.remote_addr)
return ngx.redirect("%ANTIBOT_URI%")
end
cookie.set({captcha = "ok"})

View File

@@ -7,6 +7,7 @@ location = %ANTIBOT_URI% {
local cookie = require "cookie"
local javascript = require "javascript"
if not cookie.is_set("challenge") then
ngx.log(ngx.WARN, "[ANTIBOT] javascript fail (1) for " .. ngx.var.remote_addr)
return ngx.exit(ngx.HTTP_FORBIDDEN)
end
local challenge = cookie.get("challenge")
@@ -20,16 +21,19 @@ location = %ANTIBOT_URI% {
local cookie = require "cookie"
local javascript = require "javascript"
if not cookie.is_set("challenge") then
ngx.log(ngx.WARN, "[ANTIBOT] javascript fail (2) for " .. ngx.var.remote_addr)
return ngx.exit(ngx.HTTP_FORBIDDEN)
end
ngx.req.read_body()
local args, err = ngx.req.get_post_args(1)
if err == "truncated" or not args or not args["challenge"] then
ngx.log(ngx.WARN, "[ANTIBOT] javascript fail (3) for " .. ngx.var.remote_addr)
return ngx.exit(ngx.HTTP_FORBIDDEN)
end
local challenge = args["challenge"]
local check = javascript.check(cookie.get("challenge"), challenge)
if not check then
ngx.log(ngx.WARN, "[ANTIBOT] javascript fail (4) for " .. ngx.var.remote_addr)
return ngx.exit(ngx.HTTP_FORBIDDEN)
end
cookie.set({javascript = "ok"})

View File

@@ -7,6 +7,7 @@ location = %ANTIBOT_URI% {
local cookie = require "cookie"
local recaptcha = require "recaptcha"
if not cookie.is_set("uri") then
ngx.log(ngx.NOTICE, "[ANTIBOT] recaptcha fail (1) for " .. ngx.var.remote_addr)
return ngx.exit(ngx.HTTP_FORBIDDEN)
end
local code = recaptcha.get_code("%ANTIBOT_URI%", "%ANTIBOT_RECAPTCHA_SITEKEY%")
@@ -19,17 +20,19 @@ location = %ANTIBOT_URI% {
local cookie = require "cookie"
local recaptcha = require "recaptcha"
if not cookie.is_set("uri") then
ngx.log(ngx.NOTICE, "[ANTIBOT] recaptcha fail (2) for " .. ngx.var.remote_addr)
return ngx.exit(ngx.HTTP_FORBIDDEN)
end
ngx.req.read_body()
local args, err = ngx.req.get_post_args(1)
if err == "truncated" or not args or not args["token"] then
ngx.log(ngx.NOTICE, "[ANTIBOT] recaptcha fail (3) for " .. ngx.var.remote_addr)
return ngx.exit(ngx.HTTP_FORBIDDEN)
end
local token = args["token"]
local check = recaptcha.check(token, "%ANTIBOT_RECAPTCHA_SECRET%")
if check < %ANTIBOT_RECAPTCHA_SCORE% then
ngx.log(ngx.WARN, "client has recaptcha score of " .. tostring(check))
ngx.log(ngx.NOTICE, "[ANTIBOT] recaptcha fail (4) for " .. ngx.var.remote_addr .. " (score = " .. tostring(check) .. ")")
return ngx.exit(ngx.HTTP_FORBIDDEN)
end
cookie.set({recaptcha = "ok"})

View File

@@ -0,0 +1,2 @@
auth_basic "%AUTH_BASIC_TEXT%";
auth_basic_user_file %NGINX_PREFIX%.htpasswd;

View File

@@ -1,4 +1,4 @@
location %AUTH_BASIC_LOCATION% {
auth_basic "%AUTH_BASIC_TEXT%";
auth_basic_user_file /etc/nginx/.htpasswd;
auth_basic_user_file %NGINX_PREFIX%.htpasswd;
}

4
confs/site/brotli.conf Normal file
View File

@@ -0,0 +1,4 @@
brotli on;
brotli_types %BROTLI_TYPES%;
brotli_comp_level %BROTLI_COMP_LEVEL%;
brotli_min_length %BROTLI_MIN_LENGTH%;

View File

@@ -0,0 +1,6 @@
etag %CLIENT_CACHE_ETAG%;
set $cache "";
if ($uri ~* \.(%CLIENT_CACHE_EXTENSIONS%)$) {
set $cache "%CLIENT_CACHE_CONTROL%";
}
add_header Cache-Control $cache;

25
confs/site/fastcgi.conf Normal file
View File

@@ -0,0 +1,25 @@
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param REQUEST_SCHEME $scheme;
fastcgi_param HTTPS $https if_not_empty;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;

4
confs/site/gzip.conf Normal file
View File

@@ -0,0 +1,4 @@
gzip on;
gzip_comp_level %GZIP_COMP_LEVEL%;
gzip_min_length %GZIP_MIN_LENGTH%;
gzip_types %GZIP_TYPES%;

View File

@@ -1,11 +1,12 @@
listen 0.0.0.0:8443 ssl %HTTP2%;
listen 0.0.0.0:%HTTPS_PORT% ssl %HTTP2%;
ssl_certificate %HTTPS_CERT%;
ssl_certificate_key %HTTPS_KEY%;
ssl_protocols %HTTPS_PROTOCOLS%;
ssl_prefer_server_ciphers off;
ssl_prefer_server_ciphers on;
ssl_session_tickets off;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m;
%STRICT_TRANSPORT_SECURITY%
%SSL_DHPARAM%
%SSL_CIPHERS%
%LETS_ENCRYPT_WEBROOT%

View File

@@ -0,0 +1,3 @@
location ~ ^/.well-known/acme-challenge/ {
root /acme-challenge;
}

View File

@@ -0,0 +1 @@
limit_conn ddos %LIMIT_CONN_MAX%;

11
confs/site/log-lua.conf Normal file
View File

@@ -0,0 +1,11 @@
log_by_lua_block {
local use_bad_behavior = %USE_BAD_BEHAVIOR%
local behavior = require "behavior"
if use_bad_behavior then
behavior.count()
end
}

271
confs/site/main-lua.conf Normal file
View File

@@ -0,0 +1,271 @@
set $session_secret %ANTIBOT_SESSION_SECRET%;
set $session_check_addr on;
access_by_lua_block {
local use_lets_encrypt = %USE_LETS_ENCRYPT%
local use_whitelist_ip = %USE_WHITELIST_IP%
local use_whitelist_reverse = %USE_WHITELIST_REVERSE%
local use_user_agents = %USE_USER_AGENTS%
local use_proxies = %USE_PROXIES%
local use_abusers = %USE_ABUSERS%
local use_tor_exit_nodes = %USE_TOR_EXIT_NODES%
local use_referrers = %USE_REFERRERS%
local use_country = %USE_COUNTRY%
local use_blacklist_ip = %USE_BLACKLIST_IP%
local use_blacklist_reverse = %USE_BLACKLIST_REVERSE%
local use_dnsbl = %USE_DNSBL%
local use_crowdsec = %USE_CROWDSEC%
local use_antibot_cookie = %USE_ANTIBOT_COOKIE%
local use_antibot_javascript = %USE_ANTIBOT_JAVASCRIPT%
local use_antibot_captcha = %USE_ANTIBOT_CAPTCHA%
local use_antibot_recaptcha = %USE_ANTIBOT_RECAPTCHA%
local use_bad_behavior = %USE_BAD_BEHAVIOR%
-- include LUA code
local whitelist = require "whitelist"
local blacklist = require "blacklist"
local dnsbl = require "dnsbl"
local cookie = require "cookie"
local javascript = require "javascript"
local captcha = require "captcha"
local recaptcha = require "recaptcha"
local iputils = require "resty.iputils"
local behavior = require "behavior"
-- user variables
local antibot_uri = "%ANTIBOT_URI%"
local whitelist_user_agent = {%WHITELIST_USER_AGENT%}
local whitelist_uri = {%WHITELIST_URI%}
-- check if already in whitelist cache
if use_whitelist_ip and whitelist.ip_cached_ok() then
ngx.exit(ngx.OK)
end
if use_whitelist_reverse and whitelist.reverse_cached_ok() then
ngx.exit(ngx.OK)
end
-- check if already in blacklist cache
if use_blacklist_ip and blacklist.ip_cached_ko() then
ngx.exit(ngx.HTTP_FORBIDDEN)
end
if use_blacklist_reverse and blacklist.reverse_cached_ko() then
ngx.exit(ngx.HTTP_FORBIDDEN)
end
-- check if already in dnsbl cache
if use_dnsbl and dnsbl.cached_ko() then
ngx.exit(ngx.HTTP_FORBIDDEN)
end
-- check if IP is whitelisted (only if not in cache)
if use_whitelist_ip and not whitelist.ip_cached() then
if whitelist.check_ip() then
ngx.exit(ngx.OK)
end
end
-- check if reverse is whitelisted (only if not in cache)
if use_whitelist_reverse and not whitelist.reverse_cached() then
if whitelist.check_reverse() then
ngx.exit(ngx.OK)
end
end
-- check if URI is whitelisted
for k, v in pairs(whitelist_uri) do
if ngx.var.request_uri == v then
ngx.log(ngx.NOTICE, "[WHITELIST] URI " .. v .. " is whitelisted")
ngx.exit(ngx.OK)
end
end
-- check if it's certbot
if use_lets_encrypt and string.match(ngx.var.request_uri, "^/.well-known/acme-challenge/") then
ngx.exit(ngx.OK)
end
-- check if IP is blacklisted (only if not in cache)
if use_blacklist_ip and not blacklist.ip_cached() then
if blacklist.check_ip() then
ngx.exit(ngx.HTTP_FORBIDDEN)
end
end
-- check if reverse is blacklisted (only if not in cache)
if use_blacklist_reverse and not blacklist.reverse_cached() then
if blacklist.check_reverse() then
ngx.exit(ngx.HTTP_FORBIDDEN)
end
end
-- check if IP is banned because of "bad behavior"
if use_bad_behavior and behavior.is_banned() then
ngx.log(ngx.NOTICE, "[BLOCK] IP " .. ngx.var.remote_addr .. " is banned because of bad behavior")
ngx.exit(ngx.HTTP_FORBIDDEN)
end
-- check if IP is in proxies list
if use_proxies then
local value, flags = ngx.shared.proxies_data:get(iputils.ip2bin(ngx.var.remote_addr))
if value ~= nil then
ngx.log(ngx.NOTICE, "[BLOCK] IP " .. ngx.var.remote_addr .. " is in proxies list")
ngx.exit(ngx.HTTP_FORBIDDEN)
end
end
-- check if IP is in abusers list
if use_abusers then
local value, flags = ngx.shared.abusers_data:get(iputils.ip2bin(ngx.var.remote_addr))
if value ~= nil then
ngx.log(ngx.NOTICE, "[BLOCK] IP " .. ngx.var.remote_addr .. " is in abusers list")
ngx.exit(ngx.HTTP_FORBIDDEN)
end
end
-- check if IP is in TOR exit nodes list
if use_tor_exit_nodes then
local value, flags = ngx.shared.tor_exit_nodes_data:get(iputils.ip2bin(ngx.var.remote_addr))
if value ~= nil then
ngx.log(ngx.NOTICE, "[BLOCK] IP " .. ngx.var.remote_addr .. " is in TOR exit nodes list")
ngx.exit(ngx.HTTP_FORBIDDEN)
end
end
-- check if user-agent is allowed
if use_user_agents and ngx.var.http_user_agent ~= nil then
local whitelisted = false
for k, v in pairs(whitelist_user_agent) do
if string.match(ngx.var.http_user_agent, v) then
ngx.log(ngx.NOTICE, "[ALLOW] User-Agent " .. ngx.var.http_user_agent .. " is whitelisted")
whitelisted = true
break
end
end
if not whitelisted then
local value, flags = ngx.shared.user_agents_cache:get(ngx.var.http_user_agent)
if value == nil then
local patterns = ngx.shared.user_agents_data:get_keys(0)
for i, pattern in ipairs(patterns) do
if string.match(ngx.var.http_user_agent, pattern) then
value = "ko"
ngx.shared.user_agents_cache:set(ngx.var.http_user_agent, "ko", 86400)
break
end
end
if value == nil then
value = "ok"
ngx.shared.user_agents_cache:set(ngx.var.http_user_agent, "ok", 86400)
end
end
if value == "ko" then
ngx.log(ngx.NOTICE, "[BLOCK] User-Agent " .. ngx.var.http_user_agent .. " is blacklisted")
ngx.exit(ngx.HTTP_FORBIDDEN)
end
end
end
-- check if referrer is allowed
if use_referrers and ngx.var.http_referer ~= nil then
local value, flags = ngx.shared.referrers_cache:get(ngx.var.http_referer)
if value == nil then
local patterns = ngx.shared.referrers_data:get_keys(0)
for i, pattern in ipairs(patterns) do
if string.match(ngx.var.http_referer, pattern) then
value = "ko"
ngx.shared.referrers_cache:set(ngx.var.http_referer, "ko", 86400)
break
end
end
if value == nil then
value = "ok"
ngx.shared.referrers_cache:set(ngx.var.http_referer, "ok", 86400)
end
end
if value == "ko" then
ngx.log(ngx.NOTICE, "[BLOCK] Referrer " .. ngx.var.http_referer .. " is blacklisted")
ngx.exit(ngx.HTTP_FORBIDDEN)
end
end
-- check if country is allowed
if use_country and ngx.var.allowed_country == "no" then
ngx.log(ngx.NOTICE, "[BLOCK] Country of " .. ngx.var.remote_addr .. " is blacklisted")
ngx.exit(ngx.HTTP_FORBIDDEN)
end
-- check if IP is in DNSBLs (only if not in cache)
if use_dnsbl and not dnsbl.cached() then
if dnsbl.check() then
ngx.exit(ngx.HTTP_FORBIDDEN)
end
end
-- check if IP is in CrowdSec DB
if use_crowdsec then
local ok, err = require "crowdsec.CrowdSec".allowIp(ngx.var.remote_addr)
if ok == nil then
ngx.log(ngx.ERR, "[Crowdsec] " .. err)
end
if not ok then
ngx.log(ngx.NOTICE, "[Crowdsec] denied '" .. ngx.var.remote_addr .. "'")
ngx.exit(ngx.HTTP_FORBIDDEN)
end
end
-- cookie check
if use_antibot_cookie then
if not cookie.is_set("uri") then
if ngx.var.request_uri ~= antibot_uri then
cookie.set({uri = ngx.var.request_uri})
return ngx.redirect(antibot_uri)
end
ngx.log(ngx.NOTICE, "[ANTIBOT] cookie fail for " .. ngx.var.remote_addr)
return ngx.exit(ngx.HTTP_FORBIDDEN)
else
if ngx.var.request_uri == antibot_uri then
return ngx.redirect(cookie.get("uri"))
end
end
end
-- javascript check
if use_antibot_javascript then
if not cookie.is_set("javascript") then
if ngx.var.request_uri ~= antibot_uri then
cookie.set({uri = ngx.var.request_uri, challenge = javascript.get_challenge()})
return ngx.redirect(antibot_uri)
end
end
end
-- captcha check
if use_antibot_captcha then
if not cookie.is_set("captcha") then
if ngx.var.request_uri ~= antibot_uri then
cookie.set({uri = ngx.var.request_uri})
return ngx.redirect(antibot_uri)
end
end
end
-- recaptcha check
if use_antibot_recaptcha then
if not cookie.is_set("recaptcha") then
if ngx.var.request_uri ~= antibot_uri then
cookie.set({uri = ngx.var.request_uri})
return ngx.redirect(antibot_uri)
end
end
end
ngx.exit(ngx.OK)
}
%INCLUDE_ANTIBOT_JAVASCRIPT%
%INCLUDE_ANTIBOT_CAPTCHA%
%INCLUDE_ANTIBOT_RECAPTCHA%

View File

@@ -49,8 +49,7 @@ SecResponseBodyLimit 524288
SecResponseBodyLimitAction ProcessPartial
# log useful stuff
SecAuditEngine RelevantOnly
SecAuditLogRelevantStatus "^(?:5|4(?!04))"
SecAuditEngine %MODSECURITY_SEC_AUDIT_ENGINE%
SecAuditLogType Serial
SecAuditLog /var/log/nginx/modsec_audit.log

View File

@@ -0,0 +1,2 @@
modsecurity on;
modsecurity_rules_file %MODSEC_RULES_FILE%;

View File

@@ -0,0 +1,4 @@
open_file_cache %OPEN_FILE_CACHE%;
open_file_cache_errors %OPEN_FILE_CACHE_ERRORS%;
open_file_cache_min_uses %OPEN_FILE_CACHE_MIN_USES%;
open_file_cache_valid %OPEN_FILE_CACHE_VALID%;

View File

@@ -0,0 +1 @@
more_set_headers "Permissions-Policy: %PERMISSIONS_POLICY%";

View File

@@ -1,5 +1,4 @@
location ~ \.php$ {
fastcgi_pass %REMOTE_PHP%:9000;
fastcgi_index index.php;
include fastcgi.conf;
}

View File

@@ -0,0 +1,7 @@
proxy_cache proxycache;
proxy_cache_methods %PROXY_CACHE_METHODS%;
proxy_cache_min_uses %PROXY_CACHE_MIN_USES%;
proxy_cache_key %PROXY_CACHE_KEY%;
proxy_no_cache %PROXY_NO_CACHE%;
proxy_cache_bypass %PROXY_CACHE_BYPASS%;
%PROXY_CACHE_VALID%

View File

@@ -0,0 +1,6 @@
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Forwarded-Host $http_host;

View File

@@ -0,0 +1,7 @@
location %REVERSE_PROXY_URL% {
etag off;
proxy_pass %REVERSE_PROXY_HOST%;
%REVERSE_PROXY_HEADERS%
%REVERSE_PROXY_WS%
%REVERSE_PROXY_CUSTOM_HEADERS%
}

View File

@@ -1,6 +1,11 @@
%PRE_SERVER_CONF%
server {
include /server-confs/*.conf;
include /etc/nginx/main-lua.conf;
%FASTCGI_PATH%
%SERVER_CONF%
%PROXY_REAL_IP%
%INCLUDE_LUA%
%USE_MODSECURITY%
%LISTEN_HTTP%
%USE_HTTPS%
%REDIRECT_HTTP_TO_HTTPS%
@@ -12,21 +17,25 @@ server {
return 405;
}
%LIMIT_REQ%
%LIMIT_CONN%
%AUTH_BASIC%
%USE_PHP%
%HEADER_SERVER%
%REMOVE_HEADERS%
%X_FRAME_OPTIONS%
%X_XSS_PROTECTION%
%X_CONTENT_TYPE_OPTIONS%
%CONTENT_SECURITY_POLICY%
%REFERRER_POLICY%
%FEATURE_POLICY%
%BLOCK_COUNTRY%
%BLOCK_USER_AGENT%
%BLOCK_TOR_EXIT_NODE%
%BLOCK_PROXIES%
%BLOCK_ABUSERS%
%PERMISSIONS_POLICY%
%COOKIE_FLAGS%
%ERRORS%
%USE_FAIL2BAN%
%USE_CLIENT_CACHE%
%USE_GZIP%
%USE_BROTLI%
client_max_body_size %MAX_CLIENT_SIZE%;
server_tokens %SERVER_TOKENS%;
%USE_OPEN_FILE_CACHE%
%USE_PROXY_CACHE%
%USE_REVERSE_PROXY%
%USE_PHP%
}

4
dependencies.sh Normal file
View File

@@ -0,0 +1,4 @@
#!/bin/sh
# install dependencies
apk --no-cache add certbot libstdc++ libmaxminddb geoip pcre yajl clamav apache2-utils openssl lua libgd go jq mariadb-connector-c bash brotli

20
docs/Makefile Normal file
View File

@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

55
docs/conf.py Normal file
View File

@@ -0,0 +1,55 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# -- Project information -----------------------------------------------------
project = 'bunkerized-nginx'
copyright = '2021, bunkerity'
author = 'bunkerity'
# The full version, including alpha/beta/rc tags
release = 'v1.2.5'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['myst_parser']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
import sphinx_rtd_theme
html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

View File

@@ -0,0 +1,956 @@
# List of environment variables
## nginx
### Misc
`MULTISITE`
Values : *yes* | *no*
Default value : *no*
Context : *global*
When set to *no*, only one server block will be generated. Otherwise one server per host defined in the `SERVER_NAME` environment variable will be generated.
Any environment variable tagged with the *multisite* context can be used for a specific server block with the following format : *host_VARIABLE=value*. If the variable is used without the host prefix, it will be applied to all the server blocks (but can still be overridden).
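For illustration, here is a hedged example using the Docker CLI (the domain names and the *bunkerity/bunkerized-nginx* image name are assumptions for this sketch) : `USE_GZIP` applies to every server block, while the host-prefixed variant overrides it for app1.example.com only.
```shell
# placeholders : image name and domains are assumptions for this example
# USE_GZIP is global, app1.example.com_USE_GZIP overrides it for that host only
docker run -d \
  -e MULTISITE=yes \
  -e SERVER_NAME="app1.example.com app2.example.com" \
  -e USE_GZIP=yes \
  -e app1.example.com_USE_GZIP=no \
  bunkerity/bunkerized-nginx
```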
`SERVER_NAME`
Values : *&lt;first name&gt; &lt;second name&gt; ...*
Default value : *www.bunkerity.com*
Context : *global*
Sets the host names of the webserver separated with spaces. This must match the Host header sent by clients.
Useful when used with `MULTISITE=yes` and/or `AUTO_LETS_ENCRYPT=yes` and/or `DISABLE_DEFAULT_SERVER=yes`.
`MAX_CLIENT_SIZE`
Values : *0* | *Xm*
Default value : *10m*
Context : *global*, *multisite*
Sets the maximum body size before nginx returns a 413 error code.
Setting to 0 means "infinite" body size.
`ALLOWED_METHODS`
Values : *allowed HTTP methods separated with | char*
Default value : *GET|POST|HEAD*
Context : *global*, *multisite*
Only the HTTP methods listed here will be accepted by nginx. If not listed, nginx will close the connection.
`DISABLE_DEFAULT_SERVER`
Values : *yes* | *no*
Default value : *no*
Context : *global*
If set to yes, nginx will only respond to HTTP requests when the Host header matches a FQDN specified in the `SERVER_NAME` environment variable.
For example, it will close the connection if a bot accesses the site directly by IP address.
`SERVE_FILES`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to yes, nginx will serve files from /www directory within the container.
A use case for not serving files is when you set up bunkerized-nginx as a reverse proxy.
`DNS_RESOLVERS`
Values : *\<two IP addresses separated with a space\>*
Default value : *127.0.0.11*
Context : *global*
The IP addresses of the DNS resolvers to use when performing DNS lookups.
`ROOT_FOLDER`
Values : *\<any valid path to web files\>*
Default value : */www*
Context : *global*
The default folder where nginx will search for web files. Don't change it unless you want to make your own image.
`ROOT_SITE_SUBFOLDER`
Values : *\<any valid directory name\>*
Default value :
Context : *global*, *multisite*
The subfolder where nginx will search for site web files.
`LOG_FORMAT`
Values : *\<any values accepted by the log_format directive\>*
Default value : *$host $remote_addr - $remote_user \[$time_local\] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"*
Context : *global*
The log format used by nginx to generate logs. More info [here](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format).
`HTTP_PORT`
Values : *\<any valid port greater than 1024\>*
Default value : *8080*
Context : *global*
The HTTP port number used by nginx inside the container.
`HTTPS_PORT`
Values : *\<any valid port greater than 1024\>*
Default value : *8443*
Context : *global*
The HTTPS port number used by nginx inside the container.
### Information leak
`SERVER_TOKENS`
Values : *on* | *off*
Default value : *off*
Context : *global*
If set to on, nginx will display the server version in the Server header and on default error pages.
`REMOVE_HEADERS`
Values : \<*list of headers separated with space*\>
Default value : *Server X-Powered-By X-AspNet-Version X-AspNetMvc-Version*
Context : *global*, *multisite*
List of headers to remove from responses sent to clients.
### Custom error pages
`ERROR_XXX`
Values : *\<relative path to the error page\>*
Default value :
Context : *global*, *multisite*
Use this kind of environment variable to define a custom error page for a given HTTP error code. Replace XXX with the HTTP code.
For example : `ERROR_404=/404.html` means the /404.html page will be displayed when a 404 code is generated. The path is relative to the root web folder.
### HTTP basic authentication
`USE_AUTH_BASIC`
Values : *yes* | *no*
Default value : *no*
Context : *global*, *multisite*
If set to yes, enables HTTP basic authentication at the location `AUTH_BASIC_LOCATION` with user `AUTH_BASIC_USER` and password `AUTH_BASIC_PASSWORD`.
`AUTH_BASIC_LOCATION`
Values : *sitewide* | */somedir* | *\<any valid location\>*
Default value : *sitewide*
Context : *global*, *multisite*
The location to restrict when `USE_AUTH_BASIC` is set to *yes*. If the special value *sitewide* is used then auth basic will be set at server level outside any location context.
`AUTH_BASIC_USER`
Values : *\<any valid username\>*
Default value : *changeme*
Context : *global*, *multisite*
The username allowed to access `AUTH_BASIC_LOCATION` when `USE_AUTH_BASIC` is set to yes.
`AUTH_BASIC_PASSWORD`
Values : *\<any valid password\>*
Default value : *changeme*
Context : *global*, *multisite*
The password of `AUTH_BASIC_USER` when `USE_AUTH_BASIC` is set to yes.
`AUTH_BASIC_TEXT`
Values : *\<any valid text\>*
Default value : *Restricted area*
Context : *global*, *multisite*
The text displayed inside the login prompt when `USE_AUTH_BASIC` is set to yes.
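As an illustration, a hedged example protecting a single location with basic authentication (image name, path and credentials are placeholders) :
```shell
# placeholders : restrict /admin with HTTP basic authentication
docker run -d \
  -e USE_AUTH_BASIC=yes \
  -e AUTH_BASIC_LOCATION=/admin \
  -e AUTH_BASIC_USER=admin \
  -e AUTH_BASIC_PASSWORD=changeme42 \
  -e AUTH_BASIC_TEXT="Admin area" \
  bunkerity/bunkerized-nginx
```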
### Reverse proxy
`USE_REVERSE_PROXY`
Values : *yes* | *no*
Default value : *no*
Context : *global*, *multisite*
Set this environment variable to *yes* if you want to use bunkerized-nginx as a reverse proxy.
`REVERSE_PROXY_URL`
Values : \<*any valid location path*\>
Default value :
Context : *global*, *multisite*
Only valid when `USE_REVERSE_PROXY` is set to *yes*. Lets you define the location path to match when acting as a reverse proxy.
You can set multiple URL/host pairs by adding a suffix number to the variable name like this : `REVERSE_PROXY_URL_1`, `REVERSE_PROXY_URL_2`, `REVERSE_PROXY_URL_3`, ...
`REVERSE_PROXY_HOST`
Values : \<*any valid proxy_pass value*\>
Default value :
Context : *global*, *multisite*
Only valid when `USE_REVERSE_PROXY` is set to *yes*. Lets you define the proxy_pass destination to use when acting as a reverse proxy.
You can set multiple URL/host pairs by adding a suffix number to the variable name like this : `REVERSE_PROXY_HOST_1`, `REVERSE_PROXY_HOST_2`, `REVERSE_PROXY_HOST_3`, ...
`REVERSE_PROXY_WS`
Values : *yes* | *no*
Default value : *no*
Context : *global*, *multisite*
Only valid when `USE_REVERSE_PROXY` is set to *yes*. Set it to *yes* when the corresponding `REVERSE_PROXY_HOST` is a WebSocket server.
You can set multiple URL/host pairs by adding a suffix number to the variable name like this : `REVERSE_PROXY_WS_1`, `REVERSE_PROXY_WS_2`, `REVERSE_PROXY_WS_3`, ...
`REVERSE_PROXY_HEADERS`
Values : *\<list of custom headers separated with a semicolon like this : header1 value1;header2 value2...\>*
Default value :
Context : *global*, *multisite*
Only valid when `USE_REVERSE_PROXY` is set to *yes*.
You can set multiple URL/host pairs by adding a suffix number to the variable name like this : `REVERSE_PROXY_HEADERS_1`, `REVERSE_PROXY_HEADERS_2`, `REVERSE_PROXY_HEADERS_3`, ...
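To illustrate the suffix notation, a hedged sketch proxying two applications (service names, ports and the image name are assumptions) :
```shell
# placeholders : / goes to app1, /api goes to app2
docker run -d \
  -e USE_REVERSE_PROXY=yes \
  -e SERVE_FILES=no \
  -e REVERSE_PROXY_URL_1=/ \
  -e REVERSE_PROXY_HOST_1=http://app1:8080 \
  -e REVERSE_PROXY_URL_2=/api \
  -e REVERSE_PROXY_HOST_2=http://app2:5000 \
  bunkerity/bunkerized-nginx
```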
`PROXY_REAL_IP`
Values : *yes* | *no*
Default value : *no*
Context : *global*, *multisite*
Set this environment variable to *yes* if you're using bunkerized-nginx behind a reverse proxy. This means you will see the real client address instead of the proxy one inside your logs. ModSecurity, fail2ban and other security tools will then also work correctly.
`PROXY_REAL_IP_FROM`
Values : *\<list of trusted IP addresses and/or networks separated with spaces\>*
Default value : *192.168.0.0/16 172.16.0.0/12 10.0.0.0/8*
Context : *global*, *multisite*
When `PROXY_REAL_IP` is set to *yes*, lets you define the trusted IPs/networks allowed to send the correct client address.
`PROXY_REAL_IP_HEADER`
Values : *X-Forwarded-For* | *X-Real-IP* | *custom header*
Default value : *X-Forwarded-For*
Context : *global*, *multisite*
When `PROXY_REAL_IP` is set to *yes*, lets you define the header that contains the real client IP address.
`PROXY_REAL_IP_RECURSIVE`
Values : *on* | *off*
Default value : *on*
Context : *global*, *multisite*
When `PROXY_REAL_IP` is set to *yes*, setting this to *on* avoids spoofing attacks using the header defined in `PROXY_REAL_IP_HEADER`.
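A hedged example for a deployment behind another reverse proxy or load balancer (the trusted network and the image name are assumptions) :
```shell
# placeholders : trust X-Forwarded-For sent by a front proxy on 192.168.0.0/16
docker run -d \
  -e PROXY_REAL_IP=yes \
  -e PROXY_REAL_IP_FROM="192.168.0.0/16" \
  -e PROXY_REAL_IP_HEADER=X-Forwarded-For \
  -e PROXY_REAL_IP_RECURSIVE=on \
  bunkerity/bunkerized-nginx
```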
### Compression
`USE_GZIP`
Values : *yes* | *no*
Default value : *no*
Context : *global*, *multisite*
When set to *yes*, nginx will use the gzip algorithm to compress responses sent to clients.
`GZIP_COMP_LEVEL`
Values : \<*any integer between 1 and 9*\>
Default value : *5*
Context : *global*, *multisite*
The gzip compression level to use when `USE_GZIP` is set to *yes*.
`GZIP_MIN_LENGTH`
Values : \<*any positive integer*\>
Default value : *1000*
Context : *global*, *multisite*
The minimum size (in bytes) of a response required to compress when `USE_GZIP` is set to *yes*.
`GZIP_TYPES`
Values : \<*list of mime types separated with space*\>
Default value : *application/atom+xml application/javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-opentype application/x-font-truetype application/x-font-ttf application/x-javascript application/xhtml+xml application/xml font/eot font/opentype font/otf font/truetype image/svg+xml image/vnd.microsoft.icon image/x-icon image/x-win-bitmap text/css text/javascript text/plain text/xml*
Context : *global*, *multisite*
List of response MIME types to compress when `USE_GZIP` is set to *yes*.
`USE_BROTLI`
Values : *yes* | *no*
Default value : *no*
Context : *global*, *multisite*
When set to *yes*, nginx will use the brotli algorithm to compress responses sent to clients.
`BROTLI_COMP_LEVEL`
Values : \<*any integer between 1 and 9*\>
Default value : *5*
Context : *global*, *multisite*
The brotli compression level to use when `USE_BROTLI` is set to *yes*.
`BROTLI_MIN_LENGTH`
Values : \<*any positive integer*\>
Default value : *1000*
Context : *global*, *multisite*
The minimum size (in bytes) of a response required to compress when `USE_BROTLI` is set to *yes*.
`BROTLI_TYPES`
Values : \<*list of mime types separated with space*\>
Default value : *application/atom+xml application/javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-opentype application/x-font-truetype application/x-font-ttf application/x-javascript application/xhtml+xml application/xml font/eot font/opentype font/otf font/truetype image/svg+xml image/vnd.microsoft.icon image/x-icon image/x-win-bitmap text/css text/javascript text/plain text/xml*
Context : *global*, *multisite*
List of response MIME types to compress when `USE_BROTLI` is set to *yes*.
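A hedged example enabling both algorithms (the levels and the image name are illustrative; clients advertising *br* support should typically receive brotli) :
```shell
# placeholders : enable gzip and brotli with a medium compression level
docker run -d \
  -e USE_GZIP=yes \
  -e GZIP_COMP_LEVEL=6 \
  -e USE_BROTLI=yes \
  -e BROTLI_COMP_LEVEL=6 \
  bunkerity/bunkerized-nginx
```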
### Cache
`USE_CLIENT_CACHE`
Values : *yes* | *no*
Default value : *no*
Context : *global*, *multisite*
When set to *yes*, clients will be told to cache some files locally.
`CLIENT_CACHE_EXTENSIONS`
Values : \<*list of extensions separated with |*\>
Default value : *jpg|jpeg|png|bmp|ico|svg|tif|css|js|otf|ttf|eot|woff|woff2*
Context : *global*, *multisite*
List of file extensions that clients should cache when `USE_CLIENT_CACHE` is set to *yes*.
`CLIENT_CACHE_CONTROL`
Values : \<*Cache-Control header value*\>
Default value : *public, max-age=15552000*
Context : *global*, *multisite*
Content of the [Cache-Control](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control) header to send when `USE_CLIENT_CACHE` is set to *yes*.
`CLIENT_CACHE_ETAG`
Values : *on* | *off*
Default value : *on*
Context : *global*, *multisite*
Whether or not nginx will send the [ETag](https://en.wikipedia.org/wiki/HTTP_ETag) header when `USE_CLIENT_CACHE` is set to *yes*.
`USE_OPEN_FILE_CACHE`
Values : *yes* | *no*
Default value : *no*
Context : *global*, *multisite*
When set to *yes*, nginx will cache open file descriptors, directory existence checks, etc. See [open_file_cache](http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache).
`OPEN_FILE_CACHE`
Values : \<*any valid open_file_cache parameters*\>
Default value : *max=1000 inactive=20s*
Context : *global*, *multisite*
Parameters to use with open_file_cache when `USE_OPEN_FILE_CACHE` is set to *yes*.
`OPEN_FILE_CACHE_ERRORS`
Values : *on* | *off*
Default value : *on*
Context : *global*, *multisite*
Whether or not nginx should cache file lookup errors when `USE_OPEN_FILE_CACHE` is set to *yes*.
`OPEN_FILE_CACHE_MIN_USES`
Values : \<*any valid integer*\>
Default value : *2*
Context : *global*, *multisite*
The minimum number of file accesses required to cache the file descriptor when `USE_OPEN_FILE_CACHE` is set to *yes*.
`OPEN_FILE_CACHE_VALID`
Values : \<*any time value like Xs, Xm, Xh, ...*\>
Default value : *30s*
Context : *global*, *multisite*
The time after which cached elements should be validated when `USE_OPEN_FILE_CACHE` is set to *yes*.
`USE_PROXY_CACHE`
Values : *yes* | *no*
Default value : *no*
Context : *global*, *multisite*
When set to *yes*, nginx will cache responses from proxied applications. See [proxy_cache](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache).
`PROXY_CACHE_PATH_ZONE_SIZE`
Values : \<*any valid size like Xk, Xm, Xg, ...*\>
Default value : *10m*
Context : *global*, *multisite*
Maximum size of cached metadata when `USE_PROXY_CACHE` is set to *yes*.
`PROXY_CACHE_PATH_PARAMS`
Values : \<*any valid parameters to proxy_cache_path directive*\>
Default value : *max_size=100m*
Context : *global*, *multisite*
Parameters to use for [proxy_cache_path](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_path) directive when `USE_PROXY_CACHE` is set to *yes*.
`PROXY_CACHE_METHODS`
Values : \<*list of HTTP methods separated with space*\>
Default value : *GET HEAD*
Context : *global*, *multisite*
The HTTP methods that should trigger a cache operation when `USE_PROXY_CACHE` is set to *yes*.
`PROXY_CACHE_MIN_USES`
Values : \<*any positive integer*\>
Default value : *2*
Context : *global*, *multisite*
The minimum number of requests before the response is cached when `USE_PROXY_CACHE` is set to *yes*.
`PROXY_CACHE_KEY`
Values : \<*list of variables*\>
Default value : *$scheme$host$request_uri*
Context : *global*, *multisite*
The key used to uniquely identify a cached response when `USE_PROXY_CACHE` is set to *yes*.
`PROXY_CACHE_VALID`
Values : \<*status=time list separated with space*\>
Default value : *200=10m 301=10m 302=1h*
Context : *global*, *multisite*
Define the caching time depending on the HTTP status code (list of status=time separated with space) when `USE_PROXY_CACHE` is set to *yes*.
`PROXY_NO_CACHE`
Values : \<*list of variables*\>
Default value : *$http_authorization*
Context : *global*, *multisite*
Conditions that must be met to disable caching of the response when `USE_PROXY_CACHE` is set to *yes*.
`PROXY_CACHE_BYPASS`
Values : \<*list of variables*\>
Default value : *$http_authorization*
Context : *global*, *multisite*
Conditions that must be met to bypass the cache when `USE_PROXY_CACHE` is set to *yes*.
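A hedged example caching responses of a proxied application (values, service name and image name are illustrative) :
```shell
# placeholders : cache GET/HEAD responses from the proxied application
docker run -d \
  -e USE_REVERSE_PROXY=yes \
  -e REVERSE_PROXY_URL=/ \
  -e REVERSE_PROXY_HOST=http://app1:8080 \
  -e USE_PROXY_CACHE=yes \
  -e PROXY_CACHE_METHODS="GET HEAD" \
  -e PROXY_CACHE_VALID="200=10m 301=10m 302=1h" \
  bunkerity/bunkerized-nginx
```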
## HTTPS
### Let's Encrypt
`AUTO_LETS_ENCRYPT`
Values : *yes* | *no*
Default value : *no*
Context : *global*, *multisite*
If set to yes, automatic certificate generation and renewal will be set up through Let's Encrypt. This will enable HTTPS on your website for free.
You will need to redirect port 80 to port 8080 inside the container and also set the `SERVER_NAME` environment variable.
`EMAIL_LETS_ENCRYPT`
Values : *contact@yourdomain.com*
Default value : *contact@first-domain-in-server-name*
Context : *global*, *multisite*
Defines the contact email address declared in the certificate.
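A hedged example (domain, email, web files path and image name are placeholders) : note the port redirections from 80/443 on the host to 8080/8443 inside the container.
```shell
# placeholders : automatic HTTPS with Let's Encrypt for www.example.com
docker run -d \
  -p 80:8080 \
  -p 443:8443 \
  -e SERVER_NAME=www.example.com \
  -e AUTO_LETS_ENCRYPT=yes \
  -e EMAIL_LETS_ENCRYPT=contact@example.com \
  -v "${PWD}/www:/www:ro" \
  bunkerity/bunkerized-nginx
```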
### HTTP
`LISTEN_HTTP`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to no, nginx will not listen on HTTP (port 80).
Useful if you only want HTTPS access to your website.
`REDIRECT_HTTP_TO_HTTPS`
Values : *yes* | *no*
Default value : *no*
Context : *global*, *multisite*
If set to yes, nginx will redirect all HTTP requests to HTTPS.
### Custom certificate
`USE_CUSTOM_HTTPS`
Values : *yes* | *no*
Default value : *no*
Context : *global*, *multisite*
If set to yes, HTTPS will be enabled with certificate/key of your choice.
`CUSTOM_HTTPS_CERT`
Values : *\<any valid path inside the container\>*
Default value :
Context : *global*, *multisite*
Full path of the certificate or bundle file to use when `USE_CUSTOM_HTTPS` is set to yes. If your chain of trust contains one or more intermediate certificate(s), you will need to bundle them into a single file (more info [here](https://nginx.org/en/docs/http/configuring_https_servers.html#chains)).
`CUSTOM_HTTPS_KEY`
Values : *\<any valid path inside the container\>*
Default value :
Context : *global*, *multisite*
Full path of the key file to use when `USE_CUSTOM_HTTPS` is set to yes.
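A hedged example mounting an existing certificate bundle and key (the host path, the /certs mount point and the image name are arbitrary choices for this sketch) :
```shell
# placeholders : use your own certificate bundle and private key
docker run -d \
  -p 80:8080 \
  -p 443:8443 \
  -e USE_CUSTOM_HTTPS=yes \
  -e CUSTOM_HTTPS_CERT=/certs/bundle.pem \
  -e CUSTOM_HTTPS_KEY=/certs/key.pem \
  -v "${PWD}/certs:/certs:ro" \
  bunkerity/bunkerized-nginx
```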
### Self-signed certificate
`GENERATE_SELF_SIGNED_SSL`
Values : *yes* | *no*
Default value : *no*
Context : *global*
If set to yes, HTTPS will be enabled with a self-signed certificate generated by the container.
`SELF_SIGNED_SSL_EXPIRY`
Values : *integer*
Default value : *365* (1 year)
Context : *global*
Needs `GENERATE_SELF_SIGNED_SSL` to work.
Sets the expiry date for the self generated certificate.
`SELF_SIGNED_SSL_COUNTRY`
Values : *text*
Default value : *Switzerland*
Context : *global*
Needs `GENERATE_SELF_SIGNED_SSL` to work.
Sets the country for the self generated certificate.
`SELF_SIGNED_SSL_STATE`
Values : *text*
Default value : *Switzerland*
Context : *global*
Needs `GENERATE_SELF_SIGNED_SSL` to work.
Sets the state for the self generated certificate.
`SELF_SIGNED_SSL_CITY`
Values : *text*
Default value : *Bern*
Context : *global*
Needs `GENERATE_SELF_SIGNED_SSL` to work.
Sets the city for the self generated certificate.
`SELF_SIGNED_SSL_ORG`
Values : *text*
Default value : *AcmeInc*
Context : *global*
Needs `GENERATE_SELF_SIGNED_SSL` to work.
Sets the organisation name for the self generated certificate.
`SELF_SIGNED_SSL_OU`
Values : *text*
Default value : *IT*
Context : *global*
Needs `GENERATE_SELF_SIGNED_SSL` to work.
Sets the organisational unit for the self generated certificate.
`SELF_SIGNED_SSL_CN`
Values : *text*
Default value : *bunkerity-nginx*
Context : *global*
Needs `GENERATE_SELF_SIGNED_SSL` to work.
Sets the CN server name for the self generated certificate.
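A hedged example generating a one-year self-signed certificate (field values and image name are illustrative) :
```shell
# placeholders : HTTPS with a self-signed certificate generated by the container
docker run -d \
  -p 443:8443 \
  -e GENERATE_SELF_SIGNED_SSL=yes \
  -e SELF_SIGNED_SSL_EXPIRY=365 \
  -e SELF_SIGNED_SSL_CN=www.example.com \
  bunkerity/bunkerized-nginx
```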
### Misc
`HTTP2`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to yes, nginx will use the HTTP/2 protocol when HTTPS is enabled.
`HTTPS_PROTOCOLS`
Values : *TLSv1.2* | *TLSv1.3* | *TLSv1.2 TLSv1.3*
Default value : *TLSv1.2 TLSv1.3*
Context : *global*, *multisite*
The supported versions of TLS. We recommend the default value *TLSv1.2 TLSv1.3* for compatibility reasons.
## ModSecurity
`USE_MODSECURITY`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to yes, the ModSecurity WAF will be enabled.
You can include custom rules by adding .conf files into the /modsec-confs/ directory inside the container (e.g., through a volume).
`USE_MODSECURITY_CRS`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to yes, the [OWASP ModSecurity Core Rule Set](https://coreruleset.org/) will be used. It provides generic rules to detect common web attacks.
You can customize the CRS (e.g., add WordPress exclusions) by adding custom .conf files into the /modsec-crs-confs/ directory inside the container (e.g., through a volume). Files inside this directory are included before the CRS rules. If you need to tweak the loaded rules (e.g., SecRuleUpdateTargetById), put .conf files inside /modsec-confs/, which is included after the CRS rules.
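A hedged example mounting custom ModSecurity rules and CRS tuning files (the host directory names and the image name are placeholders) :
```shell
# placeholders : custom ModSecurity rules and CRS exclusions through volumes
docker run -d \
  -e USE_MODSECURITY=yes \
  -e USE_MODSECURITY_CRS=yes \
  -v "${PWD}/modsec-confs:/modsec-confs:ro" \
  -v "${PWD}/modsec-crs-confs:/modsec-crs-confs:ro" \
  bunkerity/bunkerized-nginx
```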
## Security headers
`X_FRAME_OPTIONS`
Values : *DENY* | *SAMEORIGIN* | *ALLOW-FROM https://www.website.net*
Default value : *DENY*
Context : *global*, *multisite*
Policy to be used when the site is displayed through iframe. Can be used to mitigate clickjacking attacks.
More info [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options).
`X_XSS_PROTECTION`
Values : *0* | *1* | *1; mode=block*
Default value : *1; mode=block*
Context : *global*, *multisite*
Policy to be used when XSS is detected by the browser. Only works with Internet Explorer.
More info [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection).
`X_CONTENT_TYPE_OPTIONS`
Values : *nosniff*
Default value : *nosniff*
Context : *global*, *multisite*
Tells the browser to be strict about MIME type.
More info [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options).
`REFERRER_POLICY`
Values : *no-referrer* | *no-referrer-when-downgrade* | *origin* | *origin-when-cross-origin* | *same-origin* | *strict-origin* | *strict-origin-when-cross-origin* | *unsafe-url*
Default value : *no-referrer*
Context : *global*, *multisite*
Policy to be used for the Referer header.
More info [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy).
`FEATURE_POLICY`
Values : *&lt;directive&gt; &lt;allow list&gt;*
Default value : *accelerometer 'none'; ambient-light-sensor 'none'; autoplay 'none'; camera 'none'; display-capture 'none'; document-domain 'none'; encrypted-media 'none'; fullscreen 'none'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; midi 'none'; payment 'none'; picture-in-picture 'none'; speaker 'none'; sync-xhr 'none'; usb 'none'; vibrate 'none'; vr 'none'*
Context : *global*, *multisite*
Tells the browser which features can be used on the website.
More info [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Feature-Policy).
`PERMISSIONS_POLICY`
Values : *feature=(allow list)*
Default value : accelerometer=(), ambient-light-sensor=(), autoplay=(), camera=(), display-capture=(), document-domain=(), encrypted-media=(), fullscreen=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), midi=(), payment=(), picture-in-picture=(), speaker=(), sync-xhr=(), usb=(), vibrate=(), vr=()
Context : *global*, *multisite*
Tells the browser which features can be used on the website.
More info [here](https://www.w3.org/TR/permissions-policy-1/).
`COOKIE_FLAGS`
Values : *\* HttpOnly* | *MyCookie secure SameSite=Lax* | *...*
Default value : *\* HttpOnly SameSite=Lax*
Context : *global*, *multisite*
Adds some security to the cookies set by the server.
Accepted values can be found [here](https://github.com/AirisX/nginx_cookie_flag_module).
`COOKIE_AUTO_SECURE_FLAG`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
When set to *yes*, the *secure* flag will be automatically added to cookies when using HTTPS.
`STRICT_TRANSPORT_SECURITY`
Values : *max-age=expireTime [; includeSubDomains] [; preload]*
Default value : *max-age=31536000*
Context : *global*, *multisite*
Tells the browser to use exclusively HTTPS instead of HTTP when communicating with the server.
More info [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security).
`CONTENT_SECURITY_POLICY`
Values : *\<directive 1\>; \<directive 2\>; ...*
Default value : *object-src 'none'; frame-ancestors 'self'; form-action 'self'; block-all-mixed-content; sandbox allow-forms allow-same-origin allow-scripts allow-popups allow-downloads; base-uri 'self';*
Context : *global*, *multisite*
Policy to be used when loading resources (scripts, forms, frames, ...).
More info [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy).
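As a sketch (domains and values are illustrative), security headers can be set globally or overridden per site in a multisite setup :
```shell
# app1.example.com and app2.example.com are placeholder domains
docker run -p 80:8080 \
       -v /path/to/web/files:/www:ro \
       -e "SERVER_NAME=app1.example.com app2.example.com" \
       -e MULTISITE=yes \
       -e X_FRAME_OPTIONS=SAMEORIGIN \
       -e "app2.example.com_CONTENT_SECURITY_POLICY=frame-ancestors 'self'; form-action 'self';" \
       bunkerity/bunkerized-nginx
```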
## Blocking
### Antibot
`USE_ANTIBOT`
Values : *no* | *cookie* | *javascript* | *captcha* | *recaptcha*
Default value : *no*
Context : *global*, *multisite*
If set to any allowed value other than *no*, users must complete a "challenge" before accessing the pages of your website :
- *cookie* : asks the users to set a cookie
- *javascript* : users must execute a javascript code
- *captcha* : a text captcha must be resolved by the users
- *recaptcha* : use [Google reCAPTCHA v3](https://developers.google.com/recaptcha/intro) score to allow/deny users
`ANTIBOT_URI`
Values : *\<any valid uri\>*
Default value : */challenge*
Context : *global*, *multisite*
A valid and unused URI to redirect users to when `USE_ANTIBOT` is enabled. Be sure that it doesn't exist on your website.
`ANTIBOT_SESSION_SECRET`
Values : *random* | *\<32 chars of your choice\>*
Default value : *random*
Context : *global*, *multisite*
A secret used to generate sessions when `USE_ANTIBOT` is set. Using the special *random* value will generate a random one. Be sure to use the same value when you are in a multi-server environment (so sessions are valid in all the servers).
`ANTIBOT_RECAPTCHA_SCORE`
Values : *\<0.0 to 1.0\>*
Default value : *0.7*
Context : *global*, *multisite*
The minimum score required when `USE_ANTIBOT` is set to *recaptcha*.
`ANTIBOT_RECAPTCHA_SITEKEY`
Values : *\<public key given by Google\>*
Default value :
Context : *global*
The sitekey given by Google when `USE_ANTIBOT` is set to *recaptcha*.
`ANTIBOT_RECAPTCHA_SECRET`
Values : *\<private key given by Google\>*
Default value :
Context : *global*
The secret given by Google when `USE_ANTIBOT` is set to *recaptcha*.
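For example (a sketch : the sitekey and secret below are placeholders for the values given by Google) :
```shell
# mypublicsitekey and myprivatesecret are placeholders
docker run -p 80:8080 \
       -v /path/to/web/files:/www:ro \
       -e USE_ANTIBOT=recaptcha \
       -e ANTIBOT_URI=/challenge \
       -e ANTIBOT_RECAPTCHA_SCORE=0.7 \
       -e ANTIBOT_RECAPTCHA_SITEKEY=mypublicsitekey \
       -e ANTIBOT_RECAPTCHA_SECRET=myprivatesecret \
       bunkerity/bunkerized-nginx
```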
### External blacklists
`BLOCK_USER_AGENT`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to yes, clients with a "bad" user agent will be blocked.
Blacklist can be found [here](https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/_generator_lists/bad-user-agents.list) and [here](https://raw.githubusercontent.com/JayBizzle/Crawler-Detect/master/raw/Crawlers.txt).
`BLOCK_TOR_EXIT_NODE`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to yes, known TOR exit nodes will be blocked.
Blacklist can be found [here](https://iplists.firehol.org/?ipset=tor_exits).
`BLOCK_PROXIES`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to yes, known proxies will be blocked.
Blacklist can be found [here](https://iplists.firehol.org/?ipset=firehol_proxies).
`BLOCK_ABUSERS`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to yes, known abusers will be blocked.
Blacklist can be found [here](https://iplists.firehol.org/?ipset=firehol_abusers_30d).
`BLOCK_REFERRER`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to yes, requests with a known bad Referer header will be blocked.
Blacklist can be found [here](https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/_generator_lists/bad-referrers.list).
### DNSBL
`USE_DNSBL`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to *yes*, DNSBL checks will be performed against the servers specified in the `DNSBL_LIST` environment variable.
`DNSBL_LIST`
Values : *\<list of DNS zones separated with spaces\>*
Default value : *bl.blocklist.de problems.dnsbl.sorbs.net sbl.spamhaus.org xbl.spamhaus.org*
Context : *global*
The list of DNSBL zones to query when `USE_DNSBL` is set to *yes*.
### CrowdSec
`USE_CROWDSEC`
Values : *yes* | *no*
Default value : *no*
Context : *global*, *multisite*
If set to *yes*, [CrowdSec](https://github.com/crowdsecurity/crowdsec) will be enabled. Please note that you need a running CrowdSec instance; see the example [here](https://github.com/bunkerity/bunkerized-nginx/tree/master/examples/crowdsec).
`CROWDSEC_HOST`
Values : *\<full URL to the CrowdSec instance API\>*
Default value :
Context : *global*
The full URL to the CrowdSec API.
`CROWDSEC_KEY`
Values : *\<CrowdSec bouncer key\>*
Default value :
Context : *global*
The CrowdSec key given by *cscli bouncer add BouncerName*.
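A minimal sketch (the URL and key are placeholders, and the CrowdSec instance must be reachable from the container) :
```shell
# http://mycrowdsec:8080 and MyBouncerKey are placeholders
docker run -p 80:8080 \
       -v /path/to/web/files:/www:ro \
       -e USE_CROWDSEC=yes \
       -e CROWDSEC_HOST=http://mycrowdsec:8080 \
       -e CROWDSEC_KEY=MyBouncerKey \
       bunkerity/bunkerized-nginx
```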
### Custom whitelisting
`USE_WHITELIST_IP`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to *yes*, lets you define custom IP addresses to be whitelisted through the `WHITELIST_IP_LIST` environment variable.
`WHITELIST_IP_LIST`
Values : *\<list of IP addresses and/or network CIDR blocks separated with spaces\>*
Default value : *23.21.227.69 40.88.21.235 50.16.241.113 50.16.241.114 50.16.241.117 50.16.247.234 52.204.97.54 52.5.190.19 54.197.234.188 54.208.100.253 54.208.102.37 107.21.1.8*
Context : *global*
The list of IP addresses and/or network CIDR blocks to whitelist when `USE_WHITELIST_IP` is set to *yes*. The default list contains IP addresses of the [DuckDuckGo crawler](https://help.duckduckgo.com/duckduckgo-help-pages/results/duckduckbot/).
`USE_WHITELIST_REVERSE`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to *yes*, lets you define custom reverse DNS suffixes to be whitelisted through the `WHITELIST_REVERSE_LIST` environment variable.
`WHITELIST_REVERSE_LIST`
Values : *\<list of reverse DNS suffixes separated with spaces\>*
Default value : *.googlebot.com .google.com .search.msn.com .crawl.yahoot.net .crawl.baidu.jp .crawl.baidu.com .yandex.com .yandex.ru .yandex.net*
Context : *global*
The list of reverse DNS suffixes to whitelist when `USE_WHITELIST_REVERSE` is set to *yes*. The default list contains suffixes of major search engines.
`WHITELIST_USER_AGENT`
Values : *\<list of regexes separated with spaces\>*
Default value :
Context : *global*, *multisite*
Whitelist of user agents (regexes) that won't be blocked by `BLOCK_USER_AGENT`.
`WHITELIST_URI`
Values : *\<list of URI separated with spaces\>*
Default value :
Context : *global*, *multisite*
URIs listed here have security checks (bad user-agents, bad IP, ...) disabled. Useful when using callbacks, for example.
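For instance (a sketch : the IP addresses and URI are placeholders, and keep in mind that setting `WHITELIST_IP_LIST` replaces its default value) :
```shell
# 203.0.113.42, 198.51.100.0/24 and /payment/callback are placeholders
docker run -p 80:8080 \
       -v /path/to/web/files:/www:ro \
       -e USE_WHITELIST_IP=yes \
       -e "WHITELIST_IP_LIST=203.0.113.42 198.51.100.0/24" \
       -e WHITELIST_URI=/payment/callback \
       bunkerity/bunkerized-nginx
```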
### Custom blacklisting
`USE_BLACKLIST_IP`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to *yes*, lets you define custom IP addresses to be blacklisted through the `BLACKLIST_IP_LIST` environment variable.
`BLACKLIST_IP_LIST`
Values : *\<list of IP addresses and/or network CIDR blocks separated with spaces\>*
Default value :
Context : *global*
The list of IP addresses and/or network CIDR blocks to blacklist when `USE_BLACKLIST_IP` is set to *yes*.
`USE_BLACKLIST_REVERSE`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to *yes*, lets you define custom reverse DNS suffixes to be blacklisted through the `BLACKLIST_REVERSE_LIST` environment variable.
`BLACKLIST_REVERSE_LIST`
Values : *\<list of reverse DNS suffixes separated with spaces\>*
Default value : *.shodan.io*
Context : *global*
The list of reverse DNS suffixes to blacklist when `USE_BLACKLIST_REVERSE` is set to *yes*.
### Requests limiting
`USE_LIMIT_REQ`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to yes, the amount of HTTP requests made by a user for a given resource will be limited during a period of time.
More info on rate limiting [here](https://www.nginx.com/blog/rate-limiting-nginx/) (the key used is $binary_remote_addr$uri).
`LIMIT_REQ_RATE`
Values : *Xr/s* | *Xr/m*
Default value : *1r/s*
Context : *global*, *multisite*
The rate limit to apply when `USE_LIMIT_REQ` is set to *yes*. Default is 1 request to the same URI and from the same IP per second.
`LIMIT_REQ_BURST`
Values : *<any valid integer\>*
Default value : *2*
Context : *global*, *multisite*
The number of requests to put in queue before rejecting requests.
`LIMIT_REQ_CACHE`
Values : *Xm* | *Xk*
Default value : *10m*
Context : *global*
The size of the cache to store information about request limiting.
### Connections limiting
`USE_LIMIT_CONN`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to yes, the number of connections made by an IP will be limited during a period of time (i.e. a very basic DoS protection).
More info on connection limiting [here](http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html).
`LIMIT_CONN_MAX`
Values : *<any valid integer\>*
Default value : *50*
Context : *global*, *multisite*
The maximum number of connections per IP before further connections are rejected.
`LIMIT_CONN_CACHE`
Values : *Xm* | *Xk*
Default value : *10m*
Context : *global*
The size of the cache to store information about connection limiting.
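A sketch combining both kinds of limiting (values are illustrative, tune them to your traffic) :
```shell
# rates and limits below are illustrative values, not recommendations
docker run -p 80:8080 \
       -v /path/to/web/files:/www:ro \
       -e USE_LIMIT_REQ=yes \
       -e LIMIT_REQ_RATE=4r/s \
       -e LIMIT_REQ_BURST=8 \
       -e USE_LIMIT_CONN=yes \
       -e LIMIT_CONN_MAX=100 \
       bunkerity/bunkerized-nginx
```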
### Countries
`BLACKLIST_COUNTRY`
Values : *\<country code 1\> \<country code 2\> ...*
Default value :
Context : *global*, *multisite*
Block some countries from accessing your website. Use two-letter country codes separated by spaces.
`WHITELIST_COUNTRY`
Values : *\<country code 1\> \<country code 2\> ...*
Default value :
Context : *global*, *multisite*
Only allow specific countries to access your website. Use two-letter country codes separated by spaces.
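For example (a sketch, the country codes are illustrative), you could only allow visitors from France and Switzerland :
```shell
# FR and CH are example country codes
docker run -p 80:8080 \
       -v /path/to/web/files:/www:ro \
       -e "WHITELIST_COUNTRY=FR CH" \
       bunkerity/bunkerized-nginx
```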
## PHP
`REMOTE_PHP`
Values : *\<any valid IP/hostname\>*
Default value :
Context : *global*, *multisite*
Set the IP address/hostname of a remote PHP-FPM instance that will execute the .php files.
`REMOTE_PHP_PATH`
Values : *\<any valid absolute path\>*
Default value : */app*
Context : *global*, *multisite*
The path where the PHP files are located inside the server specified in `REMOTE_PHP`.
## Bad behavior
`USE_BAD_BEHAVIOR`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to yes, bunkerized-nginx will block users generating too many "suspicious" HTTP status codes in a period of time.
`BAD_BEHAVIOR_STATUS_CODES`
Values : *\<HTTP status codes separated with space\>*
Default value : *400 401 403 404 405 429 444*
Context : *global*
List of HTTP status codes considered as "suspicious".
`BAD_BEHAVIOR_THRESHOLD`
Values : *<any positive integer>*
Default value : *10*
Context : *global*
The number of "suspicious" HTTP status codes before the corresponding IP is banned.
`BAD_BEHAVIOR_BAN_TIME`
Values : *<any positive integer>*
Default value : *86400*
Context : *global*
The duration time (in seconds) of a ban when the corresponding IP has reached the `BAD_BEHAVIOR_THRESHOLD`.
`BAD_BEHAVIOR_COUNT_TIME`
Values : *<any positive integer>*
Default value : *60*
Context : *global*
The duration time (in seconds) before the counter of "suspicious" HTTP status codes is reset.
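A sketch for a stricter policy (values are illustrative) : ban for one hour after 5 suspicious status codes within 2 minutes.
```shell
# thresholds and durations below are illustrative values
docker run -p 80:8080 \
       -v /path/to/web/files:/www:ro \
       -e USE_BAD_BEHAVIOR=yes \
       -e BAD_BEHAVIOR_THRESHOLD=5 \
       -e BAD_BEHAVIOR_COUNT_TIME=120 \
       -e BAD_BEHAVIOR_BAN_TIME=3600 \
       bunkerity/bunkerized-nginx
```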
## ClamAV
`USE_CLAMAV_UPLOAD`
Values : *yes* | *no*
Default value : *yes*
Context : *global*, *multisite*
If set to yes, ClamAV will scan every uploaded file and block the upload if the file is detected as malicious.
`USE_CLAMAV_SCAN`
Values : *yes* | *no*
Default value : *yes*
Context : *global*
If set to yes, ClamAV will scan all the files inside the container every day.
`CLAMAV_SCAN_REMOVE`
Values : *yes* | *no*
Default value : *yes*
Context : *global*
If set to yes, ClamAV will automatically remove the detected files.
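For example (a sketch), you could keep upload scanning while disabling the daily full scan :
```shell
docker run -p 80:8080 \
       -v /path/to/web/files:/www:ro \
       -e USE_CLAMAV_UPLOAD=yes \
       -e USE_CLAMAV_SCAN=no \
       bunkerity/bunkerized-nginx
```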
## Cron jobs
`AUTO_LETS_ENCRYPT_CRON`
Values : *\<cron expression\>*
Default value : *15 0 \* \* \**
Context : *global*
Cron expression of how often certbot will try to renew the certificates.
`BLOCK_USER_AGENT_CRON`
Values : *\<cron expression\>*
Default value : *30 0 \* \* \* \**
Context : *global*
Cron expression of how often the blacklist of user agents is updated.
`BLOCK_TOR_EXIT_NODE_CRON`
Values : *\<cron expression\>*
Default value : *0 \*/1 \* \* \* \**
Context : *global*
Cron expression of how often the blacklist of Tor exit nodes is updated.
`BLOCK_PROXIES_CRON`
Values : *\<cron expression\>*
Default value : *0 3 \* \* \* \**
Context : *global*
Cron expression of how often the blacklist of proxies is updated.
`BLOCK_ABUSERS_CRON`
Values : *\<cron expression\>*
Default value : *0 2 \* \* \* \**
Context : *global*
Cron expression of how often the blacklist of abusers is updated.
`BLOCK_REFERRER_CRON`
Values : *\<cron expression\>*
Default value : *45 0 \* \* \* \**
Context : *global*
Cron expression of how often the blacklist of referrers is updated.
`GEOIP_CRON`
Values : *\<cron expression\>*
Default value : *0 4 2 \* \**
Context : *global*
Cron expression of how often the GeoIP database is updated.
`USE_CLAMAV_SCAN_CRON`
Values : *\<cron expression\>*
Default value : *30 1 \* \* \**
Context : *global*
Cron expression of how often ClamAV will scan all the files inside the container.
`CLAMAV_UPDATE_CRON`
Values : *\<cron expression\>*
Default value : *0 1 \* \* \**
Context : *global*
Cron expression of how often ClamAV will update its database.
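As an illustration (a sketch : the expression is an example and must follow the same format as the defaults above, and the domain is a placeholder), certificate renewal could be attempted twice a day :
```shell
# "15 0,12 * * *" is an example cron expression (00:15 and 12:15 every day)
docker run -p 80:8080 \
       -p 443:8443 \
       -v /path/to/web/files:/www:ro \
       -v /where/to/save/certificates:/etc/letsencrypt \
       -e SERVER_NAME=www.example.com \
       -e AUTO_LETS_ENCRYPT=yes \
       -e "AUTO_LETS_ENCRYPT_CRON=15 0,12 * * *" \
       bunkerity/bunkerized-nginx
```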
## Misc
`SWARM_MODE`
Values : *yes* | *no*
Default value : *no*
Context : *global*
Only set to *yes* when you use *bunkerized-nginx* with *autoconf* feature in swarm mode. More info [here](#swarm-mode).
`USE_API`
Values : *yes* | *no*
Default value : *no*
Context : *global*
Only set to *yes* when you use *bunkerized-nginx* with *autoconf* feature in swarm mode. More info [here](#swarm-mode).
`API_URI`
Values : *random* | *\<any valid URI path\>*
Default value : *random*
Context : *global*
Set it to a random path when you use *bunkerized-nginx* with *autoconf* feature in swarm mode. More info [here](#swarm-mode).
`API_WHITELIST_IP`
Values : *\<list of IP/CIDR separated with space\>*
Default value : *192.168.0.0/16 172.16.0.0/12 10.0.0.0/8*
Context : *global*
List of IP addresses/CIDR blocks allowed to send API orders using the `API_URI` URI.

docs/index.md Normal file

@@ -0,0 +1,12 @@
# bunkerized-nginx official documentation
```{toctree}
:maxdepth: 2
:caption: Contents
introduction
quickstart_guide
security_tuning
troubleshooting
volumes
environment_variables
```

docs/introduction.md Normal file

@@ -0,0 +1,29 @@
# Introduction
<p align="center">
<img src="https://github.com/bunkerity/bunkerized-nginx/blob/master/logo.png?raw=true" width="425" />
</p>
nginx Docker image secure by default.
Avoid the hassle of following security best practices "by hand" each time you need a web server or reverse proxy. Bunkerized-nginx provides generic security configs, settings and tools so you don't need to do it yourself.
Non-exhaustive list of features :
- HTTPS support with transparent Let's Encrypt automation
- State-of-the-art web security : HTTP security headers, leak prevention, TLS hardening, ...
- Integrated ModSecurity WAF with the OWASP Core Rule Set
- Automatic ban of strange behaviors
- Antibot challenge through cookie, javascript, captcha or recaptcha v3
- Block TOR, proxies, bad user-agents, countries, ...
- Block known bad IPs with DNSBL and CrowdSec
- Prevent bruteforce attacks with rate limiting
- Detect bad files with ClamAV
- Easy to configure with environment variables or web UI
- Automatic configuration with container labels
- Docker Swarm support
Fooling automated tools/scanners :
<img src="https://github.com/bunkerity/bunkerized-nginx/blob/master/demo.gif?raw=true" />
You can find a live demo at <a href="https://demo-nginx.bunkerity.com" target="_blank">https://demo-nginx.bunkerity.com</a>, feel free to do some security tests.

docs/make.bat Normal file

@@ -0,0 +1,35 @@
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=_build
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
:end
popd

docs/quickstart_guide.md Normal file

@@ -0,0 +1,346 @@
# Quickstart guide
## Run HTTP server with default settings
```shell
docker run -p 80:8080 -v /path/to/web/files:/www:ro bunkerity/bunkerized-nginx
```
Web files are stored in the /www directory and the container will serve files from there. Please note that *bunkerized-nginx* doesn't run as root but as an unprivileged user with UID/GID 101, therefore you should set the permissions of */path/to/web/files* accordingly.
## In combination with PHP
```shell
docker network create mynet
```
```shell
docker run --network mynet \
-p 80:8080 \
-v /path/to/web/files:/www:ro \
-e REMOTE_PHP=myphp \
-e REMOTE_PHP_PATH=/app \
bunkerity/bunkerized-nginx
```
```shell
docker run --network mynet \
--name myphp \
-v /path/to/web/files:/app \
php:fpm
```
The `REMOTE_PHP` environment variable lets you define the address of a remote PHP-FPM instance that will execute the .php files. `REMOTE_PHP_PATH` must be set to the directory where the PHP container will find the files.
## Run HTTPS server with automated Let's Encrypt
```shell
docker run -p 80:8080 \
-p 443:8443 \
-v /path/to/web/files:/www:ro \
-v /where/to/save/certificates:/etc/letsencrypt \
-e SERVER_NAME=www.yourdomain.com \
-e AUTO_LETS_ENCRYPT=yes \
-e REDIRECT_HTTP_TO_HTTPS=yes \
bunkerity/bunkerized-nginx
```
Certificates are stored in the /etc/letsencrypt directory; you should save it on your local drive. Please note that *bunkerized-nginx* doesn't run as root but as an unprivileged user with UID/GID 101, therefore you should set the permissions of */where/to/save/certificates* accordingly.
If you don't want your web server to listen on HTTP, add the `LISTEN_HTTP` environment variable with a *no* value (i.e. HTTPS only). But Let's Encrypt needs port 80 to be open, so redirecting the port is mandatory.
Here you have three environment variables :
- `SERVER_NAME` : define the FQDN of your webserver, this is mandatory for Let's Encrypt (www.yourdomain.com should point to your IP address)
- `AUTO_LETS_ENCRYPT` : enable automatic Let's Encrypt creation and renewal of certificates
- `REDIRECT_HTTP_TO_HTTPS` : enable HTTP to HTTPS redirection
## As a reverse proxy
```shell
docker run -p 80:8080 \
-e USE_REVERSE_PROXY=yes \
-e REVERSE_PROXY_URL=/ \
-e REVERSE_PROXY_HOST=http://myserver:8080 \
bunkerity/bunkerized-nginx
```
This is a simple reverse proxy to a unique application. If you have more than one application, you can add more REVERSE_PROXY_URL/REVERSE_PROXY_HOST pairs by appending a suffix number like this :
```shell
docker run -p 80:8080 \
-e USE_REVERSE_PROXY=yes \
-e REVERSE_PROXY_URL_1=/app1/ \
-e REVERSE_PROXY_HOST_1=http://myapp1:3000/ \
-e REVERSE_PROXY_URL_2=/app2/ \
-e REVERSE_PROXY_HOST_2=http://myapp2:3000/ \
bunkerity/bunkerized-nginx
```
## Behind a reverse proxy
```shell
docker run -p 80:8080 \
-v /path/to/web/files:/www \
-e PROXY_REAL_IP=yes \
bunkerity/bunkerized-nginx
```
The `PROXY_REAL_IP` environment variable, when set to *yes*, activates the [ngx_http_realip_module](https://nginx.org/en/docs/http/ngx_http_realip_module.html) to get the real client IP from the reverse proxy.
See [this section](https://bunkerized-nginx.readthedocs.io/en/latest/environment_variables.html#reverse-proxy) if you need to tweak some values (trusted ip/network, header, ...).
## Multisite
By default, bunkerized-nginx will only create one server block. When setting the `MULTISITE` environment variable to *yes*, one server block will be created for each host defined in the `SERVER_NAME` environment variable.
You can set/override values for a specific server by prefixing the environment variable with one of the server names previously defined.
```shell
docker run -p 80:8080 \
-p 443:8443 \
-v /where/to/save/certificates:/etc/letsencrypt \
-e "SERVER_NAME=app1.domain.com app2.domain.com" \
-e MULTISITE=yes \
-e AUTO_LETS_ENCRYPT=yes \
-e REDIRECT_HTTP_TO_HTTPS=yes \
-e USE_REVERSE_PROXY=yes \
-e app1.domain.com_REVERSE_PROXY_URL=/ \
-e app1.domain.com_REVERSE_PROXY_HOST=http://myapp1:8000 \
-e app2.domain.com_REVERSE_PROXY_URL=/ \
-e app2.domain.com_REVERSE_PROXY_HOST=http://myapp2:8000 \
bunkerity/bunkerized-nginx
```
`USE_REVERSE_PROXY` is a *global* variable that will be applied to each server block, whereas the `app1.domain.com_*` and `app2.domain.com_*` variables will only be applied to the app1.domain.com and app2.domain.com server blocks respectively.
When serving files, the web root directory should contain subdirectories named after the servers defined in the `SERVER_NAME` environment variable. Here is an example :
```shell
docker run -p 80:8080 \
-p 443:8443 \
-v /where/to/save/certificates:/etc/letsencrypt \
-v /where/are/web/files:/www:ro \
-e "SERVER_NAME=app1.domain.com app2.domain.com" \
-e MULTISITE=yes \
-e AUTO_LETS_ENCRYPT=yes \
-e REDIRECT_HTTP_TO_HTTPS=yes \
-e app1.domain.com_REMOTE_PHP=php1 \
-e app1.domain.com_REMOTE_PHP_PATH=/app \
-e app2.domain.com_REMOTE_PHP=php2 \
-e app2.domain.com_REMOTE_PHP_PATH=/app \
bunkerity/bunkerized-nginx
```
The */where/are/web/files* directory should have a structure like this :
```shell
/where/are/web/files
├── app1.domain.com
│ └── index.php
│ └── ...
└── app2.domain.com
└── index.php
└── ...
```
## Automatic configuration
The downside of using environment variables is that you need to recreate the container each time you want to add or remove a web service. An alternative is to use the *bunkerized-nginx-autoconf* image which listens for Docker events and "automagically" generates the configuration.
First we need a volume that will store the configurations :
```shell
docker volume create nginx_conf
```
Then we run bunkerized-nginx with the `bunkerized-nginx.AUTOCONF` label, mount the created volume at /etc/nginx and set some default configurations for our services (e.g. : automatic Let's Encrypt and HTTP to HTTPS redirect) :
```shell
docker network create mynet
docker run -p 80:8080 \
-p 443:8443 \
--network mynet \
-v /where/to/save/certificates:/etc/letsencrypt \
-v /where/are/web/files:/www:ro \
-v nginx_conf:/etc/nginx \
-e SERVER_NAME= \
-e MULTISITE=yes \
-e AUTO_LETS_ENCRYPT=yes \
-e REDIRECT_HTTP_TO_HTTPS=yes \
-l bunkerized-nginx.AUTOCONF \
bunkerity/bunkerized-nginx
```
When setting `SERVER_NAME` to nothing, bunkerized-nginx won't create any server block (in case we only want automatic configuration).
Once bunkerized-nginx is created, let's set up the autoconf container :
```shell
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro \
-v nginx_conf:/etc/nginx \
bunkerity/bunkerized-nginx-autoconf
```
We can now create a new container and use labels to dynamically configure bunkerized-nginx. Labels for automatic configuration are the same as environment variables but with the "bunkerized-nginx." prefix.
Here is a PHP example :
```shell
docker run --network mynet \
--name myapp \
-v /where/are/web/files/app.domain.com:/app \
-l bunkerized-nginx.SERVER_NAME=app.domain.com \
-l bunkerized-nginx.REMOTE_PHP=myapp \
-l bunkerized-nginx.REMOTE_PHP_PATH=/app \
php:fpm
```
And a reverse proxy example :
```shell
docker run --network mynet \
--name anotherapp \
-l bunkerized-nginx.SERVER_NAME=app2.domain.com \
-l bunkerized-nginx.USE_REVERSE_PROXY=yes \
-l bunkerized-nginx.REVERSE_PROXY_URL=/ \
-l bunkerized-nginx.REVERSE_PROXY_HOST=http://anotherapp \
tutum/hello-world
```
## Swarm mode
Automatic configuration through labels is also supported in swarm mode. The *bunkerized-nginx-autoconf* image is used to listen for Swarm events (e.g. service create/rm) and "automagically" edit configuration files and reload nginx.
As a use case we will assume the following :
- Some managers are also workers (they will only run the *autoconf* container for obvious security reasons)
- The bunkerized-nginx service will be deployed on all workers (global mode) so clients can connect to each of them (e.g. load balancing, CDN, edge proxy, ...)
- There is a shared folder mounted on managers and workers (e.g. NFS, GlusterFS, CephFS, ...)
Let's start by creating the network to allow communications between our services :
```shell
docker network create -d overlay mynet
```
We can now create the *autoconf* service that will listen to swarm events :
```shell
docker service create --name autoconf \
--network mynet \
--mount type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock,ro \
--mount type=bind,source=/shared/confs,destination=/etc/nginx \
--mount type=bind,source=/shared/letsencrypt,destination=/etc/letsencrypt \
--mount type=bind,source=/shared/acme-challenge,destination=/acme-challenge \
-e SWARM_MODE=yes \
-e API_URI=/ChangeMeToSomethingHardToGuess \
--replicas 1 \
--constraint node.role==manager \
bunkerity/bunkerized-nginx-autoconf
```
**You need to change `API_URI` to something hard to guess since there is no other security mechanism to protect the API at the moment.**
When *autoconf* is created, it's time for the *bunkerized-nginx* service to be up :
```shell
docker service create --name nginx \
--network mynet \
-p published=80,target=8080,mode=host \
-p published=443,target=8443,mode=host \
--mount type=bind,source=/shared/confs,destination=/etc/nginx \
--mount type=bind,source=/shared/letsencrypt,destination=/etc/letsencrypt,ro \
--mount type=bind,source=/shared/acme-challenge,destination=/acme-challenge,ro \
--mount type=bind,source=/shared/www,destination=/www,ro \
-e SWARM_MODE=yes \
-e USE_API=yes \
-e API_URI=/ChangeMeToSomethingHardToGuess \
-e MULTISITE=yes \
-e SERVER_NAME= \
-e AUTO_LETS_ENCRYPT=yes \
-e REDIRECT_HTTP_TO_HTTPS=yes \
-l bunkerized-nginx.AUTOCONF \
--mode global \
--constraint node.role==worker \
bunkerity/bunkerized-nginx
```
The `API_URI` value must be the same as the one specified for the *autoconf* service.
We can now create a new service and use labels to dynamically configure bunkerized-nginx. Labels for automatic configuration are the same as environment variables but with the "bunkerized-nginx." prefix.
Here is a PHP example :
```shell
docker service create --name myapp \
--network mynet \
--mount type=bind,source=/shared/www/app.domain.com,destination=/app \
-l bunkerized-nginx.SERVER_NAME=app.domain.com \
-l bunkerized-nginx.REMOTE_PHP=myapp \
-l bunkerized-nginx.REMOTE_PHP_PATH=/app \
--constraint node.role==worker \
php:fpm
```
And a reverse proxy example :
```shell
docker service create --name anotherapp \
--network mynet \
-l bunkerized-nginx.SERVER_NAME=app2.domain.com \
-l bunkerized-nginx.USE_REVERSE_PROXY=yes \
-l bunkerized-nginx.REVERSE_PROXY_URL=/ \
-l bunkerized-nginx.REVERSE_PROXY_HOST=http://anotherapp \
--constraint node.role==worker \
tutum/hello-world
```
## Web UI
**This feature exposes, for now, a security risk because you need to mount the docker socket inside a container exposing a web application. You can test it but you should not use it in servers facing the internet.**
A dedicated image, *bunkerized-nginx-ui*, lets you manage bunkerized-nginx instances and services configurations through a web user interface. This feature is still in beta, feel free to open a new issue if you find a bug and/or you have an idea to improve it.
First we need a volume that will store the configurations :
```shell
docker volume create nginx_conf
```
Then, we can create the bunkerized-nginx instance with the `bunkerized-nginx.UI` label and a reverse proxy configuration for our web UI :
```shell
docker network create mynet
docker run -p 80:8080 \
-p 443:8443 \
--network mynet \
-v nginx_conf:/etc/nginx \
-v /where/are/web/files:/www:ro \
-v /where/to/save/certificates:/etc/letsencrypt \
-e SERVER_NAME=admin.domain.com \
-e MULTISITE=yes \
-e AUTO_LETS_ENCRYPT=yes \
-e REDIRECT_HTTP_TO_HTTPS=yes \
-e DISABLE_DEFAULT_SERVER=yes \
-e admin.domain.com_SERVE_FILES=no \
-e admin.domain.com_USE_AUTH_BASIC=yes \
-e admin.domain.com_AUTH_BASIC_USER=admin \
-e admin.domain.com_AUTH_BASIC_PASSWORD=password \
-e admin.domain.com_USE_REVERSE_PROXY=yes \
-e admin.domain.com_REVERSE_PROXY_URL=/webui/ \
-e admin.domain.com_REVERSE_PROXY_HOST=http://myui:5000/ \
-l bunkerized-nginx.UI \
bunkerity/bunkerized-nginx
```
The `AUTH_BASIC` environment variables let you define a login/password that must be provided before accessing the web UI. At the moment, there is no authentication mechanism integrated into bunkerized-nginx-ui.
We can now create the bunkerized-nginx-ui container that will host the web UI behind bunkerized-nginx :
```shell
docker run --network mynet \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-v nginx_conf:/etc/nginx \
-e ABSOLUTE_URI=https://admin.domain.com/webui/ \
bunkerity/bunkerized-nginx-ui
```
After that, the web UI should be accessible from https://admin.domain.com/webui/.

docs/requirements.txt Normal file

@@ -0,0 +1,3 @@
sphinx
sphinx-rtd-theme
myst-parser

Some files were not shown because too many files have changed in this diff.