github.com/tiredofit/docker-alpine

December 7, 2024

About

Dockerfile to build an Alpine Linux container image.

  • Currently tracking 3.5, 3.6, 3.7, 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, 3.14, 3.15, 3.16, 3.17, 3.18, 3.19, 3.20, 3.21, and edge.
  • s6 overlay enabled for PID 1 init capabilities.
  • zabbix-agent (Classic and Modern) for individual container monitoring.
  • Scheduling via cron with other helpful tools (bash, curl, less, logrotate, nano, vi) for easier management.
  • Messaging via MSMTP to send mail from the container to an external SMTP server.
  • Firewall included, with the ability to monitor logs and block remote hosts via Fail2ban.
  • Log shipping to remote log analysis servers via Fluent-Bit.
  • Ability to update User ID and Group ID permissions dynamically.

Prerequisites and Assumptions

No prerequisites required

Installation

Build from Source

Clone this repository and build the image with docker build <arguments> -t (imagename) .

Prebuilt Images

Builds of the image are available on Docker Hub

docker pull docker.io/tiredofit/alpine:(imagetag)

Builds of the image are also available on the GitHub Container Registry

docker pull ghcr.io/tiredofit/docker-alpine:(imagetag)

The following image tags are available, along with their tagged releases as recorded in the Changelog:

| Alpine version | Tag |
| -------------- | ----- |
| edge | :edge |
| 3.20 | :3.20 |
| 3.19 | :3.19 |
| 3.18 | :3.18 |
| 3.17 | :3.17 |
| 3.16 | :3.16 |
| 3.15 | :3.15 |
| 3.14 | :3.14 |
| 3.13 | :3.13 |
| 3.12 | :3.12 |
| 3.11 | :3.11 |
| 3.10 | :3.10 |
| 3.9 | :3.9 |
| 3.8 | :3.8 |
| 3.7 | :3.7 |
| 3.6 | :3.6 |
| 3.5 | :3.5 |

Multi Architecture

Images are built primarily for the amd64 architecture, and may also include builds for arm/v7, arm64, and others. These variants are all unsupported. Consider sponsoring my work so that I can work with various hardware. To see if this image supports multiple architectures, run docker manifest inspect (image):(tag)

Configuration

Quick Start

Utilize this image as a base for further builds. Please visit the s6 overlay repository for instructions on how to enable the S6 init system when using this base or look at some of my other images which use this as a base.
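As a minimal sketch of a downstream build (the tag, package, and directory layout here are illustrative, not prescribed by this image):

```dockerfile
# Example downstream image built on this base (tag is illustrative)
FROM docker.io/tiredofit/alpine:3.21

# Install whatever your application needs
RUN apk add --no-cache \
    curl

# Drop your defaults, functions, cont-init.d, and services.available
# scripts into the image as described in 'Developing / Overriding' below
COPY install/ /
```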

Persistent Storage

The following directories are used for configuration and can be mapped for persistent storage.

DirectoryDescription
/etc/fluent-bit/conf.d/Fluent-Bit custom configuration directory
/etc/fluent-bit/parsers.d/Fluent-Bit custom parsers directory
/etc/zabbix/zabbix_agentd.conf.d/Zabbix Agent configuration directory
/etc/fail2ban/filter.dCustom Fail2ban Filter configuration
/etc/fail2ban/jail.dCustom Fail2ban Jail configuration
/var/logContainer, Cron, Zabbix, other log files
/assets/cronDrop custom crontabs here
/assets/iptablesDrop custom IPTables rules here

Environment Variables

Below is the complete list of available options that can be used to customize your installation. Variables showing an 'x' in the _FILE column can have their value read from a file (append _FILE to the variable name), which is useful for secrets.

Container Options

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| CONTAINER_ENABLE_LOG_TIMESTAMP | Prefix this image's container logs with a timestamp | TRUE |
| CONTAINER_COLORIZE_OUTPUT | Enable/disable colorized console output | TRUE |
| CONTAINER_CUSTOM_BASH_PROMPT | Set a custom bash prompt; otherwise '(imagename):(version) HH:MM:SS # ' is used | |
| CONTAINER_CUSTOM_PATH | Used for adding custom files into the image upon startup | /assets/custom |
| CONTAINER_CUSTOM_SCRIPTS_PATH | Used for adding custom scripts to execute upon startup | /assets/custom-scripts |
| CONTAINER_ENABLE_PROCESS_COUNTER | Show how many times a process has executed in the console log | TRUE |
| CONTAINER_LOG_LEVEL | Console output level (INFO, WARN, NOTICE, DEBUG) | NOTICE |
| CONTAINER_LOG_PREFIX_TIME_FMT | Timestamp time format | %H:%M:%S |
| CONTAINER_LOG_PREFIX_DATE_FMT | Timestamp date format | %Y-%m-%d |
| CONTAINER_LOG_PREFIX_SEPERATOR | Timestamp separator | - |
| CONTAINER_LOG_FILE_LEVEL | Log file output level (INFO, WARN, NOTICE, DEBUG) | DEBUG |
| CONTAINER_LOG_FILE_NAME | Internal container log filename | /var/log/container/container.log |
| CONTAINER_LOG_FILE_PATH | Path where to find the internal container logs | /var/log/container/ |
| CONTAINER_LOG_FILE_PREFIX_TIME_FMT | Timestamp time format | %H:%M:%S |
| CONTAINER_LOG_FILE_PREFIX_DATE_FMT | Timestamp date format | %Y-%m-%d |
| CONTAINER_LOG_FILE_PREFIX_SEPERATOR | Timestamp separator | - |
| CONTAINER_NAME | Used for setting entries in monitoring and log shipping | (hostname) |
| CONTAINER_POST_INIT_COMMAND | Command(s) to execute in the container after all services have initialized; separate multiple with commas | |
| CONTAINER_POST_INIT_SCRIPT | Path(s) of script(s) to execute in the container after all services have initialized; separate multiple with commas | |
| TIMEZONE | Set timezone | Etc/GMT |

Scheduling Options

The image can execute tasks at different times of the day, following cron syntax. Presently this image only supports busybox cron, however it can be extended to other scheduling backends with little effort.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| CONTAINER_ENABLE_SCHEDULING | Enable scheduled tasks | TRUE |
| CONTAINER_SCHEDULING_BACKEND | Scheduling tool to use (cron) | cron |
| CONTAINER_SCHEDULING_LOCATION | Where to read task files | /assets/cron/ |
| SCHEDULING_LOG_TYPE | Log type (FILE) | FILE |
| SCHEDULING_LOG_LOCATION | Log file location | /var/log/cron/ |
| SCHEDULING_LOG_LEVEL | Log level, 1 (loud) to 8 (quiet) | 6 |

Cron Options

There are two ways to add jobs to be triggered via cron: drop files into /assets/cron/, which will be parsed upon container startup, or set environment variables.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| CRON_* | Name of the job; value is the schedule and command to run | |

Example: CRON_HELLO="* * * * * echo 'hello' > /tmp/hello.log"

Since you can't really unset environment variables in Docker if they are baked into parent images, you can override a baked-in cron command with your own values, or disable it entirely by setting the value to FALSE, e.g. CRON_HELLO=FALSE.
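For the file-drop method, a crontab fragment placed in /assets/cron/ might look like this (the job names, scripts, and schedules are illustrative):

```cron
# /assets/cron/app-maintenance
# min  hour  day  month  weekday  command
*/5    *     *    *      *        /usr/local/bin/app-healthcheck.sh >> /var/log/cron/healthcheck.log 2>&1
30     2     *    *      *        /usr/local/bin/app-cleanup.sh
```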

Messaging Options

If you wish to send mail, set CONTAINER_ENABLE_MESSAGING=TRUE and configure the following environment variables. Presently we only support one backend, but more can be added with little effort.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| CONTAINER_ENABLE_MESSAGING | Enable messaging services such as SMTP | TRUE |
| CONTAINER_MESSAGING_BACKEND | Messaging backend; presently only msmtp | msmtp |

MSMTP Options

See the MSMTP Configuration Options for further information on options to configure MSMTP.

| Parameter | Description | Default | _FILE |
| --------- | ----------- | ------- | ----- |
| SMTP_AUTO_FROM | Add setting to support sending through Gmail SMTP | FALSE | |
| SMTP_HOST | Hostname of SMTP server | postfix-relay | x |
| SMTP_PORT | Port of SMTP server | 25 | x |
| SMTP_DOMAIN | HELO domain | docker | |
| SMTP_MAILDOMAIN | Mail domain from | local | |
| SMTP_AUTHENTICATION | SMTP authentication | none | |
| SMTP_USER | SMTP username | | x |
| SMTP_PASS | SMTP password | | x |
| SMTP_TLS | Use TLS | FALSE | |
| SMTP_STARTTLS | Start TLS from within session | FALSE | |
| SMTP_TLSCERTCHECK | Check remote certificate | FALSE | |
| SMTP_ALLOW_FROM_OVERRIDE | SMTP allow From override | | |
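A hedged docker-compose sketch showing the messaging variables, including the _FILE variant for the password (the image tag, hostnames, and secret names are illustrative):

```yaml
services:
  app:
    image: docker.io/tiredofit/alpine:3.21
    environment:
      - CONTAINER_ENABLE_MESSAGING=TRUE
      - SMTP_HOST=smtp.example.com
      - SMTP_PORT=587
      - SMTP_STARTTLS=TRUE
      - SMTP_AUTHENTICATION=login
      - SMTP_USER=mailer@example.com
      - SMTP_PASS_FILE=/run/secrets/smtp_pass   # value read from a file (secret)
    secrets:
      - smtp_pass

secrets:
  smtp_pass:
    file: ./smtp_pass.txt
```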

See The Official Zabbix Agent Documentation for information about the following Zabbix values.

Monitoring Options

This image can use agents inside the container to monitor metrics from applications. Presently it only supports Zabbix as a monitoring platform, however it is extendable to other platforms with little effort.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| CONTAINER_ENABLE_MONITORING | Enable monitoring of applications or metrics | TRUE |
| CONTAINER_MONITORING_BACKEND | Monitoring agent to use (zabbix) | zabbix |

Zabbix Options

This image comes with Zabbix Agent 1 (classic, C-compiled) and Zabbix Agent 2 (modern, Go-compiled). See which variables work for each version and choose your agent. Drop files in /etc/zabbix/zabbix_agentd.conf.d to set up your metrics. The environment variables below only affect the system end of the configuration; if you wish to use your own system configuration without these variables, change ZABBIX_SETUP_TYPE to MANUAL.

| Parameter | Description | Default | 1 | 2 | _FILE |
| --------- | ----------- | ------- | - | - | ----- |
| ZABBIX_SETUP_TYPE | Automatically generate configuration based on these variables (AUTO or MANUAL) | AUTO | x | x | |
| ZABBIX_AGENT_TYPE | Which version of Zabbix Agent to load (1 or 2) | 1 | N/A | N/A | |
| ZABBIX_AGENT_LOG_PATH | Log file path | /var/log/zabbix/agent/ | x | x | |
| ZABBIX_AGENT_LOG_FILE | Logfile name | zabbix_agentd.log | x | x | |
| ZABBIX_CERT_PATH | Zabbix certificates path | /etc/zabbix/certs/ | x | x | |
| ZABBIX_ENABLE_AUTOREGISTRATION | Use internal routine for agent autoregistration based on config files with the # Autoregister tag | TRUE | x | x | |
| ZABBIX_ENABLE_AUTOREGISTRATION_DNS | Register with DNS name instead of IP address when autoregistering | TRUE | x | x | |
| ZABBIX_AUTOREGISTRATION_DNS_NAME | (optional) DNS name to provide when autoregistering | $CONTAINER_NAME | x | x | |
| ZABBIX_AUTOREGISTRATION_DNS_SUFFIX | Appended after the generated DNS name | | x | x | |
| ZABBIX_ENCRYPT_PSK_ID | Zabbix encryption PSK ID | | x | x | x |
| ZABBIX_ENCRYPT_PSK_KEY | Zabbix encryption PSK key | | x | x | x |
| ZABBIX_ENCRYPT_PSK_FILE | Zabbix encryption PSK file (if not using the above variable) | | x | x | |
| ZABBIX_LOG_FILE_SIZE | Logfile size | 0 | x | x | |
| ZABBIX_DEBUGLEVEL | Debug level | 1 | x | x | |
| ZABBIX_REMOTECOMMANDS_ALLOW | Allow remote commands | * | x | x | |
| ZABBIX_REMOTECOMMANDS_DENY | Deny remote commands | | x | x | |
| ZABBIX_REMOTECOMMANDS_LOG | Enable remote commands log (0/1) | 1 | x | | |
| ZABBIX_SERVER | Allow connections from Zabbix server IP | 0.0.0.0/0 | x | x | |
| ZABBIX_STATUS_PORT | Agent will listen on this port for status requests (http://localhost:port/status) | 10050 | | x | |
| ZABBIX_LISTEN_PORT | Zabbix Agent listening port | 10050 | x | x | |
| ZABBIX_LISTEN_IP | Zabbix Agent listening IP | 0.0.0.0 | x | x | |
| ZABBIX_START_AGENTS | How many Zabbix agents to start | 1 | x | | |
| ZABBIX_SERVER_ACTIVE | Server for active checks | zabbix-proxy | x | x | x |
| ZABBIX_HOSTNAME | Container hostname to report to server | $CONTAINER_NAME | x | x | |
| ZABBIX_REFRESH_ACTIVE_CHECKS | Seconds to refresh active checks | 120 | x | x | |
| ZABBIX_BUFFER_SEND | Buffer send | 5 | x | x | |
| ZABBIX_BUFFER_SIZE | Buffer size | 100 | x | x | |
| ZABBIX_MAXLINES_SECOND | Max lines per second | 20 | x | | |
| ZABBIX_SOCKET | Socket for communicating | /tmp/zabbix.sock | | x | |
| ZABBIX_ALLOW_ROOT | Allow running as root | 1 | x | | |
| ZABBIX_USER | User to start agent | zabbix | x | x | |
| ZABBIX_USER_SUDO | Allow Zabbix user to utilize sudo commands | TRUE | x | x | |
| ZABBIX_AGENT_TIMEOUT | Zabbix agent timeout (seconds) for UserParameter checks | 3 | x | x | |

This image supports autoregistering as an active agent with a Zabbix proxy or server. It looks in /etc/zabbix/zabbix_agentd.conf.d/*.conf for the string # Autoregister= and adds those values to the HostMetadata configuration entry, automatically wrapped in colons, e.g. :application:. Use it by creating an autoregistration action that searches for that string. You can find server templates in this repository in the zabbix_templates/ directory.
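As an illustrative sketch, a metrics file dropped into /etc/zabbix/zabbix_agentd.conf.d/ that would be picked up by the autoregistration routine might look like (the application name, script, and key are hypothetical):

```ini
# /etc/zabbix/zabbix_agentd.conf.d/myapp.conf
# Autoregister=myapp
# The tag above causes :myapp: to be added to HostMetadata;
# create a Zabbix autoregistration action that matches on it.
UserParameter=myapp.queue_depth,/usr/local/bin/myapp-queue-depth.sh
```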

Logging Options

This is a work in progress toward a larger logging solution. Presently there is functionality to rotate logs on a daily basis; as this section matures there will also be the capability to ship logs to an external data warehouse such as Loki or Elasticsearch. At present, log shipping is supported only via Fluent-Bit, and only on x86_64.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| CONTAINER_ENABLE_LOGROTATE | Enable logrotate (if scheduling enabled) | TRUE |
| CONTAINER_ENABLE_LOGSHIPPING | Enable log shipping | FALSE |
| CONTAINER_LOGSHIPPING_BACKEND | Log shipping backend (fluent-bit) | fluent-bit |
| LOGROTATE_COMPRESSION_TYPE | Logfile compression algorithm (NONE, BZIP2, GZIP, ZSTD) | ZSTD |
| LOGROTATE_COMPRESSION_VALUE | What level of compression to use | 8 |
| LOGROTATE_COMPRESSION_EXTRA_PARAMETERS | Pass extra parameters to the compression command (optional) | |
| LOGROTATE_RETAIN_DAYS | Rotate and retain logs for x days | 7 |

Log Shipping Parsing

You can set an environment variable to start shipping a log without any other configuration. Create a variable starting with LOGSHIP_<name> whose value is the location of the log files. You can also use this to null an existing configuration by setting the value to FALSE.

Example: LOGSHIP_NGINX=/var/log/nginx/*.log will create a configuration tagging all log files in that directory as coming from CONTAINER_NAME and from nginx. Note that it does not allow for custom parsing; the log entries are shipped as-is.

If LOGSHIPPING_AUTO_CONFIG_LOGROTATE is set to TRUE, you can define which parser the configuration file should use. Make sure you have the appropriate .conf parsers in /etc/fluent-bit/parsers.d/, then create a line in the logrotate.d/<file> that looks like # logship: <parser>. Multiple parsers can be added by separating them with commas. Alternatively, to exempt a certain logfile from being parsed by the log shipper, use the value SKIP.
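For example, a logrotate configuration carrying a parser hint might look like this (the nginx parser name is an assumption and must exist in /etc/fluent-bit/parsers.d/):

```
# /etc/logrotate.d/nginx
# logship: nginx
/var/log/nginx/*.log {
    daily
    rotate 7
    missingok
}
```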

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| LOGSHIPPING_AUTO_CONFIG_LOGROTATE | Automatically configure log shipping for files listed in /etc/logrotate.d | TRUE |

Fluent-Bit Options

Drop files in /etc/fluent-bit/conf.d to set up your inputs and outputs. The environment variables below only affect the system end of the configuration; if you wish to use your own system configuration without these variables, change FLUENTBIT_SETUP_TYPE to MANUAL. The container will attempt to automatically create configuration to send to a destination, or can be set to act as a receiver for other Fluent-Bit hosts and forward data to a remote log analysis service.

| Parameter | Description | Default | _FILE |
| --------- | ----------- | ------- | ----- |
| FLUENTBIT_CONFIG_PARSERS | Parsers config file name | parsers.conf | |
| FLUENTBIT_CONFIG_PLUGINS | Plugins config file name | plugins.conf | |
| FLUENTBIT_ENABLE_HTTP_SERVER | Embedded HTTP server for metrics (TRUE/FALSE) | TRUE | |
| FLUENTBIT_ENABLE_STORAGE_METRICS | Publish storage pipeline metrics in /api/v1/storage | TRUE | |
| FLUENTBIT_FLUSH_SECONDS | Wait time to flush records, in seconds | 1 | |
| FLUENTBIT_FORWARD_BUFFER_CHUNK_SIZE | Buffer chunk size | 32KB | |
| FLUENTBIT_FORWARD_BUFFER_MAX_SIZE | Buffer maximum size | 64KB | |
| FLUENTBIT_FORWARD_PORT | Port when using PROXY (listen) mode or FORWARD (client) output | 24224 | |
| FLUENTBIT_GRACE_SECONDS | Wait time before exit, in seconds | 1 | |
| FLUENTBIT_HTTP_LISTEN_IP | HTTP listen IP | 0.0.0.0 | |
| FLUENTBIT_HTTP_LISTEN_PORT | HTTP listening port | 2020 | |
| FLUENTBIT_LOG_FILE | Log file | fluentbit.log | |
| FLUENTBIT_LOG_LEVEL | Log level (info, warn, error, debug, trace) | info | |
| FLUENTBIT_LOG_PATH | Log path | /var/log/fluentbit/ | |
| FLUENTBIT_MODE | Type of operation: client (NORMAL) or proxy (PROXY) | NORMAL | |
| FLUENTBIT_OUTPUT_FORWARD_HOST | Where to forward Fluent-Bit data | fluent-proxy | x |
| FLUENTBIT_OUTPUT_FORWARD_TLS_VERIFY | Verify certificates when using TLS | FALSE | |
| FLUENTBIT_OUTPUT_FORWARD_TLS | Enable TLS when forwarding | FALSE | |
| FLUENTBIT_OUTPUT_LOKI_COMPRESS_GZIP | Enable gzip compression when sending to Loki host | TRUE | |
| FLUENTBIT_OUTPUT_LOKI_HOST | Host for Loki output | loki | x |
| FLUENTBIT_OUTPUT_LOKI_PORT | Port for Loki output | 3100 | x |
| FLUENTBIT_OUTPUT_LOKI_TLS | Enable TLS for Loki output | FALSE | |
| FLUENTBIT_OUTPUT_LOKI_TLS_VERIFY | Enable TLS certificate verification for Loki output | FALSE | |
| FLUENTBIT_OUTPUT_LOKI_USER | (optional) Username to authenticate to Loki server | | x |
| FLUENTBIT_OUTPUT_LOKI_PASS | (optional) Password to authenticate to Loki server | | x |
| FLUENTBIT_OUTPUT_TENANT_ID | (optional) Tenant ID to pass to Loki server | | x |
| FLUENTBIT_OUTPUT | Output plugin to use (LOKI, FORWARD, NULL) | FORWARD | |
| FLUENTBIT_TAIL_BUFFER_CHUNK_SIZE | Buffer chunk size for tail | 32k | |
| FLUENTBIT_TAIL_BUFFER_MAX_SIZE | Maximum size for tail | 32k | |
| FLUENTBIT_TAIL_READ_FROM_HEAD | Read from head instead of tail | FALSE | |
| FLUENTBIT_TAIL_SKIP_EMPTY_LINES | Skip empty lines when tailing | TRUE | |
| FLUENTBIT_TAIL_SKIP_LONG_LINES | Skip long lines when tailing | TRUE | |
| FLUENTBIT_TAIL_DB_ENABLE | Enable offset DB per tracked file (same name as the log file, hidden, with a .db suffix) | TRUE | |
| FLUENTBIT_TAIL_DB_SYNC | DB sync type (normal or full) | normal | |
| FLUENTBIT_TAIL_DB_LOCK | Lock access to DB file | TRUE | |
| FLUENTBIT_TAIL_DB_JOURNAL_MODE | Journal mode for DB (WAL, DELETE, TRUNCATE, PERSIST, MEMORY, OFF) | WAL | |
| FLUENTBIT_TAIL_KEY_PATH_ENABLE | Enable sending key for log filename/path | TRUE | |
| FLUENTBIT_TAIL_KEY_PATH | Path key name | filename | |
| FLUENTBIT_TAIL_KEY_OFFSET_ENABLE | Enable sending key for offset in log file | FALSE | |
| FLUENTBIT_TAIL_KEY_OFFSET | Offset path key name | offset | |
| FLUENTBIT_SETUP_TYPE | Automatically generate configuration based on these variables (AUTO or MANUAL) | AUTO | |
| FLUENTBIT_STORAGE_BACKLOG_LIMIT | Maximum amount of memory to use for backlogged/unsent records | 5M | |
| FLUENTBIT_STORAGE_CHECKSUM | Create CRC32 checksum for filesystem RW functions | FALSE | |
| FLUENTBIT_STORAGE_PATH | Absolute filesystem path to store data buffers | /tmp/fluentbit/storage | |
| FLUENTBIT_STORAGE_SYNC | Synchronization mode to store data in filesystem (normal or full) | normal | |

Firewall Options

When the proper capabilities are granted, the image can set up detailed block/allow rules via a firewall on container start. Presently only iptables is supported. You must run your containers with the following capabilities added: NET_ADMIN, NET_RAW.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| CONTAINER_ENABLE_FIREWALL | Enable firewall functionality | FALSE |
| CONTAINER_FIREWALL_BACKEND | Firewall backend to use (iptables) | iptables |
| FIREWALL_RULE_00 | Firewall rule to execute | |
| FIREWALL_RULE_01 | Next firewall rule to execute | |

One can use the FIREWALL_RULE_XX environment variables to pass rules to the firewall. In this example, access to a port is blocked except from a specific IP address:

FIREWALL_RULE_00=-I INPUT -p tcp -m tcp -s 101.69.69.101 --dport 389 -j ACCEPT
FIREWALL_RULE_01=-I INPUT -p tcp -m tcp -s 0.0.0.0/0 --dport 389 -j DROP
Host Override Options

Sometimes you may need to do some hosts-file trickery. This will add an entry to the container's hosts file.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| CONTAINER_HOST_OVERRIDE_01 | Create manual hosts entry | |

Make the value <destination> override1 override2, e.g. 1.2.3.4 example.org example.com. If you use a domain name instead of an IP address, it will be resolved to an IP, e.g. proxy example.com example.org

IPTables Options

Instead of relying on environment variables, one can place an iptables-restore-compatible ruleset in the location below and it will be imported on container start.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| IPTABLES_RULES_PATH | Path for iptables rules | /assets/iptables/ |
| IPTABLES_RULES_FILE | iptables rules file to restore if it exists on container start | iptables.rules |

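An illustrative /assets/iptables/iptables.rules file in iptables-restore format (the port and source range are hypothetical):

```
# /assets/iptables/iptables.rules
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -p tcp -m tcp -s 192.168.1.0/24 --dport 8080 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 8080 -j DROP
COMMIT
```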
Fail2Ban Options

If CONTAINER_ENABLE_FIREWALL=TRUE, the container can also launch Fail2ban, a process which watches logs for patterns and then blocks the remote host from connecting for a period of time. Drop your custom jail configs as *.conf files in /etc/fail2ban/jail.d/ and filters in /etc/fail2ban/filter.d/ for them to be parsed at startup. Note the startup-delay environment variable, which avoids the process failing when no log files yet exist on a fresh install.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| CONTAINER_ENABLE_FAIL2BAN | Enable Fail2ban functionality | FALSE |
| FAIL2BAN_BACKEND | Backend | AUTO |
| FAIL2BAN_CONFIG_PATH | Fail2ban configuration path | /etc/fail2ban/ |
| FAIL2BAN_DB_FILE | Persistent database file | fail2ban.sqlite3 |
| FAIL2BAN_DB_PATH | Persistent database path | /data/fail2ban/ |
| FAIL2BAN_DB_PURGE_AGE | Purge entries after how many seconds | 86400 |
| FAIL2BAN_DB_TYPE | DB type (NONE, MEMORY, FILE) | MEMORY |
| FAIL2BAN_IGNORE_IP | Ignore these IPs or ranges, space separated | 127.0.0.1/8 ::1 172.16.0.0/12 192.168.0.0/24 |
| FAIL2BAN_IGNORE_SELF | Ignore self (TRUE/FALSE) | TRUE |
| FAIL2BAN_LOG_PATH | Fail2ban log path | /var/log/fail2ban/ |
| FAIL2BAN_LOG_FILE | Fail2ban log file | fail2ban.log |
| FAIL2BAN_LOG_LEVEL | Log level (CRITICAL, ERROR, WARNING, NOTICE, INFO, DEBUG) | INFO |
| FAIL2BAN_LOG_TYPE | Log to FILE or CONSOLE | FILE |
| FAIL2BAN_MAX_RETRY | Max times a pattern may be found in a log within FAIL2BAN_TIME_FIND | 5 |
| FAIL2BAN_STARTUP_DELAY | Startup delay in seconds, giving monitored logs a chance to exist or have data | 15 |
| FAIL2BAN_TIME_BAN | Default length of time to ban | 10m |
| FAIL2BAN_TIME_FIND | Window to base pattern matches against | 10m |
| FAIL2BAN_USE_DNS | Use DNS for lookups (yes, warn, no, raw) | warn |
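A sketch of a custom jail dropped into /etc/fail2ban/jail.d/ (the jail name, filter, and log path are illustrative, and a matching filter must exist in /etc/fail2ban/filter.d/):

```ini
# /etc/fail2ban/jail.d/myapp.conf
[myapp]
enabled  = true
filter   = myapp
logpath  = /var/log/myapp/access.log
maxretry = 5
findtime = 10m
bantime  = 10m
```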

Permissions

If you wish to change the internal IDs for users and groups, you can set environment variables to do so. For example, if you add USER_NGINX=1000, the container's nginx user ID will be reset from 82 to 1000.

If you set DEBUG_PERMISSIONS=TRUE, all users and groups that have been modified per these environment variables will be displayed in the output.

Hint: set these to your local development user's UID and GID to avoid Docker permission issues while developing.

| Parameter | Description |
| --------- | ----------- |
| CONTAINER_USER_<USERNAME> | The user's UID in /etc/passwd will be modified to the new UID |
| CONTAINER_GROUP_<GROUPNAME> | The group's GID in /etc/group and /etc/passwd will be modified to the new GID |
| CONTAINER_GROUP_ADD_<GROUPNAME> | The username will be added in /etc/group after the defined group name |
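A hedged docker-compose sketch mapping the container's nginx user and group to a host user's IDs (the image tag and the presence of an nginx user in your derived image are assumptions):

```yaml
services:
  app:
    image: docker.io/tiredofit/alpine:3.21
    environment:
      - CONTAINER_USER_NGINX=1000    # nginx UID in /etc/passwd becomes 1000
      - CONTAINER_GROUP_NGINX=1000   # nginx GID in /etc/group becomes 1000
      - DEBUG_PERMISSIONS=TRUE       # print the modified users/groups at startup
```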

Process Watchdog

This is experimental functionality to call an external script before a process is executed.

Sample use cases:

  • Alert slack channel when process has executed more than once
  • Disable process from executing further if restarted 50 times
  • Write to an additional log file.
  • Change a file to display "Under Maintenance" on a webserver if this process isn't supposed to run more than once.

It will pass 5 arguments to a bash script named the same as the executing script or, if none is found, the default CONTAINER_PROCESS_HELPER_SCRIPT below. Drop your files into the CONTAINER_PROCESS_HELPER_PATH.

For example, if 04-scheduling were starting, it would look for $CONTAINER_PROCESS_HELPER_PATH/04-scheduling and, if found, execute it while passing the following arguments: DATE, TIME, SCRIPT_NAME, TIMES_EXECUTED, HOSTNAME

e.g: 2021-07-01 23:01:04 04-scheduling 2 container

Use the values in your own bash script via the $1 $2 $3 $4 $5 syntax. Change the time and date formats with these environment variables:

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| CONTAINER_PROCESS_HELPER_PATH | Path for external helper scripts | /assets/container/processhelper/ |
| CONTAINER_PROCESS_HELPER_SCRIPT | Default helper script name | processhelper.sh |
| CONTAINER_PROCESS_HELPER_DATE_FMT | Date format passed to external script | %Y-%m-%d |
| CONTAINER_PROCESS_HELPER_TIME_FMT | Time format passed to external script | %H:%M:%S |
| CONTAINER_PROCESS_RUNAWAY_PROTECTOR | Disable a service if executed more than (x) times | TRUE |
| CONTAINER_PROCESS_RUNAWAY_DELAY | Delay in seconds before restarting process | 1 |
| CONTAINER_PROCESS_RUNAWAY_LIMIT | The number of restarts before disabling | 50 |
| CONTAINER_PROCESS_RUNAWAY_SHOW_OUTPUT_FINAL | Show the program output on the final execution before disabling | TRUE |
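A minimal helper-script sketch using the five arguments described above (the alert message is illustrative); it would be saved as, for example, $CONTAINER_PROCESS_HELPER_PATH/04-scheduling:

```shell
#!/command/with-contenv bash
# Illustrative process helper; receives: DATE TIME SCRIPT_NAME TIMES_EXECUTED HOSTNAME
process_helper() {
    local run_date="$1" run_time="$2" script_name="$3" run_count="$4" host="$5"
    # Only act when the process has executed more than once (i.e. it restarted)
    if [ "${run_count:-0}" -gt 1 ] 2>/dev/null; then
        echo "[watchdog] ${script_name} on ${host} has executed ${run_count} times (${run_date} ${run_time})"
    fi
}
process_helper "$@"
```

From here you could post the message to a chat webhook or touch a maintenance flag instead of echoing.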

Networking

The following ports are exposed.

| Port | Description |
| ---- | ----------- |
| 2020 | Fluent-Bit |
| 10050 | Zabbix Agent |

Developing / Overriding

This base image has been used over a hundred times to successfully build secondary images. My methodology admittedly strays from the "one process per container" rule, however it allows me to put together images at a rapid pace, and if more complex scalability is required, the work is split into individual images. Since you are reading this, here's a crash course on how the image works (WIP):

See /assets/functions/00-container for more detailed documentation of the various commands, functions, and shortcuts.

  • Put defaults in /assets/defaults/(script name)

  • Put functions in /assets/functions/(script name)

  • Put Initialization script in /etc/cont-init.d/(script name)

    Put at the top:

#!/command/with-contenv bash          # Pull in Container Environment Variables from Dockerfile/Docker Runtime
source /assets/functions/00-container # Pull in all custom container functions from this image
prepare_service single                # Read functions and defaults only from files matching this script filename - see detailed docs for more
PROCESS_NAME="process"                # set the prefix for any logging

.. your scripting ..
print_info "This is an INFO log"
print_warn "This is a WARN log"
print_error "This is an ERROR log"

liftoff                               # writes to the state files at /tmp/.container/ to prove the script executed properly; see CONTAINER_SKIP_SANITY_CHECK
  • Put Services script in /etc/services.available/(script name)

    Put at the top:

#!/command/with-contenv bash          # Pull in Container Environment Variables from Dockerfile/Docker Runtime
source /assets/functions/00-container # Pull in all custom container functions from this image
prepare_service defaults single       # Read defaults only from files matching this script filename - see detailed docs for more
PROCESS_NAME="process"                # set the prefix for any logging

check_container_initialized           # Check to make sure that the container properly initialized before proceeding
check_service_initialized init        # Check that cont-init.d/(script name) executed correctly, otherwise wait until it has
liftoff                               # Prove script was able to execute properly

print_start "Starting processname"    # Show STARTING log prefix, and also show if enabled a counter, and execute process watchdog script
fakeprocess (args)                    # whatever process you want to start

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| CONTAINER_SKIP_SANITY_CHECK | Skip checking that all scripts in /etc/cont-init.d executed correctly | FALSE |
| DEBUG_MODE | Show all script output (set -x) | FALSE |
| PROCESS_NAME | Prefix for logs from the running script | container |

Debug Mode

When using this as a base image, create statements in your startup scripts that check whether DEBUG_MODE=TRUE and, if so, set various parameters in your applications to output more detail, enable debugging modes, and so on. In this base image it does the following:

  • Sets zabbix-agent to output verbose logs
  • Shows all script output (equivalent to set -x)
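In a downstream startup script, such a check might be sketched as follows (the app_log_level helper and the log-level values are illustrative, not part of this image):

```shell
# Illustrative startup-script fragment for a downstream image:
# decide how verbose the application should be based on DEBUG_MODE.
app_log_level() {
    case "$(echo "${DEBUG_MODE:-FALSE}" | tr '[:upper:]' '[:lower:]')" in
        true)
            # a real script would typically also enable shell tracing: set -x
            echo "debug"
            ;;
        *)
            echo "info"
            ;;
    esac
}

APP_LOG_LEVEL=$(app_log_level)   # pass this to your application's config
```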

Maintenance

Shell Access

For debugging and maintenance purposes you may want to access the container's shell.

docker exec -it (whatever your container name is) bash

Support

These images were built to serve a specific need in a production environment and gradually have had more functionality added based on requests from the community.

Usage

  • The Discussions board is a great place for working with the community on tips and tricks of using this image.
  • Sponsor me for personalized support

Bugfixes

  • Please, submit a Bug Report if something isn't working as expected. I'll do my best to issue a fix in short order.

Feature Requests

  • Feel free to submit a feature request; however, there is no guarantee that it will be added, or on what timeline.
  • Sponsor me regarding development of features.

Updates

  • Best effort to track upstream changes, with higher priority if I am actively using the image in a production environment.
  • Sponsor me for up to date releases.

License

MIT. See LICENSE for more details.