Docker image for ISAM
The Security Verify Access Docker image can run as one of the following:
- Configuration container
- Web reverse proxy container
- Runtime profile (also known as Advanced Access Control/Federation) container
- Distributed Session Cache (DSC) container
- The Docker container should be started as the 'isam' user (UID: 6000). In a standard Docker environment this happens automatically, but in a Kubernetes environment the security context should be set to allow the container to start as this particular user.
- The following Linux capabilities are required by the container (these capabilities are allowed by default in a standard Docker environment):
- CHOWN
- DAC_OVERRIDE
- FOWNER
- KILL
- NET_BIND_SERVICE
- SETFCAP
- SETGID
- SETUID
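In a Kubernetes environment, the user and capability requirements above can be expressed in the pod's security context. The following is a minimal sketch; the container and image names are placeholders for your environment.

```yaml
# Pod spec fragment (sketch): run as the 'isam' user (UID 6000) and
# grant only the Linux capabilities the container requires.
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 6000
  containers:
    - name: isam-config            # placeholder name
      image: <your-isam-image>     # placeholder image reference
      securityContext:
        capabilities:
          drop: ["ALL"]
          add: ["CHOWN", "DAC_OVERRIDE", "FOWNER", "KILL",
                "NET_BIND_SERVICE", "SETFCAP", "SETGID", "SETUID"]
```

Dropping all capabilities and adding back only the required set keeps the container's privileges to the documented minimum.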
- The following environment variables are used by the container:
  - CONTAINER_TIMEZONE: The timezone that will be used by the container (this is a standard Docker environment variable). For example: "Australia/Brisbane".
  - INSTANCE: The service instance the container will provide. In a web reverse proxy container, this environment variable is used to specify the name of the web reverse proxy instance to start. Required when running in "webseal" mode. In a Distributed Session Cache container, this environment variable specifies the role of the container (i.e. primary/secondary/tertiary/quaternary) in the format INSTANCE = '1|2|3|4'. For example, to specify that the container acts as the primary, use INSTANCE = '1'. To specify that the container acts as the secondary, use INSTANCE = '2'.
  - SERVICE: The service the container will provide. If no service is specified, the container will default to "config". Valid values are: config, webseal, dsc, and runtime.
  - SNAPSHOT: The name of the configuration data snapshot that is to be used when starting the container. This will default to the latest published configuration.
  - FIXPACKS: A space-separated, ordered list of fix packs to be applied when starting the container. If this environment variable is not present, any fix packs present in the fixpacks directory of the configuration volume will be applied in alphanumeric order.
  - CONFIG_SERVICE_URL: The URL used to access the published configuration/fix-pack data. If using the configuration service of the ISAM configuration container, the URL would be of the format: https://<container-ip>:<mapped port>/shared_volume. A BA header will be supplied to handle authentication to the configuration service. This environment variable is ignored by the configuration container.
  - CONFIG_SERVICE_USER_NAME: The user used when accessing the configuration service. This environment variable is ignored by the configuration container.
  - CONFIG_SERVICE_USER_PWD: The password for the user used when accessing the configuration service. This environment variable is ignored by the configuration container.
  - ADMIN_PWD: The initial seeded password for the built-in 'admin' user used when accessing the configuration service. If this parameter is not specified, the default password 'admin' is used. This environment variable is not available prior to version 9.0.5.0.
  - AUTO_RELOAD_FREQUENCY: The frequency, in seconds, at which the container will check whether the configuration has been updated. If an updated configuration is detected, the container will automatically reload the configuration data. Note that there will be a service interruption while the reload takes place. If this environment variable is missing, the container will not attempt any automatic reload of configuration data. This environment variable is ignored by the configuration container.
  - USE_CONTAINER_LOG_DIR: If set to any value, indicates that the log files should be written to a container-specific logging directory (underneath the '/var/application.logs' path). This allows multiple container replicas to write log information to the same persistent volume. An alternative, in a Kubernetes environment, is to deploy the containers in a 'StatefulSet'. See the official Kubernetes documentation for information on StatefulSets. This environment variable is not available prior to version 10.0.0.0.
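As a sketch, a web reverse proxy container could be started with these environment variables as follows. The image name, container names, instance name, and password are placeholders for your environment.

```shell
# Start a web reverse proxy container (sketch).
# <your-isam-image>, isam-config, and the credentials are placeholders.
docker run --detach \
  --name isam-wrp \
  -e SERVICE=webseal \
  -e INSTANCE=default \
  -e CONFIG_SERVICE_URL=https://isam-config:9443/shared_volume \
  -e CONFIG_SERVICE_USER_NAME=admin \
  -e CONFIG_SERVICE_USER_PWD=Passw0rd \
  -e AUTO_RELOAD_FREQUENCY=30 \
  -p 443:443 -p 80:80 \
  <your-isam-image>
```

Because SNAPSHOT is omitted here, the container bootstraps from the latest published configuration.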
Consider the following points regarding user registry support when using Security Verify Access in a Docker environment.
- The embedded user registry can only be used to house the secAuthority=Default suffix in conjunction with basic users. If full Security Verify Access users are required, the secAuthority=Default suffix must be stored in an external user registry.
- An external user registry is always required for the user suffix. Configure the external user registry as a federated user registry if the embedded user registry is being used for the secAuthority=Default suffix.
Configuration service
All configuration must be completed using the configuration container. The configuration service supports a scaled-down version of the LMI. We can use this LMI to manage the configuration data. To make a configuration available to a runtime container, we must click the Publish configuration button in the LMI.
The management service (i.e. the LMI) always listens on port 9443 of the container.
Runtime Management Web Services
The runtime management web services are lightweight web services hosted on the runtime container. These web services are used to manage the runtime of the container. They cannot be used to manage the configuration data; that should be done with the configuration web services provided by the configuration container. The runtime management web services listen on port 9443 of the runtime container.
Currently, the runtime web services provide the ability to access the CLI and the application log files. For these APIs, the {appliance_hostname} parameter can be either the configuration container address or the runtime container address. See the following web services documentation for usage details:
- "Run CLI Command" Web Services
- This Web Service acts as a front-end to the command line interface.
- "Application Log Files" Web Services
- The "Application Log Files" Web Services allows us to access the application log files.
Migrate an appliance to Docker
To migrate the appliance to the Docker environment, we can create a snapshot of the appliance in its original environment and import the snapshot into a running Security Verify Access configuration container.
We can only import a snapshot from an appliance if the following conditions are met:
- For an ISAM Base only activation, the snapshot was taken on version 9.0.0.0 or later. For an Advanced Access Control or Federation activation, the snapshot was taken on version 9.0.2.0 or later.
- The appliance was configured with an embedded configuration database and an external runtime database.
- The appliance runtime environment was using an external LDAP server. Alternatively, if the appliance was running Security Verify Access 9.0.4.0, an embedded LDAP server can be used if the "wga_rte.embedded.ldap.include.in.snapshot" advanced tuning parameter was set to true before generating the snapshot.
When a snapshot from an appliance is imported to a Docker container:
- The LMI HTTPS listening port will be rewritten to 9443.
- Any reverse proxy instances will have their HTTPS and HTTP ports rewritten to 443 and 80 respectively.
Restrictions
Security Verify Access, when run in a Docker environment, has the following restrictions:
- Any configuration changes require the service containers to be reloaded. We can use the CLI to trigger a manual reload. Changes to the Federation configuration and the policy database will not result in any service downtime. Changes to junction definitions and web reverse proxy configuration will result in minimal service downtime while the web reverse proxy is restarted. See CLI in a Docker environment.
- The authorization server (i.e. pdacld) is not supported.
- The front-end load balancer capability of the ISAM appliance is not supported.
- The IP reputation policy information point (PIP) capability of Advanced Access Control is not supported.
- Network HSM devices are not supported. All keys are stored locally.
- A sample geo-location database is not provided. If a sample geo-location database is required, it should be obtained from the downloads area of a running virtual or hardware appliance. See Updating location attributes.
- Pre-installed federation partner templates are not provided. See Manage federation partner templates. The connector package is available from the following public IBM download site: http://public.dhe.ibm.com/software/security/products/isam/downloads/
- Web Reverse proxy flow data or PAM statistics are not supported.
- The default administrator password must be set on each container using the ADMIN_PWD environment variable. If this parameter is not specified, the default password 'admin' will be used. This password cannot be changed at runtime through the configuration service and is not captured in configuration snapshots.
- The embedded user registry can only be used to hold static data and should not be used to hold any user data. As a result, the embedded user registry should only be used in conjunction with basic users and a federated registry that stores the user data. The Security Verify Access integration component of the SCIM support will not be available if the embedded user registry is in use.
- Authentication using RSA SecurID tokens is not supported.
- The container cannot be executed from within a Docker user namespace.
- The remote syslog forwarder capability cannot be used to forward the web reverse proxy runtime log files to a remote syslog server. These runtime log files include the auditing files, request.log, referrer log, and agent log. If the contents of these files need to be sent to a remote syslog server, the native web reverse proxy remote syslog agent should instead be configured.
Shared configuration data
The shared configuration volume is a section of the file system that is reserved for the storage of data to be shared among multiple containers. The data on the shared configuration volume is persisted even if the containers are deleted.
The shared configuration volume is mounted in an ISAM container at '/var/shared'. Snapshots, support files, and fix packs are stored in this volume. To manage these files, we can use the System > Network Settings > Shared Volume page of the configuration container LMI.
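As a sketch, the shared configuration volume can be created as a named Docker volume and mounted into the configuration container at '/var/shared'. The volume and image names below are placeholders.

```shell
# Create a named volume for the shared configuration data (sketch;
# isam-shared and <your-isam-image> are placeholders).
docker volume create isam-shared

# Mount it into the configuration container at /var/shared and expose
# the LMI on port 9443.
docker run --detach \
  --name isam-config \
  -e SERVICE=config \
  -v isam-shared:/var/shared \
  -p 9443:9443 \
  <your-isam-image>
```

Because the data lives on a named volume, snapshots, support files, and fix packs survive deletion of the container itself.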
- Snapshots
Snapshots are located in the snapshots directory of the configuration volume.
When a snapshot is published from the configuration container, it is stored on the shared volume. When a runtime container is started, it uses the snapshot to perform configuration and bootstrap successfully. Snapshots can only be created using the configuration container, though an administrator can also manually add or remove snapshots by directly accessing the Docker volume.
- Support files
Support files are located in the support directory of the configuration volume.
Technically, we can create support files in containers of any type. However, support files are most commonly generated in one of the runtime containers. To generate and retrieve a support file in a runtime container, follow these steps:
- Using the CLI or the CLI web service, create a support file in the runtime container. This support file will be visible in the configuration container.
- If the volume has not been directly mounted in the runtime container and a configuration service has been defined, use the support -> publish CLI command to send the support file to the configuration service.
- Using the LMI or the web service of the configuration container, retrieve the support file. Alternatively, we can access the support folder on the Docker volume directly to retrieve the support file.
- Fix packs
Fix packs are located in the fixpacks directory of the configuration volume.
When a container is started, fix packs specified in the FIXPACKS environment variable will be applied in the order they are specified. If the FIXPACKS environment variable is not present, any fix packs present in the fixpacks directory of the configuration volume will be applied in alphanumeric order.
To manage fix packs, we can either access the Docker volume manually, or use the System > Network Settings > Shared Volume page of the configuration container LMI. On the Shared Volume page, we can view the contents of the fixpacks directory of the configuration volume, upload, delete, or rename fix packs.
The System > Updates and Licensing > Fixpack LMI page is read-only in a Docker environment. We can use that page to see which fix packs have been applied, but cannot use it to apply or roll back fix packs.
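The FIXPACKS environment variable can be combined with the shared volume mount as in the following sketch. The fix pack file names, volume name, and image name are placeholders; the files must already exist in the fixpacks directory of the shared configuration volume.

```shell
# Apply two fix packs, in the listed order, when the container starts
# (sketch; file names, isam-shared, and <your-isam-image> are placeholders).
docker run --detach \
  -e SERVICE=runtime \
  -e FIXPACKS="example-FP0001.fixpack example-FP0002.fixpack" \
  -v isam-shared:/var/shared \
  <your-isam-image>
```

If FIXPACKS were omitted, any fix packs found in the fixpacks directory would instead be applied in alphanumeric order.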
Log files
By default, Docker uses a layered file system to help reduce the disk space utilization of the Docker containers. However, this file system has slower write speeds than standard file systems. As such, a standard Docker practice is to place any files that are updated frequently (for example, log files) on a shared volume. All of the log files used by Security Verify Access are located in the '/var/application.logs' directory. Therefore, the recommended approach is to create this directory as a shared volume when creating the container.
We can view the log files through the Monitor > Application Log Files panel of the LMI. Multiple containers should not reference the same persistent volume for log storage; otherwise, multiple containers will attempt to write to the same log file at the same time, causing data write and integrity issues. In a Kubernetes environment this problem can be overcome by deploying the containers in a StatefulSet (refer to the official Kubernetes documentation for information on StatefulSets). An alternative is to set the USE_CONTAINER_LOG_DIR environment variable in the container. When this variable is set, the log files are written to a container-specific log sub-directory. This environment variable is not available prior to version 10.0.0.0. In IBM Security Verify Access version 9.0.7.0, a container-specific log sub-directory is always used.
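A dedicated log volume can be mounted as in the following sketch; the volume and image names are placeholders for your environment.

```shell
# Mount a dedicated volume for log files and have each replica write to
# its own container-specific sub-directory (sketch; isam-logs and
# <your-isam-image> are placeholders).
docker volume create isam-logs
docker run --detach \
  -e SERVICE=webseal \
  -e INSTANCE=default \
  -e USE_CONTAINER_LOG_DIR=true \
  -v isam-logs:/var/application.logs \
  <your-isam-image>
```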
The log file directory structure is as follows (sub-directories are relative to the root log directory):
- Local management interface log files: lmi
- Runtime profile log files: rtprofile
- Runtime audit logs: rtaudit
- DSC log files: dsc
- Security Verify Access policy server log and trace files: isam_runtime/policy
- Embedded User Registry log files: isam_runtime/user_registry
- Web reverse proxy log files: wrp/<instance>/log
- Web reverse proxy statistic files: wrp/<instance>/stats
- Web reverse proxy trace files: wrp/<instance>/trace
- Web reverse proxy transaction files: wrp/<instance>/translog
- System log files: system
- Remote system log forwarder files: rsyslog_forwarder
The other option is to access the logs with the web services on the configuration and runtime containers. By invoking the corresponding "Application Logs" API on each container, we can list and retrieve the log files on that container. See the Docker Web Services documentation for more information. The recommended approach is to configure IBM Security Verify Access to send the log files to a remote syslog server wherever possible.
Parent topic: Docker support