Configure the GitLab chart with Mattermost Team Edition
This document describes how to install the Mattermost Team Edition Helm chart alongside an existing GitLab Helm chart deployment.
As the Mattermost Helm chart is installed in a separate namespace, it is recommended that cert-manager and nginx-ingress be configured to manage cluster-wide Ingress and certificate resources. For additional configuration information, refer to the Mattermost Helm configuration guide.
<your-domain>: your desired domain, such as gitlab.example.com.
<external-ip>: the external IP pointing to your Kubernetes cluster.
<email>: the email address to register with Let’s Encrypt to retrieve TLS certificates.
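As a sketch, these placeholders are supplied to the GitLab chart as Helm values; the release name gitlab is an assumption:
helm repo add gitlab https://charts.gitlab.io/
helm upgrade --install gitlab gitlab/gitlab \
  --set global.hosts.domain=<your-domain> \
  --set global.hosts.externalIP=<external-ip> \
  --set certmanager-issuer.email=<email>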
Once you’ve deployed the GitLab instance, follow the instructions for the initial login.
Create an OAuth application with GitLab
The next part of the process is setting up the GitLab SSO integration. To do so, you need to create an OAuth application that allows Mattermost to use GitLab as its authentication provider.
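When registering the OAuth application, the redirect URIs are typically Mattermost’s GitLab SSO callback endpoints (replace <mattermost-domain> with your Mattermost hostname):
https://<mattermost-domain>/login/gitlab/complete
https://<mattermost-domain>/signup/gitlab/complete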
Only the default GitLab SSO is officially supported. “Double SSO”, where GitLab SSO is chained to other SSO solutions, is not supported. It may be possible to connect GitLab SSO with AD, LDAP, SAML, or MFA add-ons in some cases, but because of the special logic required, these setups are not officially supported and are known not to work in some configurations.
Troubleshooting
If you are following a process other than the one provided and experience authentication and/or deployment issues, let us know in the Mattermost troubleshooting forum.
Configure the GitLab chart with an external NGINX Ingress Controller
This chart configures Ingress resources for use with the official NGINX Ingress implementation. The NGINX Ingress Controller is deployed as a part of this chart. If you want to reuse an existing NGINX Ingress Controller already available in your cluster, this guide will help.
TCP services in the external Ingress Controller
The GitLab Shell component requires TCP traffic to pass through on port 22 (by default; this can be changed). Ingress does not directly support TCP services, so some additional configuration is necessary. Your NGINX Ingress Controller may have been deployed directly (that is, with a Kubernetes spec file) or through the official Helm chart. The configuration of the TCP pass-through differs depending on the deployment approach.
Direct deployment
In a direct deployment, the NGINX Ingress Controller handles configuring TCP services with a ConfigMap (see the NGINX Ingress documentation). Assuming your GitLab chart is deployed to the namespace gitlab and your Helm release is named mygitlab, your ConfigMap should be something like this:
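A minimal sketch of that ConfigMap; the name tcp-services and namespace ingress-nginx are the common defaults for a direct NGINX Ingress deployment and may differ in yours:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "22": "gitlab/mygitlab-gitlab-shell:22"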
Finally, make sure that the Service for your NGINX Ingress Controller exposes port 22 in addition to ports 80 and 443.
Helm deployment
If you have installed or will install the NGINX Ingress Controller via its Helm chart, you need to add a value to the chart via the command line:
--set tcp.22="gitlab/mygitlab-gitlab-shell:22"
or a values.yaml file:
tcp:
  22: "gitlab/mygitlab-gitlab-shell:22"
The format for the value is the same as described above in the “Direct deployment” section.
Customize the GitLab Ingress options
The NGINX Ingress Controller uses an annotation to mark which Ingress Controller will service a particular Ingress (see the NGINX Ingress documentation). You can configure the Ingress class to use with this chart using the global.ingress.class setting. Make sure to set this in your Helm options:
--set global.ingress.class=myingressclass
While not necessarily required, if you’re using an external Ingress Controller, you will likely want to disable the Ingress Controller that is deployed by default with this chart:
--set nginx-ingress.enabled=false
Custom certificate management
If you are using an external Ingress Controller, you may also be using an external cert-manager instance or managing your certificates in some other custom manner. The full scope of your TLS options is documented elsewhere. For the purposes of this discussion, here are the two values that would need to be set to disable the cert-manager chart and to tell the GitLab component charts NOT to look for the built-in certificate resources:
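Those values are the chart’s cert-manager toggles, shown here as Helm options:
--set certmanager.install=false
--set global.ingress.configureCertmanager=false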
IAM roles for AWS
The default configuration for external object storage in the charts uses access and secret keys. It is also possible to use IAM roles in combination with kube2iam, kiam, or IRSA.
IAM role
The IAM role will need read, write and list permissions on the S3 buckets. You can choose to have a role per bucket or combine them.
Chart configuration
IAM roles can be specified by adding annotations and changing the secrets, as specified below:
Registry
An IAM role can be specified via the annotations key:
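For example, with kube2iam, the role is attached via a pod annotation on the registry; the role name gitlab-registry-role is a placeholder:
registry:
  annotations:
    iam.amazonaws.com/role: gitlab-registry-role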
For the object-storage.yaml secret, omit the access and secret keys. Because the GitLab Rails codebase uses Fog for S3 storage, the use_iam_profile key should be added for Fog to use the role:
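A sketch of the resulting connection settings in the secret, assuming the AWS provider and a bucket in us-east-1:
provider: AWS
use_iam_profile: true
region: us-east-1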
The s3cmd.config secret is to be created without the access and secret keys:
[default]
bucket_location = us-east-1
Using IAM roles for service accounts
If GitLab is running in an AWS EKS cluster (version 1.14 or later), you can use an AWS IAM role to authenticate to the S3 object storage without the need to generate or store access tokens. More information on using IAM roles in an EKS cluster can be found in the Introducing fine-grained IAM roles for service accounts documentation from AWS.
Appropriate IRSA annotations for roles can be applied to ServiceAccounts throughout this Helm chart in one of two ways:
ServiceAccounts that have been pre-created as described in the above AWS documentation. This ensures the proper annotations on the ServiceAccount and the linked OIDC provider.
Chart-generated ServiceAccounts with annotations defined. We allow for the configuration of annotations on ServiceAccounts both globally and on a per-chart basis.
To use IAM roles for ServiceAccounts in EKS clusters, the specific annotation must be eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<IAM_ROLE_NAME>.
To enable IAM roles for ServiceAccounts for GitLab running in an AWS EKS cluster, follow the instructions on IAM roles for service accounts.
Using the backup-utility as specified in the backup documentation does not properly copy the backup file to the S3 bucket. The backup-utility uses s3cmd to copy the backup file, and s3cmd has a known issue of not supporting OIDC authentication. This was resolved in its 2.2.0 release, which was merged into GitLab 14.4.
Workaround to perform backups before GitLab 14.4
If you are on a version earlier than 14.4, run the following command in your task-runner pod to sideload the latest version of s3cmd. You can then run backup-utility as usual:
pip install --upgrade s3cmd && export PATH="$(python3 -m site --user-base)/bin:${PATH}"
Using pre-created service accounts
Set the following options when the GitLab chart is deployed. It is important to note that the ServiceAccount is enabled but not created.
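A sketch of those options as Helm settings; the ServiceAccount name is a placeholder:
--set global.serviceAccount.enabled=true
--set global.serviceAccount.create=false
--set global.serviceAccount.name=<SERVICE_ACCOUNT_NAME>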
The eks.amazonaws.com/role-arn annotation can be applied to all ServiceAccounts created by GitLab owned charts by configuring global.serviceAccount.annotations.
Annotations can also be added on a per-ServiceAccount basis, by adding the matching definition for each chart. These can be the same role, or individual roles.
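For illustration, a global annotation with a per-chart override for the registry might look like the following; the role ARNs are placeholders, and the registry subchart is shown only as an example:
global:
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<IAM_ROLE_NAME>
registry:
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<REGISTRY_ROLE_NAME>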
You can test if the IAM role is correctly set up and that GitLab is accessing S3 using the IAM role by logging into the toolbox pod and using awscli (replace <namespace> with the namespace where GitLab is installed):
kubectl exec -ti $(kubectl get pod -n <namespace> -l app=toolbox -o jsonpath='{.items[0].metadata.name}') -n <namespace> -- bash
With the awscli package installed, verify that you are able to communicate with the AWS API:
aws sts get-caller-identity
If the connection to the AWS API was successful, a normal response is returned showing the temporary user ID, account number, and IAM ARN (this is not the IAM ARN for the role used to access S3). An unsuccessful connection requires more troubleshooting to determine why the toolbox pod cannot communicate with the AWS APIs.
If connecting to the AWS APIs is successful, the following command assumes the IAM role that was created and verifies that an STS token can be retrieved for accessing S3. The AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE variables are defined in the pod’s environment when the IAM role annotation has been added, and do not need to be defined manually:
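The check can be performed with the AWS CLI; the session name is arbitrary:
aws sts assume-role-with-web-identity \
  --role-arn $AWS_ROLE_ARN \
  --role-session-name gitlab-test \
  --web-identity-token file://$AWS_WEB_IDENTITY_TOKEN_FILE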
If the IAM role could not be assumed then an error message similar to the following will be displayed:
An error occurred (AccessDenied) when calling the AssumeRoleWithWebIdentity operation: Not authorized to perform sts:AssumeRoleWithWebIdentity
Otherwise, the STS credentials and IAM role information will be displayed.
WebIdentityErr: failed to retrieve credentials
If you see this error in the logs, it suggests that an endpoint has been configured in your object-storage.yaml secret. Remove this setting and restart the webservice and sidekiq pods.
MinIO Azure Gateway
MinIO is an object storage server that exposes S3-compatible APIs, and it has a gateway feature that allows proxying requests to Azure Blob Storage. To set up our gateway, we will make use of Azure’s Web App on Linux.
To get started, make sure you have installed the Azure CLI and are logged in (az login). Then create a Resource Group, if you don’t have one already:
az group create --name "gitlab-azure-minio" --location "WestUS"
Storage Account
Create a Storage account in your resource group. The name of the storage account must be globally unique:
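A sketch of creating the account; the name mirrors the access key used later and is a placeholder subject to Azure’s storage account naming rules:
az storage account create \
  --name "gitlab-azure-minio-storage" \
  --resource-group "gitlab-azure-minio" \
  --location "WestUS"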
First, we need to create an App Service Plan in the same resource group.
az appservice plan create \
  --name "gitlab-azure-minio-app-plan" \
  --is-linux \
  --sku B1 \
  --resource-group "gitlab-azure-minio" \
  --location "WestUS"
Create a Web App configured with the minio/minio Docker container. The name you specify will be used in the URL of the Web App:
az webapp create \
  --name "gitlab-minio-app" \
  --deployment-container-image-name "minio/minio" \
  --plan "gitlab-azure-minio-app-plan" \
  --resource-group "gitlab-azure-minio"
The Web app should now be accessible at https://gitlab-minio-app.azurewebsites.net.
Lastly, we need to set up the startup command and create environment variables that store our storage account name and key for use by the Web App: MINIO_ACCESS_KEY and MINIO_SECRET_KEY.
az webapp config appsettings set \
  --settings "MINIO_ACCESS_KEY=gitlab-azure-minio-storage" "MINIO_SECRET_KEY=h0tSyeTebs+..." "PORT=9000" \
  --name "gitlab-minio-app" \
  --resource-group "gitlab-azure-minio"
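To run the minio/minio container in Azure gateway mode, the startup command also needs to be set; a sketch, assuming the image accepts gateway azure as its arguments:
az webapp config set \
  --startup-file "gateway azure" \
  --name "gitlab-minio-app" \
  --resource-group "gitlab-azure-minio"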
You can proceed to use this gateway with any S3-compatible client. Your Web App URL will be the S3 endpoint, the storage account name will be your access key, and the storage account key will be your secret key.
Configure the GitLab chart with an external object storage
GitLab relies on object storage for highly-available persistent data in Kubernetes. By default, an S3-compatible storage solution named MinIO is deployed with the chart. For production-quality deployments, we recommend using a hosted object storage solution like Google Cloud Storage or AWS S3.
To disable MinIO, set this option and then follow the related documentation below:
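The option is the chart’s global MinIO toggle:
--set global.minio.enabled=false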
These two options are not mutually exclusive. You can set a default encryption policy, but also enable server-side encryption headers to override those defaults.
Although Azure uses the word container to denote a collection of blobs, GitLab standardizes on the term bucket.
Azure Blob storage requires the use of the consolidated object storage settings. A single Azure storage account name and key must be used across multiple Azure blob containers. Customizing individual connection settings by object type (for example, artifacts, uploads, and so on) is not permitted.
To enable Azure Blob storage, see rails.azurerm.yaml as an example to define the Azure connection. You can load this as a secret via:
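A sketch of loading the secret; the secret name gitlab-rails-storage is a placeholder:
kubectl create secret generic gitlab-rails-storage \
  --from-file=connection=rails.azurerm.yaml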
Note: The bucket name needs to be set both in the secret, and in global.registry.bucket. The secret is used in the registry server, and the global is used by GitLab backups.
Configuration of object storage for LFS, artifacts, uploads, packages, external MR diffs, and the dependency proxy is done via the following keys:
global.appConfig.lfs
global.appConfig.artifacts
global.appConfig.uploads
global.appConfig.packages
global.appConfig.externalDiffs
global.appConfig.dependencyProxy
Note also that:
A different bucket is needed for each item; otherwise, restoring from backup does not function properly.
Storing MR diffs on external storage is not enabled by default, so for the object storage settings for externalDiffs to take effect, the global.appConfig.externalDiffs.enabled key must be set to true.
The dependency proxy feature is not enabled by default, so for the object storage settings for dependencyProxy to take effect, the global.appConfig.dependencyProxy.enabled key must be set to true.
Create the secret(s) per the connection details documentation, and then configure the chart to use the provided secrets. Note, the same secret can be used for all of them.
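For illustration, wiring one of the keys above to a shared connection secret might look like this; the secret and bucket names are placeholders:
global:
  appConfig:
    artifacts:
      bucket: gitlab-artifacts-storage
      connection:
        secret: gitlab-rails-storage
        key: connection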
Backups are also stored in object storage, and must be configured to point to external storage rather than the included MinIO service. The backup/restore procedure uses two separate buckets:
A bucket for storing backups (global.appConfig.backups.bucket)
A temporary bucket for preserving existing data during the restore process (global.appConfig.backups.tmpBucket)
AWS S3-compatible object storage systems and Google Cloud Storage are supported backends. You can configure the backend type by setting global.appConfig.backups.objectStorage.backend to s3 for AWS S3 or gcs for Google Cloud Storage. You must also provide a connection configuration through the gitlab.toolbox.backups.objectStorage.config key.
When using Google Cloud Storage, the GCP project must be set with the global.appConfig.backups.objectStorage.config.gcpProject value.
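A sketch of the backup settings described above, as Helm options with placeholder bucket and secret names:
--set global.appConfig.backups.bucket=gitlab-backup-storage \
--set global.appConfig.backups.tmpBucket=gitlab-tmp-storage \
--set global.appConfig.backups.objectStorage.backend=s3 \
--set gitlab.toolbox.backups.objectStorage.config.secret=storage-config \
--set gitlab.toolbox.backups.objectStorage.config.key=config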
To back up or restore files from the other object storage locations, the configuration file must be set up to authenticate as a user with sufficient access to read/write to all GitLab buckets.
[default]
access_key = BOGUS_ACCESS_KEY
secret_key = BOGUS_SECRET_KEY
bucket_location = us-east-1
multipart_chunk_size_mb = 128 # default is 15 (MB)
On Google Cloud Storage, you can create the file by creating a service account with the storage.admin role and then creating a service account key. Below is an example of using the gcloud CLI to create the file.
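One way to do this with the gcloud CLI; the service account name gitlab-gcs and the output file storage.config are placeholders:
export PROJECT_ID=$(gcloud config get-value project)
gcloud iam service-accounts create gitlab-gcs --display-name "GitLab Cloud Storage"
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --role roles/storage.admin \
  --member serviceAccount:gitlab-gcs@${PROJECT_ID}.iam.gserviceaccount.com
gcloud iam service-accounts keys create storage.config \
  --iam-account gitlab-gcs@${PROJECT_ID}.iam.gserviceaccount.com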
[default]
# Setup endpoint: hostname of the Web App
host_base = https://your_minio_setup.azurewebsites.net
host_bucket = https://your_minio_setup.azurewebsites.net
# Leave as default
bucket_location = us-west-1
use_https = True
multipart_chunk_size_mb = 128 # default is 15 (MB)
To connect GitLab to an external MinIO instance, first create MinIO buckets for the GitLab application, using the bucket names in this example configuration file.
Using the MinIO client, create the necessary buckets before use:
mc mb gitlab-registry-storage
mc mb gitlab-lfs-storage
mc mb gitlab-artifacts-storage
mc mb gitlab-uploads-storage
mc mb gitlab-packages-storage
mc mb gitlab-backup-storage
Once the buckets have been created, GitLab can be configured to use the MinIO instance. See example configuration in rails.minio.yaml and registry.minio.yaml in the examples folder.