Use custom Docker images for the GitLab chart
In certain scenarios (for example, offline environments), you may want to bring your own images rather than pulling them down from the Internet. This requires specifying your own Docker image registry/repository for each of the charts that make up the GitLab release.
Default image format
Our default format for the image in most cases includes the full path to the image, excluding the tag. The end result will be repo.example.com/image:custom-tag.
Example values file
There is an example values file that demonstrates how to configure a custom Docker registry/repository and tag. You can copy relevant sections of this file for your own releases.
Some of the charts (especially third-party charts) have slightly different conventions for specifying the image registry/repository and tag. You can find documentation for third-party charts on Artifact Hub.
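As a sketch of what such overrides typically look like in a values file (the registry host, repository paths, and tag below are illustrative placeholders, not values shipped with the chart):

```yaml
# Hypothetical example: point two of the subcharts at a private registry.
# Replace the repository paths and tag with your own mirrored images.
gitlab:
  gitaly:
    image:
      repository: repo.example.com/gitlab-org/build/cng/gitaly
      tag: custom-tag
  gitlab-shell:
    image:
      repository: repo.example.com/gitlab-org/build/cng/gitlab-shell
      tag: custom-tag
```

Each image then resolves to repo.example.com/&lt;path&gt;:custom-tag, matching the format described above.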
We’ll make use of the Omnibus GitLab package for Ubuntu. This package provides versions of the services that are guaranteed to be compatible with the charts’ services.
Create VM with Omnibus GitLab
Create a VM on your provider of choice, or locally. This was tested with VirtualBox, KVM, and Bhyve.
Ensure that the instance is reachable from the cluster.
Install Ubuntu Server onto the VM that you have created. Ensure that openssh-server is installed, and that all packages are up to date.
Configure networking and a hostname. Make note of the hostname/IP, and ensure it is both resolvable and reachable from your Kubernetes cluster.
Be sure firewall policies are in place to allow traffic.
Follow the installation instructions for Omnibus GitLab. When you perform the package installation, do not provide the EXTERNAL_URL= value. We do not want automatic configuration to occur, as we'll provide a very specific configuration in the next step.
Configure Omnibus GitLab
Create a minimal gitlab.rb file to be placed at /etc/gitlab/gitlab.rb. Be very explicit about what is enabled on this node, using the contents below.
Note: This example is not intended to provide PostgreSQL for scaling.

NOTE: Replace the following placeholder values:

DB_USERNAME: the database username. The default username is gitlab.
DB_PASSWORD: the unencoded password value.
DB_ENCODED_PASSWORD: the encoded value of DB_PASSWORD. It can be generated by replacing DB_USERNAME and DB_PASSWORD with real values in:
echo -n 'DB_PASSWORDDB_USERNAME' | md5sum - | cut -d' ' -f1
AUTH_CIDR_ADDRESSES: the CIDRs for MD5 authentication. These should be the smallest possible subnets of your cluster or its gateway. For minikube, this value is 192.168.100.0/12.
# Change the address below if you do not want PG to listen on all available addresses
postgresql['listen_address'] = '0.0.0.0'
# Set to approximately 1/4 of available RAM.
postgresql['shared_buffers'] = "512MB"
# This password is: `echo -n '${password}${username}' | md5sum - | cut -d' ' -f1`
# The default username is `gitlab`
postgresql['sql_user_password'] = "DB_ENCODED_PASSWORD"
# Configure the CIDRs for MD5 authentication
postgresql['md5_auth_cidr_addresses'] = ['AUTH_CIDR_ADDRESSES']
# Configure the CIDRs for trusted authentication (passwordless)
postgresql['trust_auth_cidr_addresses'] = ['127.0.0.1/24']
After creating gitlab.rb, we'll reconfigure the package with gitlab-ctl reconfigure. Once the task has completed, check the running processes with gitlab-ctl status to confirm that the enabled services are running.
Configure the GitLab chart with an external database
For a production-ready GitLab chart deployment, use an external database.
Prerequisites:
A deployment of PostgreSQL 12 or later. If you do not have one, consider a cloud-provided solution like AWS RDS PostgreSQL or GCP Cloud SQL. For a self-managed solution, consider the Omnibus GitLab package.
An empty database named gitlabhq_production by default.
A user with full database access. See the
external database documentation for details.
A Kubernetes Secret with the password for the database user.
The pg_trgm and btree_gist extensions. If you don't provide an account with the Superuser flag to GitLab, ensure these extensions are loaded prior to proceeding with the database installation.
Networking prerequisites:
Ensure that the database is reachable from the cluster. Be sure that your firewall policies allow traffic.
If you plan to use PostgreSQL as a load balancing cluster and Kubernetes DNS for service discovery, when you install the bitnami/postgresql chart, use --set slave.service.clusterIP=None. This setting configures the PostgreSQL secondary service as a headless service to allow DNS A records to be created for each secondary instance.
For an example of how to use Kubernetes DNS for service discovery, see examples/database/values-loadbalancing-discover.yaml.
To configure the GitLab chart to use an external database:
Set the following parameters:

postgresql.install: Set to false to disable the embedded database.
global.psql.host: Set to the hostname of the external database. This can be a domain or an IP address.
global.psql.password.secret: The name of the secret that contains the database password for the gitlab user.
global.psql.password.key: Within the secret, the key that contains the password.
Optional. The following items can be further customized if you are not using the defaults:

global.psql.port: The port the database is available on. Defaults to 5432.
global.psql.database: The name of the database.
global.psql.username: The user with access to the database.
Optional. If you use a mutual TLS connection to the database, set the following:

global.psql.ssl.secret: A secret that contains the client certificate, key, and certificate authority.
global.psql.ssl.serverCA: In the secret, the key that refers to the certificate authority (CA).
global.psql.ssl.clientCertificate: In the secret, the key that refers to the client certificate.
global.psql.ssl.clientKey: In the secret, the key that refers to the client key.
When you deploy the GitLab chart, add the values by using the --set flag. For example:
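A minimal sketch of these settings, expressed as a values file (the host and secret names below are placeholders; the same values can equally be passed with repeated --set flags):

```yaml
postgresql:
  install: false          # disable the embedded database
global:
  psql:
    host: psql.example.com            # placeholder: external database host
    # port: 5432                      # optional, defaults to 5432
    # database: gitlabhq_production   # optional, default database name
    # username: gitlab                # optional, default username
    password:
      secret: gitlab-postgresql-password   # placeholder secret name
      key: postgresql-password             # key within that secret
```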
The instructions here make use of the Omnibus GitLab package for Ubuntu.
This package provides versions of the services that are guaranteed to be compatible with the charts’ services.
Create VM with Omnibus GitLab
Create a VM on your provider of choice, or locally. This was tested with VirtualBox, KVM, and Bhyve.
Ensure that the instance is reachable from the cluster.
Install Ubuntu Server onto the VM that you have created. Ensure that openssh-server is installed, and that all packages are up to date.
Configure networking and a hostname. Make note of the hostname/IP, and ensure it is both resolvable and reachable from your Kubernetes cluster.
Be sure firewall policies are in place to allow traffic.
Follow the installation instructions for Omnibus GitLab. When you perform the package installation, do not provide the EXTERNAL_URL= value. We do not want automatic configuration to occur, as we'll provide a very specific configuration in the next step.
Configure Omnibus GitLab
Create a minimal gitlab.rb file to be placed at /etc/gitlab/gitlab.rb. Be very explicit about what's enabled on this node, using the following contents based on the documentation for running Gitaly on its own server.
NOTE: Replace the following placeholder values:

AUTH_TOKEN: replace with the value in the gitaly-secret secret.
GITLAB_URL: replace with the URL of the GitLab instance.
SHELL_TOKEN: replace with the value in the gitlab-shell-secret secret.
# Avoid running unnecessary services on the Gitaly server
postgresql['enable'] = false
redis['enable'] = false
nginx['enable'] = false
puma['enable'] = false
sidekiq['enable'] = false
gitlab_workhorse['enable'] = false
grafana['enable'] = false
gitlab_exporter['enable'] = false
gitlab_kas['enable'] = false

# If you run a separate monitoring node you can disable these services
prometheus['enable'] = false
alertmanager['enable'] = false

# If you don't run a separate monitoring node you can
# enable Prometheus access and disable these extra services.
# This makes Prometheus listen on all interfaces. You must use firewalls to restrict access to this address/port.
# prometheus['listen_address'] = '0.0.0.0:9090'
# prometheus['monitor_kubernetes'] = false

# If you don't want to run monitoring services uncomment the following (not recommended)
# node_exporter['enable'] = false

# Prevent database connections during 'gitlab-ctl reconfigure'
gitlab_rails['auto_migrate'] = false

# Configure the gitlab-shell API callback URL. Without this, `git push` will
# fail. This can be your 'front door' GitLab URL or an internal load
# balancer.
gitlab_rails['internal_api_url'] = 'GITLAB_URL'
gitlab_shell['secret_token'] = 'SHELL_TOKEN'

# Make Gitaly accept connections on all network interfaces. You must use
# firewalls to restrict access to this address/port.
# Comment out the following line if you only want to support TLS connections
gitaly['listen_addr'] = "0.0.0.0:8075"

# Authentication token to ensure only authorized servers can communicate with
# Gitaly server
gitaly['auth_token'] = 'AUTH_TOKEN'

# To use TLS for Gitaly you need to add
gitaly['tls_listen_addr'] = "0.0.0.0:8076"
gitaly['certificate_path'] = "path/to/cert.pem"
gitaly['key_path'] = "path/to/key.pem"
After creating gitlab.rb, reconfigure the package with gitlab-ctl reconfigure. Once the task has completed, check the running processes with gitlab-ctl status to confirm that the enabled services are running.
Configure the GitLab chart with an external Gitaly
This document explains how to configure this Helm chart with an external Gitaly service.
If you don’t have Gitaly configured, for on-premise or deployment to VM, consider using our Omnibus GitLab package. External Gitaly services can be provided by Gitaly nodes, or Praefect clusters.
Configure the chart
Disable the gitaly chart and the Gitaly service it provides, and point the other services to the external service.
You need to set the following properties:

global.gitaly.enabled: Set to false to disable the included Gitaly chart.
global.gitaly.external: This is an array of external Gitaly service(s).
global.gitaly.authToken.secret: The name of the secret which contains the token for authentication.
global.gitaly.authToken.key: The key within the secret, which contains the token content.
The external Gitaly services will make use of their own instances of GitLab Shell. Depending on your implementation, you can configure those with the secrets from this chart, or you can configure this chart's secrets with the content from a predefined source.

You may need to set the following properties:

global.shell.authToken.secret: The name of the secret which contains the secret for GitLab Shell.
global.shell.authToken.key: The key within the secret, which contains the secret content.
A complete example configuration, with two external services (external-gitaly.yml):
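A minimal sketch of such a file, assuming two external Gitaly hosts (the hostnames and secret names are placeholders; GitLab expects one storage named default):

```yaml
global:
  gitaly:
    enabled: false
    external:
      - name: default                    # GitLab requires a storage named 'default'
        hostname: gitaly-1.example.com   # placeholder hostname
        port: 8075
      - name: storage2
        hostname: gitaly-2.example.com   # placeholder hostname
        port: 8075
    authToken:
      secret: external-gitaly-token      # placeholder secret name
      key: token
  shell:
    authToken:
      secret: gitlab-shell-token         # placeholder secret name
      key: secret
```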
If your implementation uses multiple Gitaly nodes external to these charts, you can define multiple hosts as well. The syntax is slightly different, to allow for the required complexity. An example values file is provided, which shows the appropriate set of configuration. The content of this values file is not interpreted correctly via --set arguments, so it should be passed to Helm with the -f / --values flag.
Connecting to external Gitaly over TLS
If your external Gitaly server listens over a TLS port, you can make your GitLab instance communicate with it over TLS. To do this, you have to:

Create a Kubernetes secret containing the certificate of the Gitaly server:

kubectl create secret generic gitlab-gitaly-tls-certificate --from-file=gitaly-tls.crt=<path to certificate>
Add the certificate of the external Gitaly server to the list of custom Certificate Authorities. In the values file, specify the following:
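A minimal sketch of the relevant values, assuming the secret created in the previous step (the hostname is a placeholder, and tlsEnabled on the external entry is assumed to be the per-host TLS toggle):

```yaml
global:
  certificates:
    customCAs:
      - secret: gitlab-gitaly-tls-certificate   # secret created above
  gitaly:
    enabled: false
    external:
      - name: default
        hostname: external-gitaly.example.com   # placeholder hostname
        port: 8076                              # TLS listener port
        tlsEnabled: true
```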
You can choose any valid secret name and key for this, but make sure the key is unique across all the secrets specified in customCAs to avoid collision, since all keys within the secrets will be mounted. You do not need to provide the key for the certificate, as this is the client side.
Configure the GitLab chart with external GitLab Pages
This document explains how to configure this Helm chart with a GitLab Pages instance, configured outside of the cluster, using an Omnibus GitLab package.
Requirements
GitLab 13.7 or later.
External Object Storage, as
recommended for production instances, should be used.
Base64 encoded form of a 32-bytes-long API secret key for Pages to interact with GitLab.
Known limitations
GitLab Pages Access Control
is not supported out of the box.
Configure external GitLab Pages instance
Install GitLab using the Omnibus GitLab package.

Edit the /etc/gitlab/gitlab.rb file and replace its contents with the following snippet. Update the values below to match your configuration:
roles ['pages_role']

# Root domain where Pages will be served.
pages_external_url '<Pages root domain>' # Example: 'http://pages.example.io'

# Information regarding GitLab instance
gitlab_pages['gitlab_server'] = '<GitLab URL>' # Example: 'https://gitlab.example.com'
gitlab_pages['api_secret_key'] = '<Base64 encoded form of API secret key>'
Apply the changes by running sudo gitlab-ctl reconfigure.
Configure the chart
Create a bucket named gitlab-pages in the object storage for storing Pages deployments.

Create a secret gitlab-pages-api-key with the Base64 encoded form of the API secret key as value.

Refer to the following configuration snippet and add the necessary entries to your values file.
global:
  pages:
    path: '/srv/gitlab/shared/pages'
    host: <Pages root domain>
    port: '80' # Set to 443 if Pages is served over HTTPS
    https: false # Set to true if Pages is served over HTTPS
    artifactsServer: true
    objectStore:
      enabled: true
      bucket: 'gitlab-pages'
    apiSecret:
      secret: gitlab-pages-api-key
      key: shared_secret
  extraEnv:
    PAGES_UPDATE_LEGACY_STORAGE: true # Bypass automatic disabling of disk storage
By setting the PAGES_UPDATE_LEGACY_STORAGE environment variable to true, the pages_update_legacy_storage feature flag is enabled, which deploys Pages to local disk. When you migrate to object storage, remember to remove this variable.
Configure the GitLab chart with Mattermost Team Edition
This document describes how to install Mattermost Team Edition Helm chart in proximity with an existing GitLab Helm chart deployment.
As the Mattermost Helm chart is installed in a separate namespace, it is recommended that cert-manager and nginx-ingress be configured to manage cluster-wide Ingress and certificate resources. For additional configuration information, refer to the Mattermost Helm configuration guide.
Prerequisites
A running Kubernetes cluster.
Helm v3
For the Team Edition you can have just one replica running.
Deploy the Mattermost Team Edition Helm chart
You can deploy the Mattermost Team Edition Helm chart using the following command:
Wait for the pods to run. Then, using the Ingress host you specified in the configuration, access your Mattermost server.
For additional configuration information, refer to the Mattermost Helm configuration guide.
If you experience any issues with this, view the Mattermost Helm chart issue repository or the Mattermost Forum.
Deploy GitLab Helm chart
To deploy the GitLab Helm chart, follow the instructions described here.
<your-domain>: your desired domain, such as gitlab.example.com.
<external-ip>: the external IP pointing to your Kubernetes cluster.
<email>: email to register in Let’s Encrypt to retrieve TLS certificates.
Once you’ve deployed the GitLab instance, follow the instructions for the initial login.
Create an OAuth application with GitLab
The next part of the process is setting up the GitLab SSO integration.
To do so, you need to create the OAuth application to allow Mattermost to use GitLab as the authentication provider.
Only the default GitLab SSO is officially supported. “Double SSO”, where GitLab SSO is chained to other SSO solutions, is not supported. It may be possible to connect GitLab SSO with AD, LDAP, SAML, or MFA add-ons in some cases, but because of the special logic required they’re not officially supported and are known not to work in some configurations.
Troubleshooting
If you are following a process other than the one provided and experience authentication and/or deployment issues,
let us know in the Mattermost troubleshooting forum.
Configure the GitLab chart with an external NGINX Ingress Controller
This chart configures Ingress resources for use with the official NGINX Ingress implementation. The NGINX Ingress Controller is deployed as a part of this chart. If you want to reuse an existing NGINX Ingress Controller already available in your cluster, this guide will help.
TCP services in the external Ingress Controller
The GitLab Shell component requires TCP traffic to pass through on port 22 (by default; this can be changed). Ingress does not directly support TCP services, so some additional configuration is necessary. Your NGINX Ingress Controller may have been deployed directly (for example, with a Kubernetes spec file) or through the official Helm chart. The configuration of the TCP pass-through will differ depending on the deployment approach.
Direct deployment
In a direct deployment, the NGINX Ingress Controller handles configuring TCP services with a ConfigMap (see docs here). Assuming your GitLab chart is deployed to the namespace gitlab and your Helm release is named mygitlab, your ConfigMap should be something like this:
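A sketch of such a ConfigMap; the metadata name and namespace must match whatever your NGINX Ingress Controller was started with (its --tcp-services-configmap flag), so treat them as placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Placeholders: use the name/namespace referenced by your controller's
  # --tcp-services-configmap flag.
  name: tcp-services
  namespace: default
data:
  # <port to expose>: "<namespace>/<service>:<service port>"
  "22": "gitlab/mygitlab-gitlab-shell:22"
```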
Finally, make sure that the Service for your NGINX Ingress Controller is exposing port 22 in addition to 80 and 443.
Helm deployment
If you have installed or will install the NGINX Ingress Controller via its Helm chart, then you will need to add a value to the chart via the command line:
--set tcp.22="gitlab/mygitlab-gitlab-shell:22"
or a values.yaml file:

tcp:
  22: "gitlab/mygitlab-gitlab-shell:22"
The format for the value is the same as described above in the “Direct deployment” section.
Customize the GitLab Ingress options
The NGINX Ingress Controller uses an annotation to mark which Ingress Controller will service a particular Ingress (see docs).

You can configure the Ingress class to use with this chart using the global.ingress.class setting. Make sure to set this in your Helm options:

--set global.ingress.class=myingressclass
While not necessarily required, if you’re using an external Ingress Controller, you will likely want to disable the Ingress Controller that is deployed by default with this chart:

--set nginx-ingress.enabled=false
Custom certificate management
The full scope of your TLS options is documented elsewhere. If you are using an external Ingress Controller, you may also be using an external cert-manager instance or managing your certificates in some other custom manner. For the purposes of this discussion, here are the two values that would need to be set to disable the cert-manager chart and tell the GitLab component charts to NOT look for the built-in certificate resources:
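Expressed as values (the same pair can equally be passed as --set flags):

```yaml
# Disable the bundled cert-manager chart and the chart-managed certificates.
certmanager:
  install: false
global:
  ingress:
    configureCertmanager: false
```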
IAM roles for AWS when using the GitLab chart
The default configuration for external object storage in the charts uses access and secret keys. It is also possible to use IAM roles in combination with kube2iam, kiam, or IRSA.
IAM role
The IAM role will need read, write and list permissions on the S3 buckets. You can choose to have a role per bucket or combine them.
Chart configuration
IAM roles can be specified by adding annotations and changing the secrets, as specified below:
Registry
An IAM role can be specified via the annotations key:
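For example, with a kube2iam/kiam-style pod annotation (the role name below is a placeholder):

```yaml
registry:
  annotations:
    iam.amazonaws.com/role: gitlab-registry-role   # placeholder role name
```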
For the object-storage.yaml secret, omit the access and secret key. Because the GitLab Rails codebase uses Fog for S3 storage, the use_iam_profile key should be added for Fog to use the role:
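A sketch of the resulting object-storage.yaml contents (the region is a placeholder for your bucket's region):

```yaml
provider: AWS
# Tell Fog to use the instance/pod IAM role instead of static keys.
use_iam_profile: true
region: us-east-1   # placeholder: set to your bucket's region
```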
Do NOT include endpoint in this configuration. IRSA makes use of STS tokens, which use specialized endpoints. When endpoint is provided, the AWS client will attempt to send an AssumeRoleWithWebIdentity message to this endpoint and will fail.
Backups
The Toolbox configuration allows for annotations to be set to upload backups to S3:
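For example, with a kube2iam/kiam-style annotation on the Toolbox pod (the role name is a placeholder):

```yaml
gitlab:
  toolbox:
    annotations:
      iam.amazonaws.com/role: gitlab-backup-role   # placeholder role name
```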
The s3cmd.config secret is to be created without the access and secret keys:

[default]
bucket_location = us-east-1
Using IAM roles for service accounts
If GitLab is running in an AWS EKS cluster (version 1.14 or greater), you can
use an AWS IAM role to authenticate to the S3 object storage without the need
of generating or storing access tokens. More information regarding using
IAM roles in an EKS cluster can be found in the
Introducing fine-grained IAM roles for service accounts
documentation from AWS.
Appropriate IRSA annotations for roles can be applied to ServiceAccounts throughout
this Helm chart in one of two ways:
ServiceAccounts that have been pre-created as described in the above AWS documentation.
This ensures the proper annotations on the ServiceAccount and the linked OIDC provider.
Chart-generated ServiceAccounts with annotations defined. We allow for the configuration
of annotations on ServiceAccounts both globally and on a per-chart basis.
To use IAM roles for ServiceAccounts in EKS clusters, the specific annotation must be eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<IAM_ROLE_NAME>.
To enable IAM roles for ServiceAccounts for GitLab running in an AWS EKS cluster, follow the instructions on
IAM roles for service accounts.
Using the backup-utility as specified in the backup documentation does not properly copy the backup file to the S3 bucket. The backup-utility uses s3cmd to perform the copy of the backup file, and s3cmd has a known issue of not supporting OIDC authentication. This has been resolved in s3cmd 2.2.0, which has been merged into GitLab 14.4.
Workaround to perform backups before GitLab 14.4
If you are on a version earlier than 14.4, run the following command in your task-runner pod to sideload the latest version of s3cmd. You can then run backup-utility as usual.

pip install --upgrade s3cmd && export PATH="$(python3 -m site --user-base)/bin:${PATH}"
Using pre-created service accounts
Set the following options when the GitLab chart is deployed. It is important
to note that the ServiceAccount is enabled but not created.
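A sketch of those options (the ServiceAccount name is a placeholder for the one you pre-created):

```yaml
global:
  serviceAccount:
    enabled: true
    create: false            # do not create; use the pre-created account
    name: gitlab-sa          # placeholder: your pre-created ServiceAccount
```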
The eks.amazonaws.com/role-arn annotation can be applied to all ServiceAccounts created by GitLab owned charts by configuring global.serviceAccount.annotations.
Annotations can also be added on a per-ServiceAccount basis, by adding the matching definition for each chart. These can be the same role, or individual roles.
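A sketch showing both styles; the ARNs are placeholders, and the per-chart serviceAccount.annotations key is assumed to follow the same shape as the global one:

```yaml
global:
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<IAM_ROLE_NAME>
# Optionally override for a single chart, e.g. the registry:
registry:
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<REGISTRY_ROLE_NAME>
```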
You can test if the IAM role is correctly set up and that GitLab is accessing S3 using the IAM role by logging into the toolbox pod and using awscli (replace <namespace> with the namespace where GitLab is installed):
kubectl exec -ti $(kubectl get pod -n <namespace> -l app=toolbox -o jsonpath='{.items[0].metadata.name}') -n <namespace> -- bash
With the awscli package installed, verify that you are able to communicate with the AWS API:
aws sts get-caller-identity
A normal response showing the temporary user ID, account number, and IAM ARN (this will not be the IAM ARN for the role used to access S3) will be returned if the connection to the AWS API was successful. An unsuccessful connection will require more troubleshooting to determine why the toolbox pod is not able to communicate with the AWS APIs.
If connecting to the AWS APIs is successful, then the following command will assume the IAM role that was created and verify that an STS token can be retrieved for accessing S3. The AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE variables are defined in the environment when the IAM role annotation has been added to the pod and do not require that they be defined:

aws sts assume-role-with-web-identity --role-arn $AWS_ROLE_ARN --role-session-name test --web-identity-token file://$AWS_WEB_IDENTITY_TOKEN_FILE
If the IAM role could not be assumed, then an error message similar to the following will be displayed:
An error occurred (AccessDenied) when calling the AssumeRoleWithWebIdentity operation: Not authorized to perform sts:AssumeRoleWithWebIdentity
Otherwise, the STS credentials and IAM role information will be displayed.
WebIdentityErr: failed to retrieve credentials
If you see this error in the logs, it suggests that endpoint has been configured in your object-storage.yaml secret. Remove this setting and restart the webservice and sidekiq pods.