
Setup standalone Redis

The instructions here make use of the Omnibus GitLab package for Ubuntu. This package provides versions of the services that are guaranteed to be compatible with the charts’ services.

Create VM with Omnibus GitLab

Create a VM on your provider of choice, or locally. This was tested with VirtualBox, KVM, and bhyve.
Ensure that the instance is reachable from the cluster.

Install Ubuntu Server onto the VM that you have created. Ensure that openssh-server is installed, and that all packages are up to date.
Configure networking and a hostname. Make note of the hostname/IP, and ensure it is both resolvable and reachable from your Kubernetes cluster.
Be sure firewall policies are in place to allow traffic.
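For example, with UFW on Ubuntu (an assumption about your firewall tooling; adjust the source range to match your cluster's network), the Redis port can be opened like so:

# Allow the Kubernetes cluster network to reach Redis on this host.
sudo ufw allow proto tcp from 10.0.0.0/8 to any port 6379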

Follow the installation instructions for Omnibus GitLab. When you perform the package installation, do not provide the EXTERNAL_URL= value. We do not want automatic configuration to occur, as we’ll provide a very specific configuration in the next step.
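As a sketch, on Ubuntu the installation (following the official Omnibus GitLab repository setup, which should be checked for current instructions) looks like the following. Note the absence of EXTERNAL_URL=:

# Add the GitLab package repository, then install without EXTERNAL_URL=.
curl -fsSL https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh | sudo bash
sudo apt-get install gitlab-ee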

Configure Omnibus GitLab

Create a minimal gitlab.rb file to be placed at /etc/gitlab/gitlab.rb. Be very explicit about what is enabled on this node, using the contents below.


note
This example is not intended to provide Redis for scaling.


  • REDIS_PASSWORD should be replaced with the value in the gitlab-redis secret.

# Listen on all addresses
redis['bind'] = '0.0.0.0'
# Set the default port; this must be set.
redis['port'] = 6379
# Set password, as in the secret `gitlab-redis` populated in Kubernetes
redis['password'] = 'REDIS_PASSWORD'

## Disable everything else
gitlab_rails['enable'] = false
sidekiq['enable'] = false
puma['enable'] = false
registry['enable'] = false
gitaly['enable'] = false
gitlab_workhorse['enable'] = false
nginx['enable'] = false
prometheus_monitoring['enable'] = false
postgresql['enable'] = false

After creating gitlab.rb, reconfigure the package with gitlab-ctl reconfigure.
After the task completes, check the running processes with gitlab-ctl status.
The output should appear similar to:

# gitlab-ctl status
run: logrotate: (pid 4856) 1859s; run: log: (pid 31262) 77460s
run: redis: (pid 30562) 77637s; run: log: (pid 30561) 77637s
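To confirm the instance is reachable and accepts the configured password, you can test from any host that can reach it (assuming redis-cli is installed; the hostname shown is a placeholder for your VM's hostname/IP):

redis-cli -h gitlab-redis.example -p 6379 -a 'REDIS_PASSWORD' ping
# Expected reply: PONG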

Advanced configuration


  • Bringing your own custom Docker images
  • Using an external database
  • Using an external Gitaly
  • Using an external GitLab Pages instance
  • Using an external Mattermost
  • Using your own NGINX Ingress Controller
  • Using an external object storage
  • Using an external Redis
  • Using FIPS-compliant images
  • Making use of GitLab Geo functionality
  • Enabling internal TLS between services
  • After install, managing Persistent Volumes
  • Using Red Hat UBI-based images

Use TLS between components of the GitLab chart

The GitLab charts can use transport-layer security (TLS) between the various
components. This requires you to provide certificates for the services
you want to enable, and configure those services to make use of those
certificates and the certificate authority (CA) that signed them.

Preparation

Each chart has documentation regarding enabling TLS for that service, and the
various settings required for an appropriate configuration.

Generating certificates for internal use


note
GitLab does not purport to provide high-grade PKI infrastructure, or certificate
authorities.

For the purposes of this documentation, we provide a Proof of Concept script
below, which makes use of Cloudflare’s CFSSL
to produce a self-signed Certificate Authority, and a wildcard certificate that can be
used for all services.

This script will:


  • Generate a CA key pair.
  • Sign a certificate meant to service all GitLab component service endpoints.
  • Create two Kubernetes Secret objects:

    • A secret of type kubernetes.io/tls, which has the server certificate and key pair.
    • A secret of type Opaque, which only contains the public certificate of the CA as ca.crt,
      as needed by NGINX Ingress.

Prerequisites:


  • Bash, or compatible shell.

  • cfssl and cfssljson are available to your shell, and within PATH.

  • kubectl is available, and configured to point to your Kubernetes cluster
    where GitLab will later be installed.

    • Be sure to have created the namespace you wish to have these certificates
      installed into before operating the script.

You may copy the content of this script to your computer, and make the resulting
file executable. We suggest naming it poc-gitlab-internal-tls.sh.

#!/bin/bash
set -e
#############
## make and change into a working directory
pushd $(mktemp -d)

#############
## setup environment
NAMESPACE=${NAMESPACE:-default}
RELEASE=${RELEASE:-gitlab}
## stop if variable is unset beyond this point
set -u
## known expected patterns for SAN
CERT_SANS="*.${NAMESPACE}.svc,${RELEASE}-metrics.${NAMESPACE}.svc,*.${RELEASE}-gitaly.${NAMESPACE}.svc"

#############
## generate default CA config
cfssl print-defaults config > ca-config.json
## generate a CA
echo '{"CN":"'${RELEASE}.${NAMESPACE}.internal.ca'","key":{"algo":"ecdsa","size":256}}' | \
cfssl gencert -initca - | \
cfssljson -bare ca -
## generate certificate
echo '{"CN":"'${RELEASE}.${NAMESPACE}.internal'","key":{"algo":"ecdsa","size":256}}' | \
cfssl gencert -config=ca-config.json -ca=ca.pem -ca-key=ca-key.pem -profile www -hostname="${CERT_SANS}" - |\
cfssljson -bare ${RELEASE}-services

#############
## load certificates into K8s
kubectl -n ${NAMESPACE} create secret tls ${RELEASE}-internal-tls \
  --cert=${RELEASE}-services.pem \
  --key=${RELEASE}-services-key.pem
kubectl -n ${NAMESPACE} create secret generic ${RELEASE}-internal-tls-ca \
  --from-file=ca.crt=ca.pem

note
This script does not preserve the CA’s private key. It is a Proof-of-Concept
helper, and is not intended for production use.

The script expects two environment variables to be set:



  1. NAMESPACE: The Kubernetes Namespace you will later install GitLab to.
     This defaults to default, as with kubectl.

  2. RELEASE: The Helm Release name you will later use to install GitLab.
     This defaults to gitlab.

To operate this script, you may export the two variables, or prepend the
script name with their values.

export NAMESPACE=testing
export RELEASE=gitlab

./poc-gitlab-internal-tls.sh

After the script has run, you will find the two secrets created, and the
temporary working directory contains all certificates and their keys.

$ pwd
/tmp/tmp.swyMgf9mDs
$ kubectl -n ${NAMESPACE} get secret | grep internal-tls
testing-internal-tls      kubernetes.io/tls   2      11s
testing-internal-tls-ca   Opaque              1      10s
$ ls -1
ca-config.json
ca.csr
ca-key.pem
ca.pem
testing-services.csr
testing-services-key.pem
testing-services.pem

Required certificate CN and SANs

The various GitLab components speak to each other over their Service’s DNS names.
The Ingress objects generated by the GitLab chart must provide NGINX the
name to verify, when tls.verify: true (which is the default). As a result
of this, each GitLab component should receive a certificate with a SAN including
either their Service’s name, or a wildcard acceptable to the Kubernetes Service
DNS entry.


  • service-name.namespace.svc
  • *.namespace.svc

Failure to ensure these SANs within certificates will result in a non-functional
instance, and logs that can be quite cryptic, referring to “connection failure”
or “SSL verification failed”.
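One way to inspect which SANs a certificate actually carries (assuming openssl is available, and using the service certificate produced by the script above with RELEASE=gitlab):

openssl x509 -in gitlab-services.pem -noout -text | grep -A1 'Subject Alternative Name'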

You can make use of helm template to retrieve a full list of all
Service object names, if needed. If your GitLab has been deployed without TLS,
you can query Kubernetes for those names:

kubectl -n ${NAMESPACE} get service -lrelease=${RELEASE}

Configuration

Example configurations can be found in examples/internal-tls.

For the purposes of this documentation, we have provided shared-cert-values.yaml,
which configures the GitLab components to consume the certificates generated with
the script above in Generating certificates for internal use.
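As a sketch, assuming you have saved a copy of that values file locally, it can be layered into an installation alongside your other settings:

helm upgrade --install gitlab gitlab/gitlab \
  -f shared-cert-values.yaml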

Key items to configure:


  1. Global Custom Certificate Authorities.
  2. Per-component TLS for the service listeners.
    (See each chart’s documentation, under charts/)

This process is greatly simplified by making use of YAML’s native anchor
functionality. A truncated snippet of shared-cert-values.yaml shows this:

.internal-ca: &internal-ca gitlab-internal-tls-ca
.internal-tls: &internal-tls gitlab-internal-tls

global:
  certificates:
    customCAs:
      - secret: *internal-ca
  workhorse:
    tls:
      enabled: true
gitlab:
  webservice:
    tls:
      secretName: *internal-tls
    workhorse:
      tls:
        verify: true # default
        secretName: *internal-tls
        caSecretName: *internal-ca

Result

When all components have been configured to provide TLS on their service
listeners, all communication between GitLab components will traverse the
network with TLS security, including connections from NGINX Ingress to
each GitLab component.

NGINX Ingress will terminate any inbound TLS, determine the appropriate
services to pass the traffic to, and then form a new TLS connection to
the GitLab component. When configured as shown here, it will also verify
the certificates served by the GitLab components against the CA.

This can be verified by connecting to the Toolbox pod, and querying the
various component Services. One such example, connecting to the Webservice
Pod’s primary service port that NGINX Ingress uses:

$ kubectl -n ${NAMESPACE} get pod -lapp=toolbox,release=${RELEASE}
NAME                              READY   STATUS    RESTARTS   AGE
gitlab-toolbox-5c447bfdb4-pfmpc   1/1     Running   0          65m
$ kubectl exec -ti gitlab-toolbox-5c447bfdb4-pfmpc -c toolbox -- \
  curl -Iv "https://gitlab-webservice-default.testing.svc:8181"

The output should be similar to the following example:

*   Trying 10.60.0.237:8181...
* Connected to gitlab-webservice-default.testing.svc (10.60.0.237) port 8181 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=gitlab.testing.internal
* start date: Jul 18 19:15:00 2022 GMT
* expire date: Jul 18 19:15:00 2023 GMT
* subjectAltName: host "gitlab-webservice-default.testing.svc" matched cert's "*.testing.svc"
* issuer: CN=gitlab.testing.internal.ca
* SSL certificate verify ok.
> HEAD / HTTP/1.1
> Host: gitlab-webservice-default.testing.svc:8181

Troubleshooting

If your GitLab instance appears unreachable from the browser, rendering an
HTTP 503 error, NGINX Ingress is likely having a problem verifying the
certificates of the GitLab components.

You may work around this by temporarily setting
gitlab.webservice.workhorse.tls.verify to false.
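For example, assuming an existing release named gitlab, the temporary workaround might be applied as:

helm upgrade gitlab gitlab/gitlab \
  --reuse-values \
  --set gitlab.webservice.workhorse.tls.verify=false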

You can connect to the NGINX Ingress controller, and its nginx.conf will
contain a message regarding problems verifying the certificate(s).

Example content, where the Secret is not reachable:

# Location denied. Reason: "error obtaining certificate: local SSL certificate
testing/gitlab-internal-tls-ca was not found"
return 503;

Common problems that cause this:


  • CA certificate is not in a key named ca.crt within the Secret.
  • The Secret was not properly supplied, or may not exist within the Namespace.

Configure the GitLab chart with persistent volumes

Some of the included services require persistent storage, configured through
Persistent Volumes that specify which disks your cluster has access to.
Documentation on the storage configuration necessary to install this chart can be found in our
Storage Guide.

Storage changes after installation need to be manually handled by your cluster
administrators. Automated management of these volumes after installation is not
handled by the GitLab chart.

Examples of changes not automatically managed after initial installation
include:


  • Mounting different volumes to the Pods
  • Changing the effective accessModes or Storage Class
  • Expanding the storage size of your volume¹

¹ In Kubernetes 1.11 and later, expanding the storage size of your volume is supported
if you have allowVolumeExpansion configured to true in your Storage Class.
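One way to check whether your Storage Class allows expansion (assuming a Storage Class named standard, as in the examples below):

kubectl get storageclass standard -o jsonpath='{.allowVolumeExpansion}'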

Automating these changes is complicated due to:


  1. Kubernetes does not allow changes to most fields in an existing PersistentVolumeClaim
  2. Unless manually configured, the PVC is the only reference to dynamically provisioned PersistentVolumes

  3. Delete is the default reclaimPolicy for dynamically provisioned PersistentVolumes

This means in order to make changes, we need to delete the PersistentVolumeClaim
and create a new one with our changes. But due to the default reclaimPolicy,
deleting the PersistentVolumeClaim may delete the PersistentVolumes
and underlying disk. And unless configured with appropriate volumeNames and/or
labelSelectors, the chart doesn’t know which volume to attach to.

We will continue to look into making this process easier, but for now a manual
process needs to be followed to make changes to your storage.

Locate the GitLab Volumes

Find the volumes/claims that are being used:

kubectl --namespace <namespace> get PersistentVolumeClaims -l release=<chart release name> -ojsonpath='{range .items[*]}{.spec.volumeName}{"\t"}{.metadata.labels.app}{"\n"}{end}'


  • <namespace> should be replaced with the namespace where you installed the GitLab chart.

  • <chart release name> should be replaced with the name you used to install the GitLab chart.

The command prints a list of the volume names, followed by the name of the
service they are for.

For example:

$ kubectl --namespace helm-charts-win get PersistentVolumeClaims -l release=review-update-app-h8qogp -ojsonpath='{range .items[*]}{.spec.volumeName}{"\t"}{.metadata.labels.app}{"\n"}{end}'
pvc-6247502b-8c2d-11e8-8267-42010a9a0113 gitaly
pvc-61bbc05e-8c2d-11e8-8267-42010a9a0113 minio
pvc-61bc6069-8c2d-11e8-8267-42010a9a0113 postgresql
pvc-61bcd6d2-8c2d-11e8-8267-42010a9a0113 prometheus
pvc-61bdf136-8c2d-11e8-8267-42010a9a0113 redis

Before making storage changes

The person making the changes needs to have administrator access to the cluster, and appropriate access to the storage
solutions being used. Often the changes will first need to be applied in the storage solution, then the results need to
be updated in Kubernetes.

Before making changes, you should ensure your PersistentVolumes are using
the Retain reclaimPolicy so they don’t get removed while you are
making changes.
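A quick way to review the current policy on every PersistentVolume (a sketch; PersistentVolumes are cluster-scoped, so no namespace is needed):

kubectl get PersistentVolume \
  -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy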

First, find the volumes/claims that are being used.

Next, edit each volume and change the value of persistentVolumeReclaimPolicy
under the spec field to be Retain rather than Delete.

For example:

kubectl --namespace helm-charts-win edit PersistentVolume pvc-6247502b-8c2d-11e8-8267-42010a9a0113

Editing Output:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: gce-pd-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
  creationTimestamp: 2018-07-20T14:58:43Z
  labels:
    failure-domain.beta.kubernetes.io/region: europe-west2
    failure-domain.beta.kubernetes.io/zone: europe-west2-b
  name: pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  resourceVersion: "48362431"
  selfLink: /api/v1/persistentvolumes/pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  uid: 650bd649-8c2d-11e8-8267-42010a9a0113
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 50Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: repo-data-review-update-app-h8qogp-gitaly-0
    namespace: helm-charts-win
    resourceVersion: "48362307"
    uid: 6247502b-8c2d-11e8-8267-42010a9a0113
  gcePersistentDisk:
    fsType: ext4
    pdName: gke-cloud-native-81a17-pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  # Changed the following line
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
status:
  phase: Bound
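If you prefer a non-interactive change, the same edit can be applied with kubectl patch, shown here against the example volume name above:

kubectl patch PersistentVolume pvc-6247502b-8c2d-11e8-8267-42010a9a0113 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'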

Making storage changes

First, make the desired changes to the disk outside of the cluster. (Resize the
disk in GKE, or create a new disk from a snapshot or clone, etc).

How you do this, and whether or not it can be done live, without downtime, is
dependent on the storage solutions you are using, and can’t be covered by this
document.

Next, evaluate whether you need these changes to be reflected in the Kubernetes
objects. For example: with expanding the disk storage size, the storage size
settings in the PersistentVolumeClaim will only be used when a new volume
resource is requested. So you would only need to increase the values in the
PersistentVolumeClaim if you intend to scale up more disks (for use in
additional Gitaly pods).

If you do need to have the changes reflected in Kubernetes, be sure that you’ve
updated your reclaim policy on the volumes as described in the Before making storage changes
section.

The paths we have documented for storage changes are:


  • Changes to an existing Volume
  • Switching to a different Volume

Changes to an existing Volume

First locate the volume name you are changing.

Use kubectl edit to make the desired configuration changes to the volume. (These changes
should only be updates to reflect the real state of the attached disk)

For example:

kubectl --namespace helm-charts-win edit PersistentVolume pvc-6247502b-8c2d-11e8-8267-42010a9a0113

Editing Output:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: gce-pd-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
  creationTimestamp: 2018-07-20T14:58:43Z
  labels:
    failure-domain.beta.kubernetes.io/region: europe-west2
    failure-domain.beta.kubernetes.io/zone: europe-west2-b
  name: pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  resourceVersion: "48362431"
  selfLink: /api/v1/persistentvolumes/pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  uid: 650bd649-8c2d-11e8-8267-42010a9a0113
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    # Updated the storage size
    storage: 100Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: repo-data-review-update-app-h8qogp-gitaly-0
    namespace: helm-charts-win
    resourceVersion: "48362307"
    uid: 6247502b-8c2d-11e8-8267-42010a9a0113
  gcePersistentDisk:
    fsType: ext4
    pdName: gke-cloud-native-81a17-pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
status:
  phase: Bound

Now that the changes have been reflected in the volume, we need to update
the claim.

Follow the instructions in the Make changes to the PersistentVolumeClaim section.

Update the volume to bind to the claim

In a separate terminal, start watching to see when the claim has its status change to Bound,
and then move on to the next step to make the volume available for use in the new claim.

kubectl --namespace <namespace> get --watch PersistentVolumeClaim <claim name>

Edit the volume to make it available to the new claim. Remove the .spec.claimRef section.

kubectl --namespace <namespace> edit PersistentVolume <volume name>

Editing Output:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: gce-pd-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
  creationTimestamp: 2018-07-20T14:58:43Z
  labels:
    failure-domain.beta.kubernetes.io/region: europe-west2
    failure-domain.beta.kubernetes.io/zone: europe-west2-b
  name: pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  resourceVersion: "48362431"
  selfLink: /api/v1/persistentvolumes/pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  uid: 650bd649-8c2d-11e8-8267-42010a9a0113
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 100Gi
  gcePersistentDisk:
    fsType: ext4
    pdName: gke-cloud-native-81a17-pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
status:
  phase: Released

Shortly after making the change to the Volume, the terminal watching the claim status should show Bound.

Finally, apply the changes to the GitLab chart.

Switching to a different Volume

If you want to switch to a new volume, using a disk that has a copy of the
appropriate data from the old volume, then first you need to create the new
PersistentVolume in Kubernetes.

In order to create a PersistentVolume for your disk, you will need to
locate the driver-specific documentation
for your storage type.

There are a couple of things to keep in mind when following the driver documentation:


  • You need to use the driver to create a Persistent Volume, not a Pod object with a volume as shown in a lot of the documentation.
  • You do not want to create a PersistentVolumeClaim for the volume; we will be editing the existing claim instead.

The driver documentation often includes examples for using the driver in a Pod, for example:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4

What you actually want is to create a PersistentVolume, like so:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  capacity:
    storage: 400Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4

You normally create a local YAML file with the PersistentVolume information,
then issue a create command to Kubernetes to create the object using the file.

kubectl --namespace <your namespace> create -f <local-pv-file>.yaml

Once your volume is created, you can move on to Make changes to the PersistentVolumeClaim.

Make changes to the PersistentVolumeClaim

Find the PersistentVolumeClaim you want to change.

kubectl --namespace <namespace> get PersistentVolumeClaims -l release=<chart release name> -ojsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.app}{"\n"}{end}'


  • <namespace> should be replaced with the namespace where you installed the GitLab chart.

  • <chart release name> should be replaced with the name you used to install the GitLab chart.

The command will print a list of the PersistentVolumeClaim names, followed by the name of the
service they are for.

Then save a copy of the claim to your local filesystem:

kubectl --namespace <namespace> get PersistentVolumeClaim <claim name> -o yaml > <claim name>.bak.yaml

Example Output:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
  creationTimestamp: 2018-07-20T14:58:38Z
  labels:
    app: gitaly
    release: review-update-app-h8qogp
  name: repo-data-review-update-app-h8qogp-gitaly-0
  namespace: helm-charts-win
  resourceVersion: "48362433"
  selfLink: /api/v1/namespaces/helm-charts-win/persistentvolumeclaims/repo-data-review-update-app-h8qogp-gitaly-0
  uid: 6247502b-8c2d-11e8-8267-42010a9a0113
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: standard
  volumeName: pvc-6247502b-8c2d-11e8-8267-42010a9a0113
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 50Gi
  phase: Bound

Create a new YAML file for a new PVC object. Have it use the same metadata.name, metadata.labels, metadata.namespace, and spec fields (with your updates applied), and drop the other settings:

Example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: gitaly
    release: review-update-app-h8qogp
  name: repo-data-review-update-app-h8qogp-gitaly-0
  namespace: helm-charts-win
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      # This is our updated field
      storage: 100Gi
  storageClassName: standard
  volumeName: pvc-6247502b-8c2d-11e8-8267-42010a9a0113

Now delete the old claim:

kubectl --namespace <namespace> delete PersistentVolumeClaim <claim name>

Create the new claim:

kubectl --namespace <namespace> create -f <new claim yaml file>

If you are binding to the same PersistentVolume that was previously bound to
the claim, then proceed to update the volume to bind to the claim.

Otherwise, if you have bound the claim to a new volume, move on to apply the changes to the GitLab chart.

Apply the changes to the GitLab chart

After making changes to the PersistentVolumes and PersistentVolumeClaims,
you will also want to issue a Helm update with the changes applied to the
chart settings.

See the installation storage guide
for the options.


Note: If you made changes to the Gitaly volume claim, you will need to delete the
Gitaly StatefulSet before you will be able to issue a Helm update. This is
because the StatefulSet’s Volume Template is immutable, and cannot be changed.

You can delete the StatefulSet without deleting the Gitaly Pods:
kubectl --namespace <namespace> delete --cascade=orphan StatefulSet <release-name>-gitaly
(on kubectl versions older than 1.20, use --cascade=false instead).
The Helm update command will recreate the StatefulSet, which will adopt and
update the Gitaly pods.

Update the chart, and include the updated configuration:

Example:

helm upgrade --install review-update-app-h8qogp gitlab/gitlab \
  --set gitlab.gitaly.persistence.size=100Gi \
  <your other config settings>

Configure the GitLab chart with an external Redis

This document describes how to configure this Helm chart with an external Redis service.

If you don’t have Redis configured, consider using our Omnibus GitLab package
for on-premises or VM deployments.

Configure the chart

Disable the redis chart and the Redis service it provides, and point the other services to the external service.

You must set the following parameters:



  • redis.install: Set to false to disable including the Redis chart.

  • global.redis.host: Set to the hostname of the external Redis; this can be a domain or an IP address.

  • global.redis.password.enabled: Set to false if the external Redis does not require a password.

  • global.redis.password.secret: The name of the secret which contains the token for authentication.

  • global.redis.password.key: The key in the secret, which contains the token content.

Items below can be further customized if you are not using the defaults:



  • global.redis.port: The port the database is available on, defaults to 6379.

For example, pass these values via Helm’s --set flag while deploying:

helm install gitlab gitlab/gitlab \
  --set redis.install=false \
  --set global.redis.host=redis.example \
  --set global.redis.password.secret=gitlab-redis \
  --set global.redis.password.key=redis-password

If you are connecting to a Redis HA cluster that has Sentinel servers
running, the global.redis.host attribute needs to be set to the name of
the Redis instance group (such as mymaster or resque ), as
specified in the sentinel.conf . Sentinel servers can be referenced
using the global.redis.sentinels[0].host and global.redis.sentinels[0].port
values for the --set flag. The index is zero-based.
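For illustration, a hypothetical Sentinel setup (the instance group mymaster and the Sentinel hostnames here are placeholders) might be passed as:

helm install gitlab gitlab/gitlab \
  --set redis.install=false \
  --set global.redis.host=mymaster \
  --set global.redis.sentinels[0].host=sentinel1.example \
  --set global.redis.sentinels[0].port=26379 \
  --set global.redis.sentinels[1].host=sentinel2.example \
  --set global.redis.sentinels[1].port=26379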

Use multiple Redis instances

GitLab supports splitting several of the resource-intensive
Redis operations across multiple Redis instances. This chart supports distributing
those persistence classes to other Redis instances.

More detailed information on configuring the chart for using multiple Redis
instances can be found in the globals
documentation.

Specify secure Redis scheme (SSL)

To connect to Redis using SSL, use the rediss (note the double s) scheme parameter:

--set global.redis.scheme=rediss

Configure the GitLab chart with FIPS-compliant images

GitLab offers FIPS-compliant
versions of its images, allowing you to run GitLab on FIPS-enabled clusters.

Sample values

We provide an example of GitLab chart values in
examples/fips/values.yaml,
which can help you build a FIPS-compatible GitLab deployment.
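As a sketch, that example file can be supplied at install time alongside your other settings:

helm upgrade --install gitlab gitlab/gitlab \
  -f examples/fips/values.yaml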

Note the comment under the nginx-ingress.controller key that provides the
relevant configuration to use a FIPS-compatible NGINX Ingress Controller image. This image is
maintained in our NGINX Ingress Controller fork.
