# nango-helm-charts

## Nango

Nango is a single API for all your integrations. It provides OAuth handling, webhook management, and unified API access to hundreds of third-party services.

## TL;DR

```console
helm repo add nangohq https://nangohq.github.io/nango-helm-charts
helm install nango nangohq/nango
```

## Introduction

This chart bootstraps a Nango deployment on a Kubernetes cluster using the Helm package manager.

## Prerequisites

## Installing the Chart

### Add the Helm Repository

```console
helm repo add nangohq https://nangohq.github.io/nango-helm-charts
helm repo update
```

### Install the Chart

```console
helm install nango nangohq/nango
```

The command deploys Nango on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.
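Parameters can also be supplied through a custom values file. A minimal sketch, assuming a hypothetical hostname (parameter names are taken from the Parameters tables):

```yaml
# my-values.yaml -- example overrides for the default installation
server:
  replicaCount: 2
  ingress:
    hostname: nango.example.com   # placeholder, replace with your own domain
```

Then install with `helm install nango nangohq/nango -f my-values.yaml`.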

## Uninstall the Chart

```console
helm uninstall nango
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Parameters

### Common parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `nameOverride` | String to partially override `common.names.name` | `""` |
| `fullnameOverride` | String to fully override `common.names.fullname` | `""` |
| `namespaceOverride` | String to fully override `common.names.namespace` | `nango` |
| `commonLabels` | Labels to add to all deployed objects | `{}` |
| `commonAnnotations` | Annotations to add to all deployed objects | `{}` |
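For example, common labels and annotations propagate to every object the chart renders. A sketch with hypothetical label and annotation keys:

```yaml
# Example: apply a release name and org-specific metadata to all objects
fullnameOverride: "nango"
commonLabels:
  team: platform            # hypothetical label
commonAnnotations:
  example.com/owner: infra  # hypothetical annotation key
```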

### Global parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `global.image.registry` | Global image registry | `nangohq` |
| `global.image.repository` | Global image repository | `nango` |
| `global.image.digest` | Global image digest in the form `sha256:aa…`. If set, this parameter overrides the image tag (immutable tags are recommended) | `""` |
| `global.image.pullPolicy` | Global image pull policy | `IfNotPresent` |
| `global.image.pullSecrets` | Global image pull secrets | `[]` |
| `global.defaultStorageClass` | Global default StorageClass for Persistent Volume(s) | `""` |
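Pinning by digest gives immutable deployments, since a digest identifies exactly one image build. A sketch (the digest shown is a placeholder, not a real Nango image digest):

```yaml
# Example: pin the image by digest instead of tag
global:
  image:
    registry: nangohq
    repository: nango
    # Placeholder digest -- substitute the digest of the image you have verified
    digest: "sha256:0000000000000000000000000000000000000000000000000000000000000000"
    pullPolicy: IfNotPresent
```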

### Server parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `server.name` | server name | `server` |
| `server.image.registry` | server image registry | `nangohq` |
| `server.image.repository` | server image repository | `nango` |
| `server.image.digest` | server image digest in the form `sha256:aa…`. If set, this parameter overrides the image tag (immutable tags are recommended) | `""` |
| `server.image.pullPolicy` | server image pull policy | `""` |
| `server.image.pullSecrets` | server image pull secrets | `[]` |
| `server.replicaCount` | Number of server replicas to deploy | `1` |
| `server.command` | Override default server container command (useful when using custom images) | `[]` |
| `server.args` | Override default server container args (useful when using custom images) | `["node","packages/server/dist/server.js"]` |
| `server.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `server.serviceAccount.name` | The name of the ServiceAccount to use | `""` |
| `server.serviceAccount.annotations` | Additional ServiceAccount annotations (evaluated as a template) | `{}` |
| `server.serviceAccount.automountServiceAccountToken` | Automount service account token for the server service account | `true` |
| `server.networkPolicy.enabled` | Specifies whether a NetworkPolicy should be created | `true` |
| `server.networkPolicy.allowExternal` | Don't require server label for connections | `true` |
| `server.networkPolicy.allowExternalEgress` | Allow the pod to access any range of ports and all destinations | `true` |
| `server.networkPolicy.addExternalClientAccess` | Allow access from pods with `client` label set to `"true"`. Ignored if `server.networkPolicy.allowExternal` is true. | `true` |
| `server.networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `server.networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy (ignored if `allowExternalEgress=true`) | `[]` |
| `server.networkPolicy.ingressPodMatchLabels` | Labels to match to allow traffic from other pods. Ignored if `server.networkPolicy.allowExternal` is true. | `{}` |
| `server.networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces. Ignored if `server.networkPolicy.allowExternal` is true. | `{}` |
| `server.networkPolicy.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces. Ignored if `server.networkPolicy.allowExternal` is true. | `{}` |
| `server.ingress.enabled` | Enable ingress record generation for server | `true` |
| `server.ingress.pathType` | Ingress path type | `Prefix` |
| `server.ingress.apiVersion` | Force Ingress API version (automatically detected if not set) | `""` |
| `server.ingress.hostname` | Default host for the ingress record | `example-app.nango.dev` |
| `server.ingress.ingressClassName` | IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) | `""` |
| `server.ingress.path` | Default path for the ingress record | `/` |
| `server.ingress.annotations` | Additional annotations for the Ingress resource. To enable certificate autogeneration, place your cert-manager annotations here. | `{}` |
| `server.ingress.tls` | Enable TLS configuration for the host defined at the `server.ingress.hostname` parameter | `false` |
| `server.ingress.selfSigned` | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | `false` |
| `server.ingress.extraHosts` | An array with additional hostname(s) to be covered by the ingress record | `[]` |
| `server.ingress.extraPaths` | An array with additional arbitrary paths that may need to be added to the ingress under the main host | `[]` |
| `server.ingress.extraTls` | TLS configuration for additional hostname(s) to be covered by this ingress record | `[]` |
| `server.ingress.secrets` | Custom TLS certificates as secrets | `[]` |
| `server.ingress.extraRules` | Additional rules to be covered by this ingress record | `[]` |
| `server.service.type` | server service type | `LoadBalancer` |
| `server.service.ports.http` | server service HTTP port | `80` |
| `server.service.ports.https` | server service HTTPS port | `443` |
| `server.service.nodePorts.http` | Node port for HTTP | `""` |
| `server.service.nodePorts.https` | Node port for HTTPS | `""` |
| `server.service.clusterIP` | server service Cluster IP | `""` |
| `server.service.loadBalancerIP` | server service Load Balancer IP | `""` |
| `server.service.loadBalancerSourceRanges` | server service Load Balancer sources | `[]` |
| `server.service.externalTrafficPolicy` | server service external traffic policy | `Cluster` |
| `server.service.annotations` | Additional custom annotations for server service | `{}` |
| `server.service.extraPorts` | Extra ports to expose in server service (normally used with the `sidecars` value) | `[]` |
| `server.service.sessionAffinity` | Control where client requests go: to the same pod or round-robin | `None` |
| `server.service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
| `server.containerPorts.http` | server HTTP container port | `8080` |
| `server.containerPorts.https` | server HTTPS container port | `8080` |
| `server.extraContainerPorts` | Optionally specify extra list of additional ports for server containers | `[]` |
| `server.updateStrategy.type` | server deployment strategy type | `RollingUpdate` |
| `server.pdb.create` | Enable/disable Pod Disruption Budget creation | `true` |
| `server.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `1` |
| `server.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable. Defaults to `1` if both `server.pdb.minAvailable` and `server.pdb.maxUnavailable` are empty. | `""` |
| `server.autoscaling.hpa.enabled` | Enable HPA for server pods | `true` |
| `server.autoscaling.hpa.minReplicas` | Minimum number of replicas | `6` |
| `server.autoscaling.hpa.maxReplicas` | Maximum number of replicas | `12` |
| `server.autoscaling.hpa.targetCPU` | Target CPU utilization percentage | `70` |
| `server.autoscaling.hpa.targetMemory` | Target Memory utilization percentage | `70` |
| `server.autoscaling.vpa.enabled` | Enable VPA for server pods | `false` |
| `server.autoscaling.vpa.annotations` | Annotations for VPA resource | `{}` |
| `server.autoscaling.vpa.controlledResources` | List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory | `[]` |
| `server.autoscaling.vpa.maxAllowed` | VPA max allowed resources for the pod | `{}` |
| `server.autoscaling.vpa.minAllowed` | VPA min allowed resources for the pod | `{}` |
| `server.autoscaling.vpa.updatePolicy.updateMode` | Autoscaling update policy | `Auto` |
| `server.sidecars` | Add additional sidecar containers to the server pods | `[]` |
| `server.resourcesPreset` | Set server container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). Ignored if `server.resources` is set (`server.resources` is recommended for production). | `2xlarge` |
| `server.resources` | Set server container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `server.extraEnvVars` | Array with extra environment variables to add to server containers | `[]` |
| `server.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for server containers | `""` |
| `server.extraEnvVarsCMs` | Names of existing ConfigMaps containing extra env vars for server containers | `[]` |
| `server.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for server containers | `""` |
| `server.extraEnvVarsSecrets` | Names of existing Secrets containing extra env vars for server containers | `[]` |
| `server.livenessProbe.enabled` | Enable livenessProbe on server containers | `true` |
| `server.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `server.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `server.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `3` |
| `server.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `server.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `server.readinessProbe.enabled` | Enable readinessProbe on server containers | `true` |
| `server.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `2` |
| `server.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `5` |
| `server.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `2` |
| `server.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `3` |
| `server.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `server.startupProbe.enabled` | Enable startupProbe on server containers | `false` |
| `server.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `0` |
| `server.startupProbe.periodSeconds` | Period seconds for startupProbe | `5` |
| `server.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `3` |
| `server.startupProbe.failureThreshold` | Failure threshold for startupProbe | `24` |
| `server.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `server.podSecurityContext.enabled` | Enable server pods' Security Context | `false` |
| `server.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy for server pods | `Always` |
| `server.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface for server pods | `[]` |
| `server.podSecurityContext.supplementalGroups` | Set filesystem extra groups for server pods | `[]` |
| `server.podSecurityContext.fsGroup` | Set fsGroup in server pods' Security Context | `1001` |
| `server.deploymentAnnotations` | Annotations for server deployment | `{}` |
| `server.persistence.enabled` | Enable persistence using Persistent Volume Claims | `false` |
| `server.persistence.mountPath` | Path to mount the volume at | `/nango/server/data` |
| `server.persistence.subPath` | The subdirectory of the volume to mount to; useful in dev environments and when sharing one PV across multiple services | `""` |
| `server.persistence.storageClass` | Storage class of backing PVC | `""` |
| `server.persistence.annotations` | Persistent Volume Claim annotations | `{}` |
| `server.persistence.accessModes` | Persistent Volume Access Modes | `["ReadWriteOnce"]` |
| `server.persistence.size` | Size of data volume | `8Gi` |
| `server.persistence.dataSource` | Custom PVC data source | `{}` |
| `server.persistence.existingClaim` | The name of an existing PVC to use for persistence | `nango-jobs` |
| `server.persistence.selector` | Selector to match an existing Persistent Volume | `{}` |
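As the table notes, explicit `server.resources` takes precedence over `server.resourcesPreset` and is recommended for production. A sketch combining explicit resources with a TLS-enabled ingress (the hostname and request/limit figures are illustrative, not recommendations):

```yaml
server:
  resources:             # overrides resourcesPreset when set
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi
  ingress:
    enabled: true
    hostname: nango.example.com   # placeholder domain
    tls: true                     # serves TLS for the hostname above
```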

### Persist parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `persist.name` | persist name | `persist` |
| `persist.image.registry` | persist image registry | `nangohq` |
| `persist.image.repository` | persist image repository | `nango` |
| `persist.image.digest` | persist image digest in the form `sha256:aa…`. If set, this parameter overrides the image tag (immutable tags are recommended) | `""` |
| `persist.image.pullPolicy` | persist image pull policy | `""` |
| `persist.image.pullSecrets` | persist image pull secrets | `[]` |
| `persist.replicaCount` | Number of persist replicas to deploy | `1` |
| `persist.command` | Override default persist container command (useful when using custom images) | `[]` |
| `persist.args` | Override default persist container args (useful when using custom images) | `["node","packages/persist/dist/app.js"]` |
| `persist.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `persist.serviceAccount.name` | The name of the ServiceAccount to use | `""` |
| `persist.serviceAccount.annotations` | Additional ServiceAccount annotations (evaluated as a template) | `{}` |
| `persist.serviceAccount.automountServiceAccountToken` | Automount service account token for the persist service account | `true` |
| `persist.networkPolicy.enabled` | Specifies whether a NetworkPolicy should be created | `true` |
| `persist.networkPolicy.allowExternal` | Don't require persist label for connections | `false` |
| `persist.networkPolicy.allowExternalEgress` | Allow the pod to access any range of ports and all destinations | `true` |
| `persist.networkPolicy.addExternalClientAccess` | Allow access from pods with `client` label set to `"true"`. Ignored if `persist.networkPolicy.allowExternal` is true. | `true` |
| `persist.networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `persist.networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy (ignored if `allowExternalEgress=true`) | `[]` |
| `persist.networkPolicy.ingressPodMatchLabels` | Labels to match to allow traffic from other pods. Ignored if `persist.networkPolicy.allowExternal` is true. | `{}` |
| `persist.networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces. Ignored if `persist.networkPolicy.allowExternal` is true. | `{}` |
| `persist.networkPolicy.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces. Ignored if `persist.networkPolicy.allowExternal` is true. | `{}` |
| `persist.ingress.enabled` | Enable ingress record generation for persist | `false` |
| `persist.ingress.pathType` | Ingress path type | `ImplementationSpecific` |
| `persist.ingress.apiVersion` | Force Ingress API version (automatically detected if not set) | `""` |
| `persist.ingress.hostname` | Default host for the ingress record | `nango-server-default.dev` |
| `persist.ingress.ingressClassName` | IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) | `""` |
| `persist.ingress.path` | Default path for the ingress record | `/` |
| `persist.ingress.annotations` | Additional annotations for the Ingress resource. To enable certificate autogeneration, place your cert-manager annotations here. | `{}` |
| `persist.ingress.tls` | Enable TLS configuration for the host defined at the `persist.ingress.hostname` parameter | `false` |
| `persist.ingress.selfSigned` | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | `false` |
| `persist.ingress.extraHosts` | An array with additional hostname(s) to be covered by the ingress record | `[]` |
| `persist.ingress.extraPaths` | An array with additional arbitrary paths that may need to be added to the ingress under the main host | `[]` |
| `persist.ingress.extraTls` | TLS configuration for additional hostname(s) to be covered by this ingress record | `[]` |
| `persist.ingress.secrets` | Custom TLS certificates as secrets | `[]` |
| `persist.ingress.extraRules` | Additional rules to be covered by this ingress record | `[]` |
| `persist.service.type` | persist service type | `LoadBalancer` |
| `persist.service.ports.http` | persist service HTTP port | `80` |
| `persist.service.ports.https` | persist service HTTPS port | `443` |
| `persist.service.nodePorts.http` | Node port for HTTP | `""` |
| `persist.service.nodePorts.https` | Node port for HTTPS | `""` |
| `persist.service.clusterIP` | persist service Cluster IP | `""` |
| `persist.service.loadBalancerIP` | persist service Load Balancer IP | `""` |
| `persist.service.loadBalancerSourceRanges` | persist service Load Balancer sources | `[]` |
| `persist.service.externalTrafficPolicy` | persist service external traffic policy | `Cluster` |
| `persist.service.annotations` | Additional custom annotations for persist service | `{}` |
| `persist.service.extraPorts` | Extra ports to expose in persist service (normally used with the `sidecars` value) | `[]` |
| `persist.service.sessionAffinity` | Control where client requests go: to the same pod or round-robin | `None` |
| `persist.service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
| `persist.containerPorts.http` | persist HTTP container port | `3007` |
| `persist.containerPorts.https` | persist HTTPS container port | `3007` |
| `persist.extraContainerPorts` | Optionally specify extra list of additional ports for persist containers | `[]` |
| `persist.updateStrategy.type` | persist deployment strategy type | `RollingUpdate` |
| `persist.pdb.create` | Enable/disable Pod Disruption Budget creation | `true` |
| `persist.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `1` |
| `persist.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable. Defaults to `1` if both `persist.pdb.minAvailable` and `persist.pdb.maxUnavailable` are empty. | `""` |
| `persist.autoscaling.hpa.enabled` | Enable HPA for persist pods | `true` |
| `persist.autoscaling.hpa.minReplicas` | Minimum number of replicas | `6` |
| `persist.autoscaling.hpa.maxReplicas` | Maximum number of replicas | `15` |
| `persist.autoscaling.hpa.targetCPU` | Target CPU utilization percentage | `60` |
| `persist.autoscaling.hpa.targetMemory` | Target Memory utilization percentage | `70` |
| `persist.autoscaling.vpa.enabled` | Enable VPA for persist pods | `false` |
| `persist.autoscaling.vpa.annotations` | Annotations for VPA resource | `{}` |
| `persist.autoscaling.vpa.controlledResources` | List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory | `[]` |
| `persist.autoscaling.vpa.maxAllowed` | VPA max allowed resources for the pod | `{}` |
| `persist.autoscaling.vpa.minAllowed` | VPA min allowed resources for the pod | `{}` |
| `persist.autoscaling.vpa.updatePolicy.updateMode` | Autoscaling update policy | `Auto` |
| `persist.sidecars` | Add additional sidecar containers to the persist pods | `[]` |
| `persist.resourcesPreset` | Set persist container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). Ignored if `persist.resources` is set (`persist.resources` is recommended for production). | `xlarge` |
| `persist.resources` | Set persist container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `persist.extraEnvVars` | Array with extra environment variables to add to persist containers | `[]` |
| `persist.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for persist containers | `""` |
| `persist.extraEnvVarsCMs` | Names of existing ConfigMaps containing extra env vars for persist containers | `[]` |
| `persist.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for persist containers | `""` |
| `persist.extraEnvVarsSecrets` | Names of existing Secrets containing extra env vars for persist containers | `[]` |
| `persist.livenessProbe.enabled` | Enable livenessProbe on persist containers | `true` |
| `persist.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `persist.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `persist.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `3` |
| `persist.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `persist.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `persist.readinessProbe.enabled` | Enable readinessProbe on persist containers | `true` |
| `persist.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `2` |
| `persist.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `5` |
| `persist.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `2` |
| `persist.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `3` |
| `persist.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `persist.startupProbe.enabled` | Enable startupProbe on persist containers | `false` |
| `persist.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `0` |
| `persist.startupProbe.periodSeconds` | Period seconds for startupProbe | `5` |
| `persist.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `3` |
| `persist.startupProbe.failureThreshold` | Failure threshold for startupProbe | `24` |
| `persist.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `persist.podSecurityContext.enabled` | Enable persist pods' Security Context | `false` |
| `persist.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy for persist pods | `Always` |
| `persist.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface for persist pods | `[]` |
| `persist.podSecurityContext.supplementalGroups` | Set filesystem extra groups for persist pods | `[]` |
| `persist.podSecurityContext.fsGroup` | Set fsGroup in persist pods' Security Context | `1001` |
| `persist.deploymentAnnotations` | Annotations for persist deployment | `{}` |
| `persist.persistence.enabled` | Enable persistence using Persistent Volume Claims | `false` |
| `persist.persistence.mountPath` | Path to mount the volume at | `/nango/server/data` |
| `persist.persistence.subPath` | The subdirectory of the volume to mount to; useful in dev environments and when sharing one PV across multiple services | `""` |
| `persist.persistence.storageClass` | Storage class of backing PVC | `""` |
| `persist.persistence.annotations` | Persistent Volume Claim annotations | `{}` |
| `persist.persistence.accessModes` | Persistent Volume Access Modes | `["ReadWriteOnce"]` |
| `persist.persistence.size` | Size of data volume | `8Gi` |
| `persist.persistence.dataSource` | Custom PVC data source | `{}` |
| `persist.persistence.existingClaim` | The name of an existing PVC to use for persistence | `""` |
| `persist.persistence.selector` | Selector to match an existing Persistent Volume | `{}` |
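The persist HPA defaults to a floor of 6 replicas, which may be oversized for small clusters. A sketch of more conservative autoscaling bounds (the numbers are illustrative, not tuned recommendations):

```yaml
persist:
  autoscaling:
    hpa:
      enabled: true
      minReplicas: 2     # lower floor than the chart default of 6
      maxReplicas: 10
      targetCPU: 60      # scale out above 60% average CPU utilization
```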

### Orchestrator parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `orchestrator.name` | orchestrator name | `orchestrator` |
| `orchestrator.image.registry` | orchestrator image registry | `nangohq` |
| `orchestrator.image.repository` | orchestrator image repository | `nango` |
| `orchestrator.image.digest` | orchestrator image digest in the form `sha256:aa…`. If set, this parameter overrides the image tag (immutable tags are recommended) | `""` |
| `orchestrator.image.pullPolicy` | orchestrator image pull policy | `""` |
| `orchestrator.image.pullSecrets` | orchestrator image pull secrets | `[]` |
| `orchestrator.replicaCount` | Number of orchestrator replicas to deploy | `1` |
| `orchestrator.command` | Override default orchestrator container command (useful when using custom images) | `[]` |
| `orchestrator.args` | Override default orchestrator container args (useful when using custom images) | `["node","packages/orchestrator/dist/app.js"]` |
| `orchestrator.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `orchestrator.serviceAccount.name` | The name of the ServiceAccount to use | `""` |
| `orchestrator.serviceAccount.annotations` | Additional ServiceAccount annotations (evaluated as a template) | `{}` |
| `orchestrator.serviceAccount.automountServiceAccountToken` | Automount service account token for the orchestrator service account | `true` |
| `orchestrator.networkPolicy.enabled` | Specifies whether a NetworkPolicy should be created | `true` |
| `orchestrator.networkPolicy.allowExternal` | Don't require orchestrator label for connections | `true` |
| `orchestrator.networkPolicy.allowExternalEgress` | Allow the pod to access any range of ports and all destinations | `true` |
| `orchestrator.networkPolicy.addExternalClientAccess` | Allow access from pods with `client` label set to `"true"`. Ignored if `orchestrator.networkPolicy.allowExternal` is true. | `true` |
| `orchestrator.networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `orchestrator.networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy (ignored if `allowExternalEgress=true`) | `[]` |
| `orchestrator.networkPolicy.ingressPodMatchLabels` | Labels to match to allow traffic from other pods. Ignored if `orchestrator.networkPolicy.allowExternal` is true. | `{}` |
| `orchestrator.networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces. Ignored if `orchestrator.networkPolicy.allowExternal` is true. | `{}` |
| `orchestrator.networkPolicy.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces. Ignored if `orchestrator.networkPolicy.allowExternal` is true. | `{}` |
| `orchestrator.ingress.enabled` | Enable ingress record generation for orchestrator | `false` |
| `orchestrator.ingress.pathType` | Ingress path type | `ImplementationSpecific` |
| `orchestrator.ingress.apiVersion` | Force Ingress API version (automatically detected if not set) | `""` |
| `orchestrator.ingress.hostname` | Default host for the ingress record | `nango-orchestrator-default.dev` |
| `orchestrator.ingress.ingressClassName` | IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) | `""` |
| `orchestrator.ingress.path` | Default path for the ingress record | `/` |
| `orchestrator.ingress.annotations` | Additional annotations for the Ingress resource. To enable certificate autogeneration, place your cert-manager annotations here. | `{}` |
| `orchestrator.ingress.tls` | Enable TLS configuration for the host defined at the `orchestrator.ingress.hostname` parameter | `false` |
| `orchestrator.ingress.selfSigned` | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | `false` |
| `orchestrator.ingress.extraHosts` | An array with additional hostname(s) to be covered by the ingress record | `[]` |
| `orchestrator.ingress.extraPaths` | An array with additional arbitrary paths that may need to be added to the ingress under the main host | `[]` |
| `orchestrator.ingress.extraTls` | TLS configuration for additional hostname(s) to be covered by this ingress record | `[]` |
| `orchestrator.ingress.secrets` | Custom TLS certificates as secrets | `[]` |
| `orchestrator.ingress.extraRules` | Additional rules to be covered by this ingress record | `[]` |
| `orchestrator.service.type` | orchestrator service type | `LoadBalancer` |
| `orchestrator.service.ports.http` | orchestrator service HTTP port | `80` |
| `orchestrator.service.ports.https` | orchestrator service HTTPS port | `443` |
| `orchestrator.service.nodePorts.http` | Node port for HTTP | `""` |
| `orchestrator.service.nodePorts.https` | Node port for HTTPS | `""` |
| `orchestrator.service.clusterIP` | orchestrator service Cluster IP | `""` |
| `orchestrator.service.loadBalancerIP` | orchestrator service Load Balancer IP | `""` |
| `orchestrator.service.loadBalancerSourceRanges` | orchestrator service Load Balancer sources | `[]` |
| `orchestrator.service.externalTrafficPolicy` | orchestrator service external traffic policy | `Cluster` |
| `orchestrator.service.annotations` | Additional custom annotations for orchestrator service | `{}` |
| `orchestrator.service.extraPorts` | Extra ports to expose in orchestrator service (normally used with the `sidecars` value) | `[]` |
| `orchestrator.service.sessionAffinity` | Control where client requests go: to the same pod or round-robin | `None` |
| `orchestrator.service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
| `orchestrator.containerPorts.http` | orchestrator HTTP container port | `3008` |
| `orchestrator.containerPorts.https` | orchestrator HTTPS container port | `3008` |
| `orchestrator.extraContainerPorts` | Optionally specify extra list of additional ports for orchestrator containers | `[]` |
| `orchestrator.updateStrategy.type` | orchestrator deployment strategy type | `RollingUpdate` |
| `orchestrator.pdb.create` | Enable/disable Pod Disruption Budget creation | `true` |
| `orchestrator.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `1` |
| `orchestrator.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable. Defaults to `1` if both `orchestrator.pdb.minAvailable` and `orchestrator.pdb.maxUnavailable` are empty. | `""` |
| `orchestrator.autoscaling.hpa.enabled` | Enable HPA for orchestrator pods | `true` |
| `orchestrator.autoscaling.hpa.minReplicas` | Minimum number of replicas | `2` |
| `orchestrator.autoscaling.hpa.maxReplicas` | Maximum number of replicas | `3` |
| `orchestrator.autoscaling.hpa.targetCPU` | Target CPU utilization percentage | `70` |
| `orchestrator.autoscaling.hpa.targetMemory` | Target Memory utilization percentage | `70` |
| `orchestrator.autoscaling.vpa.enabled` | Enable VPA for orchestrator pods | `false` |
| `orchestrator.autoscaling.vpa.annotations` | Annotations for VPA resource | `{}` |
| `orchestrator.autoscaling.vpa.controlledResources` | List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory | `[]` |
| `orchestrator.autoscaling.vpa.maxAllowed` | VPA max allowed resources for the pod | `{}` |
| `orchestrator.autoscaling.vpa.minAllowed` | VPA min allowed resources for the pod | `{}` |
| `orchestrator.autoscaling.vpa.updatePolicy.updateMode` | Autoscaling update policy | `Auto` |
| `orchestrator.sidecars` | Add additional sidecar containers to the orchestrator pods | `[]` |
| `orchestrator.resourcesPreset` | Set orchestrator container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). Ignored if `orchestrator.resources` is set (`orchestrator.resources` is recommended for production). | `nano` |
| `orchestrator.resources` | Set orchestrator container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `orchestrator.extraEnvVars` | Array with extra environment variables to add to orchestrator containers | `[]` |
| `orchestrator.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for orchestrator containers | `""` |
| `orchestrator.extraEnvVarsCMs` | Names of existing ConfigMaps containing extra env vars for orchestrator containers | `[]` |
| `orchestrator.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for orchestrator containers | `""` |
| `orchestrator.extraEnvVarsSecrets` | Names of existing Secrets containing extra env vars for orchestrator containers | `[]` |
| `orchestrator.livenessProbe.enabled` | Enable livenessProbe on orchestrator containers | `true` |
| `orchestrator.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `orchestrator.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `orchestrator.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `3` |
| `orchestrator.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `orchestrator.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `orchestrator.readinessProbe.enabled` | Enable readinessProbe on orchestrator containers | `true` |
| `orchestrator.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `2` |
| `orchestrator.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `5` |
| `orchestrator.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `2` |
| `orchestrator.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `3` |
| `orchestrator.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `orchestrator.startupProbe.enabled` | Enable startupProbe on orchestrator containers | `false` |
| `orchestrator.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `0` |
| `orchestrator.startupProbe.periodSeconds` | Period seconds for startupProbe | `5` |
| `orchestrator.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `3` |
| `orchestrator.startupProbe.failureThreshold` | Failure threshold for startupProbe | `24` |
| `orchestrator.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `orchestrator.podSecurityContext.enabled` | Enable orchestrator pods' Security Context | `false` |
| `orchestrator.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy for orchestrator pods | `Always` |
| `orchestrator.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface for orchestrator pods | `[]` |
| `orchestrator.podSecurityContext.supplementalGroups` | Set filesystem extra groups for orchestrator pods | `[]` |
| `orchestrator.podSecurityContext.fsGroup` | Set fsGroup in orchestrator pods' Security Context | `1001` |
| `orchestrator.deploymentAnnotations` | Annotations for orchestrator deployment | `{}` |
| `orchestrator.persistence.enabled` | Enable persistence using Persistent Volume Claims | `false` |
| `orchestrator.persistence.mountPath` | Path to mount the volume at | `/nango/orchestrator/data` |
| `orchestrator.persistence.subPath` | The subdirectory of the volume to mount to; useful in dev environments and when sharing one PV across multiple services | `""` |
| `orchestrator.persistence.storageClass` | Storage class of backing PVC | `""` |
| `orchestrator.persistence.annotations` | Persistent Volume Claim annotations | `{}` |
| `orchestrator.persistence.accessModes` | Persistent Volume Access Modes | `["ReadWriteOnce"]` |
| `orchestrator.persistence.size` | Size of data volume | `1Gi` |
| `orchestrator.persistence.dataSource` | Custom PVC data source | `{}` |
| `orchestrator.persistence.existingClaim` | The name of an existing PVC to use for persistence | `""` |
| `orchestrator.persistence.selector` | Selector to match an existing Persistent Volume | `{}` |
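Extra environment variables can be injected either inline or from an existing Secret, which keeps credentials out of the values file. A sketch (the variable name and Secret name are hypothetical, not part of the chart):

```yaml
orchestrator:
  extraEnvVars:
    - name: LOG_LEVEL                  # hypothetical variable name
      value: "debug"
  # References a Secret you create yourself; name is a placeholder
  extraEnvVarsSecret: nango-orchestrator-secrets
```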

Jobs Parameters

Name Description Value
jobs.name jobs name jobs
jobs.image.registry jobs image registry nangohq
jobs.image.repository jobs image repository nango
jobs.image.digest jobs image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag image tag (immutable tags are recommended) ""
jobs.image.pullPolicy jobs image pull policy ""
jobs.image.pullSecrets jobs image pull secrets []
jobs.replicaCount Number of jobs replicas to deploy 1
jobs.command Override default jobs container command (useful when using custom images) []
jobs.args Override default jobs container args (useful when using custom images) ["node","packages/jobs/dist/app.js"]
jobs.serviceAccount.create Specifies whether a ServiceAccount should be created true
jobs.serviceAccount.name The name of the ServiceAccount to use. ""
jobs.serviceAccount.annotations Additional Service Account annotations (evaluated as a template) {}
jobs.serviceAccount.automountServiceAccountToken Automount service account token for the jobs service account true
jobs.networkPolicy.enabled Specifies whether a NetworkPolicy should be created true
jobs.networkPolicy.allowExternal Don’t require jobs label for connections true
jobs.networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations. true
jobs.networkPolicy.addExternalClientAccess Allow access from pods with client label set to “true”. Ignored if networkPolicy.allowExternal is true. true
jobs.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy []
jobs.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy (ignored if allowExternalEgress=true) []
jobs.networkPolicy.ingressPodMatchLabels Labels to match to allow traffic from other pods. Ignored if jobs.networkPolicy.allowExternal is true. {}
jobs.networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces. Ignored if jobs.networkPolicy.allowExternal is true. {}
jobs.networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces. Ignored if jobs.networkPolicy.allowExternal is true. {}
jobs.ingress.enabled Enable ingress record generation for jobs false
jobs.ingress.pathType Ingress path type ImplementationSpecific
jobs.ingress.apiVersion Force Ingress API version (automatically detected if not set) ""
jobs.ingress.hostname Default host for the ingress record ""
jobs.ingress.ingressClassName IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) ""
jobs.ingress.path Default path for the ingress record /
jobs.ingress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. {}
jobs.ingress.tls Enable TLS configuration for the host defined at ingress.hostname parameter false
jobs.ingress.selfSigned Create a TLS secret for this ingress record using self-signed certificates generated by Helm false
jobs.ingress.extraHosts An array with additional hostname(s) to be covered with the ingress record []
jobs.ingress.extraPaths An array with additional arbitrary paths that may need to be added to the ingress under the main host []
jobs.ingress.extraTls TLS configuration for additional hostname(s) to be covered with this ingress record []
jobs.ingress.secrets Custom TLS certificates as secrets []
jobs.ingress.extraRules Additional rules to be covered with this ingress record []
jobs.service.type jobs service type LoadBalancer
jobs.service.ports.http jobs service HTTP port 80
jobs.service.ports.https jobs service HTTPS port 443
jobs.service.nodePorts.http Node port for HTTP ""
jobs.service.nodePorts.https Node port for HTTPS ""
jobs.service.clusterIP jobs service Cluster IP ""
jobs.service.loadBalancerIP jobs service Load Balancer IP ""
jobs.service.loadBalancerSourceRanges jobs service Load Balancer sources []
jobs.service.externalTrafficPolicy jobs service external traffic policy Cluster
jobs.service.annotations Additional custom annotations for jobs service {}
jobs.service.extraPorts Extra ports to expose in jobs service (normally used with the sidecars value) []
jobs.service.sessionAffinity Control where client requests go, to the same pod or round-robin None
jobs.service.sessionAffinityConfig Additional settings for the sessionAffinity {}
jobs.containerPorts.http jobs HTTP container port 3005
jobs.containerPorts.https jobs HTTPS container port 3005
jobs.extraContainerPorts Optionally specify extra list of additional ports for jobs containers []
jobs.updateStrategy.type jobs deployment/statefulset strategy type RollingUpdate
jobs.pdb.create Enable/disable a Pod Disruption Budget creation true
jobs.pdb.minAvailable Minimum number/percentage of pods that should remain scheduled 1
jobs.pdb.maxUnavailable Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both jobs.pdb.minAvailable and jobs.pdb.maxUnavailable are empty. ""
jobs.autoscaling.hpa.enabled Enable HPA for jobs pods true
jobs.autoscaling.hpa.minReplicas Minimum number of replicas 3
jobs.autoscaling.hpa.maxReplicas Maximum number of replicas 6
jobs.autoscaling.hpa.targetCPU Target CPU utilization percentage 90
jobs.autoscaling.hpa.targetMemory Target Memory utilization percentage 70
jobs.autoscaling.vpa.enabled Enable VPA for jobs pods false
jobs.autoscaling.vpa.annotations Annotations for VPA resource {}
jobs.autoscaling.vpa.controlledResources VPA List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory []
jobs.autoscaling.vpa.maxAllowed VPA Max allowed resources for the pod {}
jobs.autoscaling.vpa.minAllowed VPA Min allowed resources for the pod {}
jobs.autoscaling.vpa.updatePolicy.updateMode Autoscaling update policy Auto
jobs.sidecars Add additional sidecar containers to the jobs pods []
jobs.resourcesPreset Set jobs container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if jobs.resources is set (jobs.resources is recommended for production). 2xlarge
jobs.resources Set jobs container requests and limits for different resources like CPU or memory (essential for production workloads) {}
jobs.extraEnvVars Array with extra environment variables to add to jobs containers []
jobs.extraEnvVarsCM Name of existing ConfigMap containing extra env vars for jobs containers ""
jobs.extraEnvVarsCMs Name of existing ConfigMaps containing extra env vars for jobs containers []
jobs.extraEnvVarsSecret Name of existing Secret containing extra env vars for jobs containers ""
jobs.extraEnvVarsSecrets Name of existing Secrets containing extra env vars for jobs containers []
jobs.livenessProbe.enabled Enable livenessProbe on jobs containers true
jobs.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 20
jobs.livenessProbe.periodSeconds Period seconds for livenessProbe 10
jobs.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 3
jobs.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
jobs.livenessProbe.successThreshold Success threshold for livenessProbe 1
jobs.readinessProbe.enabled Enable readinessProbe on jobs containers true
jobs.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 2
jobs.readinessProbe.periodSeconds Period seconds for readinessProbe 5
jobs.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 2
jobs.readinessProbe.failureThreshold Failure threshold for readinessProbe 3
jobs.readinessProbe.successThreshold Success threshold for readinessProbe 1
jobs.startupProbe.enabled Enable startupProbe on jobs containers false
jobs.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe 0
jobs.startupProbe.periodSeconds Period seconds for startupProbe 5
jobs.startupProbe.timeoutSeconds Timeout seconds for startupProbe 3
jobs.startupProbe.failureThreshold Failure threshold for startupProbe 24
jobs.startupProbe.successThreshold Success threshold for startupProbe 1
jobs.podSecurityContext.enabled Enable jobs pods’ Security Context false
jobs.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy for jobs pods Always
jobs.podSecurityContext.sysctls Set kernel settings using the sysctl interface for jobs pods []
jobs.podSecurityContext.supplementalGroups Set filesystem extra groups for jobs pods []
jobs.podSecurityContext.fsGroup Set fsGroup in jobs pods’ Security Context 1001
jobs.deploymentAnnotations Annotations for jobs deployment {}
jobs.persistence.enabled Enable persistence using Persistent Volume Claims false
jobs.persistence.mountPath Path to mount the volume at. /nango/jobs/data
jobs.persistence.subPath The subdirectory of the volume to mount to, useful in dev environments and one PV for multiple services ""
jobs.persistence.storageClass Storage class of backing PVC ""
jobs.persistence.annotations Persistent Volume Claim annotations {}
jobs.persistence.accessModes Persistent Volume Access Modes ["ReadWriteOnce"]
jobs.persistence.size Size of data volume 1Gi
jobs.persistence.dataSource Custom PVC data source {}
jobs.persistence.existingClaim The name of an existing PVC to use for persistence ""
jobs.persistence.selector Selector to match an existing Persistent Volume {}
jobs.rbac.create Specifies whether RBAC resources should be created true
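
For production, the table above recommends setting jobs.resources explicitly rather than relying on jobs.resourcesPreset. A sketch combining explicit resources with the default HPA bounds might look like this; the request/limit figures are illustrative, not recommendations:

```yaml
jobs:
  resources:                    # when set, jobs.resourcesPreset is ignored
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      memory: 2Gi
  autoscaling:
    hpa:
      enabled: true
      minReplicas: 3
      maxReplicas: 6
      targetCPU: 90
      targetMemory: 70
```

Apply it with `helm upgrade --install nango nangohq/nango -f values.yaml`.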

Runner Parameters

Name Description Value
runner.enabled Enable runner deployment false
runner.name runner name runner
runner.image.registry runner image registry nangohq
runner.image.repository runner image repository nango
runner.image.digest runner image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag image tag (immutable tags are recommended) ""
runner.image.pullPolicy runner image pull policy ""
runner.image.pullSecrets runner image pull secrets []
runner.replicaCount Number of runner replicas to deploy 1
runner.command Override default runner container command (useful when using custom images) []
runner.args Override default runner container args (useful when using custom images) ["node","packages/runner/dist/app.js"]
runner.serviceAccount.create Specifies whether a ServiceAccount should be created false
runner.serviceAccount.name The name of the ServiceAccount to use. ""
runner.serviceAccount.annotations Additional Service Account annotations (evaluated as a template) {}
runner.serviceAccount.automountServiceAccountToken Automount service account token for the runner service account true
runner.networkPolicy.enabled Specifies whether a NetworkPolicy should be created true
runner.networkPolicy.allowExternal Don’t require runner label for connections true
runner.networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations. true
runner.networkPolicy.addExternalClientAccess Allow access from pods with client label set to “true”. Ignored if networkPolicy.allowExternal is true. true
runner.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy []
runner.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy (ignored if allowExternalEgress=true) []
runner.networkPolicy.ingressPodMatchLabels Labels to match to allow traffic from other pods. Ignored if networkPolicy.allowExternal is true. {}
runner.networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces. Ignored if networkPolicy.allowExternal is true. {}
runner.networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces. Ignored if networkPolicy.allowExternal is true. {}
runner.ingress.enabled Enable ingress record generation for runner false
runner.ingress.pathType Ingress path type ImplementationSpecific
runner.ingress.apiVersion Force Ingress API version (automatically detected if not set) ""
runner.ingress.hostname Default host for the ingress record nango-server-default.dev
runner.ingress.ingressClassName IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) ""
runner.ingress.path Default path for the ingress record /
runner.ingress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. {}
runner.ingress.tls Enable TLS configuration for the host defined at runner.ingress.hostname parameter false
runner.ingress.selfSigned Create a TLS secret for this ingress record using self-signed certificates generated by Helm false
runner.ingress.extraHosts An array with additional hostname(s) to be covered with the ingress record []
runner.ingress.extraPaths An array with additional arbitrary paths that may need to be added to the ingress under the main host []
runner.ingress.extraTls TLS configuration for additional hostname(s) to be covered with this ingress record []
runner.ingress.secrets Custom TLS certificates as secrets []
runner.ingress.extraRules Additional rules to be covered with this ingress record []
runner.service.type runner service type LoadBalancer
runner.service.ports.http runner service HTTP port 80
runner.service.ports.https runner service HTTPS port 443
runner.service.nodePorts.http Node port for HTTP ""
runner.service.nodePorts.https Node port for HTTPS ""
runner.service.clusterIP runner service Cluster IP ""
runner.service.loadBalancerIP runner service Load Balancer IP ""
runner.service.loadBalancerSourceRanges runner service Load Balancer sources []
runner.service.externalTrafficPolicy runner service external traffic policy Cluster
runner.service.annotations Additional custom annotations for runner service {}
runner.service.extraPorts Extra ports to expose in runner service (normally used with the sidecars value) []
runner.service.sessionAffinity Control where client requests go, to the same pod or round-robin None
runner.service.sessionAffinityConfig Additional settings for the sessionAffinity {}
runner.containerPorts.http runner HTTP container port 3006
runner.containerPorts.https runner HTTPS container port 3006
runner.extraContainerPorts Optionally specify extra list of additional ports for runner containers []
runner.updateStrategy.type runner deployment/statefulset strategy type RollingUpdate
runner.pdb.create Enable/disable a Pod Disruption Budget creation false
runner.pdb.minAvailable Minimum number/percentage of pods that should remain scheduled ""
runner.pdb.maxUnavailable Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both runner.pdb.minAvailable and runner.pdb.maxUnavailable are empty. ""
runner.autoscaling.hpa.enabled Enable HPA for runner pods true
runner.autoscaling.hpa.minReplicas Minimum number of replicas 1
runner.autoscaling.hpa.maxReplicas Maximum number of replicas 3
runner.autoscaling.hpa.targetCPU Target CPU utilization percentage 75
runner.autoscaling.hpa.targetMemory Target Memory utilization percentage 75
runner.autoscaling.vpa.enabled Enable VPA for runner pods false
runner.autoscaling.vpa.annotations Annotations for VPA resource {}
runner.autoscaling.vpa.controlledResources VPA List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory []
runner.autoscaling.vpa.maxAllowed VPA Max allowed resources for the pod {}
runner.autoscaling.vpa.minAllowed VPA Min allowed resources for the pod {}
runner.autoscaling.vpa.updatePolicy.updateMode Autoscaling update policy Auto
runner.sidecars Add additional sidecar containers to the runner pods []
runner.resourcesPreset Set runner container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if runner.resources is set (runner.resources is recommended for production). small
runner.resources Set runner container requests and limits for different resources like CPU or memory (essential for production workloads) {}
runner.extraEnvVars Array with extra environment variables to add to runner containers []
runner.extraEnvVarsCM Name of existing ConfigMap containing extra env vars for runner containers ""
runner.extraEnvVarsCMs Name of existing ConfigMaps containing extra env vars for runner containers []
runner.extraEnvVarsSecret Name of existing Secret containing extra env vars for runner containers ""
runner.extraEnvVarsSecrets Name of existing Secrets containing extra env vars for runner containers []
runner.livenessProbe.enabled Enable livenessProbe on runner containers true
runner.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 20
runner.livenessProbe.periodSeconds Period seconds for livenessProbe 10
runner.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 3
runner.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
runner.livenessProbe.successThreshold Success threshold for livenessProbe 1
runner.readinessProbe.enabled Enable readinessProbe on runner containers true
runner.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 2
runner.readinessProbe.periodSeconds Period seconds for readinessProbe 5
runner.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 2
runner.readinessProbe.failureThreshold Failure threshold for readinessProbe 3
runner.readinessProbe.successThreshold Success threshold for readinessProbe 1
runner.startupProbe.enabled Enable startupProbe on runner containers false
runner.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe 0
runner.startupProbe.periodSeconds Period seconds for startupProbe 5
runner.startupProbe.timeoutSeconds Timeout seconds for startupProbe 3
runner.startupProbe.failureThreshold Failure threshold for startupProbe 24
runner.startupProbe.successThreshold Success threshold for startupProbe 1
runner.podSecurityContext.enabled Enable runner pods’ Security Context false
runner.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy for runner pods Always
runner.podSecurityContext.sysctls Set kernel settings using the sysctl interface for runner pods []
runner.podSecurityContext.supplementalGroups Set filesystem extra groups for runner pods []
runner.podSecurityContext.fsGroup Set fsGroup in runner pods’ Security Context 1001
runner.deploymentAnnotations Annotations for runner deployment {}
runner.persistence.enabled Enable persistence using Persistent Volume Claims false
runner.persistence.mountPath Path to mount the volume at. /nango/runner/data
runner.persistence.subPath The subdirectory of the volume to mount to, useful in dev environments and one PV for multiple services ""
runner.persistence.storageClass Storage class of backing PVC ""
runner.persistence.annotations Persistent Volume Claim annotations {}
runner.persistence.accessModes Persistent Volume Access Modes ["ReadWriteOnce"]
runner.persistence.size Size of data volume 8Gi
runner.persistence.dataSource Custom PVC data source {}
runner.persistence.existingClaim The name of an existing PVC to use for persistence ""
runner.persistence.selector Selector to match an existing Persistent Volume {}
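
Since the runner is disabled by default (runner.enabled is false), a minimal override to enable it and load extra environment variables from an existing Secret might look like this sketch; the Secret name `runner-env` is hypothetical:

```yaml
runner:
  enabled: true
  extraEnvVarsSecret: runner-env   # hypothetical existing Secret in the release namespace
  autoscaling:
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 3
```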

Metering Parameters

Name Description Value
metering.enabled Enable/disable metering false
metering.name metering name metering
metering.image.registry metering image registry nangohq
metering.image.repository metering image repository nango
metering.image.digest metering image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag image tag (immutable tags are recommended) ""
metering.image.pullPolicy metering image pull policy ""
metering.image.pullSecrets metering image pull secrets []
metering.replicaCount Number of metering replicas to deploy 1
metering.command Override default metering container command (useful when using custom images) []
metering.args Override default metering container args (useful when using custom images) ["node","packages/metering/dist/app.js"]
metering.serviceAccount.create Specifies whether a ServiceAccount should be created true
metering.serviceAccount.name The name of the ServiceAccount to use. ""
metering.serviceAccount.annotations Additional Service Account annotations (evaluated as a template) {}
metering.serviceAccount.automountServiceAccountToken Automount service account token for the metering service account true
metering.networkPolicy.enabled Specifies whether a NetworkPolicy should be created true
metering.networkPolicy.allowExternal Don’t require metering label for connections true
metering.networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations. true
metering.networkPolicy.addExternalClientAccess Allow access from pods with client label set to “true”. Ignored if metering.networkPolicy.allowExternal is true. true
metering.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy []
metering.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy (ignored if allowExternalEgress=true) []
metering.networkPolicy.ingressPodMatchLabels Labels to match to allow traffic from other pods. Ignored if metering.networkPolicy.allowExternal is true. {}
metering.networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces. Ignored if metering.networkPolicy.allowExternal is true. {}
metering.networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces. Ignored if metering.networkPolicy.allowExternal is true. {}
metering.ingress.enabled Enable ingress record generation for metering false
metering.ingress.pathType Ingress path type Prefix
metering.ingress.apiVersion Force Ingress API version (automatically detected if not set) ""
metering.ingress.hostname Default host for the ingress record example-app.nango.dev
metering.ingress.ingressClassName IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) ""
metering.ingress.path Default path for the ingress record /
metering.ingress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. {}
metering.ingress.tls Enable TLS configuration for the host defined at metering.ingress.hostname parameter false
metering.ingress.selfSigned Create a TLS secret for this ingress record using self-signed certificates generated by Helm false
metering.ingress.extraHosts An array with additional hostname(s) to be covered with the ingress record []
metering.ingress.extraPaths An array with additional arbitrary paths that may need to be added to the ingress under the main host []
metering.ingress.extraTls TLS configuration for additional hostname(s) to be covered with this ingress record []
metering.ingress.secrets Custom TLS certificates as secrets []
metering.ingress.extraRules Additional rules to be covered with this ingress record []
metering.service.create Create metering service false
metering.service.type metering service type LoadBalancer
metering.service.ports.http metering service HTTP port 80
metering.service.ports.https metering service HTTPS port 443
metering.service.nodePorts.http Node port for HTTP ""
metering.service.nodePorts.https Node port for HTTPS ""
metering.service.clusterIP metering service Cluster IP ""
metering.service.loadBalancerIP metering service Load Balancer IP ""
metering.service.loadBalancerSourceRanges metering service Load Balancer sources []
metering.service.externalTrafficPolicy metering service external traffic policy Cluster
metering.service.annotations Additional custom annotations for metering service {}
metering.service.extraPorts Extra ports to expose in metering service (normally used with the sidecars value) []
metering.service.sessionAffinity Control where client requests go, to the same pod or round-robin None
metering.service.sessionAffinityConfig Additional settings for the sessionAffinity {}
metering.containerPorts.enabled Enable/disable metering container ports false
metering.containerPorts.http metering HTTP container port ""
metering.containerPorts.https metering HTTPS container port ""
metering.extraContainerPorts Optionally specify extra list of additional ports for metering containers []
metering.updateStrategy.type metering deployment/statefulset strategy type RollingUpdate
metering.pdb.create Enable/disable a Pod Disruption Budget creation true
metering.pdb.minAvailable Minimum number/percentage of pods that should remain scheduled 1
metering.pdb.maxUnavailable Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both metering.pdb.minAvailable and metering.pdb.maxUnavailable are empty. ""
metering.autoscaling.hpa.enabled Enable HPA for metering pods false
metering.autoscaling.hpa.minReplicas Minimum number of replicas 1
metering.autoscaling.hpa.maxReplicas Maximum number of replicas 1
metering.autoscaling.hpa.targetCPU Target CPU utilization percentage ""
metering.autoscaling.hpa.targetMemory Target Memory utilization percentage ""
metering.autoscaling.vpa.enabled Enable VPA for metering pods false
metering.autoscaling.vpa.annotations Annotations for VPA resource {}
metering.autoscaling.vpa.controlledResources VPA List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory []
metering.autoscaling.vpa.maxAllowed VPA Max allowed resources for the pod {}
metering.autoscaling.vpa.minAllowed VPA Min allowed resources for the pod {}
metering.autoscaling.vpa.updatePolicy.updateMode Autoscaling update policy Auto
metering.sidecars Add additional sidecar containers to the metering pods []
metering.resourcesPreset Set metering container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if metering.resources is set (metering.resources is recommended for production). small
metering.resources Set metering container requests and limits for different resources like CPU or memory (essential for production workloads) {}
metering.extraEnvVars Array with extra environment variables to add to metering containers []
metering.extraEnvVarsCM Name of existing ConfigMap containing extra env vars for metering containers ""
metering.extraEnvVarsCMs Name of existing ConfigMaps containing extra env vars for metering containers []
metering.extraEnvVarsSecret Name of existing Secret containing extra env vars for metering containers ""
metering.extraEnvVarsSecrets Name of existing Secrets containing extra env vars for metering containers []
metering.livenessProbe.enabled Enable livenessProbe on metering containers false
metering.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 20
metering.livenessProbe.periodSeconds Period seconds for livenessProbe 10
metering.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 3
metering.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
metering.livenessProbe.successThreshold Success threshold for livenessProbe 1
metering.readinessProbe.enabled Enable readinessProbe on metering containers false
metering.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 2
metering.readinessProbe.periodSeconds Period seconds for readinessProbe 5
metering.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 2
metering.readinessProbe.failureThreshold Failure threshold for readinessProbe 3
metering.readinessProbe.successThreshold Success threshold for readinessProbe 1
metering.startupProbe.enabled Enable startupProbe on metering containers false
metering.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe 0
metering.startupProbe.periodSeconds Period seconds for startupProbe 5
metering.startupProbe.timeoutSeconds Timeout seconds for startupProbe 3
metering.startupProbe.failureThreshold Failure threshold for startupProbe 24
metering.startupProbe.successThreshold Success threshold for startupProbe 1
metering.podSecurityContext.enabled Enable metering pods’ Security Context false
metering.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy for metering pods Always
metering.podSecurityContext.sysctls Set kernel settings using the sysctl interface for metering pods []
metering.podSecurityContext.supplementalGroups Set filesystem extra groups for metering pods []
metering.podSecurityContext.fsGroup Set fsGroup in metering pods’ Security Context 1001
metering.deploymentAnnotations Annotations for metering deployment {}
metering.persistence.enabled Enable persistence using Persistent Volume Claims false
metering.persistence.mountPath Path to mount the volume at. /nango/metering/data
metering.persistence.subPath The subdirectory of the volume to mount to, useful in dev environments and one PV for multiple services ""
metering.persistence.storageClass Storage class of backing PVC ""
metering.persistence.annotations Persistent Volume Claim annotations {}
metering.persistence.accessModes Persistent Volume Access Modes ["ReadWriteOnce"]
metering.persistence.size Size of data volume 8Gi
metering.persistence.dataSource Custom PVC data source {}
metering.persistence.existingClaim The name of an existing PVC to use for persistence nango-jobs
metering.persistence.selector Selector to match an existing Persistent Volume {}
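
Metering is likewise disabled by default. A sketch enabling it with a chart-managed PVC (clearing the existingClaim default so the chart provisions its own claim) might look like:

```yaml
metering:
  enabled: true
  persistence:
    enabled: true
    existingClaim: ""     # empty so the chart creates a new PVC instead of reusing one
    size: 8Gi
```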