add rocketchat helm

This commit is contained in:
2025-04-03 15:02:31 +02:00
parent 50847afaa0
commit 5127028f6d
177 changed files with 23855 additions and 0 deletions


@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -0,0 +1,25 @@
apiVersion: v2
appVersion: 2.7.4
description: A Helm chart for the NATS.io High Speed Cloud Native Distributed Communications
  Technology.
home: http://github.com/nats-io/k8s
icon: https://nats.io/img/nats-icon-color.png
keywords:
- nats
- messaging
- cncf
maintainers:
- email: wally@nats.io
  name: Waldemar Quevedo
  url: https://github.com/wallyqs
- email: colin@nats.io
  name: Colin Sullivan
  url: https://github.com/ColinSullivan1
- email: jaime@nats.io
  name: Jaime Piña
  url: https://github.com/variadico
- email: caleb@nats.io
  name: Caleb Lloyd
  url: https://github.com/caleblloyd
name: nats
version: 0.15.1


@@ -0,0 +1,823 @@
# NATS Server
[NATS](https://nats.io) is a simple, secure and performant communications system for digital systems, services and devices. NATS is part of the Cloud Native Computing Foundation ([CNCF](https://cncf.io)). NATS has over [30 client language implementations](https://nats.io/download/), and its server can run on-premise, in the cloud, at the edge, and even on a Raspberry Pi. NATS can secure and simplify design and operation of modern distributed systems.
## TL;DR;
```console
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install my-nats nats/nats
```
## Breaking Change Log
- **0.15.0**: For users with JetStream enabled (`nats.jetstream.enabled = true`): `nats.jetstream.fileStorage.enabled` now defaults to `true` and `nats.jetstream.fileStorage.size` now defaults to `10Gi`. This updates the StatefulSet `spec.volumeClaimTemplates` field, which is immutable and cannot be changed on an existing StatefulSet; to upgrade from an older chart version, add the value:
```yaml
nats:
  jetstream:
    fileStorage:
      # add if enabled was previously the default setting
      # not recommended; it would be better to migrate to a StatefulSet with storage enabled
      enabled: false
      # add if size was previously the default setting
      size: 1Gi
```
- **0.12.0**: The `podManagementPolicy` value was introduced and set to `Parallel` by default, which controls the StatefulSet `spec.podManagementPolicy` field. This field is immutable and cannot be changed on an existing StatefulSet; to upgrade from an older chart version, add the value (an upgrade example follows this list):
```yaml
podManagementPolicy: OrderedReady
```
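For example, assuming an existing release named `my-nats` (name hypothetical), the override can be applied at upgrade time:
```console
helm upgrade my-nats nats/nats --set podManagementPolicy=OrderedReady
```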
## Configuration
### Server Image
```yaml
nats:
  image: nats:2.7.4-alpine
  pullPolicy: IfNotPresent
```
### Limits
```yaml
nats:
  # The number of connect attempts against discovered routes.
  connectRetries: 30

  # How many seconds should pass before sending a PING
  # to a client that has no activity.
  pingInterval:

  # Server settings.
  limits:
    maxConnections:
    maxSubscriptions:
    maxControlLine:
    maxPayload:
    writeDeadline:
    maxPending:
    maxPings:
    lameDuckDuration:

  # Number of seconds to wait for client connections to end after the pod termination is requested
  terminationGracePeriodSeconds: 60
```
### Logging
*Note*: It is not recommended to enable trace or debug in production, since enabling them will significantly degrade performance.
```yaml
nats:
  logging:
    debug:
    trace:
    logtime:
    connectErrorReports:
    reconnectErrorReports:
```
### TLS setup for client connections
You can find more on how to set up and troubleshoot TLS connections at:
https://docs.nats.io/nats-server/configuration/securing_nats/tls
```yaml
nats:
  tls:
    secret:
      name: nats-client-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"
```
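The chart only references the secret; it has to exist beforehand. A minimal sketch for creating it from local PEM files (file names assumed to match the keys above):
```sh
kubectl create secret generic nats-client-tls \
  --from-file=ca.crt --from-file=tls.crt --from-file=tls.key
```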
## Clustering
If clustering is enabled, a 3-node cluster will be set up. More info at:
https://docs.nats.io/nats-server/configuration/clustering#nats-server-clustering
```yaml
cluster:
  enabled: true
  replicas: 3

  tls:
    secret:
      name: nats-server-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"
```
Example:
```sh
$ helm install nats nats/nats --set cluster.enabled=true
```
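To verify that the routes were established, you can port-forward the monitoring port of one of the pods and inspect the `/routez` endpoint (pod name assumes the release above):
```sh
kubectl port-forward nats-0 8222:8222 &
curl http://localhost:8222/routez
```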
## Leafnodes
Leafnode connections can be used to extend a cluster. More info at:
https://docs.nats.io/nats-server/configuration/leafnodes
```yaml
leafnodes:
  enabled: true
  remotes:
    - url: "tls://connect.ngs.global:7422"
      # credentials:
      #   secret:
      #     name: leafnode-creds
      #     key: TA.creds
      # tls:
      #   secret:
      #     name: nats-leafnode-tls
      #   ca: "ca.crt"
      #   cert: "tls.crt"
      #   key: "tls.key"

  #######################
  #                     #
  #  TLS Configuration  #
  #                     #
  #######################
  #
  #  You can find more on how to set up and troubleshoot TLS connections at:
  #
  #  https://docs.nats.io/nats-server/configuration/securing_nats/tls
  #
  tls:
    secret:
      name: nats-client-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"
```
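The commented-out `credentials` section above references a Kubernetes secret. A sketch for creating it from a local credentials file (the `TA.creds` name mirrors the example above):
```sh
kubectl create secret generic leafnode-creds --from-file=TA.creds
```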
## Setting up External Access
### Using HostPorts
If both external access and advertisements are enabled, an initializer container
is used to gather the public IPs. This container requires an RBAC policy that
allows it to look up the public IP of the node on which it is running.
For example, to setup external access for a cluster and advertise the public ip to clients:
```yaml
nats:
  # Toggle whether to enable external access.
  # This binds a host port for clients, gateways and leafnodes.
  externalAccess: true

  # Toggle to disable client advertisements (connect_urls);
  # in case of running behind a load balancer (which is not recommended)
  # it might be required to disable advertisements.
  advertise: true

  # In case both external access and advertise are enabled
  # then a service account would be required to be able to
  # gather the public IP from a node.
  serviceAccount: "nats-server"
```
Here the service account named `nats-server` has, for example, the following RBAC policy:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nats-server
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nats-server
rules:
- apiGroups: [""]
  resources:
  - nodes
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nats-server-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nats-server
subjects:
- kind: ServiceAccount
  name: nats-server
  namespace: default
```
The container image of the initializer can be customized via:
```yaml
bootconfig:
  image: natsio/nats-boot-config:latest
  pullPolicy: IfNotPresent
```
### Using LoadBalancers
When using a load balancer for external access, it is recommended to set
`noAdvertise: true` so that the internal Pod IPs of the NATS servers are not
advertised to clients connecting through the load balancer.
```yaml
nats:
  image: nats:alpine

cluster:
  enabled: true
  noAdvertise: true

leafnodes:
  enabled: true
  noAdvertise: true

natsbox:
  enabled: true
```
You can then use an L4-enabled load balancer to connect to NATS, for example:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nats-lb
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: nats
  ports:
    - protocol: TCP
      port: 4222
      targetPort: 4222
      name: nats
    - protocol: TCP
      port: 7422
      targetPort: 7422
      name: leafnodes
    - protocol: TCP
      port: 7522
      targetPort: 7522
      name: gateways
```
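Once the cloud provider has provisioned it, the public address of the load balancer can be read from the service:
```sh
kubectl get svc nats-lb -o wide
```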
## Gateways
A super cluster can be formed by pointing to remote gateways.
You can find more about gateways in the NATS documentation:
https://docs.nats.io/nats-server/configuration/gateways
```yaml
gateway:
  enabled: false
  name: 'default'

  #############################
  #                           #
  #  List of remote gateways  #
  #                           #
  #############################
  # gateways:
  #   - name: other
  #     url: nats://my-gateway-url:7522

  #######################
  #                     #
  #  TLS Configuration  #
  #                     #
  #######################
  #
  #  You can find more on how to set up and troubleshoot TLS connections at:
  #
  #  https://docs.nats.io/nats-server/configuration/securing_nats/tls
  #
  # tls:
  #   secret:
  #     name: nats-client-tls
  #   ca: "ca.crt"
  #   cert: "tls.crt"
  #   key: "tls.key"
```
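For example, a cluster named `west` could join a super cluster by pointing at a remote `east` cluster as follows (names and hostname are hypothetical):
```yaml
gateway:
  enabled: true
  name: west
  gateways:
    - name: east
      url: nats://east.example.com:7522
```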
## Auth setup
### Auth with a Memory Resolver
```yaml
auth:
  enabled: true

  # Reference to the Operator JWT.
  operatorjwt:
    configMap:
      name: operator-jwt
      key: KO.jwt

  # Public key of the System Account
  systemAccount:

  resolver:
    ############################
    #                          #
    # Memory resolver settings #
    #                          #
    ############################
    type: memory

    #
    # Use a configmap reference which will be mounted
    # into the container.
    #
    configMap:
      name: nats-accounts
      key: resolver.conf
```
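The referenced ConfigMap has to be created beforehand; a sketch using `nsc`, assuming the operator and accounts already exist (see the NATS resolver setup example below):
```sh
nsc generate config --mem-resolver > resolver.conf
kubectl create configmap nats-accounts --from-file=resolver.conf
```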
### Auth using an Account Server Resolver
```yaml
auth:
  enabled: true

  # Reference to the Operator JWT.
  operatorjwt:
    configMap:
      name: operator-jwt
      key: KO.jwt

  # Public key of the System Account
  systemAccount:

  resolver:
    ##########################
    #                        #
    # URL resolver settings  #
    #                        #
    ##########################
    type: URL
    url: "http://nats-account-server:9090/jwt/v1/accounts/"
```
## JetStream
### Setting up Memory and File Storage
File Storage is **always** recommended, since JetStream's RAFT Meta Group will be persisted to file storage. The Storage Class used should be block storage. NFS is not recommended.
```yaml
nats:
  image: nats:alpine

  jetstream:
    enabled: true

    memStorage:
      enabled: true
      size: 2Gi

    fileStorage:
      enabled: true
      size: 10Gi
      # storageClassName: gp2 # NOTE: AWS setup but customize as needed for your infra.
```
### Using with an existing PersistentVolumeClaim
For example, given the following `PersistentVolumeClaim`:
```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nats-js-disk
  annotations:
    volume.beta.kubernetes.io/storage-class: "default"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```
You can start JetStream so that one pod is bound to it:
```yaml
nats:
  image: nats:alpine

  jetstream:
    enabled: true

    fileStorage:
      enabled: true
      storageDirectory: /data/
      existingClaim: nats-js-disk
      claimStorageSize: 3Gi
```
### Clustering example
```yaml
nats:
  image: nats:alpine

  jetstream:
    enabled: true

    memStorage:
      enabled: true
      size: "2Gi"

    fileStorage:
      enabled: true
      size: "10Gi"

cluster:
  enabled: true
  # Cluster name is required, by default will be release name.
  # name: "nats"
  replicas: 3
```
### Basic Authentication and JetStream
```yaml
nats:
  image: nats:alpine

  jetstream:
    enabled: true

    memStorage:
      enabled: true
      size: "2Gi"

    fileStorage:
      enabled: true
      size: "10Gi"
      # storageClassName: gp2 # NOTE: AWS setup but customize as needed for your infra.

cluster:
  enabled: true
  # Can set a custom cluster name
  # name: "nats"
  replicas: 3

auth:
  enabled: true
  systemAccount: sys

  basic:
    accounts:
      sys:
        users:
        - user: sys
          pass: sys
      js:
        jetstream: true
        users:
        - user: foo
```
### NATS Resolver setup example
As of NATS v2.2, the server has a built-in resolver for accounts.
The following is an example of how to get it configured.
```sh
# Create a working directory to keep the creds.
mkdir nats-creds
cd nats-creds
# This just creates some accounts for you to get started.
curl -fSl https://nats-io.github.io/k8s/setup/nsc-setup.sh | sh
source .nsc.env
# You should have some accounts now, at least the following.
nsc list accounts
+-------------------------------------------------------------------+
| Accounts |
+--------+----------------------------------------------------------+
| Name | Public Key |
+--------+----------------------------------------------------------+
| A | ABJ4OIKBBFCNXZDP25C7EWXCXOVCYYAGBEHFAG7F5XYCOYPHZLNSJYDF |
| B | ACVRK7GFBRQUCB3NEABGQ7XPNED2BSPT27GOX5QBDYW2NOFMQKK755DJ |
| SYS | ADGFH4NYV5V75SVM5DYSW5AWOD7H2NRUWAMO6XLZKIDGUWYEXCZG5D6N |
+--------+----------------------------------------------------------+
# Now create an account with JetStream support
export account=JS1
nsc add account --name $account
nsc edit account --name $account --js-disk-storage -1 --js-consumer -1 --js-streams -1
nsc add user -a $account js-user
```
Next, generate the NATS resolver config, which will be used to fill in the `auth` values of the Helm chart. For example, the result of generating this:
```sh
nsc generate config --sys-account SYS --nats-resolver
# Operator named KO
operator: eyJ0eXAiOiJKV1QiLCJhbGciOiJlZDI1NTE5LW5rZXkifQ.eyJqdGkiOiJDRlozRlE0WURNTUc1Q1UzU0FUWVlHWUdQUDJaQU1QUzVNRUdNWFdWTUJFWUdIVzc2WEdBIiwiaWF0IjoxNjMyNzgzMDk2LCJpc3MiOiJPQ0lWMlFGSldJTlpVQVQ1VDJZSkJJUkMzQjZKS01TWktRTkY1S0dQNE4zS1o0RkZEVkFXWVhDTCIsIm5hbWUiOiJLTyIsInN1YiI6Ik9DSVYyUUZKV0lOWlVBVDVUMllKQklSQzNCNkpLTVNaS1FORjVLR1A0TjNLWjRGRkRWQVdZWENMIiwibmF0cyI6eyJ0eXBlIjoib3BlcmF0b3IiLCJ2ZXJzaW9uIjoyfX0.e3gvJ-C1IBznmbUljeT_wbLRl1akv5IGBS3rbxs6mzzTvf3zlqQI4wDKVE8Gvb8qfTX6TIwocClfOqNaN3k3CQ
# System Account named SYS
system_account: ADGFH4NYV5V75SVM5DYSW5AWOD7H2NRUWAMO6XLZKIDGUWYEXCZG5D6N
resolver_preload: {
ADGFH4NYV5V75SVM5DYSW5AWOD7H2NRUWAMO6XLZKIDGUWYEXCZG5D6N: eyJ0eXAiOiJKV1QiLCJhbGciOiJlZDI1NTE5LW5rZXkifQ.eyJqdGkiOiJDR0tWVzJGQUszUE5XQTRBWkhHT083UTdZWUVPQkJYNDZaTU1VSFc1TU5QSUFVSFE0RVRRIiwiaWF0IjoxNjMyNzgzMDk2LCJpc3MiOiJPQ0lWMlFGSldJTlpVQVQ1VDJZSkJJUkMzQjZKS01TWktRTkY1S0dQNE4zS1o0RkZEVkFXWVhDTCIsIm5hbWUiOiJTWVMiLCJzdWIiOiJBREdGSDROWVY1Vjc1U1ZNNURZU1c1QVdPRDdIMk5SVVdBTU82WExaS0lER1VXWUVYQ1pHNUQ2TiIsIm5hdHMiOnsibGltaXRzIjp7InN1YnMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwid2lsZGNhcmRzIjp0cnVlLCJjb25uIjotMSwibGVhZiI6LTF9LCJkZWZhdWx0X3Blcm1pc3Npb25zIjp7InB1YiI6e30sInN1YiI6e319LCJ0eXBlIjoiYWNjb3VudCIsInZlcnNpb24iOjJ9fQ.J7g73TEn-ZT13owq4cVWl4l0hZnGK4DJtH2WWOZmGbefcCQ1xsx4cIagKc1cZTCwUpELVAYnSkmPp4LsQOspBg,
}
```
In the Helm values YAML this would be configured as follows:
```yaml
auth:
  enabled: true
  timeout: "5s"
  resolver:
    type: full
    operator: eyJ0eXAiOiJKV1QiLCJhbGciOiJlZDI1NTE5LW5rZXkifQ.eyJqdGkiOiJDRlozRlE0WURNTUc1Q1UzU0FUWVlHWUdQUDJaQU1QUzVNRUdNWFdWTUJFWUdIVzc2WEdBIiwiaWF0IjoxNjMyNzgzMDk2LCJpc3MiOiJPQ0lWMlFGSldJTlpVQVQ1VDJZSkJJUkMzQjZKS01TWktRTkY1S0dQNE4zS1o0RkZEVkFXWVhDTCIsIm5hbWUiOiJLTyIsInN1YiI6Ik9DSVYyUUZKV0lOWlVBVDVUMllKQklSQzNCNkpLTVNaS1FORjVLR1A0TjNLWjRGRkRWQVdZWENMIiwibmF0cyI6eyJ0eXBlIjoib3BlcmF0b3IiLCJ2ZXJzaW9uIjoyfX0.e3gvJ-C1IBznmbUljeT_wbLRl1akv5IGBS3rbxs6mzzTvf3zlqQI4wDKVE8Gvb8qfTX6TIwocClfOqNaN3k3CQ
    systemAccount: ADGFH4NYV5V75SVM5DYSW5AWOD7H2NRUWAMO6XLZKIDGUWYEXCZG5D6N
    store:
      dir: "/etc/nats-config/accounts/jwt"
      size: "1Gi"
    resolverPreload:
      ADGFH4NYV5V75SVM5DYSW5AWOD7H2NRUWAMO6XLZKIDGUWYEXCZG5D6N: eyJ0eXAiOiJKV1QiLCJhbGciOiJlZDI1NTE5LW5rZXkifQ.eyJqdGkiOiJDR0tWVzJGQUszUE5XQTRBWkhHT083UTdZWUVPQkJYNDZaTU1VSFc1TU5QSUFVSFE0RVRRIiwiaWF0IjoxNjMyNzgzMDk2LCJpc3MiOiJPQ0lWMlFGSldJTlpVQVQ1VDJZSkJJUkMzQjZKS01TWktRTkY1S0dQNE4zS1o0RkZEVkFXWVhDTCIsIm5hbWUiOiJTWVMiLCJzdWIiOiJBREdGSDROWVY1Vjc1U1ZNNURZU1c1QVdPRDdIMk5SVVdBTU82WExaS0lER1VXWUVYQ1pHNUQ2TiIsIm5hdHMiOnsibGltaXRzIjp7InN1YnMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwid2lsZGNhcmRzIjp0cnVlLCJjb25uIjotMSwibGVhZiI6LTF9LCJkZWZhdWx0X3Blcm1pc3Npb25zIjp7InB1YiI6e30sInN1YiI6e319LCJ0eXBlIjoiYWNjb3VudCIsInZlcnNpb24iOjJ9fQ.J7g73TEn-ZT13owq4cVWl4l0hZnGK4DJtH2WWOZmGbefcCQ1xsx4cIagKc1cZTCwUpELVAYnSkmPp4LsQOspBg
```
Now we start the server with the NATS Account Resolver (`auth.resolver.type=full`):
```yaml
nats:
  image: nats:2.7.4-alpine

  logging:
    debug: false
    trace: false

  jetstream:
    enabled: true

    memStorage:
      enabled: true
      size: "2Gi"

    fileStorage:
      enabled: true
      size: "10Gi"
      # storageClassName: gp2 # NOTE: AWS setup but customize as needed for your infra.

cluster:
  enabled: true
  # Can set a custom cluster name
  name: "nats"
  replicas: 3

auth:
  enabled: true
  timeout: "5s"
  resolver:
    type: full
    operator: eyJ0eXAiOiJKV1QiLCJhbGciOiJlZDI1NTE5LW5rZXkifQ.eyJqdGkiOiJDRlozRlE0WURNTUc1Q1UzU0FUWVlHWUdQUDJaQU1QUzVNRUdNWFdWTUJFWUdIVzc2WEdBIiwiaWF0IjoxNjMyNzgzMDk2LCJpc3MiOiJPQ0lWMlFGSldJTlpVQVQ1VDJZSkJJUkMzQjZKS01TWktRTkY1S0dQNE4zS1o0RkZEVkFXWVhDTCIsIm5hbWUiOiJLTyIsInN1YiI6Ik9DSVYyUUZKV0lOWlVBVDVUMllKQklSQzNCNkpLTVNaS1FORjVLR1A0TjNLWjRGRkRWQVdZWENMIiwibmF0cyI6eyJ0eXBlIjoib3BlcmF0b3IiLCJ2ZXJzaW9uIjoyfX0.e3gvJ-C1IBznmbUljeT_wbLRl1akv5IGBS3rbxs6mzzTvf3zlqQI4wDKVE8Gvb8qfTX6TIwocClfOqNaN3k3CQ
    systemAccount: ADGFH4NYV5V75SVM5DYSW5AWOD7H2NRUWAMO6XLZKIDGUWYEXCZG5D6N
    store:
      dir: "/etc/nats-config/accounts/jwt"
      size: "1Gi"
    resolverPreload:
      ADGFH4NYV5V75SVM5DYSW5AWOD7H2NRUWAMO6XLZKIDGUWYEXCZG5D6N: eyJ0eXAiOiJKV1QiLCJhbGciOiJlZDI1NTE5LW5rZXkifQ.eyJqdGkiOiJDR0tWVzJGQUszUE5XQTRBWkhHT083UTdZWUVPQkJYNDZaTU1VSFc1TU5QSUFVSFE0RVRRIiwiaWF0IjoxNjMyNzgzMDk2LCJpc3MiOiJPQ0lWMlFGSldJTlpVQVQ1VDJZSkJJUkMzQjZKS01TWktRTkY1S0dQNE4zS1o0RkZEVkFXWVhDTCIsIm5hbWUiOiJTWVMiLCJzdWIiOiJBREdGSDROWVY1Vjc1U1ZNNURZU1c1QVdPRDdIMk5SVVdBTU82WExaS0lER1VXWUVYQ1pHNUQ2TiIsIm5hdHMiOnsibGltaXRzIjp7InN1YnMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwid2lsZGNhcmRzIjp0cnVlLCJjb25uIjotMSwibGVhZiI6LTF9LCJkZWZhdWx0X3Blcm1pc3Npb25zIjp7InB1YiI6e30sInN1YiI6e319LCJ0eXBlIjoiYWNjb3VudCIsInZlcnNpb24iOjJ9fQ.J7g73TEn-ZT13owq4cVWl4l0hZnGK4DJtH2WWOZmGbefcCQ1xsx4cIagKc1cZTCwUpELVAYnSkmPp4LsQOspBg
```
Finally, use a local port-forward to establish a connection to one of the servers and upload the accounts.
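For example, assuming the release is named `nats`:
```sh
kubectl port-forward nats-0 4222:4222 &
```
With the port-forward in place, push the accounts: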
```sh
nsc push --system-account SYS -u nats://localhost:4222 -A
[ OK ] push to nats-server "nats://localhost:4222" using system account "SYS":
[ OK ] push JS1 to nats-server with nats account resolver:
[ OK ] pushed "JS1" to nats-server nats-0: jwt updated
[ OK ] pushed "JS1" to nats-server nats-1: jwt updated
[ OK ] pushed "JS1" to nats-server nats-2: jwt updated
[ OK ] pushed to a total of 3 nats-server
```
Now you should be able to use JetStream and the NATS based account resolver:
```sh
nats stream ls -s localhost --creds ./nsc/nkeys/creds/KO/JS1/js-user.creds
No Streams defined
```
## Misc
### NATS Box
A lightweight container with NATS and NATS Streaming utilities, deployed alongside the cluster to confirm the setup.
You can find the image at: https://github.com/nats-io/nats-box
```yaml
natsbox:
  enabled: true
  image: natsio/nats-box:latest
  pullPolicy: IfNotPresent

  # credentials:
  #   secret:
  #     name: nats-sys-creds
  #     key: sys.creds
```
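For a quick check from inside the container (deployment name assumes a release named `nats`):
```sh
kubectl exec -it deployment/nats-box -- /bin/sh -l
nats-box:~# nats-sub test &
nats-box:~# nats-pub test hi
```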
### Configuration Checksum
A configuration checksum annotation is enabled by default on StatefulSet Pods in order to force a rollout when the NATS configuration changes. This checksum is only applied by `helm` commands, and will not change if configuration is modified outside of setting `helm` values.
```yaml
nats:
  configChecksumAnnotation: true
```
### Configuration Reload sidecar
The NATS configuration reload sidecar is enabled by default; it passes the configuration reload signal to the NATS server when it detects configuration changes:
```yaml
reloader:
  enabled: true
  image: natsio/nats-server-config-reloader:latest
  pullPolicy: IfNotPresent
```
### Prometheus Exporter sidecar
The Prometheus Exporter sidecar is enabled by default; it can be used to feed metrics to Prometheus:
```yaml
exporter:
  enabled: true
  image: natsio/prometheus-nats-exporter:latest
  pullPolicy: IfNotPresent
```
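To inspect the metrics manually, port-forward the metrics port of one of the pods (pod name assumes a release named `nats`):
```sh
kubectl port-forward nats-0 7777:7777 &
curl http://localhost:7777/metrics
```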
### Prometheus operator ServiceMonitor support
You can enable a Prometheus Operator ServiceMonitor:
```yaml
exporter:
  # You have to enable the exporter first.
  enabled: true
  serviceMonitor:
    enabled: true
    ## Specify the namespace where Prometheus Operator is running
    # namespace: monitoring
    # ...
```
### Pod Customizations
#### Security Context
```yaml
# Toggle whether to set up a Pod Security Context.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
  fsGroup: 1000
  runAsUser: 1000
  runAsNonRoot: true
```
#### Affinity
<https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity>
`matchExpressions` must be configured according to your setup
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node.kubernetes.io/purpose
          operator: In
          values:
          - nats
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - nats
          - stan
      topologyKey: "kubernetes.io/hostname"
```
#### Service topology
[Service topology](https://kubernetes.io/docs/concepts/services-networking/service-topology/) is disabled by default, but can be enabled by setting `topologyKeys`. For example:
```yaml
topologyKeys:
- "kubernetes.io/hostname"
- "topology.kubernetes.io/zone"
- "topology.kubernetes.io/region"
```
#### CPU/Memory Resource Requests/Limits
Sets the pods' CPU/memory requests and limits:
```yaml
nats:
  resources:
    requests:
      cpu: 4
      memory: 8Gi
    limits:
      cpu: 6
      memory: 10Gi
```
No resources are set by default. For NATS JetStream deployments it is recommended to allocate at least 8Gi of memory and 4 CPUs.
#### Annotations
<https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations>
```yaml
podAnnotations:
  key1: "value1"
  key2: "value2"
```
### Name Overrides
You can change the name of the resources as needed with:
```yaml
nameOverride: "my-nats"
```
### Image Pull Secrets
```yaml
imagePullSecrets:
- name: myRegistry
```
Adds this to the StatefulSet:
```yaml
spec:
  imagePullSecrets:
  - name: myRegistry
```
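The referenced secret must already exist in the release namespace; a sketch for creating a registry secret (registry and credentials hypothetical):
```sh
kubectl create secret docker-registry myRegistry \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword
```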
### Mixed TLS and non TLS mode
You can use the `nats.tls.allowNonTLS` option to allow a cluster to accept both
TLS connections and plain connections:
```yaml
nats:
  client:
    port: 4222

  tls:
    allowNonTLS: true
    secret:
      name: nats-server-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"
    timeout: "5s"
```


@@ -0,0 +1,26 @@
{{- if or .Values.nats.logging.debug .Values.nats.logging.trace }}
*WARNING*: Keep in mind that running the server with
debug and/or trace enabled significantly affects the
performance of the server!
{{- end }}

You can find more information about running NATS on Kubernetes
in the NATS documentation website:

  https://docs.nats.io/nats-on-kubernetes/nats-kubernetes

{{- if .Values.natsbox.enabled }}
NATS Box has been deployed into your cluster, you can
now use the NATS tools within the container as follows:

  kubectl exec -n {{ template "nats.namespace" . }} -it deployment/{{ template "nats.fullname" . }}-box -- /bin/sh -l

  nats-box:~# nats-sub test &
  nats-box:~# nats-pub test hi
  nats-box:~# nc {{ template "nats.fullname" . }} {{ .Values.nats.client.port }}
{{- end }}

Thanks for using NATS!


@@ -0,0 +1,147 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "nats.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "nats.namespace" -}}
{{- default .Release.Namespace .Values.namespaceOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "nats.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "nats.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "nats.labels" -}}
helm.sh/chart: {{ include "nats.chart" . }}
{{- range $name, $value := .Values.commonLabels }}
{{ $name }}: {{ tpl $value $ }}
{{- end }}
{{ include "nats.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "nats.selectorLabels" -}}
{{- if .Values.nats.selectorLabels }}
{{ tpl (toYaml .Values.nats.selectorLabels) . }}
{{- else -}}
app.kubernetes.io/name: {{ include "nats.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{- end }}
{{/*
Return the NATS cluster advertise address.
*/}}
{{- define "nats.clusterAdvertise" -}}
{{- if $.Values.useFQDN }}
{{- printf "$(POD_NAME).%s.$(POD_NAMESPACE).svc.%s" (include "nats.fullname" . ) $.Values.k8sClusterDomain }}
{{- else }}
{{- printf "$(POD_NAME).%s.$(POD_NAMESPACE)" (include "nats.fullname" . ) }}
{{- end }}
{{- end }}
{{/*
Return the NATS cluster routes.
*/}}
{{- define "nats.clusterRoutes" -}}
{{- $name := (include "nats.fullname" . ) -}}
{{- $namespace := (include "nats.namespace" . ) -}}
{{- range $i, $e := until (.Values.cluster.replicas | int) -}}
{{- if $.Values.useFQDN }}
{{- printf "nats://%s-%d.%s.%s.svc.%s:6222," $name $i $name $namespace $.Values.k8sClusterDomain -}}
{{- else }}
{{- printf "nats://%s-%d.%s.%s:6222," $name $i $name $namespace -}}
{{- end }}
{{- end -}}
{{- end }}
{{- define "nats.extraRoutes" -}}
{{- range $i, $url := .Values.cluster.extraRoutes -}}
{{- printf "%s," $url -}}
{{- end -}}
{{- end }}
{{- define "nats.tlsConfig" -}}
tls {
{{- if .cert }}
  cert_file: {{ .secretPath }}/{{ .secret.name }}/{{ .cert }}
{{- end }}
{{- if .key }}
  key_file: {{ .secretPath }}/{{ .secret.name }}/{{ .key }}
{{- end }}
{{- if .ca }}
  ca_file: {{ .secretPath }}/{{ .secret.name }}/{{ .ca }}
{{- end }}
{{- if .insecure }}
  insecure: {{ .insecure }}
{{- end }}
{{- if .verify }}
  verify: {{ .verify }}
{{- end }}
{{- if .verifyAndMap }}
  verify_and_map: {{ .verifyAndMap }}
{{- end }}
{{- if .curvePreferences }}
  curve_preferences: {{ .curvePreferences }}
{{- end }}
{{- if .timeout }}
  timeout: {{ .timeout }}
{{- end }}
{{- if .cipherSuites }}
  cipher_suites: {{ toRawJson .cipherSuites }}
{{- end }}
}
{{- end }}
{{/*
Return the appropriate apiVersion for networkpolicy.
*/}}
{{- define "networkPolicy.apiVersion" -}}
{{- if semverCompare ">=1.4-0, <1.7-0" .Capabilities.KubeVersion.GitVersion -}}
{{- print "extensions/v1beta1" -}}
{{- else -}}
{{- print "networking.k8s.io/v1" -}}
{{- end -}}
{{- end -}}
{{/*
Renders a value that contains template.
Usage:
{{ include "tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $) }}
*/}}
{{- define "tplvalues.render" -}}
{{- if typeIs "string" .value }}
{{- tpl .value .context }}
{{- else }}
{{- tpl (toYaml .value) .context }}
{{- end }}
{{- end -}}


@@ -0,0 +1,551 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "nats.fullname" . }}-config
  namespace: {{ include "nats.namespace" . }}
  labels:
    {{- include "nats.labels" . | nindent 4 }}
data:
  nats.conf: |
    # NATS Clients Port
    port: {{ .Values.nats.client.port }}

    # PID file shared with configuration reloader.
    pid_file: "/var/run/nats/nats.pid"

    {{- if .Values.nats.config }}
    ###########
    #         #
    # Imports #
    #         #
    ###########
    {{- range .Values.nats.config }}
    include ./{{ .name }}/{{ .name }}.conf
    {{- end}}
    {{- end }}

    ###############
    #             #
    # Monitoring  #
    #             #
    ###############
    http: 8222
    server_name: {{- if .Values.nats.serverNamePrefix }}$SERVER_NAME{{- else }}$POD_NAME{{- end }}

    {{- if .Values.nats.tls }}
    #####################
    #                   #
    # TLS Configuration #
    #                   #
    #####################
    {{- with .Values.nats.tls }}
    {{- $nats_tls := merge (dict) . }}
    {{- $_ := set $nats_tls "secretPath" "/etc/nats-certs/clients" }}
    {{- tpl (include "nats.tlsConfig" $nats_tls) $ | nindent 4}}
    {{- end }}

    {{- if .Values.nats.tls.allowNonTLS }}
    allow_non_tls: {{ .Values.nats.tls.allowNonTLS }}
    {{- end }}
    {{- end }}

    {{- if .Values.nats.jetstream.enabled }}
    ###################################
    #                                 #
    # NATS JetStream                  #
    #                                 #
    ###################################
    jetstream {
      {{- if .Values.nats.jetstream.encryption }}
      {{- if .Values.nats.jetstream.encryption.key }}
      key: {{ .Values.nats.jetstream.encryption.key | quote }}
      {{- else if .Values.nats.jetstream.encryption.secret }}
      key: $JS_KEY
      {{- end}}
      {{- end}}

      {{- if .Values.nats.jetstream.memStorage.enabled }}
      max_mem: {{ .Values.nats.jetstream.memStorage.size }}
      {{- end }}

      {{- if .Values.nats.jetstream.domain }}
      domain: {{ .Values.nats.jetstream.domain }}
      {{- end }}

      {{- if .Values.nats.jetstream.fileStorage.enabled }}
      store_dir: {{ .Values.nats.jetstream.fileStorage.storageDirectory }}

      max_file:
      {{- if .Values.nats.jetstream.fileStorage.existingClaim }}
      {{- .Values.nats.jetstream.fileStorage.claimStorageSize }}
      {{- else }}
      {{- .Values.nats.jetstream.fileStorage.size }}
      {{- end }}
      {{- end }}
    }
    {{- end }}

    {{- if .Values.mqtt.enabled }}
    ###################################
    #                                 #
    # NATS MQTT                       #
    #                                 #
    ###################################
    mqtt {
      port: 1883
      {{- with .Values.mqtt.tls }}
      {{- $mqtt_tls := merge (dict) . }}
      {{- $_ := set $mqtt_tls "secretPath" "/etc/nats-certs/mqtt" }}
      {{- tpl (include "nats.tlsConfig" $mqtt_tls) $ | nindent 6}}
      {{- end }}

      {{- if .Values.mqtt.noAuthUser }}
      no_auth_user: {{ .Values.mqtt.noAuthUser | quote }}
      {{- end }}

      ack_wait: {{ .Values.mqtt.ackWait | quote }}
      max_ack_pending: {{ .Values.mqtt.maxAckPending }}
    }
    {{- end }}

    {{- if .Values.cluster.enabled }}
    ###################################
    #                                 #
    # NATS Full Mesh Clustering Setup #
    #                                 #
    ###################################
    cluster {
      port: 6222
      {{- if .Values.nats.jetstream.enabled }}
      {{- if .Values.cluster.name }}
      name: {{ .Values.cluster.name }}
      {{- else }}
      name: {{ template "nats.name" . }}
      {{- end }}
      {{- else }}
      {{- with .Values.cluster.name }}
      name: {{ . }}
      {{- end }}
      {{- end }}

      {{- with .Values.cluster.tls }}
      {{- $cluster_tls := merge (dict) . }}
      {{- $_ := set $cluster_tls "secretPath" "/etc/nats-certs/cluster" }}
      {{- tpl (include "nats.tlsConfig" $cluster_tls) $ | nindent 6}}
      {{- end }}

      {{- if .Values.cluster.authorization }}
      authorization {
        {{- with .Values.cluster.authorization.user }}
        user: {{ . }}
        {{- end }}
        {{- with .Values.cluster.authorization.password }}
        password: {{ . }}
        {{- end }}
        {{- with .Values.cluster.authorization.timeout }}
        timeout: {{ . }}
        {{- end }}
      }
      {{- end }}

      routes = [
        {{ include "nats.clusterRoutes" . }}
        {{ include "nats.extraRoutes" . }}
      ]
      cluster_advertise: $CLUSTER_ADVERTISE

      {{- with .Values.cluster.noAdvertise }}
      no_advertise: {{ . }}
      {{- end }}

      connect_retries: {{ .Values.nats.connectRetries }}
    }
    {{- end }}

    {{- if and .Values.nats.advertise .Values.nats.externalAccess }}
    include "advertise/client_advertise.conf"
    {{- end }}

    {{- if or .Values.leafnodes.enabled .Values.leafnodes.remotes }}
    #################
    #               #
    # NATS Leafnode #
    #               #
    #################
    leafnodes {
      {{- if .Values.leafnodes.enabled }}
      listen: "0.0.0.0:{{ .Values.leafnodes.port }}"
      {{- end }}

      {{- if and .Values.nats.advertise .Values.nats.externalAccess }}
      include "advertise/gateway_advertise.conf"
      {{- end }}

      {{- with .Values.leafnodes.noAdvertise }}
      no_advertise: {{ . }}
      {{- end }}

      {{- with .Values.leafnodes.authorization }}
      authorization: {
        {{- with .user }}
        user: {{ . }}
        {{- end }}
        {{- with .password }}
        password: {{ . }}
        {{- end }}
        {{- with .account }}
        account: {{ . | quote }}
        {{- end }}
        {{- with .timeout }}
        timeout: {{ . }}
        {{- end }}
        {{- with .users }}
        users: [
        {{- range . }}
        {{- toRawJson . | nindent 10 }},
        {{- end }}
        ]
        {{- end }}
      }
      {{- end }}

      {{- with .Values.leafnodes.tls }}
      {{- if .custom }}
      tls {
        {{- .custom | nindent 8 }}
      }
      {{- else }}
      {{- $leafnode_tls := merge (dict) . }}
      {{- $_ := set $leafnode_tls "secretPath" "/etc/nats-certs/leafnodes" }}
      {{- tpl (include "nats.tlsConfig" $leafnode_tls) $ | nindent 6}}
      {{- end }}
      {{- end }}

      remotes: [
      {{- range .Values.leafnodes.remotes }}
      {
        {{- with .url }}
        url: {{ . | quote }}
        {{- end }}
        {{- with .urls }}
        urls: {{ toRawJson . }}
        {{- end }}
        {{- with .account }}
        account: {{ . | quote }}
        {{- end }}
        {{- with .credentials }}
        credentials: "/etc/nats-creds/{{ .secret.name }}/{{ .secret.key }}"
        {{- end }}
        {{- with .tls }}
        tls: {
          {{- if .custom }}
          {{- .custom | nindent 10 }}
          {{- else }}
          {{ $secretName := tpl .secret.name $ }}
          {{- with .cert }}
          cert_file: /etc/nats-certs/leafnodes/{{ $secretName }}/{{ . }}
          {{- end }}
          {{- with .key }}
          key_file: /etc/nats-certs/leafnodes/{{ $secretName }}/{{ . }}
          {{- end }}
          {{- with .ca }}
          ca_file: /etc/nats-certs/leafnodes/{{ $secretName }}/{{ . }}
          {{- end }}
          {{- end }}
        }
        {{- end }}
      }
      {{- end }}
      ]
    }
    {{- end }}

    {{- if .Values.gateway.enabled }}
    #################
    #               #
    # NATS Gateways #
    #               #
    #################
    gateway {
      name: {{ .Values.gateway.name }}
      port: {{ .Values.gateway.port }}

      {{- if .Values.gateway.advertise }}
      advertise: {{ .Values.gateway.advertise }}
      {{- end }}

      {{- if .Values.gateway.rejectUnknownCluster }}
      reject_unknown_cluster: {{ .Values.gateway.rejectUnknownCluster }}
      {{- end }}

      {{- if .Values.gateway.authorization }}
      authorization {
        {{- with .Values.gateway.authorization.user }}
        user: {{ . }}
        {{- end }}
        {{- with .Values.gateway.authorization.password }}
        password: {{ . }}
        {{- end }}
        {{- with .Values.gateway.authorization.timeout }}
        timeout: {{ . }}
        {{- end }}
      }
      {{- end }}

      {{- if and .Values.nats.advertise .Values.nats.externalAccess }}
      include "advertise/gateway_advertise.conf"
      {{- end }}

      {{- with .Values.gateway.tls }}
      {{- $gateway_tls := merge (dict) . }}
      {{- $_ := set $gateway_tls "secretPath" "/etc/nats-certs/gateways" }}
      {{- tpl (include "nats.tlsConfig" $gateway_tls) $ | nindent 6}}
      {{- end }}

      # Gateways array here
      gateways: [
        {{- range .Values.gateway.gateways }}
        {
          {{- with .name }}
          name: {{ . }}
          {{- end }}
          {{- with .url }}
          url: {{ . | quote }}
          {{- end }}
          {{- with .urls }}
          urls: [{{ join "," . }}]
          {{- end }}
        },
        {{- end }}
      ]
    }
    {{- end }}

    {{- with .Values.nats.logging.debug }}
    debug: {{ . }}
    {{- end }}

    {{- with .Values.nats.logging.trace }}
    trace: {{ . }}
    {{- end }}

    {{- with .Values.nats.logging.logtime }}
    logtime: {{ . }}
    {{- end }}

    {{- with .Values.nats.logging.connectErrorReports }}
    connect_error_reports: {{ . }}
    {{- end }}

    {{- with .Values.nats.logging.reconnectErrorReports }}
    reconnect_error_reports: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.maxConnections }}
    max_connections: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.maxSubscriptions }}
    max_subscriptions: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.maxPending }}
    max_pending: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.maxControlLine }}
    max_control_line: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.maxPayload }}
    max_payload: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.pingInterval }}
    ping_interval: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.maxPings }}
    ping_max: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.writeDeadline }}
    write_deadline: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.lameDuckGracePeriod }}
    lame_duck_grace_period: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.lameDuckDuration }}
    lame_duck_duration: {{ . }}
    {{- end }}

    {{- if .Values.websocket.enabled }}
    ##################
    #                #
    # Websocket      #
    #                #
    ##################
    websocket {
      port: {{ .Values.websocket.port }}
      {{- with .Values.websocket.tls }}
      {{ $secretName := tpl .secret.name $ }}
      tls {
        {{- with .cert }}
        cert_file: /etc/nats-certs/ws/{{ $secretName }}/{{ . }}
        {{- end }}
        {{- with .key }}
        key_file: /etc/nats-certs/ws/{{ $secretName }}/{{ . }}
        {{- end }}
        {{- with .ca }}
        ca_file: /etc/nats-certs/ws/{{ $secretName }}/{{ . }}
        {{- end }}
      }
      {{- else }}
      no_tls: {{ .Values.websocket.noTLS }}
      {{- end }}
      same_origin: {{ .Values.websocket.sameOrigin }}
      {{- with .Values.websocket.allowedOrigins }}
      allowed_origins: {{ toRawJson . }}
      {{- end }}
      {{- with .Values.websocket.advertise }}
      advertise: {{ . }}
      {{- end }}
    }
    {{- end }}

    {{- if .Values.auth.enabled }}
    ##################
    #                #
    # Authorization  #
    #                #
    ##################
    {{- if .Values.auth.resolver }}
    {{- if eq .Values.auth.resolver.type "memory" }}
    resolver: MEMORY
    include "accounts/{{ .Values.auth.resolver.configMap.key }}"
    {{- end }}

    {{- if eq .Values.auth.resolver.type "full" }}
    {{- if .Values.auth.resolver.configMap }}
    include "accounts/{{ .Values.auth.resolver.configMap.key }}"
    {{- else }}
    {{- with .Values.auth.resolver }}
    {{- if $.Values.auth.timeout }}
    authorization {
      timeout: {{ $.Values.auth.timeout }}
    }
    {{- end }}
    {{- if .operator }}
    operator: {{ .operator }}
    {{- end }}
    {{- if .systemAccount }}
    system_account: {{ .systemAccount }}
    {{- end }}
    {{- end }}
    resolver: {
      type: full
      {{- with .Values.auth.resolver }}
      dir: {{ .store.dir | quote }}
      allow_delete: {{ .allowDelete }}
      interval: {{ .interval | quote }}
      {{- end }}
    }
    {{- end }}
    {{- end }}

    {{- if .Values.auth.resolver.resolverPreload }}
    resolver_preload: {{ toRawJson .Values.auth.resolver.resolverPreload }}
    {{- end }}

    {{- if eq .Values.auth.resolver.type "URL" }}
    {{- with .Values.auth.resolver.url }}
    resolver: URL({{ . }})
    {{- end }}
    operator: /etc/nats-config/operator/{{ .Values.auth.operatorjwt.configMap.key }}
    {{- end }}
    {{- end }}

    {{- with .Values.auth.systemAccount }}
    system_account: {{ . }}
    {{- end }}

    {{- with .Values.auth.token }}
    authorization {
      token: "{{ . }}"
      {{- if $.Values.auth.timeout }}
      timeout: {{ $.Values.auth.timeout }}
      {{- end }}
    }
    {{- end }}

    {{- with .Values.auth.nkeys }}
    {{- with .users }}
    authorization {
      {{- if $.Values.auth.timeout }}
      timeout: {{ $.Values.auth.timeout }}
      {{- end }}
      users: [
      {{- range . }}
      {{- toRawJson . | nindent 8 }},
      {{- end }}
      ]
    }
    {{- end }}
    {{- end }}

    {{- with .Values.auth.basic }}
    {{- with .noAuthUser }}
    no_auth_user: {{ . }}
    {{- end }}

    {{- with .users }}
    authorization {
      {{- if $.Values.auth.timeout }}
      timeout: {{ $.Values.auth.timeout }}
      {{- end }}
      users: [
      {{- range . }}
      {{- toRawJson . | nindent 8 }},
      {{- end }}
      ]
    }
    {{- end }}

    {{- with .accounts }}
    authorization {
      {{- if $.Values.auth.timeout }}
      timeout: {{ $.Values.auth.timeout }}
      {{- end }}
    }
    accounts: {{- toRawJson . }}
    {{- end }}
    {{- end }}
    {{- end }}


@@ -0,0 +1,115 @@
{{- if .Values.natsbox.enabled }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "nats.fullname" . }}-box
  namespace: {{ include "nats.namespace" . }}
  labels:
    app: {{ include "nats.fullname" . }}-box
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    {{- if .Values.natsbox.additionalLabels }}
    {{- tpl (toYaml .Values.natsbox.additionalLabels) $ | nindent 4 }}
    {{- end }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ include "nats.fullname" . }}-box
  template:
    metadata:
      labels:
        app: {{ include "nats.fullname" . }}-box
        {{- if .Values.natsbox.podLabels }}
        {{- tpl (toYaml .Values.natsbox.podLabels) $ | nindent 8 }}
        {{- end }}
      {{- if .Values.natsbox.podAnnotations }}
      annotations:
        {{- toYaml .Values.natsbox.podAnnotations | nindent 8 }}
      {{- end }}
    spec:
      {{- with .Values.natsbox.affinity }}
      affinity:
        {{- tpl (toYaml .) $ | nindent 8 }}
      {{- end }}
      {{- with .Values.natsbox.nodeSelector }}
      nodeSelector: {{ toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.natsbox.tolerations }}
      tolerations: {{ toYaml . | nindent 8 }}
      {{- end }}
      volumes:
      {{- if .Values.natsbox.credentials }}
      - name: nats-sys-creds
        secret:
          secretName: {{ .Values.natsbox.credentials.secret.name }}
      {{- end }}
      {{- if .Values.natsbox.extraVolumes }}
      {{- toYaml .Values.natsbox.extraVolumes | nindent 6}}
      {{- end }}
      {{- with .Values.nats.tls }}
      {{ $secretName := tpl .secret.name $ }}
      - name: {{ $secretName }}-clients-volume
        secret:
          secretName: {{ $secretName }}
      {{- end }}
      {{- with .Values.securityContext }}
      securityContext:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
      - name: nats-box
        image: {{ .Values.natsbox.image }}
        imagePullPolicy: {{ .Values.natsbox.pullPolicy }}
        {{- if .Values.natsbox.securityContext }}
        securityContext:
          {{- toYaml .Values.natsbox.securityContext | nindent 10 }}
        {{- end }}
        resources:
          {{- toYaml .Values.natsbox.resources | nindent 10 }}
        env:
        - name: NATS_URL
          value: {{ template "nats.fullname" . }}
        {{- if .Values.natsbox.credentials }}
        - name: USER_CREDS
          value: /etc/nats-config/creds/{{ .Values.natsbox.credentials.secret.key }}
        - name: USER2_CREDS
          value: /etc/nats-config/creds/{{ .Values.natsbox.credentials.secret.key }}
        {{- end }}
        {{- with .Values.nats.tls }}
        {{ $secretName := tpl .secret.name $ }}
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - cp /etc/nats-certs/clients/{{ $secretName }}/* /usr/local/share/ca-certificates && update-ca-certificates
        {{- end }}
        command:
        - "tail"
        - "-f"
        - "/dev/null"
        volumeMounts:
        {{- if .Values.natsbox.credentials }}
        - name: nats-sys-creds
          mountPath: /etc/nats-config/creds
        {{- end }}
        {{- if .Values.natsbox.extraVolumeMounts }}
        {{- toYaml .Values.natsbox.extraVolumeMounts | nindent 8 }}
        {{- end }}
        {{- with .Values.nats.tls }}
        #######################
        #                     #
        #  TLS Volumes Mounts #
        #                     #
        #######################
        {{ $secretName := tpl .secret.name $ }}
        - name: {{ $secretName }}-clients-volume
          mountPath: /etc/nats-certs/clients/{{ $secretName }}
        {{- end }}
{{- end }}


@@ -0,0 +1,79 @@
{{- if .Values.networkPolicy.enabled }}
kind: NetworkPolicy
apiVersion: {{ template "networkPolicy.apiVersion" . }}
metadata:
  name: {{ include "nats.fullname" . }}
  namespace: {{ include "nats.namespace" . }}
  labels:
    {{- include "nats.labels" . | nindent 4 }}
spec:
  podSelector:
    matchLabels:
      {{- include "nats.selectorLabels" . | nindent 6 }}
  policyTypes:
  - Ingress
  - Egress
  egress:
  # Allow dns resolution
  - ports:
    - port: 53
      protocol: UDP
  # Allow outbound connections to other cluster pods
  - ports:
    - port: {{ .Values.nats.client.port }}
      protocol: TCP
    - port: 6222
      protocol: TCP
    - port: 8222
      protocol: TCP
    - port: 7777
      protocol: TCP
    - port: {{ .Values.leafnodes.port }}
      protocol: TCP
    - port: {{ .Values.gateway.port }}
      protocol: TCP
    to:
    - podSelector:
        matchLabels:
          {{- include "nats.selectorLabels" . | nindent 10 }}
  {{- if .Values.networkPolicy.extraEgress }}
  {{- include "tplvalues.render" ( dict "value" .Values.networkPolicy.extraEgress "context" $ ) | nindent 2 }}
  {{- end }}
  ingress:
  # Allow inbound connections
  - ports:
    - port: {{ .Values.nats.client.port }}
      protocol: TCP
    - port: 6222
      protocol: TCP
    - port: 8222
      protocol: TCP
    - port: 7777
      protocol: TCP
    - port: {{ .Values.leafnodes.port }}
      protocol: TCP
    - port: {{ .Values.gateway.port }}
      protocol: TCP
    {{- if not .Values.networkPolicy.allowExternal }}
    from:
    - podSelector:
        matchLabels:
          {{ include "nats.fullname" . }}-client: "true"
    - podSelector:
        matchLabels:
          {{- include "nats.selectorLabels" . | nindent 10 }}
    {{- if .Values.networkPolicy.ingressNSMatchLabels }}
    - namespaceSelector:
        matchLabels:
          {{- toYaml .Values.networkPolicy.ingressNSMatchLabels | nindent 10 }}
      {{- if .Values.networkPolicy.ingressNSPodMatchLabels }}
      podSelector:
        matchLabels:
          {{- toYaml .Values.networkPolicy.ingressNSPodMatchLabels | nindent 10 }}
      {{- end }}
    {{- end }}
    {{- end }}
  {{- if .Values.networkPolicy.extraIngress }}
  {{- include "tplvalues.render" ( dict "value" .Values.networkPolicy.extraIngress "context" $ ) | nindent 2 }}
  {{- end }}
{{- end }}


@@ -0,0 +1,20 @@
{{- if .Values.podDisruptionBudget.enabled }}
---
apiVersion: {{ .Capabilities.APIVersions.Has "policy/v1" | ternary "policy/v1" "policy/v1beta1" }}
kind: PodDisruptionBudget
metadata:
  name: {{ include "nats.fullname" . }}
  namespace: {{ include "nats.namespace" . }}
  labels:
    {{- include "nats.labels" . | nindent 4 }}
spec:
  {{- if .Values.podDisruptionBudget.minAvailable }}
  minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
  {{- end }}
  {{- if .Values.podDisruptionBudget.maxUnavailable }}
  maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "nats.selectorLabels" . | nindent 6 }}
{{- end }}


@@ -0,0 +1,31 @@
{{ if and .Values.nats.externalAccess .Values.nats.advertise }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Values.nats.serviceAccount }}
  namespace: {{ include "nats.namespace" . }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Values.nats.serviceAccount }}
rules:
- apiGroups: [""]
  resources:
  - nodes
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Values.nats.serviceAccount }}-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ .Values.nats.serviceAccount }}
subjects:
- kind: ServiceAccount
  name: {{ .Values.nats.serviceAccount }}
  namespace: {{ include "nats.namespace" . }}
{{ end }}


@@ -0,0 +1,73 @@
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "nats.fullname" . }}
  namespace: {{ include "nats.namespace" . }}
  labels:
    {{- include "nats.labels" . | nindent 4 }}
  {{- if .Values.serviceAnnotations}}
  annotations:
    {{- toYaml .Values.serviceAnnotations | nindent 4 }}
  {{- end }}
spec:
  selector:
    {{- include "nats.selectorLabels" . | nindent 4 }}
  clusterIP: None
  publishNotReadyAddresses: true
  {{- if .Values.topologyKeys }}
  topologyKeys:
    {{- toYaml .Values.topologyKeys | nindent 4 }}
  {{- end }}
  ports:
  {{- if .Values.websocket.enabled }}
  - name: websocket
    port: {{ .Values.websocket.port }}
    {{- if .Values.appProtocol.enabled }}
    appProtocol: tcp
    {{- end }}
  {{- end }}
  {{- if .Values.nats.profiling.enabled }}
  - name: profiling
    port: {{ .Values.nats.profiling.port }}
    {{- if .Values.appProtocol.enabled }}
    appProtocol: http
    {{- end }}
  {{- end }}
  - name: {{ .Values.nats.client.portName }}
    port: {{ .Values.nats.client.port }}
    {{- if .Values.appProtocol.enabled }}
    appProtocol: tcp
    {{- end }}
  - name: cluster
    port: 6222
    {{- if .Values.appProtocol.enabled }}
    appProtocol: tcp
    {{- end }}
  - name: monitor
    port: 8222
    {{- if .Values.appProtocol.enabled }}
    appProtocol: http
    {{- end }}
  - name: metrics
    port: 7777
    {{- if .Values.appProtocol.enabled }}
    appProtocol: http
    {{- end }}
  - name: leafnodes
    port: {{ .Values.leafnodes.port }}
    {{- if .Values.appProtocol.enabled }}
    appProtocol: tcp
    {{- end }}
  - name: gateways
    port: {{ .Values.gateway.port }}
    {{- if .Values.appProtocol.enabled }}
    appProtocol: tcp
    {{- end }}
  {{- if .Values.mqtt.enabled }}
  - name: mqtt
    port: 1883
    {{- if .Values.appProtocol.enabled }}
    appProtocol: tcp
    {{- end }}
  {{- end }}


@@ -0,0 +1,36 @@
{{ if and .Values.exporter.enabled .Values.exporter.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ template "nats.fullname" . }}
  {{- if .Values.exporter.serviceMonitor.namespace }}
  namespace: {{ .Values.exporter.serviceMonitor.namespace }}
  {{- else }}
  namespace: {{ include "nats.namespace" . }}
  {{- end }}
  {{- if .Values.exporter.serviceMonitor.labels }}
  labels:
    {{- toYaml .Values.exporter.serviceMonitor.labels | nindent 4 }}
  {{- end }}
  {{- if .Values.exporter.serviceMonitor.annotations }}
  annotations:
    {{- toYaml .Values.exporter.serviceMonitor.annotations | nindent 4 }}
  {{- end }}
spec:
  endpoints:
  - port: metrics
    {{- if .Values.exporter.serviceMonitor.path }}
    path: {{ .Values.exporter.serviceMonitor.path }}
    {{- end }}
    {{- if .Values.exporter.serviceMonitor.interval }}
    interval: {{ .Values.exporter.serviceMonitor.interval }}
    {{- end }}
    {{- if .Values.exporter.serviceMonitor.scrapeTimeout }}
    scrapeTimeout: {{ .Values.exporter.serviceMonitor.scrapeTimeout }}
    {{- end }}
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      {{- include "nats.selectorLabels" . | nindent 6 }}
{{- end }}


@@ -0,0 +1,633 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "nats.fullname" . }}
namespace: {{ include "nats.namespace" . }}
labels:
{{- include "nats.labels" . | nindent 4 }}
{{- if .Values.statefulSetAnnotations }}
annotations:
{{- toYaml .Values.statefulSetAnnotations | nindent 4 }}
{{- end }}
spec:
selector:
matchLabels:
{{- include "nats.selectorLabels" . | nindent 6 }}
{{- if .Values.cluster.enabled }}
replicas: {{ .Values.cluster.replicas }}
{{- else }}
replicas: 1
{{- end }}
serviceName: {{ include "nats.fullname" . }}
podManagementPolicy: {{ .Values.podManagementPolicy }}
template:
metadata:
{{- if or .Values.exporter.enabled .Values.nats.configChecksumAnnotation .Values.podAnnotations }}
annotations:
{{- if .Values.exporter.enabled }}
prometheus.io/path: /metrics
prometheus.io/port: "7777"
prometheus.io/scrape: "true"
{{- end }}
{{- if .Values.nats.configChecksumAnnotation }}
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- end }}
{{- if .Values.podAnnotations }}
{{- toYaml .Values.podAnnotations | nindent 8 }}
{{- end }}
{{- end }}
labels:
{{- include "nats.selectorLabels" . | nindent 8 }}
{{- if .Values.statefulSetPodLabels }}
{{- tpl (toYaml .Values.statefulSetPodLabels) . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- tpl (toYaml .) $ | nindent 8 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector: {{ toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations: {{ toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.topologySpreadConstraints }}
topologySpreadConstraints:
{{- range .Values.topologySpreadConstraints }}
{{- if and .maxSkew .topologyKey }}
- maxSkew: {{ .maxSkew }}
topologyKey: {{ .topologyKey }}
{{- if .whenUnsatisfiable }}
whenUnsatisfiable: {{ .whenUnsatisfiable }}
{{- end }}
labelSelector:
matchLabels:
{{- include "nats.selectorLabels" $ | nindent 12 }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName | quote }}
{{- end }}
# Common volumes for the containers.
volumes:
- name: config-volume
{{- if .Values.nats.customConfigSecret }}
secret:
secretName: {{ .Values.nats.customConfigSecret.name }}
{{- else }}
configMap:
name: {{ include "nats.fullname" . }}-config
{{- end }}
{{- /* User extended config volumes*/}}
{{- if .Values.nats.config }}
# User extended config volumes
{{- with .Values.nats.config }}
{{- toYaml . | nindent 6 }}
{{- end }}
{{- end }}
# Local volume shared with the reloader.
- name: pid
emptyDir: {}
{{- if and .Values.auth.enabled .Values.auth.resolver }}
{{- if .Values.auth.resolver.configMap }}
- name: resolver-volume
configMap:
name: {{ .Values.auth.resolver.configMap.name }}
{{- end }}
{{- if eq .Values.auth.resolver.type "URL" }}
- name: operator-jwt-volume
configMap:
name: {{ .Values.auth.operatorjwt.configMap.name }}
{{- end }}
{{- end }}
{{- if and .Values.nats.externalAccess .Values.nats.advertise }}
# Local volume shared with the advertise config initializer.
- name: advertiseconfig
emptyDir: {}
{{- end }}
{{- if and .Values.nats.jetstream.enabled .Values.nats.jetstream.fileStorage.enabled .Values.nats.jetstream.fileStorage.existingClaim }}
# Persistent volume for jetstream running with file storage option
- name: {{ include "nats.fullname" . }}-js-pvc
persistentVolumeClaim:
claimName: {{ .Values.nats.jetstream.fileStorage.existingClaim | quote }}
{{- end }}
#################
# #
# TLS Volumes #
# #
#################
{{- with .Values.nats.tls }}
{{ $secretName := tpl .secret.name $ }}
- name: {{ $secretName }}-clients-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.mqtt.tls }}
{{ $secretName := tpl .secret.name $ }}
- name: {{ $secretName }}-mqtt-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.cluster.tls }}
{{ $secretName := tpl .secret.name $ }}
- name: {{ $secretName }}-cluster-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.leafnodes.tls }}
{{- if not .custom }}
{{ $secretName := tpl .secret.name $ }}
- name: {{ $secretName }}-leafnodes-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- end }}
{{- with .Values.gateway.tls }}
{{ $secretName := tpl .secret.name $ }}
- name: {{ $secretName }}-gateways-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.websocket.tls }}
{{ $secretName := tpl .secret.name $ }}
- name: {{ $secretName }}-ws-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- if .Values.leafnodes.enabled }}
#
# Leafnode credential volumes
#
{{- range .Values.leafnodes.remotes }}
{{- with .credentials }}
- name: {{ .secret.name }}-volume
secret:
secretName: {{ .secret.name }}
{{- end }}
{{- with .tls }}
- name: {{ .secret.name }}-volume
secret:
secretName: {{ .secret.name }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.additionalVolumes }}
{{- toYaml .Values.additionalVolumes | nindent 6 }}
{{- end }}
{{- if and .Values.nats.externalAccess .Values.nats.advertise }}
# Assume that we only use the service account in case we want to
# figure out what is the current external public IP from the server
# in order to be able to advertise correctly.
serviceAccountName: {{ .Values.nats.serviceAccount }}
{{- end }}
# Required to be able to HUP signal and apply config
# reload to the server without restarting the pod.
shareProcessNamespace: true
{{- if and .Values.nats.externalAccess .Values.nats.advertise }}
# Initializer container required to be able to lookup
# the external ip on which this node is running.
initContainers:
- name: bootconfig
command:
- nats-pod-bootconfig
- -f
- /etc/nats-config/advertise/client_advertise.conf
- -gf
- /etc/nats-config/advertise/gateway_advertise.conf
env:
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
image: {{ .Values.bootconfig.image }}
imagePullPolicy: {{ .Values.bootconfig.pullPolicy }}
{{- if .Values.bootconfig.securityContext }}
securityContext:
{{- toYaml .Values.bootconfig.securityContext | nindent 10 }}
{{- end }}
resources:
{{- toYaml .Values.bootconfig.resources | nindent 10 }}
volumeMounts:
- mountPath: /etc/nats-config/advertise
name: advertiseconfig
subPath: advertise
{{- end }}
#################
# #
# NATS Server #
# #
#################
terminationGracePeriodSeconds: {{ .Values.nats.terminationGracePeriodSeconds }}
containers:
- name: nats
image: {{ .Values.nats.image }}
imagePullPolicy: {{ .Values.nats.pullPolicy }}
{{- if .Values.nats.securityContext }}
securityContext:
{{- toYaml .Values.nats.securityContext | nindent 10 }}
{{- end }}
resources:
{{- toYaml .Values.nats.resources | nindent 10 }}
ports:
- containerPort: {{ .Values.nats.client.port }}
name: {{ .Values.nats.client.portName }}
{{- if .Values.nats.externalAccess }}
hostPort: {{ .Values.nats.client.port }}
{{- end }}
{{- if .Values.leafnodes.enabled }}
- containerPort: {{ .Values.leafnodes.port }}
name: leafnodes
{{- if .Values.nats.externalAccess }}
hostPort: {{ .Values.leafnodes.port }}
{{- end }}
{{- end }}
{{- if .Values.gateway.enabled }}
- containerPort: {{ .Values.gateway.port }}
name: gateways
{{- if .Values.nats.externalAccess }}
hostPort: {{ .Values.gateway.port }}
{{- end }}
{{- end }}
- containerPort: 6222
name: cluster
- containerPort: 8222
name: monitor
- containerPort: 7777
name: metrics
{{- if .Values.mqtt.enabled }}
- containerPort: 1883
name: mqtt
{{- if .Values.nats.externalAccess }}
hostPort: 1883
{{- end }}
{{- end }}
{{- if .Values.websocket.enabled }}
- containerPort: {{ .Values.websocket.port }}
name: websocket
{{- if .Values.nats.externalAccess }}
hostPort: {{ .Values.websocket.port }}
{{- end }}
{{- end }}
{{- if .Values.nats.profiling.enabled }}
- containerPort: {{ .Values.nats.profiling.port }}
name: profiling
{{- end }}
command:
- "nats-server"
- "--config"
- "/etc/nats-config/nats.conf"
{{- if .Values.nats.profiling.enabled }}
- "--profile={{ .Values.nats.profiling.port }}"
{{- end }}
# Required to be able to define an environment variable
# that refers to other environment variables. This env var
# is later used as part of the configuration file.
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: SERVER_NAME
value: {{ .Values.nats.serverNamePrefix }}$(POD_NAME)
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: CLUSTER_ADVERTISE
value: {{ include "nats.clusterAdvertise" . }}
{{- if .Values.nats.jetstream.enabled }}
{{- with .Values.nats.jetstream.encryption }}
{{- with .secret }}
- name: JS_KEY
valueFrom:
secretKeyRef:
name: {{ .name }}
key: {{ .key }}
{{- end }}
{{- end }}
{{- end }}
volumeMounts:
- name: config-volume
mountPath: /etc/nats-config
- name: pid
mountPath: /var/run/nats
{{- if and .Values.nats.externalAccess .Values.nats.advertise }}
- mountPath: /etc/nats-config/advertise
name: advertiseconfig
subPath: advertise
{{- end }}
{{- /* User extended config volumes*/}}
{{- range .Values.nats.config }}
# User extended config volumes
- name: {{ .name }}
mountPath: /etc/nats-config/{{ .name }}
{{- end }}
{{- if and .Values.auth.enabled .Values.auth.resolver }}
{{- if eq .Values.auth.resolver.type "memory" }}
- name: resolver-volume
mountPath: /etc/nats-config/accounts
{{- end }}
{{- if eq .Values.auth.resolver.type "full" }}
{{- if .Values.auth.resolver.configMap }}
- name: resolver-volume
mountPath: /etc/nats-config/accounts
{{- end }}
{{- if and .Values.auth.resolver .Values.auth.resolver.store }}
- name: nats-jwt-pvc
mountPath: {{ .Values.auth.resolver.store.dir }}
{{- end }}
{{- end }}
{{- if eq .Values.auth.resolver.type "URL" }}
- name: operator-jwt-volume
mountPath: /etc/nats-config/operator
{{- end }}
{{- end }}
{{- if and .Values.nats.jetstream.enabled .Values.nats.jetstream.fileStorage.enabled }}
- name: {{ include "nats.fullname" . }}-js-pvc
mountPath: {{ .Values.nats.jetstream.fileStorage.storageDirectory }}
{{- end }}
{{- with .Values.nats.tls }}
#######################
# #
# TLS Volumes Mounts #
# #
#######################
{{ $secretName := tpl .secret.name $ }}
- name: {{ $secretName }}-clients-volume
mountPath: /etc/nats-certs/clients/{{ $secretName }}
{{- end }}
{{- with .Values.mqtt.tls }}
{{ $secretName := tpl .secret.name $ }}
- name: {{ $secretName }}-mqtt-volume
mountPath: /etc/nats-certs/mqtt/{{ $secretName }}
{{- end }}
{{- with .Values.cluster.tls }}
{{- if not .custom }}
{{ $secretName := tpl .secret.name $ }}
- name: {{ $secretName }}-cluster-volume
mountPath: /etc/nats-certs/cluster/{{ $secretName }}
{{- end }}
{{- end }}
{{- with .Values.leafnodes.tls }}
{{- if not .custom }}
{{ $secretName := tpl .secret.name $ }}
- name: {{ $secretName }}-leafnodes-volume
mountPath: /etc/nats-certs/leafnodes/{{ $secretName }}
{{- end }}
{{- end }}
{{- with .Values.gateway.tls }}
{{ $secretName := tpl .secret.name $ }}
- name: {{ $secretName }}-gateways-volume
mountPath: /etc/nats-certs/gateways/{{ $secretName }}
{{- end }}
{{- with .Values.websocket.tls }}
{{ $secretName := tpl .secret.name $ }}
- name: {{ $secretName }}-ws-volume
mountPath: /etc/nats-certs/ws/{{ $secretName }}
{{- end }}
{{- if .Values.leafnodes.enabled }}
#
# Leafnode credential volumes
#
{{- range .Values.leafnodes.remotes }}
{{- with .credentials }}
- name: {{ .secret.name }}-volume
mountPath: /etc/nats-creds/{{ .secret.name }}
{{- end }}
{{- with .tls }}
- name: {{ .secret.name }}-volume
mountPath: /etc/nats-certs/leafnodes/{{ .secret.name }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.additionalVolumeMounts }}
{{- toYaml .Values.additionalVolumeMounts | nindent 8 }}
{{- end }}
#######################
# #
# Healthcheck Probes #
# #
#######################
{{- if .Values.nats.healthcheck }}
{{- with .Values.nats.healthcheck.liveness }}
{{- if .enabled }}
livenessProbe:
httpGet:
path: /
port: 8222
initialDelaySeconds: {{ .initialDelaySeconds }}
timeoutSeconds: {{ .timeoutSeconds }}
periodSeconds: {{ .periodSeconds }}
successThreshold: {{ .successThreshold }}
failureThreshold: {{ .failureThreshold }}
{{- if .terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ .terminationGracePeriodSeconds }}
{{- end }}
{{- end }}
{{- end }}
{{- with .Values.nats.healthcheck.readiness }}
{{- if .enabled }}
readinessProbe:
httpGet:
path: /
port: 8222
initialDelaySeconds: {{ .initialDelaySeconds }}
timeoutSeconds: {{ .timeoutSeconds }}
periodSeconds: {{ .periodSeconds }}
successThreshold: {{ .successThreshold }}
failureThreshold: {{ .failureThreshold }}
{{- end }}
{{- end }}
{{- if .Values.nats.healthcheck.startup.enabled }}
startupProbe:
httpGet:
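            {{- /* Derive a plain semver from the image tag so the chart can tell whether the /healthz endpoint (added in NATS Server 2.7.1) is available. */}}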
{{- $parts := split ":" .Values.nats.image }}
{{- $simpleVersion := $parts._1 | default "latest" | regexFind "\\d+(\\.\\d+)?(\\.\\d+)?" | default "2.7.1" }}
{{- if and .Values.nats.healthcheck.enableHealthz (or (not .Values.nats.healthcheck.detectHealthz) (semverCompare ">=2.7.1" $simpleVersion)) }}
# for NATS server versions >=2.7.1, healthz will be enabled to allow for a grace period
# in case of JetStream enabled deployments to form quorum and streams to catch up.
path: /healthz
{{- else }}
path: /
{{- end }}
port: 8222
{{- with .Values.nats.healthcheck.startup }}
initialDelaySeconds: {{ .initialDelaySeconds }}
timeoutSeconds: {{ .timeoutSeconds }}
periodSeconds: {{ .periodSeconds }}
successThreshold: {{ .successThreshold }}
failureThreshold: {{ .failureThreshold }}
{{- end }}
{{- end }}
{{- end }}
# Gracefully stop NATS Server on pod deletion or image upgrade.
#
lifecycle:
preStop:
exec:
# send the lame duck shutdown signal to trigger a graceful shutdown
# nats-server will ignore the TERM signal it receives after this
#
command:
- "nats-server"
- "-sl=ldm=/var/run/nats/nats.pid"
#################################
# #
# NATS Configuration Reloader #
# #
#################################
{{- if .Values.reloader.enabled }}
- name: reloader
image: {{ .Values.reloader.image }}
imagePullPolicy: {{ .Values.reloader.pullPolicy }}
{{- if .Values.reloader.securityContext }}
securityContext:
{{- toYaml .Values.reloader.securityContext | nindent 10 }}
{{- end }}
resources:
{{- toYaml .Values.reloader.resources | nindent 10 }}
command:
- "nats-server-config-reloader"
- "-pid"
- "/var/run/nats/nats.pid"
- "-config"
- "/etc/nats-config/nats.conf"
{{- range .Values.reloader.extraConfigs }}
- "-config"
- {{ . | quote }}
{{- end }}
volumeMounts:
- name: config-volume
mountPath: /etc/nats-config
- name: pid
mountPath: /var/run/nats
{{- if .Values.additionalVolumeMounts }}
{{- toYaml .Values.additionalVolumeMounts | nindent 8 }}
{{- end }}
{{- end }}
##############################
# #
# NATS Prometheus Exporter #
# #
##############################
{{- if .Values.exporter.enabled }}
- name: metrics
image: {{ .Values.exporter.image }}
imagePullPolicy: {{ .Values.exporter.pullPolicy }}
{{- if .Values.exporter.securityContext }}
securityContext:
{{- toYaml .Values.exporter.securityContext | nindent 10 }}
{{- end }}
resources:
{{- toYaml .Values.exporter.resources | nindent 10 }}
args:
- -connz
- -routez
- -subz
- -varz
- -prefix=nats
- -use_internal_server_id
{{- if .Values.nats.jetstream.enabled }}
- -jsz=all
{{- end }}
{{- if .Values.leafnodes.enabled }}
- -leafz
{{- end }}
- http://localhost:8222/
ports:
- containerPort: 7777
name: metrics
{{- end }}
{{- if .Values.additionalContainers }}
{{- toYaml .Values.additionalContainers | nindent 6 }}
{{- end }}
volumeClaimTemplates:
{{- if eq .Values.auth.resolver.type "full" }}
{{- if and .Values.auth.resolver .Values.auth.resolver.store }}
#####################################
# #
# Account Server Embedded JWT #
# #
#####################################
- metadata:
name: nats-jwt-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.auth.resolver.store.size }}
{{- end }}
{{- end }}
{{- if and .Values.nats.jetstream.enabled .Values.nats.jetstream.fileStorage.enabled (not .Values.nats.jetstream.fileStorage.existingClaim) }}
#####################################
# #
# Jetstream New Persistent Volume #
# #
#####################################
- metadata:
name: {{ include "nats.fullname" . }}-js-pvc
{{- if .Values.nats.jetstream.fileStorage.annotations }}
annotations:
{{- toYaml .Values.nats.jetstream.fileStorage.annotations | nindent 10 }}
{{- end }}
spec:
accessModes:
{{- toYaml .Values.nats.jetstream.fileStorage.accessModes | nindent 10 }}
resources:
requests:
storage: {{ .Values.nats.jetstream.fileStorage.size }}
{{- if .Values.nats.jetstream.fileStorage.storageClassName }}
storageClassName: {{ .Values.nats.jetstream.fileStorage.storageClassName | quote }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,30 @@
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "nats.fullname" . }}-test-request-reply"
labels:
{{- include "nats.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test
spec:
containers:
- name: nats-box
image: synadia/nats-box
env:
- name: NATS_HOST
value: {{ template "nats.fullname" . }}
      command:
        - /bin/sh
        - -ec
        # With `sh -ec`, only the first argument after -ec is executed as the
        # script (any further list items become positional parameters and are
        # never run), so the whole round-trip must live in one script block.
        - |
          # Start a responder that replies with a fixed body.
          nats reply -s nats://$NATS_HOST:{{ .Values.nats.client.port }} 'name.>' 'test' &
          # Give the responder a moment to subscribe before sending the request.
          sleep 1
          name=$(nats request -s nats://$NATS_HOST:{{ .Values.nats.client.port }} name.test '' 2>/dev/null)
          [ "$name" = test ]
restartPolicy: Never

View File

@@ -0,0 +1,685 @@
###############################
# #
# NATS Server Configuration #
# #
###############################
nats:
image: nats:2.7.4-alpine
pullPolicy: IfNotPresent
  # The server name prefix; needed, for example, when a NATS cluster
  # spans multiple Kubernetes clusters.
serverNamePrefix: ""
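  # e.g. to disambiguate servers across two clusters (illustrative value):
  # serverNamePrefix: "east-"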
# Toggle profiling.
  # This enables the nats-server pprof (profiling) port, so you can inspect
  # goroutine stacks, memory heap sizes, etc.
profiling:
enabled: false
port: 6000
# Toggle using health check probes to better detect failures.
healthcheck:
# /healthz health check endpoint was introduced in NATS Server 2.7.1
    # Attempt to detect /healthz support by checking whether the image tag is >=2.7.1
detectHealthz: true
# Enable /healthz startupProbe for controlled upgrades of NATS JetStream
enableHealthz: true
    # Enable liveness checks. If this fails, then the NATS Server will be restarted.
liveness:
enabled: true
initialDelaySeconds: 10
timeoutSeconds: 5
      # NOTE: liveness check + terminationGracePeriodSeconds can introduce unnecessarily long outages
      # due to the coupling between the liveness probe and terminationGracePeriodSeconds.
      # To avoid this, we set the periodSeconds of the liveness check to about half the default
      # time that it takes for a lame duck graceful stop.
      #
      # When using Kubernetes 1.22+ with probe-level terminationGracePeriodSeconds
      # we could revise this, but for now keep a minimal liveness check.
#
# More info:
#
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#probe-level-terminationgraceperiodseconds
# https://github.com/kubernetes/kubernetes/issues/64715
#
periodSeconds: 30
successThreshold: 1
failureThreshold: 3
# Only for Kubernetes +1.22 that have pod level probes enabled.
terminationGracePeriodSeconds:
    # Periodically check that the server is ready for connections while
    # the NATS container is running.
    # Disabled by default since it is covered by the startup probe and is
    # identical to the liveness check.
readiness:
enabled: false
initialDelaySeconds: 10
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
    # Enable startup checks to confirm the server is ready for traffic.
    # This is recommended for JetStream deployments, since in cluster mode
    # it helps ensure that the server is ready to serve streams.
startup:
enabled: true
initialDelaySeconds: 10
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 30
# Adds a hash of the ConfigMap as a pod annotation
# This will cause the StatefulSet to roll when the ConfigMap is updated
configChecksumAnnotation: true
# securityContext for the nats container
securityContext: {}
# Toggle whether to enable external access.
# This binds a host port for clients, gateways and leafnodes.
externalAccess: false
  # Toggle to disable client advertisements (connect_urls).
  # When running behind a load balancer (which is not recommended),
  # it might be required to disable advertisements.
advertise: true
  # If both external access and advertise are enabled,
  # then a service account is required to be able to
  # gather the public IP from a node.
serviceAccount: "nats-server"
# The number of connect attempts against discovered routes.
connectRetries: 120
# selector matchLabels for the server and service.
# If left empty defaults are used.
# This is helpful if you are updating from Chart version <=7.4
selectorLabels: {}
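  # e.g. to preserve the selector of an older release (illustrative label):
  # selectorLabels:
  #   app: my-nats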
resources: {}
client:
port: 4222
portName: "client"
# Server settings.
limits:
maxConnections:
maxSubscriptions:
maxControlLine:
maxPayload:
writeDeadline:
maxPending:
maxPings:
# How many seconds should pass before sending a PING
# to a client that has no activity.
pingInterval:
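  # e.g. send a PING every two minutes (illustrative duration):
  # pingInterval: "2m"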
# grace period after pod begins shutdown before starting to close client connections
lameDuckGracePeriod: "10s"
  # duration over which to slowly close client connections after lameDuckGracePeriod has passed
lameDuckDuration: "30s"
# terminationGracePeriodSeconds determines how long to wait for graceful shutdown
# this should be at least `lameDuckGracePeriod` + `lameDuckDuration` + 20s shutdown overhead
terminationGracePeriodSeconds: 60
logging:
debug:
trace:
logtime:
connectErrorReports:
reconnectErrorReports:
  # customConfigSecret can be used to provide a custom secret for the config
  # of the NATS Server.
# NOTE: For this to work the name of the configuration has to be
# called `nats.conf`.
#
# e.g. kubectl create secret generic custom-nats-conf --from-file nats.conf
#
# customConfigSecret:
# name:
#
  # Alternatively, the generated config can be extended with extra imports using the below syntax.
# The benefit of this is that cluster settings can be built up via helm values, but external
# secrets can be referenced and imported alongside it.
#
# config:
# <name-of-config-item>:
# <configMap|secret>
# name: "<configMap|secret name>"
#
# e.g:
#
# config:
# - name: ssh-key
# secret:
# secretName: ssh-key
# - name: config-vol
# configMap:
# name: log-config
jetstream:
enabled: false
# Jetstream Domain
domain:
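    # e.g. for a hub/spoke topology with leafnodes (illustrative value):
    # domain: hub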
##########################
# #
# Jetstream Encryption #
# #
##########################
encryption:
# Use key if you want to provide the key via Helm Values
# key: random_key
# Use a secret reference if you want to get a key from a secret
# secret:
# name: "nats-jetstream-encryption"
# key: "key"
#############################
# #
# Jetstream Memory Storage #
# #
#############################
memStorage:
enabled: true
size: 1Gi
############################
# #
# Jetstream File Storage #
# #
############################
fileStorage:
enabled: true
storageDirectory: /data
# Set for use with existing PVC
# existingClaim: jetstream-pvc
# claimStorageSize: 10Gi
# Use below block to create new persistent volume
# only used if existingClaim is not specified
size: 10Gi
# storageClassName: ""
accessModes:
- ReadWriteOnce
annotations:
# key: "value"
#######################
# #
# TLS Configuration #
# #
#######################
#
  # # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
# tls:
# allow_non_tls: false
# secret:
# name: nats-client-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
mqtt:
enabled: false
ackWait: 1m
maxAckPending: 100
#######################
# #
# TLS Configuration #
# #
#######################
#
  # # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
#
# tls:
# secret:
# name: nats-mqtt-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
nameOverride: ""
namespaceOverride: ""
# An array of imagePullSecrets; they have to be created manually in the same namespace
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# Toggle whether to set up a Pod Security Context
# ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext: {}
# securityContext:
# fsGroup: 1000
# runAsUser: 1000
# runAsNonRoot: true
# Affinity for pod assignment
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
## Pod priority class name
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: null
# Service topology
# ref: https://kubernetes.io/docs/concepts/services-networking/service-topology/
topologyKeys: []
# Pod Topology Spread Constraints
# ref https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
topologySpreadConstraints: []
# - maxSkew: 1
# topologyKey: zone
# whenUnsatisfiable: DoNotSchedule
# Annotations to add to the NATS pods
# ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
podAnnotations: {}
# key: "value"
# Define a Pod Disruption Budget for the stateful set
# ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
podDisruptionBudget:
enabled: true
maxUnavailable: 1
# minAvailable: 1
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
# Node tolerations for server scheduling to nodes with taints
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
#
tolerations: []
# - key: "key"
# operator: "Equal|Exists"
# value: "value"
# effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"
# Annotations to add to the NATS StatefulSet
statefulSetAnnotations: {}
# Labels to add to the pods of the NATS StatefulSet
statefulSetPodLabels: {}
# Annotations to add to the NATS Service
serviceAnnotations: {}
# additionalContainers are the sidecar containers to add to the NATS StatefulSet
additionalContainers: []
# additionalVolumes are the additional volumes to add to the NATS StatefulSet
additionalVolumes: []
# additionalVolumeMounts are the additional volume mounts to add to the nats-server and nats-server-config-reloader containers
additionalVolumeMounts: []
cluster:
enabled: false
replicas: 3
noAdvertise: false
# Explicitly set routes for clustering.
# When JetStream is enabled, the serverName must be unique in the cluster.
extraRoutes: []
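  # e.g. routes to servers outside this StatefulSet (illustrative URL):
  # extraRoutes:
  #   - "nats://nats-0.nats.other-cluster.svc:6222"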
# authorization:
# user: foo
# password: pwd
# timeout: 0.5
# Leafnode connections to extend a cluster:
#
# https://docs.nats.io/nats-server/configuration/leafnodes
#
leafnodes:
enabled: false
port: 7422
noAdvertise: false
# remotes:
# - url: "tls://connect.ngs.global:7422"
#######################
# #
# TLS Configuration #
# #
#######################
#
  # # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
#
# tls:
# secret:
# name: nats-client-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
# Gateway connections to create a super cluster
#
# https://docs.nats.io/nats-server/configuration/gateways
#
gateway:
enabled: false
port: 7522
name: "default"
# authorization:
# user: foo
# password: pwd
# timeout: 0.5
# rejectUnknownCluster: false
  # You can set an explicit advertise address instead of using the Node's IP;
  # it could also be an FQDN address.
# advertise: "nats.example.com"
#############################
# #
# List of remote gateways #
# #
#############################
# gateways:
# - name: other
# url: nats://my-gateway-url:7522
#######################
# #
# TLS Configuration #
# #
#######################
#
  # # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
# tls:
# secret:
# name: nats-client-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
# In case both external access and advertisements are
# enabled, an initializer container will be used to gather
# the public IPs.
bootconfig:
image: natsio/nats-boot-config:0.5.4
pullPolicy: IfNotPresent
securityContext: {}
# NATS Box
#
# https://github.com/nats-io/nats-box
#
natsbox:
enabled: true
image: natsio/nats-box:0.8.1
pullPolicy: IfNotPresent
securityContext: {}
# Labels to add to the natsbox deployment
# ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
additionalLabels: {}
  # An array of imagePullSecrets; they have to be created manually in the same namespace
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# - name: dockerhub
# credentials:
# secret:
# name: nats-sys-creds
# key: sys.creds
# Annotations to add to the box pods
# ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
podAnnotations: {}
# key: "value"
# Labels to add to the box pods
# ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
podLabels: {}
# key: "value"
# Affinity for nats box pod assignment
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
# Node tolerations for server scheduling to nodes with taints
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
#
tolerations: []
# - key: "key"
# operator: "Equal|Exists"
# value: "value"
# effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"
# Additional nats-box server Volume mounts
extraVolumeMounts: []
# Additional nats-box server Volumes
extraVolumes: []
# The NATS config reloader image to use.
reloader:
enabled: true
image: natsio/nats-server-config-reloader:0.6.3
pullPolicy: IfNotPresent
securityContext: {}
extraConfigs: []
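  # e.g. also watch extra imported config files (illustrative path,
  # matching the memory-resolver mount used elsewhere in this chart):
  # extraConfigs:
  #   - /etc/nats-config/accounts/resolver.conf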
# Prometheus NATS Exporter configuration.
exporter:
enabled: true
image: natsio/prometheus-nats-exporter:0.9.1
pullPolicy: IfNotPresent
securityContext: {}
resources: {}
# Prometheus operator ServiceMonitor support. Exporter has to be enabled
serviceMonitor:
enabled: false
## Specify the namespace where Prometheus Operator is running
##
# namespace: monitoring
labels: {}
annotations: {}
path: /metrics
# interval:
# scrapeTimeout:
# Authentication setup
auth:
enabled: false
# basic:
# noAuthUser:
# # List of users that can connect with basic auth,
# # that belong to the global account.
# users:
# # List of accounts with users that can connect
# # using basic auth.
# accounts:
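  # e.g. a minimal sketch of basic auth with two global-account users
  # (illustrative names and passwords):
  # basic:
  #   users:
  #     - user: admin
  #       password: changeme
  #     - user: client
  #       password: changeme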
# Reference to the Operator JWT.
# operatorjwt:
# configMap:
# name: operator-jwt
# key: KO.jwt
# Token authentication
# token:
# NKey authentication
# nkeys:
# users:
# Public key of the System Account
# systemAccount:
resolver:
# Disables the resolver by default
type: none
##########################################
# #
# Embedded NATS Account Server Resolver #
# #
##########################################
# type: full
  # If the resolver type is 'full', enabling delete will rename (rather than remove) the JWT.
allowDelete: false
  # Interval at which a nats-server with a NATS-based account resolver will compare
  # its state with one random NATS-based account resolver in the cluster and, if needed,
  # exchange JWTs to converge on the same set.
interval: 2m
# Operator JWT
operator:
# System Account Public NKEY
systemAccount:
# resolverPreload:
# <ACCOUNT>: <JWT>
# Directory in which the account JWTs will be stored.
store:
dir: "/accounts/jwt"
# Size of the account JWT storage.
size: 1Gi
##############################
# #
# Memory resolver settings #
# #
##############################
# type: memory
#
# Use a configmap reference which will be mounted
# into the container.
#
# configMap:
# name: nats-accounts
# key: resolver.conf
##########################
# #
# URL resolver settings #
# #
##########################
# type: URL
# url: "http://nats-account-server:9090/jwt/v1/accounts/"
websocket:
enabled: false
port: 443
noTLS: true
sameOrigin: false
allowedOrigins: []
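  # e.g. restrict which browser origins may connect (illustrative origin):
  # allowedOrigins:
  #   - https://app.example.com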
  # Optionally specify the host:port to advertise to the cluster
  # for websocket connections.
# advertise: "host:port"
appProtocol:
enabled: false
# Network Policy configuration
networkPolicy:
enabled: false
# Don't require client label for connections
# When set to false, only pods with the correct client label will have network access to the ports
# NATS is listening on. When true, NATS will accept connections from any source
# (with the correct destination port).
allowExternal: true
# Add extra ingress rules to the NetworkPolicy
# e.g:
# extraIngress:
# - ports:
# - port: 1234
# from:
# - podSelector:
# - matchLabels:
# - role: frontend
# - podSelector:
# - matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
extraIngress: []
  # Add extra egress rules to the NetworkPolicy
# e.g:
# extraEgress:
# - ports:
# - port: 1234
# to:
# - podSelector:
# - matchLabels:
# - role: frontend
# - podSelector:
# - matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
extraEgress: []
# Labels to match to allow traffic from other namespaces
ingressNSMatchLabels: {}
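  # e.g. allow ingress from a monitoring namespace (illustrative label):
  # ingressNSMatchLabels:
  #   kubernetes.io/metadata.name: monitoring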
# Pod labels to match to allow traffic from other namespaces
ingressNSPodMatchLabels: {}
# Cluster Domain configured on the kubelets
# https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
k8sClusterDomain: cluster.local
# Define if NATS is using FQDN name for clustering (i.e. nats-0.nats.default.svc.cluster.local) or short name (i.e. nats-0.nats.default).
useFQDN: true
# Add labels to all the deployed resources
commonLabels: {}
# podManagementPolicy controls how pods are created during initial scale up,
# when replacing pods on nodes, or when scaling down.
podManagementPolicy: Parallel