argo

Chris Grimmett 2024-04-22 00:14:48 +00:00
parent 61d9010138
commit cb8965f69d
29 changed files with 550 additions and 282 deletions


@ -16,3 +16,4 @@ app.json
compose/
docker-compose.*
.vscode
+charts/**/charts

.gitignore

@ -1,3 +1,4 @@
+charts/**/charts
.envrc
compose/
.env


@ -5,11 +5,19 @@ dev: minikube secrets tilt
all: bootstrap secrets helmsman
bootstrap:
	kubectl --kubeconfig /home/chris/.kube/vke.yaml apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.crds.yaml
	kubectl create namespace argocd
	kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
crds:
	kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.crds.yaml
cert-manager:
	kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml
argocd:
namespaces:
	kubectl create namespace cert-manager
	kubectl create namespace windmill
helmsman:
	helmsman --apply -f ./helmsman.yaml

README.md

@ -1,126 +1,17 @@
-[![GitHub version](https://d25lcipzij17d.cloudfront.net/badge.svg?id=gh&type=6&v=v3.17.0&x2=0)](https://github.com/Praqma/helmsman/releases) [![CircleCI](https://circleci.com/gh/Praqma/helmsman/tree/master.svg?style=svg)](https://circleci.com/gh/Praqma/helmsman/tree/master)
+# futureporn.net
-![helmsman-logo](docs/images/helmsman.png)
+See ./ARCHITECTURE.md for overview
-> Helmsman v3.0.0 works only with Helm versions >=3.0.0. For older Helm versions, use Helmsman v1.x
+## Development
-# What is Helmsman?
+make minikube
+make tilt
-Helmsman is a Helm Charts (k8s applications) as Code tool which allows you to automate the deployment/management of your Helm charts from version controlled code.
+## Deploying
-# How does it work?
+Stand up a kubernetes cluster.
-Helmsman uses a simple declarative [TOML](https://github.com/toml-lang/toml) file to allow you to describe a desired state for your k8s applications as in the [example toml file](https://github.com/Praqma/helmsman/blob/master/examples/example.toml).
-Alternatively YAML declaration is also acceptable [example yaml file](https://github.com/Praqma/helmsman/blob/master/examples/example.yaml).
+make crds
+make argocd
-The desired state file (DSF) follows the [desired state specification](https://github.com/Praqma/helmsman/blob/master/docs/desired_state_specification.md).
-Helmsman sees what you desire, validates that your desire makes sense (e.g. that the charts you desire are available in the repos you defined), compares it with the current state of Helm and figures out what to do to make your desire come true.
-To plan without executing:
-```sh
-helmsman -f example.toml
-```
-To plan and execute the plan:
-```sh
-helmsman --apply -f example.toml
-```
-To show debugging details:
-```sh
-helmsman --debug --apply -f example.toml
-```
-To run a dry-run:
-```sh
-helmsman --debug --dry-run -f example.toml
-```
-To limit execution to specific application:
-```sh
-helmsman --debug --dry-run --target artifactory -f example.toml
-```
-# Features
-- **Built for CD**: Helmsman can be used as a docker image or a binary.
-- **Applications as code**: describe your desired applications and manage them from a single version-controlled declarative file.
-- **Suitable for Multitenant Clusters**: deploy Tiller in different namespaces with service accounts and TLS (versions 1.x).
-- **Easy to use**: deep knowledge of Helm CLI and Kubectl is NOT mandatory to use Helmsman.
-- **Plan, View, apply**: you can run Helmsman to generate and view a plan with/without executing it.
-- **Portable**: Helmsman can be used to manage charts deployments on any k8s cluster.
-- **Protect Namespaces/Releases**: you can define certain namespaces/releases to be protected against accidental human mistakes.
-- **Define the order of managing releases**: you can define the priorities at which releases are managed by helmsman (useful for dependencies).
-- **Parallelise**: Releases with the same priority can be executed in parallel.
-- **Idempotency**: As long your desired state file does not change, you can execute Helmsman several times and get the same result.
-- **Continue from failures**: In the case of partial deployment due to a specific chart deployment failure, fix your helm chart and execute Helmsman again without needing to rollback the partial successes first.
-# Install
-## From binary
-Please make sure the following are installed prior to using `helmsman` as a binary (the docker image contains all of them):
-- [kubectl](https://github.com/kubernetes/kubectl)
-- [helm](https://github.com/helm/helm) (helm >=v2.10.0 for `helmsman` >= 1.6.0, helm >=v3.0.0 for `helmsman` >=v3.0.0)
-- [helm-diff](https://github.com/databus23/helm-diff) (`helmsman` >= 1.6.0)
-If you use private helm repos, you will need either `helm-gcs` or `helm-s3` plugin or you can use basic auth to authenticate to your repos. See the [docs](https://github.com/Praqma/helmsman/blob/master/docs/how_to/helm_repos) for details.
-Check the [releases page](https://github.com/Praqma/Helmsman/releases) for the different versions.
-```sh
-# on Linux
-curl -L https://github.com/Praqma/helmsman/releases/download/v3.11.0/helmsman_3.11.0_linux_amd64.tar.gz | tar zx
-# on MacOS
-curl -L https://github.com/Praqma/helmsman/releases/download/v3.11.0/helmsman_3.11.0_darwin_amd64.tar.gz | tar zx
-mv helmsman /usr/local/bin/helmsman
-```
-## As a docker image
-Check the images on [dockerhub](https://hub.docker.com/r/praqma/helmsman/tags/)
-## As a package
-Helmsman has been packaged in Archlinux under `helmsman-bin` for the latest binary release, and `helmsman-git` for master.
-You can also install Helmsman using [Homebrew](https://brew.sh)
-```sh
-brew install helmsman
-```
-## As an [asdf-vm](https://asdf-vm.com/) plugin
-```sh
-asdf plugin-add helmsman
-asdf install helmsman latest
-```
-# Documentation
-> Documentation for Helmsman v1.x can be found at: [docs v1.x](https://github.com/Praqma/helmsman/tree/1.x/docs)
-- [How-Tos](https://github.com/Praqma/helmsman/blob/master/docs/how_to/).
-- [Desired state specification](https://github.com/Praqma/helmsman/blob/master/docs/desired_state_specification.md).
-- [CMD reference](https://github.com/Praqma/helmsman/blob/master/docs/cmd_reference.md)
-## Usage
-Helmsman can be used in three different settings:
-- [As a binary with a hosted cluster](https://github.com/Praqma/helmsman/blob/master/docs/how_to/settings).
-- [As a docker image in a CI system or local machine](https://github.com/Praqma/helmsman/blob/master/docs/how_to/deployments/ci.md) Always use a tagged docker image from [dockerhub](https://hub.docker.com/r/praqma/helmsman/) as the `latest` image can (at times) be unstable.
-- [As a docker image inside a k8s cluster](https://github.com/Praqma/helmsman/blob/master/docs/how_to/deployments/inside_k8s.md)
-# Contributing
-Pull requests, feedback/feature requests are welcome. Please check our [contribution guide](CONTRIBUTION.md).
+Argo CD will watch the git repo for changes and deploy helm charts as necessary.
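As an illustration of that watch-and-deploy flow, Argo CD reconciles `Application` resources against the repo. A minimal sketch of one (the app name, repo URL, and path are assumptions for illustration, not taken from this commit):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: fp                    # hypothetical app name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/futureporn/fp.git  # assumed repo URL
    targetRevision: HEAD
    path: charts/fp                                     # assumed chart path
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}             # sync automatically when the repo changes
```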


@ -1,97 +0,0 @@
# start Tilt with no enabled resources
config.clear_enabled_resources()

k8s_yaml(helm(
  './charts/fp',
  values=['./charts/fp/values-dev.yaml'],
))

docker_build('fp/link2cid', './packages/link2cid')

docker_build(
  'fp/strapi',
  '.',
  dockerfile='strapi.dockerfile',
  target='release',
  live_update=[
    sync('./packages/strapi', '/app')
  ]
)

load('ext://uibutton', 'cmd_button')
cmd_button('postgres:restore',
  argv=['sh', '-c', 'cd letters && yarn install'],
  resource='postgres',
  icon_name='cloud_download',
  text='restore db from backup',
)

## Uncomment the following for fp/next in dev mode
## this is useful for changing the UI and seeing results
docker_build(
  'fp/next',
  '.',
  dockerfile='next.dockerfile',
  target='dev',
  live_update=[
    sync('./packages/next', '/app')
  ]
)

## Uncomment the following for fp/next in production mode
## this is useful to test how fp/next will behave in production environment
## note: there is no live_update here. expect slow rebuilds in response to code changes
# docker_build('fp/next', '.', dockerfile='next.dockerfile')

k8s_resource(
  workload='link2cid',
  port_forwards=3939,
  links=[
    link('http://localhost:3939/health', 'link2cid Health')
  ]
)
k8s_resource(
  workload='ipfs-pod',
  port_forwards=['5001'],
  links=[
    link('http://localhost:5001/webui', 'IPFS Web UI')
  ]
)
k8s_resource(
  workload='next-pod',
  port_forwards=['3000'],
)
k8s_resource(
  workload='strapi-pod',
  port_forwards=['1337'],
  links=[
    link('http://localhost:1337/admin', 'Strapi Admin UI')
  ]
)
k8s_resource(
  workload='postgres-pod',
)
k8s_resource(
  workload='pgadmin-pod',
  port_forwards=['5050']
)

# v1alpha1.extension_repo(name='default', url='https://github.com/tilt-dev/tilt-extensions')
# v1alpha1.extension(name='ngrok', repo_name='default', repo_path='ngrok')
# settings = read_json('tilt_option.json', default={})
# default_registry(settings.get('default_registry', 'sjc.vultrcr.com/fpcontainers'))

config.set_enabled_resources([
  'pgadmin-pod',
  'postgres-pod',
  'strapi-pod',
  'next-pod',
  'ipfs-pod',
  'link2cid',
])


@ -0,0 +1,3 @@
apiVersion: v2
name: argodeps
version: "1.0.0"


@ -0,0 +1,28 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # If you encounter a redirect loop or are getting a 307 response code
    # then you need to force the nginx ingress to connect to the backend using HTTPS.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: argo.sbtp.xyz
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argo-argocd-server
            port:
              name: https
  tls:
  - hosts:
    - argo.sbtp.xyz
    secretName: argocd-server-tls # as expected by argocd-server


@ -0,0 +1,22 @@
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # server: https://acme-staging-v02.api.letsencrypt.org/directory
    server: https://acme-v02.api.letsencrypt.org/directory
    email: {{ .Values.adminEmail }}
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - dns01:
        webhook:
          groupName: acme.vultr.com
          solverName: vultr
          config:
            apiKeySecretRef:
              key: apiKey
              name: vultr


@ -0,0 +1,23 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: {{ .Values.adminEmail }}
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging
    solvers:
    - dns01:
        webhook:
          groupName: acme.vultr.com
          solverName: vultr
          config:
            apiKeySecretRef:
              key: apiKey
              name: vultr-credentials


@ -0,0 +1,25 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cert-manager-webhook-vultr-secret-reader
  namespace: cert-manager
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cert-manager-webhook-vultr-secret-reader-binding
  namespace: cert-manager
subjects:
- kind: ServiceAccount
  name: cert-manager-webhook-vultr
  namespace: cert-manager
roleRef:
  kind: Role
  name: cert-manager-webhook-vultr-secret-reader
  apiGroup: rbac.authorization.k8s.io
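Both ClusterIssuers and the Role above presuppose a Secret in the `cert-manager` namespace holding the Vultr API key. A minimal sketch of that Secret (the key value is a placeholder; note the prod issuer references the name `vultr` while the staging issuer references `vultr-credentials`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vultr                  # staging issuer expects "vultr-credentials" instead
  namespace: cert-manager
type: Opaque
stringData:
  apiKey: "<vultr-api-key>"    # placeholder value
```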


@ -0,0 +1,7 @@
argo-cd:
  dex:
    enabled: false
  notifications:
    enabled: false
  applicationSet:
    enabled: false
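These values address an `argo-cd` subchart, which implies the `argodeps` chart declares Argo CD as a dependency. A sketch of how that stanza typically looks in Chart.yaml (the chart version and repository here are assumptions, not shown in this commit):

```yaml
apiVersion: v2
name: argodeps
version: "1.0.0"
dependencies:
  - name: argo-cd
    version: "6.7.x"                                   # assumed version constraint
    repository: https://argoproj.github.io/argo-helm   # assumed upstream repo
```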


@ -0,0 +1,3 @@
apiVersion: v2
name: argo-ingress
version: "1.0.0"


@ -0,0 +1,29 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # If you encounter a redirect loop or are getting a 307 response code
    # then you need to force the nginx ingress to connect to the backend using HTTPS.
    #
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: argo.sbtp.xyz
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
  tls:
  - hosts:
    - argo.sbtp.xyz
    secretName: argocd-server-tls # as expected by argocd-server


@ -1,3 +1,36 @@
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: external-dns
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: external-dns
+rules:
+- apiGroups: [""]
+  resources: ["services","endpoints","pods"]
+  verbs: ["get","watch","list"]
+- apiGroups: ["extensions","networking.k8s.io"]
+  resources: ["ingresses"]
+  verbs: ["get","watch","list"]
+- apiGroups: [""]
+  resources: ["nodes"]
+  verbs: ["list"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: external-dns-viewer
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: external-dns
+subjects:
+- kind: ServiceAccount
+  name: external-dns
+  namespace: default
+---
apiVersion: apps/v1
kind: Deployment
metadata:
@ -13,13 +46,17 @@ spec:
      labels:
        app: external-dns
    spec:
+      serviceAccountName: external-dns
      containers:
      - name: external-dns
-        image: registry.k8s.io/external-dns/external-dns:v0.14.0
+        image: registry.k8s.io/external-dns/external-dns:v0.14.1
        args:
-        - --source=service # ingress is also possible
-        - --domain-filter=futureporn.net # (optional) limit to only example.com domains; change to match the zone created above.
+        - --source=ingress
+        - --domain-filter=sbtp.xyz
        - --provider=vultr
        env:
        - name: VULTR_API_KEY
-          value: "YOU_VULTR_API_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: vultr
+              key: apiKey

@ -0,0 +1,33 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.sbtp.xyz
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80


@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
  name: next-pod
  labels:
    app.kubernetes.io/name: next
spec:
  containers:
  - name: next
    image: {{ .Values.next.containerName }}
    env:
    - name: HOSTNAME
      value: "0.0.0.0"
    ports:
    - containerPort: 3000
    resources: # merged: the file set "resources: {}" here and a stray pod-level resources block below
      limits:
        cpu: 500m
        memory: 1Gi
  restartPolicy: OnFailure


@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: next-service
  annotations:
    service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/vultr-loadbalancer-algorithm: "least_connections"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: next # was "name: next", which does not match the pod's label
  ports:
  - name: http
    protocol: TCP
    port: 3000
    targetPort: 3000


@ -0,0 +1,35 @@
apiVersion: v1
kind: Pod
metadata:
  name: pgadmin-pod
  labels:
    app.kubernetes.io/name: pgadmin
spec:
  containers:
  - name: pgadmin
    image: dpage/pgadmin4
    ports:
    - containerPort: 5050
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
    env:
    - name: PGADMIN_LISTEN_PORT
      value: '5050'
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: postgres
          key: password
    - name: PGADMIN_DEFAULT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: pgadmin
          key: defaultPassword
    - name: PGADMIN_DEFAULT_EMAIL
      valueFrom:
        secretKeyRef:
          name: pgadmin
          key: defaultEmail
  restartPolicy: OnFailure


@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  selector:
    app.kubernetes.io/name: pgadmin
  ports:
  - name: web
    protocol: TCP
    port: 5050
    targetPort: 5050


@ -0,0 +1,30 @@
apiVersion: v1
kind: Pod
metadata:
  name: postgres-pod
  labels:
    app.kubernetes.io/name: postgres
spec:
  containers:
  - name: postgres
    image: postgres:16.0
    env:
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: postgres
          key: password
    ports:
    - containerPort: 5432
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
    volumeMounts:
    - name: postgres-pvc
      mountPath: /data/postgres
  restartPolicy: OnFailure
  volumes:
  - name: postgres-pvc
    persistentVolumeClaim:
      claimName: postgres-pvc


@ -0,0 +1,17 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  annotations:
    meta.helm.sh/release-name: fp
    meta.helm.sh/release-namespace: default
  labels:
    app.kubernetes.io/managed-by: {{ .Values.managedBy }}
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 40Gi
  storageClassName: {{ .Values.storageClassName }}


@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app.kubernetes.io/name: postgres
  ports:
  - name: db
    protocol: TCP
    port: 5432
    targetPort: 5432
status:
  loadBalancer: {}


@ -0,0 +1,108 @@
apiVersion: v1
kind: Pod
metadata:
  name: strapi-pod
  labels:
    app.kubernetes.io/name: strapi # label expected by strapi-service's selector; missing in the original
spec:
  containers:
  - name: strapi-pod
    image: {{ .Values.strapi.containerName }}
    ports:
    - containerPort: 1337
    env:
    - name: ADMIN_JWT_SECRET
      valueFrom:
        secretKeyRef:
          name: strapi
          key: adminJwtSecret
    - name: API_TOKEN_SALT
      valueFrom:
        secretKeyRef:
          name: strapi
          key: apiTokenSalt
    - name: APP_KEYS
      valueFrom:
        secretKeyRef:
          name: strapi
          key: appKeys
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: strapi
          key: databaseUrl
    - name: CDN_BUCKET_USC_URL
      valueFrom:
        secretKeyRef:
          name: strapi
          key: cdnBucketUscUrl
    - name: DATABASE_CLIENT
      value: postgres
    - name: DATABASE_HOST
      value: postgres-service
    - name: DATABASE_NAME
      value: futureporn-strapi
    - name: JWT_SECRET
      valueFrom:
        secretKeyRef:
          name: strapi
          key: jwtSecret
    - name: MUX_PLAYBACK_RESTRICTION_ID
      valueFrom:
        secretKeyRef:
          name: strapi
          key: muxPlaybackRestrictionId
    - name: MUX_SIGNING_KEY_ID
      valueFrom:
        secretKeyRef:
          name: strapi
          key: muxSigningKeyId
    - name: MUX_SIGNING_KEY_PRIVATE_KEY
      valueFrom:
        secretKeyRef:
          name: strapi
          key: muxSigningKeyPrivateKey
    - name: NODE_ENV
      value: production
    - name: S3_USC_BUCKET_APPLICATION_KEY
      valueFrom:
        secretKeyRef:
          name: strapi
          key: s3UscBucketApplicationKey
    - name: S3_USC_BUCKET_ENDPOINT
      valueFrom:
        secretKeyRef:
          name: strapi
          key: s3UscBucketEndpoint
    - name: S3_USC_BUCKET_KEY_ID
      valueFrom:
        secretKeyRef:
          name: strapi
          key: s3UscBucketKeyId
    - name: S3_USC_BUCKET_NAME
      valueFrom:
        secretKeyRef:
          name: strapi
          key: s3UscBucketName
    - name: S3_USC_BUCKET_REGION
      valueFrom:
        secretKeyRef:
          name: strapi
          key: s3UscBucketRegion
    - name: SENDGRID_API_KEY
      valueFrom:
        secretKeyRef:
          name: strapi
          key: sendgridApiKey
    - name: STRAPI_URL
      value: {{ .Values.strapi.url }}
    - name: TRANSFER_TOKEN_SALT
      valueFrom:
        secretKeyRef:
          name: strapi
          key: transferTokenSalt
    - name: PORT
      value: "{{ .Values.strapi.port }}"
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
  restartPolicy: OnFailure
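All of the `secretKeyRef` entries above point at one Secret named `strapi`. A sketch of its shape (placeholder values; only a few keys shown, the rest follow the same camelCase pattern):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: strapi
type: Opaque
stringData:
  adminJwtSecret: "<placeholder>"
  apiTokenSalt: "<placeholder>"
  databaseUrl: "<placeholder>"
  # ...one entry per key referenced by the pod (appKeys, jwtSecret, sendgridApiKey, etc.)
```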


@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: strapi-service
  annotations:
    service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/vultr-loadbalancer-algorithm: "least_connections"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: strapi
  ports:
  - name: web
    protocol: TCP
    port: 80
    targetPort: 1337


@ -0,0 +1,29 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # If you encounter a redirect loop or are getting a 307 response code
    # then you need to force the nginx ingress to connect to the backend using HTTPS.
    #
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: argocd.sbtp.xyz
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
  tls:
  - hosts:
    - argocd.sbtp.xyz
    secretName: argocd-server-tls # as expected by argocd-server


@ -35,6 +35,17 @@ spec:
        name: link2cid
        ports:
        - containerPort: 3939
+        env:
+        - name: IPFS_URL
+          value: http://ipfs-service:5001
+        - name: PORT
+          value: '3939'
+        - name: API_KEY
+          valueFrom:
+            secretKeyRef:
+              name: link2cid
+              key: apiKey
---
apiVersion: v1


@ -1,4 +1,5 @@
-storageClassName: csi-hostpath-sc
+# storageClassName: csi-hostpath-sc # used by minikube
+storageClassName: standard # used by Kind
link2cid:
  containerName: fp/link2cid
next:


@ -1,56 +0,0 @@
namespaces:
  default:
  cert-manager:
  ingress-nginx:
  metrics-server:
  kcert:
  windmill:
helmRepos:
  jetstack: https://charts.jetstack.io
  emberstack: https://emberstack.github.io/helm-charts
  vultr: https://vultr.github.io/helm-charts
  ingress-nginx: https://kubernetes.github.io/ingress-nginx
  metrics-server: https://kubernetes-sigs.github.io/metrics-server
  windmill: https://windmill-labs.github.io/windmill-helm-charts
apps:
  windmill:
    namespace: windmill
    chart: "windmill/windmill"
    enabled: true
    version: "2.0.167"
    valuesFile: "./charts/windmill/values.yaml"
  metrics-server:
    namespace: metrics-server
    chart: "metrics-server/metrics-server"
    enabled: true
    version: "3.12.1"
  ingress-nginx:
    namespace: ingress-nginx
    chart: "ingress-nginx/ingress-nginx"
    enabled: true
    version: "4.10.0"
  fp:
    namespace: "default"
    chart: "charts/fp"
    enabled: true
    version: "0.0.1"
    valuesFile: "./charts/fp/values-prod.yaml"
  cert-manager-webhook-vultr:
    namespace: cert-manager
    chart: vultr/cert-manager-webhook-vultr
    enabled: true
    version: "1.0.0"
  cert-manager:
    namespace: "cert-manager"
    chart: "jetstack/cert-manager"
    enabled: true
    version: "1.14.4"
  reflector:
    namespace: "default"
    chart: "emberstack/reflector"
    enabled: true
    version: "7.1.262"