k8s progress
ci / build (push) Failing after 6m34s

This commit is contained in:
Chris Grimmett 2024-04-18 20:51:09 +00:00
parent 3b4de1e1c8
commit 6ea65301c4
37 changed files with 605 additions and 206 deletions

View File

@ -2,8 +2,10 @@ git monorepo.
pnpm required for workspaces.
Yarn required for packages/strapi.
Kubernetes for Development using Tiltfile
dokku for Production, deployed with `git push`.
(dokku is slowly being replaced by Kubernetes)
Kubernetes for Production, deployed using Helm/helmfile

LICENSE (new file, 24 lines)
View File

@ -0,0 +1,24 @@
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <https://unlicense.org>

View File

@ -3,6 +3,14 @@ include .env
all: minikube secrets tilt
bootstrap:
helm install --create-namespace -n crd-bootstrap crd-bootstrap oci://ghcr.io/skarlso/helm/crd-bootstrap --version v0.6.0
helmsman:
helmsman --apply -f ./helmsman.yaml
deploy: bootstrap helmsman secrets
tilt:
tilt up
@ -11,9 +19,14 @@ secrets:
kubectl create secret generic link2cid \
--from-literal=apiKey=${LINK2CID_API_KEY}
kubectl --namespace cert-manager delete secret vultr-credentials --ignore-not-found
kubectl --namespace cert-manager create secret generic vultr-credentials \
--from-literal=apiKey=${VULTR_API_KEY}
kubectl delete secret vultr --ignore-not-found
kubectl create secret generic vultr \
--from-literal=vultr=${VULTR_CONTAINER_REGISTRY_USERNAME}
--from-literal=containerRegistryUsername=${VULTR_CONTAINER_REGISTRY_USERNAME} \
--from-literal=apiKey=${VULTR_API_KEY}
kubectl delete secret postgres --ignore-not-found
kubectl create secret generic postgres \

README.md (127 lines changed)
View File

@ -1,25 +1,126 @@
# futureporn-monorepo
[![GitHub version](https://d25lcipzij17d.cloudfront.net/badge.svg?id=gh&type=6&v=v3.17.0&x2=0)](https://github.com/Praqma/helmsman/releases) [![CircleCI](https://circleci.com/gh/Praqma/helmsman/tree/master.svg?style=svg)](https://circleci.com/gh/Praqma/helmsman/tree/master)
## development
![helmsman-logo](docs/images/helmsman.png)
[Tilt](https://docs.tilt.dev/) Handles auto-reloading dev containers when making changes to code.
> Helmsman v3.0.0 works only with Helm versions >=3.0.0. For older Helm versions, use Helmsman v1.x
### dependencies
# What is Helmsman?
docker
minikube
ctlptl
tilt
Helmsman is a "Helm charts as code" tool which lets you automate the deployment and management of your Helm charts (k8s applications) from version-controlled code.
### setup
# How does it work?
I would love for this to be a single command, `tilt up`, but minikube needs to be started first and requires some addons.
Helmsman uses a simple declarative [TOML](https://github.com/toml-lang/toml) file to allow you to describe a desired state for your k8s applications as in the [example toml file](https://github.com/Praqma/helmsman/blob/master/examples/example.toml).
Alternatively, a YAML declaration is also acceptable: [example yaml file](https://github.com/Praqma/helmsman/blob/master/examples/example.yaml).
@todo figure out a way to incorporate the minikube setup with `tilt up`
The desired state file (DSF) follows the [desired state specification](https://github.com/Praqma/helmsman/blob/master/docs/desired_state_specification.md).
When enabling `csi-hostpath-driver`, it can appear to be locked up, but be patient and WAIT; it's just slow.
Helmsman sees what you desire, validates that your desire makes sense (e.g. that the charts you desire are available in the repos you defined), compares it with the current state of Helm and figures out what to do to make your desire come true.
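For instance, a minimal YAML DSF would declare one repo and one app (the repo, chart, and version here are illustrative):

```yaml
# Illustrative desired state file; repo, chart, and version are examples only.
helmRepos:
  jetstack: https://charts.jetstack.io

apps:
  cert-manager:
    namespace: cert-manager
    chart: jetstack/cert-manager
    enabled: true
    version: "1.14.4"
```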
To plan without executing:
```sh
helmsman -f example.toml
```
make cluster
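The `make cluster` step referenced above could wrap the minikube bootstrap in a target along these lines (a sketch; the addon list beyond `csi-hostpath-driver` is an assumption):

```makefile
cluster:
	minikube start
	# volumesnapshots is a prerequisite for the CSI hostpath driver
	minikube addons enable volumesnapshots
	# this one looks hung while enabling; it's just slow, so wait
	minikube addons enable csi-hostpath-driver
```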
To plan and execute the plan:
```sh
helmsman --apply -f example.toml
```
To show debugging details:
```sh
helmsman --debug --apply -f example.toml
```
To run a dry-run:
```sh
helmsman --debug --dry-run -f example.toml
```
To limit execution to a specific application:
```sh
helmsman --debug --dry-run --target artifactory -f example.toml
```
# Features
- **Built for CD**: Helmsman can be used as a docker image or a binary.
- **Applications as code**: describe your desired applications and manage them from a single version-controlled declarative file.
- **Suitable for Multitenant Clusters**: deploy Tiller in different namespaces with service accounts and TLS (versions 1.x).
- **Easy to use**: deep knowledge of Helm CLI and Kubectl is NOT mandatory to use Helmsman.
- **Plan, View, apply**: you can run Helmsman to generate and view a plan with/without executing it.
- **Portable**: Helmsman can be used to manage charts deployments on any k8s cluster.
- **Protect Namespaces/Releases**: you can define certain namespaces/releases to be protected against accidental human mistakes.
- **Define the order of managing releases**: you can define the priorities at which releases are managed by helmsman (useful for dependencies).
- **Parallelise**: Releases with the same priority can be executed in parallel.
- **Idempotency**: As long as your desired state file does not change, you can execute Helmsman several times and get the same result.
- **Continue from failures**: In the case of partial deployment due to a specific chart deployment failure, fix your helm chart and execute Helmsman again without needing to rollback the partial successes first.
# Install
## From binary
Please make sure the following are installed prior to using `helmsman` as a binary (the docker image contains all of them):
- [kubectl](https://github.com/kubernetes/kubectl)
- [helm](https://github.com/helm/helm) (helm >=v2.10.0 for `helmsman` >= 1.6.0, helm >=v3.0.0 for `helmsman` >=v3.0.0)
- [helm-diff](https://github.com/databus23/helm-diff) (`helmsman` >= 1.6.0)
If you use private helm repos, you will need either the `helm-gcs` or `helm-s3` plugin, or you can use basic auth to authenticate to your repos. See the [docs](https://github.com/Praqma/helmsman/blob/master/docs/how_to/helm_repos) for details.
Check the [releases page](https://github.com/Praqma/Helmsman/releases) for the different versions.
```sh
# on Linux
curl -L https://github.com/Praqma/helmsman/releases/download/v3.11.0/helmsman_3.11.0_linux_amd64.tar.gz | tar zx
# on macOS
curl -L https://github.com/Praqma/helmsman/releases/download/v3.11.0/helmsman_3.11.0_darwin_amd64.tar.gz | tar zx
mv helmsman /usr/local/bin/helmsman
```
## As a docker image
Check the images on [dockerhub](https://hub.docker.com/r/praqma/helmsman/tags/)
## As a package
Helmsman has been packaged in Archlinux under `helmsman-bin` for the latest binary release, and `helmsman-git` for master.
You can also install Helmsman using [Homebrew](https://brew.sh)
```sh
brew install helmsman
```
## As an [asdf-vm](https://asdf-vm.com/) plugin
```sh
asdf plugin-add helmsman
asdf install helmsman latest
```
# Documentation
> Documentation for Helmsman v1.x can be found at: [docs v1.x](https://github.com/Praqma/helmsman/tree/1.x/docs)
- [How-Tos](https://github.com/Praqma/helmsman/blob/master/docs/how_to/).
- [Desired state specification](https://github.com/Praqma/helmsman/blob/master/docs/desired_state_specification.md).
- [CMD reference](https://github.com/Praqma/helmsman/blob/master/docs/cmd_reference.md)
## Usage
Helmsman can be used in three different settings:
- [As a binary with a hosted cluster](https://github.com/Praqma/helmsman/blob/master/docs/how_to/settings).
- [As a docker image in a CI system or local machine](https://github.com/Praqma/helmsman/blob/master/docs/how_to/deployments/ci.md). Always use a tagged docker image from [dockerhub](https://hub.docker.com/r/praqma/helmsman/), as the `latest` image can (at times) be unstable.
- [As a docker image inside a k8s cluster](https://github.com/Praqma/helmsman/blob/master/docs/how_to/deployments/inside_k8s.md)
# Contributing
Pull requests and feedback/feature requests are welcome. Please check our [contribution guide](CONTRIBUTION.md).

View File

@ -1,12 +1,12 @@
# load('ext://dotenv', 'dotenv')
# dotenv()
# start Tilt with no enabled resources
config.clear_enabled_resources()
k8s_yaml(helm(
'./charts/fp',
values=['./charts/fp/values-dev.yaml'],
))
# docker_build('fp/link2cid', './packages/link2cid')
docker_build('fp/link2cid', './packages/link2cid')
docker_build(
'fp/strapi',
'.',
@ -17,19 +17,26 @@ docker_build(
]
)
load('ext://uibutton', 'cmd_button')
cmd_button('postgres:restore',
argv=['sh', '-c', 'cd letters && yarn install'],
resource='postgres',
icon_name='cloud_download',
text='restore db from backup',
)
## Uncomment the following for fp/next in dev mode
## this is useful for changing the UI and seeing results
# docker_build(
# 'fp/next',
# '.',
# dockerfile='next.dockerfile',
# target='dev',
# live_update=[
# sync('./packages/next', '/app')
# ]
# )
docker_build(
'fp/next',
'.',
dockerfile='next.dockerfile',
target='dev',
live_update=[
sync('./packages/next', '/app')
]
)
## Uncomment the following for fp/next in production mode
## this is useful to test how fp/next will behave in production environment
@ -37,26 +44,26 @@ docker_build(
# docker_build('fp/next', '.', dockerfile='next.dockerfile')
# k8s_resource(
# workload='link2cid-pod',
# port_forwards=3939,
# links=[
# link('http://localhost:3939/health', 'link2cid Health')
# ]
# )
k8s_resource(
workload='link2cid',
port_forwards=3939,
links=[
link('http://localhost:3939/health', 'link2cid Health')
]
)
# k8s_resource(
# workload='ipfs-pod',
# port_forwards=['5001'],
# links=[
# link('http://localhost:5001/webui', 'IPFS Web UI')
# ]
# )
k8s_resource(
workload='ipfs-pod',
port_forwards=['5001'],
links=[
link('http://localhost:5001/webui', 'IPFS Web UI')
]
)
# k8s_resource(
# workload='next-pod',
# port_forwards=['3000'],
# )
k8s_resource(
workload='next-pod',
port_forwards=['3000'],
)
k8s_resource(
workload='strapi-pod',
port_forwards=['1337'],
@ -74,9 +81,17 @@ k8s_resource(
)
# v1alpha1.extension_repo(name='default', url='https://github.com/tilt-dev/tilt-extensions')
# v1alpha1.extension(name='ngrok', repo_name='default', repo_path='ngrok')
# settings = read_json('tilt_option.json', default={})
# default_registry(settings.get('default_registry', 'sjc.vultrcr.com/fpcontainers'))
config.set_enabled_resources([
'pgadmin-pod',
'postgres-pod',
'strapi-pod',
'next-pod',
'ipfs-pod',
'link2cid',
])

View File

@ -0,0 +1 @@
installCRDs: true

View File

@ -10,7 +10,7 @@ metadata:
spec:
type: LoadBalancer
selector:
app: next-service
app: link2cid-service
ports:
- port: 80
name: "http"

View File

@ -1,41 +1,29 @@
apiVersion: v1
kind: Pod
metadata:
annotations:
kompose.cmd: kompose convert --file compose.yml -c --out ./charts
kompose.version: 1.26.0 (40646f47)
name: link2cid-pod
labels:
io.kompose.service: link2cid
name: link2cid
app.kubernetes.io/name: link2cid
spec:
containers:
- env:
- name: link2cid-pod
image: {{ .Values.link2cid.containerName }}
ports:
- containerPort: 3939
env:
- name: IPFS_URL
value: http://ipfs-service:5001
- name: PORT
value: '3939'
- name: API_KEY
valueFrom:
secretKeyRef:
name: link2cid
key: apiKey
- name: IPFS_URL
value: http://ipfs0:5001
- name: PORT
value: "3939"
image: link2cid
name: fp-link2cid
ports:
- containerPort: 3939
resources:
limits:
cpu: 100m
memory: 2048Gi
requests:
cpu: 100m
memory: 2048Gi
volumeMounts:
- mountPath: /app/index.js
name: link2cid-claim0
cpu: 500m
memory: 1024Mi
imagePullSecrets:
- name: regcred
restartPolicy: OnFailure
volumes:
- name: link2cid-claim0
persistentVolumeClaim:
claimName: link2cid-claim0
status: {}

View File

@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: link2cid-pvc
annotations:
meta.helm.sh/release-name: fp
meta.helm.sh/release-namespace: default
labels:
app.kubernetes.io/managed-by: {{ .Values.managedBy }}
spec:
accessModes:
- ReadWriteOnce
storageClassName: {{ .Values.storageClassName }}

View File

@ -1,29 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
labels:
io.kompose.service: next
name: next
spec:
containers:
- env:
- name: NEXT_PUBLIC_SITE_URL
- name: NEXT_PUBLIC_STRAPI_URL
- name: NEXT_PUBLIC_UPPY_COMPANION_URL
- name: NODE_ENV
value: development
- name: REVALIDATION_TOKEN
image: next
name: fp-next
ports:
- containerPort: 3000
resources: {}
volumeMounts:
- mountPath: /app/app
name: next-claim0
restartPolicy: OnFailure
volumes:
- name: next-claim0
persistentVolumeClaim:
claimName: next-claim0
status: {}

View File

@ -1,19 +0,0 @@
apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert --file compose.yml -c --out ./charts
kompose.version: 1.26.0 (40646f47)
creationTimestamp: null
labels:
io.kompose.service: next
name: next
spec:
ports:
- name: "3000"
port: 3000
targetPort: 3000
selector:
io.kompose.service: next
status:
loadBalancer: {}

View File

@ -1,25 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
spec:
strategy:
type: Recreate
selector:
matchLabels:
app: external-dns
template:
metadata:
labels:
app: external-dns
spec:
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.14.1
args:
- --source=service # ingress is also possible
- --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.
- --provider=vultr
env:
- name: VULTR_API_KEY
value: "VULTR_API_KEY"

View File

@ -1,27 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: link2cid-pod
spec:
containers:
- name: link2cid-pod
image: {{ .Values.link2cid.containerName }}
ports:
- containerPort: 3939
env:
- name: IPFS_URL
value: http://ipfs-service:5001
- name: PORT
value: '3939'
- name: API_KEY
valueFrom:
secretKeyRef:
name: link2cid
key: apiKey
resources:
limits:
cpu: 500m
memory: 1024Mi
imagePullSecrets:
- name: regcred
restartPolicy: OnFailure

View File

@ -1,12 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: link2cid-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 40Gi
allowVolumeExpansion: true
storageClassName: {{ .Values.storageClassName }}

View File

@ -1,11 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: link2cid-service
spec:
ports:
- name: "3939"
port: 3939
targetPort: 3939
status:
loadBalancer: {}

View File

@ -0,0 +1,13 @@
apiVersion: delivery.crd-bootstrap/v1alpha1
kind: Bootstrap
metadata:
namespace: cert-manager
name: cert-manager-crd-bootstrap
spec:
interval: 10s
source:
helm:
chartReference: https://charts.jetstack.io
chartName: cert-manager
version:
semver: 1.14.4

View File

@ -0,0 +1,62 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: external-dns
rules:
- apiGroups: [""]
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
spec:
strategy:
type: Recreate
selector:
matchLabels:
app: external-dns
template:
metadata:
labels:
app: external-dns
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.14.1
args:
- --source=service
- --domain-filter=sbtp.xyz
- --provider=vultr
env:
- name: VULTR_API_KEY
valueFrom:
secretKeyRef:
name: vultr
key: apiKey

View File

@ -11,7 +11,6 @@ spec:
ports:
- containerPort: 5001
- containerPort: 8080
resources: {}
volumeMounts:
- name: ipfs-pvc
mountPath: /data/ipfs

View File

@ -2,12 +2,16 @@ apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ipfs-pvc
annotations:
meta.helm.sh/release-name: fp
meta.helm.sh/release-namespace: default
labels:
app.kubernetes.io/managed-by: {{ .Values.managedBy }}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 40Gi
allowVolumeExpansion: true
storageClassName: {{ .Values.storageClassName }}

View File

@ -2,6 +2,11 @@ apiVersion: v1
kind: Service
metadata:
name: ipfs-service
annotations:
meta.helm.sh/release-name: fp
meta.helm.sh/release-namespace: default
labels:
app.kubernetes.io/managed-by: {{ .Values.managedBy }}
spec:
selector:
app.kubernetes.io/name: ipfs

View File

@ -0,0 +1,50 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
# You must replace this email address with your own.
# Let's Encrypt will use this to contact you about expiring
# certificates, and issues related to your account.
email: {{ .Values.adminEmail }}
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
# Secret resource that will be used to store the account's private key.
name: letsencrypt-staging
solvers:
- dns01:
webhook:
groupName: acme.vultr.com
solverName: vultr
config:
apiKeySecretRef:
key: apiKey
name: vultr-credentials
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: cert-manager-webhook-vultr:secret-reader
namespace: cert-manager
rules:
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["vultr-credentials"]
verbs: ["get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: cert-manager-webhook-vultr:secret-reader
namespace: cert-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: cert-manager-webhook-vultr:secret-reader
subjects:
- apiGroup: ""
kind: ServiceAccount
name: cert-manager-webhook-vultr

View File

@ -0,0 +1,142 @@
# apiVersion: v1
# kind: Service
# metadata:
# name: link2cid
# annotations:
# external-dns.alpha.kubernetes.io/hostname: link2cid.sbtp.xyz
# service.beta.kubernetes.io/vultr-loadbalancer-label: "link2cid"
# service.beta.kubernetes.io/vultr-loadbalancer-ssl: "cert-manager-webhook-vultr-webhook-tls"
# service.beta.kubernetes.io/vultr-loadbalancer-protocol: "https"
# service.beta.kubernetes.io/vultr-loadbalancer-https-ports: "443"
# service.beta.kubernetes.io/vultr-loadbalancer-backend-protocol: "http"
# service.beta.kubernetes.io/vultr-loadbalancer-algorithm: "least_connections"
# service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-protocol: "http"
# service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-path: "/health"
# spec:
# selector:
# app: link2cid
# type: LoadBalancer
# ports:
# - name: http
# protocol: TCP
# port: 80
# targetPort: 3939
# - name: https
# protocol: TCP
# port: 443
# targetPort: 3939
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: link2cid-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx  # must name an existing IngressClass (ingress-nginx installs "nginx"), not this Ingress itself
rules:
- host: link2cid.sbtp.xyz
http:
paths:
- path: /demo-path
pathType: Prefix
backend:
service:
name: link2cid
port:
number: 3939
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: link2cid
spec:
selector:
matchLabels:
app: link2cid
template:
metadata:
labels:
app: link2cid
spec:
containers:
- image: {{ .Values.link2cid.containerName }}
name: link2cid
ports:
- containerPort: 3939
# ---
# apiVersion: apps/v1
# kind: Deployment
# metadata:
# name: link2cid
# spec:
# selector:
# matchLabels:
# app: link2cid
# template:
# metadata:
# labels:
# app: link2cid
# spec:
# containers:
# - name: link2cid
# image: {{ .Values.link2cid.containerName }}
# ports:
# - containerPort: 3333
# env:
# - name: IPFS_URL
# value: http://ipfs-service:5001
# - name: PORT
# value: "3333"
# - name: API_KEY
# valueFrom:
# secretKeyRef:
# name: link2cid
# key: apiKey
# resources:
# limits:
# cpu: 500m
# memory: 1024Mi
# restartPolicy: Always
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: link2cid
annotations:
meta.helm.sh/release-name: fp
meta.helm.sh/release-namespace: default
labels:
app.kubernetes.io/managed-by: {{ .Values.managedBy }}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 40Gi
storageClassName: {{ .Values.storageClassName }}
{{ if eq .Values.managedBy "Helm" }}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: staging-cert-sbtp-xyz
spec:
commonName: link2cid.sbtp.xyz
dnsNames:
- link2cid.sbtp.xyz
issuerRef:
name: letsencrypt-staging
kind: ClusterIssuer
secretName: sbtp-xyz-tls
secretTemplate:
annotations:
reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "default"
{{ end }}

View File

@ -15,3 +15,7 @@ spec:
- containerPort: 3000
resources: {}
restartPolicy: OnFailure
resources:
limits:
cpu: 500m
memory: 1Gi

View File

@ -0,0 +1,33 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
annotations:
external-dns.alpha.kubernetes.io/hostname: nginx.sbtp.xyz
spec:
selector:
app: nginx
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: 80

View File

@ -2,12 +2,16 @@ apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
annotations:
meta.helm.sh/release-name: fp
meta.helm.sh/release-namespace: default
labels:
app.kubernetes.io/managed-by: {{ .Values.managedBy }}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 40Gi
allowVolumeExpansion: true
storageClassName: {{ .Values.storageClassName }}

View File

@ -7,3 +7,5 @@ strapi:
containerName: fp/strapi
port: 1337
url: http://localhost:1337
managedBy: Dildo
adminEmail: cj@futureporn.net

View File

@ -1,5 +1,4 @@
storageClassName: vultr-block-storage-hdd
#storageClassName: civo-volume
link2cid:
containerName: gitea.futureporn.net/cj_clippy/link2cid:latest
next:
@ -8,3 +7,8 @@ strapi:
containerName: sjc.vultrcr.com/fpcontainers/strapi
port: 1337
url: https://portal.futureporn.net
managedBy: Helm
adminEmail: cj@futureporn.net
extraArgs:
- --dns01-recursive-nameservers-only
- --dns01-recursive-nameservers=8.8.8.8:53,1.1.1.1:53

helmsman.yaml (new file, 43 lines)
View File

@ -0,0 +1,43 @@
namespaces:
default:
cert-manager:
crd-bootstrap:
ingress-nginx:
helmRepos:
jetstack: https://charts.jetstack.io
emberstack: https://emberstack.github.io/helm-charts
vultr: https://vultr.github.io/helm-charts
ingress-nginx: https://kubernetes.github.io/ingress-nginx
apps:
ingress-nginx:
namespace: ingress-nginx
chart: "ingress-nginx/ingress-nginx"
enabled: true
version: "4.10.0"
fp:
namespace: "default"
chart: "charts/fp"
enabled: true
version: "0.0.1"
valuesFile: "./charts/fp/values-prod.yaml"
cert-manager-webhook-vultr:
namespace: cert-manager
chart: vultr/cert-manager-webhook-vultr
enabled: true
version: "1.0.0"
cert-manager:
namespace: "cert-manager"
chart: "jetstack/cert-manager"
enabled: true
version: "1.14.4"
set:
installCRDs: true
reflector:
namespace: "default"
chart: "emberstack/reflector"
enabled: true
version: "7.1.262"
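Since Helmsman can protect releases against accidental changes (one of the features its README lists), the production `fp` release could be guarded with the per-app `protected` flag; a hypothetical hardening of the entry above:

```yaml
# Sketch only: same fp app entry with protection enabled
apps:
  fp:
    namespace: "default"
    chart: "charts/fp"
    enabled: true
    version: "0.0.1"
    valuesFile: "./charts/fp/values-prod.yaml"
    protected: true  # helmsman refuses destructive operations on protected releases
```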

View File

@ -9,5 +9,5 @@
},
"keywords": [],
"author": "",
"license": "CC0-1.0"
"license": "Unlicense"
}