add jellyfin

2025-03-16 02:24:03 +01:00
parent 56f4508697
commit d0df41bd8c
7 changed files with 188 additions and 0 deletions

36
deploy/jellyfin/README.md Normal file

@@ -0,0 +1,36 @@
# Jellyfin on Kubernetes #
This project contains the resources required to deploy Jellyfin into Kubernetes. It is adapted from the [Jellyfin on Openshift](https://github.com/home-cluster/jellyfin-openshift) project. The instructions provided here are for a microk8s Kubernetes cluster running on Ubuntu 22.04 LTS. The Jellyfin server will be accessible from the internet.
## Pre-requisites ##
To deploy this project you will need:
- A working kubernetes cluster. See [here](https://microk8s.io/docs/getting-started) for instructions on getting started with microk8s.
- An ingress controller and cert-manager (or something similar) for providing access to the Jellyfin service and performing TLS termination. [This guide](https://microk8s.io/docs/addon-cert-manager) explains how to configure cert-manager and ingress in microk8s.
## Kubernetes resources ##
The `base/` directory contains a `PersistentVolumeClaim`, a `Deployment`, a `Service`, and an `Ingress` to deploy Jellyfin into Kubernetes.
You will likely need to update the following:
- the ingress controller (see point 1 in the troubleshooting section below).
- the path to the folder on the local machine that contains your media files.
The examples in the project use [kustomize](https://kustomize.io/) to modify configuration parameters. Kustomize is included in recent versions of kubectl and provides a convenient way to adapt a base set of resources to multiple environments. I have included a [sample overlay](./overlay/kustomization.yaml) which contains example patches for the above configuration parameters.
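For illustration, an overlay along these lines could patch the ingress hostname and the requested media storage. The resource names (`jellyfin-ingress`, `media-pvc`) match the manifests in this commit, but the overlay below is a hypothetical sketch, not the actual contents of `./overlay/kustomization.yaml`; the hostname, storage size, and the `../base` path are placeholders.

```yaml
# Hypothetical overlay kustomization.yaml -- a sketch, not the committed overlay.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base            # assumes the base manifests live one directory up
patches:
  # Replace the example hostname with your own.
  - target:
      kind: Ingress
      name: jellyfin-ingress
    patch: |-
      - op: replace
        path: /spec/rules/0/host
        value: jellyfin.example.com
      - op: replace
        path: /spec/tls/0/hosts/0
        value: jellyfin.example.com
  # Request a different amount of media storage.
  - target:
      kind: PersistentVolumeClaim
      name: media-pvc
    patch: |-
      - op: replace
        path: /spec/resources/requests/storage
        value: 2Ti
```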
## Troubleshooting ##
This project is tested on a microk8s Kubernetes cluster running on Ubuntu. It should be mostly portable across different Kubernetes implementations; however, keep in mind:
1. The [ingress controller configuration](./resources/ingress.yaml) provided in this project utilises the **microk8s** `ingress` and `cert-manager` addons. If you are using a different Kubernetes implementation you may need to modify the ingress configuration beyond changing the hostname of your server.
2. You will likely need to set up port forwarding between port 443 on your internet router and the microk8s ingress IP.
3. This example deployment uses [Longhorn](https://longhorn.io/) storage classes for the config and media volumes. For a home media server running on microk8s or some other lightweight/single-node cluster where the media files live in a local directory, basic `hostPath` storage may be simpler (see the sketch after this list), but it is not suitable for multi-node clusters.
    - An alternative that may be useful if the media lives on a NAS is to skip the persistent media volume entirely and add media to the library via network shares.
4. Depending on your network configuration, a load balancer service type (instead of ClusterIP) may be necessary to allow media volumes from the local network to be added (I haven't tested this configuration).
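For the single-node case described in point 3, a `hostPath`-backed `PersistentVolume` bound directly to `media-pvc` could replace the Longhorn media storage. This is a hypothetical sketch, not part of this commit; the path `/srv/media` and the 1Ti capacity are placeholders.

```yaml
# Hypothetical hostPath alternative to the Longhorn media storage (single-node only).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-pv
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""          # bind via claimRef rather than a storage class
  claimRef:
    name: media-pvc
    namespace: jellyfin
  hostPath:
    path: /srv/media            # placeholder: local directory containing your media
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-pvc
  namespace: jellyfin
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Ti
```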
## TODO List ##
- For the media persistent volume resource, `spec.claimRef.namespace` should be set using a kustomize patch (a possible patch is sketched below).
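A patch along these lines could do that. It assumes a `PersistentVolume` named `media-pv`, as in the hostPath sketch above; no PV is defined in this commit, so this is a hypothetical example only.

```yaml
# Hypothetical kustomize patch setting the claimRef namespace on an assumed media PV.
patches:
  - target:
      kind: PersistentVolume
      name: media-pv           # assumed name; no PV is included in this commit
    patch: |-
      - op: add
        path: /spec/claimRef/namespace
        value: jellyfin
```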

26
deploy/jellyfin/config-storage.yaml Normal file

@@ -0,0 +1,26 @@
# Define a StorageClass using Longhorn
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: config-storage
provisioner: driver.longhorn.io
reclaimPolicy: Retain # Keep the data even if PVC is deleted
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "2" # Adjust based on your cluster size
  staleReplicaTimeout: "30" # Timeout for stale replicas (in minutes)
  fromBackup: "" # Optional: Use if restoring from a Longhorn backup
volumeBindingMode: Immediate # Ensures volumes are bound immediately
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-config
  namespace: jellyfin
spec:
  storageClassName: config-storage
  accessModes:
    - ReadWriteOnce # Default Longhorn mode
  resources:
    requests:
      storage: 10Gi

51
deploy/jellyfin/deployment.yaml Normal file

@@ -0,0 +1,51 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
spec:
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
        - env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          image: docker.io/jellyfin/jellyfin
          imagePullPolicy: IfNotPresent
          name: jellyfin
          ports:
            - containerPort: 8096
              protocol: TCP
          volumeMounts:
            - mountPath: /data/media
              name: media
              readOnly: true
            - mountPath: /config
              name: jellyfin-config
      restartPolicy: Always
      volumes:
        - name: media
          persistentVolumeClaim:
            claimName: media-pvc
        - name: jellyfin-config
          persistentVolumeClaim:
            claimName: jellyfin-config

24
deploy/jellyfin/ingress.yaml Normal file

@@ -0,0 +1,24 @@
# For the microk8s default nginx ingress controller (enable by running 'microk8s enable ingress')
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - play.panic.haus
      secretName: jellyfin-tls
  rules:
    - host: play.panic.haus
      http:
        paths:
          - backend:
              service:
                name: jellyfin
                port:
                  number: 8096
            path: /
            pathType: Prefix
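The `cert-manager.io/cluster-issuer` annotation above refers to a `ClusterIssuer` named `letsencrypt-prod` that is not included in this commit. With the microk8s cert-manager addon, an issuer along these lines would typically be created beforehand; this is a sketch with placeholder values (email address, secret name, ingress class), not the actual issuer.

```yaml
# Hypothetical ClusterIssuer matching the cert-manager.io/cluster-issuer annotation.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com              # placeholder: your contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key  # secret storing the ACME account key
    solvers:
      - http01:
          ingress:
            class: public                 # microk8s nginx ingress class; adjust if different
```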

9
deploy/jellyfin/kustomization.yaml Normal file

@@ -0,0 +1,9 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - ingress.yaml
  - config-storage.yaml
  - media-storage.yaml
  - service.yaml
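Note that the PVCs in this commit are created in the `jellyfin` namespace, but no `Namespace` manifest appears in the resource list, so the namespace is assumed to exist already. If it does not, a minimal manifest like the following (not part of this commit) could be added to the resources:

```yaml
# Minimal namespace manifest, assuming the jellyfin namespace is not created elsewhere.
apiVersion: v1
kind: Namespace
metadata:
  name: jellyfin
```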

27
deploy/jellyfin/media-storage.yaml Normal file

@@ -0,0 +1,27 @@
# Define a StorageClass using Longhorn
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: media-storage
provisioner: driver.longhorn.io
reclaimPolicy: Retain # Keep the data even if PVC is deleted
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "1" # Adjust based on your cluster size
  staleReplicaTimeout: "30" # Timeout for stale replicas (in minutes)
  fromBackup: "" # Optional: Use if restoring from a Longhorn backup
volumeBindingMode: Immediate # Ensures volumes are bound immediately
---
# Create a PersistentVolumeClaim (PVC) using Longhorn
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-pvc
  namespace: jellyfin
spec:
  storageClassName: media-storage
  accessModes:
    - ReadWriteOnce # Longhorn supports RWO; if RWX is needed, enable RWX mode in Longhorn UI
  resources:
    requests:
      storage: 1Ti

15
deploy/jellyfin/service.yaml Normal file

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
  labels:
    app: jellyfin
  name: jellyfin
spec:
  ports:
    - name: web
      port: 8096
      protocol: TCP
      targetPort: 8096
  selector:
    app: jellyfin
  type: ClusterIP
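If the LoadBalancer option mentioned in point 4 of the README's troubleshooting section is needed (for example with the microk8s metallb addon), a small patch applied from a kustomize overlay could switch the Service type. This is an untested sketch, not part of this commit.

```yaml
# Hypothetical strategic-merge patch switching the jellyfin Service to LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: jellyfin
spec:
  type: LoadBalancer
```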