Backup/Restore (#256)

* lifecycle: move s3 backup settings to s3 name

* providers/oauth2: fix for alerting for missing certificatekeypair

* lifecycle: add backup commands

see #252

* lifecycle: install postgres-client for 11 and 12

* root: migrate to DBBACKUP_STORAGE_OPTIONS, add region setting

* lifecycle: auto-clean last backups

* helm: add s3 region parameter, add cronjob for backups

* docs: add backup docs

* root: remove backup scheduled task for now
Jens L, 2020-10-03 20:36:36 +02:00, committed via GitHub
parent 195d8fe71f
commit 9fb1ac98ec
GPG Key ID: 4AEE18F83AFDEB23 (no known key found for this signature in database)
12 changed files with 225 additions and 62 deletions


@@ -16,7 +16,11 @@ COPY --from=locker /app/requirements.txt /
 COPY --from=locker /app/requirements-dev.txt /
 RUN apt-get update && \
-    apt-get install -y --no-install-recommends postgresql-client-11 build-essential && \
+    apt-get install -y --no-install-recommends curl ca-certificates gnupg && \
+    curl https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - && \
+    echo "deb http://apt.postgresql.org/pub/repos/apt buster-pgdg main" > /etc/apt/sources.list.d/pgdg.list && \
+    apt-get update && \
+    apt-get install -y --no-install-recommends postgresql-client-12 postgresql-client-11 build-essential && \
     apt-get clean && \
     pip install -r /requirements.txt --no-cache-dir && \
     apt-get remove --purge -y build-essential && \


@@ -37,6 +37,8 @@ services:
       - traefik.port=8000
       - traefik.docker.network=internal
       - traefik.frontend.rule=PathPrefix:/
+    volumes:
+      - ./backups:/backups
     env_file:
       - .env
   worker:
@@ -50,6 +52,8 @@ services:
       PASSBOOK_REDIS__HOST: redis
       PASSBOOK_POSTGRESQL__HOST: postgresql
       PASSBOOK_POSTGRESQL__PASSWORD: ${PG_PASS}
+    volumes:
+      - ./backups:/backups
     env_file:
       - .env
   static:


@@ -4,7 +4,7 @@ For a mid to high-load installation, Kubernetes is recommended. passbook is inst
 This installation automatically applies database migrations on startup. After the installation is done, you can use `pbadmin` as username and password.
-```
+```yaml
 ###################################
 # Values directly affecting passbook
 ###################################
@@ -35,8 +35,21 @@ config:
 #   access_key: access-key
 #   secret_key: secret-key
 #   bucket: s3-bucket
+#   region: eu-central-1
 #   host: s3-host
+ingress:
+  annotations: {}
+    # kubernetes.io/ingress.class: nginx
+    # kubernetes.io/tls-acme: "true"
+  path: /
+  hosts:
+    - passbook.k8s.local
+  tls: []
+    # - secretName: chart-example-tls
+    #   hosts:
+    #     - passbook.k8s.local
 ###################################
 # Values controlling dependencies
 ###################################
@@ -57,16 +70,4 @@ redis:
   enabled: false
   # https://stackoverflow.com/a/59189742
   disableCommands: []
-ingress:
-  annotations: {}
-    # kubernetes.io/ingress.class: nginx
-    # kubernetes.io/tls-acme: "true"
-  path: /
-  hosts:
-    - passbook.k8s.local
-  tls: []
-    # - secretName: chart-example-tls
-    #   hosts:
-    #     - passbook.k8s.local
 ```


@@ -0,0 +1,111 @@
# Backup and restore
!!! warning
    Local backups are only supported for docker-compose installs. If you want to back up a Kubernetes instance locally, use an S3-compatible server such as [minio](https://min.io/).
### Backup
Local backups can be created by running the following command in your passbook installation directory:
```
docker-compose run --rm server backup
```
This will dump the current database into the `./backups` folder. By default, the last 10 backups are kept.
To schedule these backups, use the following snippet in a crontab:
```
0 0 * * * bash -c "cd <passbook install location> && docker-compose run --rm server backup" >/dev/null
```
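The retention rule described above (only the newest 10 dumps are kept) can be sketched in Python. This is an illustration only, not passbook's actual cleanup code; the `*.psql` glob pattern and flat directory layout are assumptions:

```python
from pathlib import Path


def prune_backups(backup_dir: str, keep: int = 10) -> list[str]:
    """Delete all but the newest `keep` dump files; return the removed names."""
    # Sort dumps newest-first by modification time
    dumps = sorted(
        Path(backup_dir).glob("*.psql"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
    removed = []
    for old in dumps[keep:]:  # everything beyond the newest `keep` files
        old.unlink()
        removed.append(old.name)
    return removed
```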
!!! notice
    passbook does support automatic backups on a schedule; however, this is currently not recommended, as there is no way to monitor these scheduled tasks.
### Restore
Run this command in your passbook installation directory:
```
docker-compose run --rm server restore
```
This will prompt you to restore from your last backup. If you want to restore from a specific file, pass its name with the `-i` flag:
```
docker-compose run --rm server restore -i default-2020-10-03-115557.psql
```
After you've restored the backup, it is recommended to restart all services with `docker-compose restart`.
### S3 Configuration
!!! notice
    To trigger backups with S3 enabled, use the same commands as above.
#### S3 Preparation
passbook expects the bucket you select to already exist. The IAM user passbook uses should be granted the following permissions (shown here as a bucket policy; replace the example account ID, user name, and bucket name):
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObjectAcl",
"s3:GetObject",
"s3:ListBucket",
"s3:DeleteObject",
"s3:PutObjectAcl"
],
"Principal": {
"AWS": "arn:aws:iam::example-AWS-account-ID:user/example-user-name"
},
"Resource": [
"arn:aws:s3:::example-bucket-name/*",
"arn:aws:s3:::example-bucket-name"
]
}
]
}
```
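Before applying a policy like the one above, it can be worth sanity-checking that it actually allows every action passbook needs. The helper below is a hypothetical illustration, not part of passbook or the AWS SDK:

```python
import json

# The S3 actions listed in the policy above
REQUIRED_ACTIONS = {
    "s3:PutObject", "s3:GetObjectAcl", "s3:GetObject",
    "s3:ListBucket", "s3:DeleteObject", "s3:PutObjectAcl",
}


def missing_actions(policy_json: str) -> set[str]:
    """Return the required S3 actions a policy document does not allow."""
    policy = json.loads(policy_json)
    allowed = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") == "Allow":
            actions = stmt.get("Action", [])
            # "Action" may be a single string or a list of strings
            allowed.update([actions] if isinstance(actions, str) else actions)
    return REQUIRED_ACTIONS - allowed
```

An empty result means the policy covers everything the backup and restore commands need.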
#### docker-compose
Set the following values in your `.env` file:
```
PASSBOOK_POSTGRESQL__S3_BACKUP__ACCESS_KEY=
PASSBOOK_POSTGRESQL__S3_BACKUP__SECRET_KEY=
PASSBOOK_POSTGRESQL__S3_BACKUP__BUCKET=
PASSBOOK_POSTGRESQL__S3_BACKUP__REGION=
```
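The double underscores in these variable names map onto nesting in passbook's config: `PASSBOOK_POSTGRESQL__S3_BACKUP__BUCKET` corresponds to the key `postgresql.s3_backup.bucket`. A minimal sketch of that naming convention, assuming this is how the loader interprets it (the helper name is made up for illustration):

```python
def env_to_config_key(var: str, prefix: str = "PASSBOOK_") -> str:
    """Translate PASSBOOK_FOO__BAR_BAZ into the dotted key foo.bar_baz."""
    if not var.startswith(prefix):
        raise ValueError(f"{var!r} does not start with {prefix!r}")
    # Strip the prefix, lower-case, and turn each "__" into one nesting level
    return var[len(prefix):].lower().replace("__", ".")
```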
If you want to back up to an S3-compatible server, such as [minio](https://min.io/), use this setting:
```
PASSBOOK_POSTGRESQL__S3_BACKUP__HOST=http://play.min.io
```
#### Kubernetes
Enable these options in your `values.yaml` file:
```yaml
# Enable Database Backups to S3
backup:
access_key: access-key
secret_key: secret-key
bucket: s3-bucket
region: eu-central-1
host: s3-host
```
Afterwards, run a `helm upgrade` to update the ConfigMap. Because backups scheduled from within passbook are currently not recommended, the chart instead creates a Kubernetes CronJob that runs the backup daily.


@@ -7,10 +7,11 @@ data:
   POSTGRESQL__NAME: "{{ .Values.postgresql.postgresqlDatabase }}"
   POSTGRESQL__USER: "{{ .Values.postgresql.postgresqlUsername }}"
   {{- if .Values.backup }}
-  POSTGRESQL__BACKUP__ACCESS_KEY: "{{ .Values.backup.access_key }}"
-  POSTGRESQL__BACKUP__SECRET_KEY: "{{ .Values.backup.secret_key }}"
-  POSTGRESQL__BACKUP__BUCKET: "{{ .Values.backup.bucket }}"
-  POSTGRESQL__BACKUP__HOST: "{{ .Values.backup.host }}"
+  POSTGRESQL__S3_BACKUP__ACCESS_KEY: "{{ .Values.backup.access_key }}"
+  POSTGRESQL__S3_BACKUP__SECRET_KEY: "{{ .Values.backup.secret_key }}"
+  POSTGRESQL__S3_BACKUP__BUCKET: "{{ .Values.backup.bucket }}"
+  POSTGRESQL__S3_BACKUP__REGION: "{{ .Values.backup.region }}"
+  POSTGRESQL__S3_BACKUP__HOST: "{{ .Values.backup.host }}"
   {{- end}}
   REDIS__HOST: "{{ .Release.Name }}-redis-master"
   ERROR_REPORTING__ENABLED: "{{ .Values.config.error_reporting.enabled }}"


@@ -0,0 +1,42 @@
{{- if .Values.backup }}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ include "passbook.fullname" . }}-backup
  labels:
    app.kubernetes.io/name: {{ include "passbook.name" . }}
    helm.sh/chart: {{ include "passbook.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
              args: [backup]
              envFrom:
                - configMapRef:
                    name: {{ include "passbook.fullname" . }}-config
                  prefix: PASSBOOK_
              env:
                - name: PASSBOOK_SECRET_KEY
                  valueFrom:
                    secretKeyRef:
                      name: "{{ include "passbook.fullname" . }}-secret-key"
                      key: "secret_key"
                - name: PASSBOOK_REDIS__PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: "{{ .Release.Name }}-redis"
                      key: "redis-password"
                - name: PASSBOOK_POSTGRESQL__PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: "{{ .Release.Name }}-postgresql"
                      key: "postgresql-password"
{{- end}}


@@ -28,8 +28,21 @@ config:
 #   access_key: access-key
 #   secret_key: secret-key
 #   bucket: s3-bucket
+#   region: eu-central-1
 #   host: s3-host
+ingress:
+  annotations: {}
+    # kubernetes.io/ingress.class: nginx
+    # kubernetes.io/tls-acme: "true"
+  path: /
+  hosts:
+    - passbook.k8s.local
+  tls: []
+    # - secretName: chart-example-tls
+    #   hosts:
+    #     - passbook.k8s.local
 ###################################
 # Values controlling dependencies
 ###################################
@@ -50,15 +63,3 @@ redis:
   enabled: false
   # https://stackoverflow.com/a/59189742
   disableCommands: []
-ingress:
-  annotations: {}
-    # kubernetes.io/ingress.class: nginx
-    # kubernetes.io/tls-acme: "true"
-  path: /
-  hosts:
-    - passbook.k8s.local
-  tls: []
-    # - secretName: chart-example-tls
-    #   hosts:
-    #     - passbook.k8s.local


@@ -1,6 +1,6 @@
 #!/bin/bash -e
 python -m lifecycle.wait_for_db
-printf '{"event": "Bootstrap completed", "level": "info", "logger": "bootstrap", "command": "%s"}\n' "$@"
+printf '{"event": "Bootstrap completed", "level": "info", "logger": "bootstrap", "command": "%s"}\n' "$@" > /dev/stderr
 if [[ "$1" == "server" ]]; then
     gunicorn -c /lifecycle/gunicorn.conf.py passbook.root.asgi:application
 elif [[ "$1" == "worker" ]]; then
@@ -9,6 +9,12 @@ elif [[ "$1" == "migrate" ]]; then
     # Run system migrations first, run normal migrations after
     python -m lifecycle.migrate
     python -m manage migrate
+elif [[ "$1" == "backup" ]]; then
+    python -m manage dbbackup --clean
+elif [[ "$1" == "restore" ]]; then
+    python -m manage dbrestore ${@:2}
+elif [[ "$1" == "bash" ]]; then
+    /bin/bash
 else
     python -m manage "$@"
 fi


@@ -57,6 +57,8 @@ nav:
       - Ubuntu Landscape: integrations/services/ubuntu-landscape/index.md
       - Sonarr: integrations/services/sonarr/index.md
       - Tautulli: integrations/services/tautulli/index.md
+  - Maintenance:
+      - Backups: maintenance/backups/index.md
   - Upgrading:
       - to 0.9: upgrading/to-0.9.md
       - to 0.10: upgrading/to-0.10.md


@@ -1,14 +0,0 @@
"""passbook misc tasks"""
from django.core import management
from structlog import get_logger

from passbook.root.celery import CELERY_APP

LOGGER = get_logger()


@CELERY_APP.task()
def backup_database():  # pragma: no cover
    """Backup database"""
    management.call_command("dbbackup")
    LOGGER.info("Successfully backed up database.")


@@ -12,7 +12,7 @@ from passbook.providers.oauth2.generators import (
     generate_client_id,
     generate_client_secret,
 )
-from passbook.providers.oauth2.models import OAuth2Provider, ScopeMapping
+from passbook.providers.oauth2.models import JWTAlgorithms, OAuth2Provider, ScopeMapping


 class OAuth2ProviderForm(forms.ModelForm):
@@ -32,7 +32,10 @@ class OAuth2ProviderForm(forms.ModelForm):

     def clean_jwt_alg(self):
         """Ensure that when RS256 is selected, a certificate-key-pair is selected"""
-        if "rsa_key" not in self.cleaned_data:
+        if (
+            self.data["rsa_key"] == ""
+            and self.cleaned_data["jwt_alg"] == JWTAlgorithms.RS256
+        ):
             raise ValidationError(
                 _("RS256 requires a Certificate-Key-Pair to be selected.")
             )


@@ -272,7 +272,7 @@ CELERY_BEAT_SCHEDULE = {
         "options": {"queue": "passbook_scheduled"},
     }
 }
-CELERY_CREATE_MISSING_QUEUES = True
+CELERY_TASK_CREATE_MISSING_QUEUES = True
 CELERY_TASK_DEFAULT_QUEUE = "passbook"
 CELERY_BROKER_URL = (
     f"redis://:{CONFIG.y('redis.password')}@{CONFIG.y('redis.host')}"
@@ -284,24 +284,25 @@ CELERY_RESULT_BACKEND = (
 )

 # Database backup
-if CONFIG.y("postgresql.backup"):
-    DBBACKUP_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
-    DBBACKUP_CONNECTOR_MAPPING = {
-        "django_prometheus.db.backends.postgresql": "dbbackup.db.postgresql.PgDumpConnector"
-    }
-    AWS_ACCESS_KEY_ID = CONFIG.y("postgresql.backup.access_key")
-    AWS_SECRET_ACCESS_KEY = CONFIG.y("postgresql.backup.secret_key")
-    AWS_STORAGE_BUCKET_NAME = CONFIG.y("postgresql.backup.bucket")
-    AWS_S3_ENDPOINT_URL = CONFIG.y("postgresql.backup.host")
-    AWS_DEFAULT_ACL = None
-    j_print(
-        "Database backup to S3 is configured.", host=CONFIG.y("postgresql.backup.host")
-    )
-    # Add automatic task to backup
-    CELERY_BEAT_SCHEDULE["db_backup"] = {
-        "task": "passbook.lib.tasks.backup_database",
-        "schedule": crontab(minute=0, hour=0),  # Run every day, midnight
-    }
+DBBACKUP_STORAGE = "django.core.files.storage.FileSystemStorage"
+DBBACKUP_STORAGE_OPTIONS = {"location": "./backups" if DEBUG else "/backups"}
+DBBACKUP_CONNECTOR_MAPPING = {
+    "django_prometheus.db.backends.postgresql": "dbbackup.db.postgresql.PgDumpConnector"
+}
+if CONFIG.y("postgresql.s3_backup"):
+    DBBACKUP_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
+    DBBACKUP_STORAGE_OPTIONS = {
+        "access_key": CONFIG.y("postgresql.s3_backup.access_key"),
+        "secret_key": CONFIG.y("postgresql.s3_backup.secret_key"),
+        "bucket_name": CONFIG.y("postgresql.s3_backup.bucket"),
+        "region_name": CONFIG.y("postgresql.s3_backup.region", "eu-central-1"),
+        "default_acl": "private",
+        "endpoint_url": CONFIG.y("postgresql.s3_backup.host"),
+    }
+    j_print(
+        "Database backup to S3 is configured.",
+        host=CONFIG.y("postgresql.s3_backup.host"),
+    )

 # Sentry integration
 _ERROR_REPORTING = CONFIG.y_bool("error_reporting.enabled", False)
@@ -400,6 +401,7 @@ _LOGGING_HANDLER_MAP = {
     "urllib3": "WARNING",
     "websockets": "WARNING",
     "daphne": "WARNING",
+    "dbbackup": "ERROR",
 }
 for handler_name, level in _LOGGING_HANDLER_MAP.items():
     # pyright: reportGeneralTypeIssues=false