outposts: call controller.down on outpost pre_delete

Jens Langhammer 2020-10-16 22:26:47 +02:00
parent 5135d828b4
commit 971713d1aa
4 changed files with 67 additions and 1 deletion

docs/upgrading/to-0.12.md (new file)

@@ -0,0 +1,50 @@
# Upgrading to 0.12
This update brings these headline features:
- Rewrite Outpost state logic, which now supports multiple concurrent Outpost instances.
- Add Kubernetes integration for Outposts, which deploys and maintains Outposts with high availability in a Kubernetes cluster.
- Add a System Task Overview to see all background tasks, their status and log output, and to retry them.
- Alerts now disappear automatically.
## Upgrading
Docker-compose users can upgrade as usual.
For Kubernetes users, there are some changes to the Helm values.
The values change from
```yaml
config:
# Optionally specify fixed secret_key, otherwise generated automatically
# secret_key: _k*@6h2u2@q-dku57hhgzb7tnx*ba9wodcb^s9g0j59@=y(@_o
# Enable error reporting
error_reporting:
enabled: false
environment: customer
send_pii: false
# Log level used by web and worker
# Can be either debug, info, warning, error
log_level: warning
```
to
```yaml
config:
# Optionally specify fixed secret_key, otherwise generated automatically
# secretKey: _k*@6h2u2@q-dku57hhgzb7tnx*ba9wodcb^s9g0j59@=y(@_o
# Enable error reporting
errorReporting:
enabled: false
environment: customer
sendPii: false
# Log level used by web and worker
# Can be either debug, info, warning, error
logLevel: warning
```
in order to be consistent with the rest of the settings.
There is also a new setting called `kubernetesIntegration`, which controls the Kubernetes integration for passbook. When enabled (the default), a Service Account is created, which allows passbook to deploy and update Outposts.
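
As background for the `kubernetesIntegration` setting: Kubernetes exposes a Service Account to pods by mounting a token and CA certificate, and application code picks those up via the standard in-cluster configuration. The sketch below only illustrates that mechanism with the official `kubernetes` Python client; it is not passbook's actual controller code, and the `passbook` namespace and label selector are placeholders.

```python
# Illustration only: consuming in-cluster Service Account credentials.
# The namespace and label selector below are placeholders, not passbook's.
from kubernetes import client, config


def list_outpost_deployments(namespace: str = "passbook"):
    # Reads the token and CA bundle that Kubernetes mounts at
    # /var/run/secrets/kubernetes.io/serviceaccount/ for the pod's Service Account.
    config.load_incluster_config()
    apps = client.AppsV1Api()
    # Succeeds only if the Service Account's RBAC role allows listing Deployments.
    return apps.list_namespaced_deployment(
        namespace=namespace, label_selector="app=passbook-proxy"
    )
```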


@@ -64,6 +64,7 @@ nav:
       - to 0.9: upgrading/to-0.9.md
       - to 0.10: upgrading/to-0.10.md
       - to 0.11: upgrading/to-0.11.md
+      - to 0.12: upgrading/to-0.12.md
   - Troubleshooting:
       - Access problems: troubleshooting/access.md


@@ -6,7 +6,7 @@ from structlog import get_logger
 from passbook.lib.utils.reflection import class_to_path
 from passbook.outposts.models import Outpost
-from passbook.outposts.tasks import outpost_post_save
+from passbook.outposts.tasks import outpost_post_save, outpost_pre_delete
 
 LOGGER = get_logger()
@@ -30,3 +30,7 @@ def post_save_update(sender, instance: Model, **_):
 def pre_delete_cleanup(sender, instance: Outpost, **_):
     """Ensure that Outpost's user is deleted (which will delete the token through cascade)"""
     instance.user.delete()
+    # To ensure that deployment is cleaned up *consistently* we call the controller, and wait
+    # for it to finish. We don't want to call it in this thread, as we don't have the K8s
+    # credentials here
+    outpost_pre_delete.delay(instance.pk.hex).get()
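
The new lines in `pre_delete_cleanup` dispatch the teardown to a Celery worker (which holds the container/Kubernetes credentials) and then block on the result, so the database row is only removed once the external deployment is gone. Below is a minimal, generic sketch of that dispatch-and-wait pattern; the `myapp` import and the task body are placeholders, not passbook's actual code.

```python
# Generic Django + Celery sketch of "dispatch to a worker and wait for it".
from celery import shared_task
from django.db.models.signals import pre_delete
from django.dispatch import receiver

from myapp.models import Outpost  # placeholder app path


@shared_task
def teardown_deployment(outpost_pk: str):
    """Runs on a worker that holds the deployment credentials."""
    outpost = Outpost.objects.get(pk=outpost_pk)
    # ... call the matching controller's down() here ...


@receiver(pre_delete, sender=Outpost)
def pre_delete_cleanup(sender, instance: Outpost, **_):
    # delay() enqueues the task; get() blocks until the worker reports back,
    # so deleting the row waits for the deployment teardown to finish.
    teardown_deployment.delay(str(instance.pk)).get()
```

The trade-off of `.get()` is that the delete request blocks until a worker picks up the task; passing a timeout (e.g. `.get(timeout=60)`) is one way to avoid hanging indefinitely if no worker is running.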


@@ -56,6 +56,17 @@ def outpost_controller(self: MonitoredTask, outpost_pk: str):
     )
 
 
+@CELERY_APP.task()
+def outpost_pre_delete(outpost_pk: str):
+    """Delete outpost objects before deleting the DB Object"""
+    outpost = Outpost.objects.get(pk=outpost_pk)
+    if outpost.type == OutpostType.PROXY:
+        if outpost.deployment_type == OutpostDeploymentType.KUBERNETES:
+            ProxyKubernetesController(outpost).down()
+        if outpost.deployment_type == OutpostDeploymentType.DOCKER:
+            ProxyDockerController(outpost).down()
+
+
 @CELERY_APP.task()
 def outpost_post_save(model_class: str, model_pk: Any):
     """If an Outpost is saved, Ensure that token is created/updated