Description
Lucky number session 13 of the SIG dedicated to solving CMS/Kubernetes issues is a special one! We had two presentations, one from Brad Jones at Fruition and one from Florian Loretan.
Connect with Brad: https://bit.ly/3sq6JWr
Learn more about Florian’s work: https://bit.ly/3ieStep
Catch up with the group on GitHub: http://bit.ly/338dXC5
See our K8s-based hosting solution in action: https://bit.ly/3ii1xiD
Brad Jones: So anyway, I'm Brad, the recently named CTO at Fruition, which is a digital agency in Denver. We do marketing as well as technical implementation work, so we're sort of unique in that sense, in that a lot of agencies just do technical and design work. We do marketing as well, so we have sort of a full, well-rounded agency here in Denver. I hope you can find that in all the places…
…that we had to make just due to technical limitations and business decisions, and then just a few tips. I've got a few little code snippets about migrating ingress while you're going down the tracks, so to speak, because clients don't like it when you take their workloads down for reasons that they don't see directly benefiting them. So, part of my involvement at Fruition…
…some sort of opinionated control plane and, you know, the quality of service, the SLA, that kind of thing, for clients. Fruition Cloud is an internal product in the sense that we do not sell it directly as a commodity product, unlike, for instance, what folks like amazee.io are doing. You know, it's possible that we might do that down the road, but really this is a value-add play for us: to provide our customers a platform that we believe meets their…
…the data center, and, right, it might not be public to the internet; or, in our case, it's a network load balancer from Google Cloud that has Kubernetes nodes as its targets, and then we further have to map that traffic to workloads inside of Kubernetes. So when we were architecting Fruition Cloud, I was, and still am to a large extent, sort of a one-man-band architect and engineer on this. Our goals with it: we wanted to, you know, reduce toil and improve our uptime.
We were coming from a bare-metal, sort of cPanel, old-school hosting environment. We wanted to improve security, right: give us the opportunity to automatically issue Let's Encrypt certificates, and also give us a more unified plane to do security on. In cPanel land, in VM land, every VM, every install needs its own sort of high-touch maintenance, and we wanted a more unified control plane that we can enforce a policy on. And developer experience is a huge piece of the…
We wanted to be able to allow developers and internal site owners to spin up sites with, you know, TLS and sensible defaults, without needing to ask a guru for permission, and to lower our cost of ownership and our infrastructure requirements. We ended up using Traefik which, if you're not particularly familiar with Traefik, is one of many ingress controllers out there, an ingress controller being the workload that runs inside of your cluster that translates the declarative ingress definitions and ingress…
…the services that are defined in the ingress. That's a reductionist explanation, but you know, there's many out there. Google, for instance, provides a default ingress controller, but we'd get a new external IP and load balancer per Ingress object, which gets very cost prohibitive very quickly, and it doesn't allow us to, for instance, publish a single IP, and also…
…plane, so we really wanted people to be able to use it. We use Rancher, for instance, as the reverse proxy and UI in front of Kubernetes. You know, we can't go from zero to one hundred on straight command-line, kubectl-only Kubernetes. We needed something that would allow us to let developers internally, who didn't have cloud-native experience, manage their ingress, and so, paired with us using Rancher for that sort of control plane…
If you put a route into Traefik v1 with the ACME implementation in there, you've got a certificate, right, you've got a Let's Encrypt certificate, and that was the key for us: we didn't want to require developers to have to mess with, you know, additional YAML in their ingress. The cons ended up actually being a rather long list. The documentation for Traefik, quite frankly, sucks; it's getting better for 2.x, and I ended up contributing some…
You only see the issues after deployment, and only after, right, ninety days or so, when your certificates are up for renewal from Let's Encrypt, did we find that we would get these difficult-to-debug race conditions. There were leader election issues with the clustering there, and because it was basically either on or off: you put a host into a route definition, a rule…
…TCP proxying on the same install, which is really great. It does have support for sort of what they're calling legacy, but is really the native Kubernetes Ingress object, and they also have introduced, as a lot of people have moving forward, their own custom resource definition that allows you to really…
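For context, a Traefik 2.x IngressRoute defined through that CRD looks roughly like this; the namespace, host and service names are placeholders rather than anything from the talk:

```bash
# Minimal Traefik 2.x IngressRoute via the custom resource definition,
# instead of a native Kubernetes Ingress object. Names are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: example-site
  namespace: client-sites
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`example.com`)
      kind: Rule
      services:
        - name: example-site
          port: 80
EOF
```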
The documentation still sucks, but it's better, and it's getting better; they've done a lot of work there. However, clustered ACME is gone, and so that's why we had to switch. Also, their release cadence, it's worth noting, is rather slow; they do not release nightly builds or the like. So we had to actually build our own image from…
…a pinned commit, to get some of the features that had been committed and landed in master but haven't been released yet. So in planning, our upgrade planning actually was really important here. We wanted to, while we were switching, take advantage of the fact that many of our clients are behind CloudFlare, and you know, I'm sort of ambivalent about CloudFlare, I think…
…thing that we were able to do, and I'll show you some tooling here in a minute, is just running a shell script to identify which of these things are actually behind CloudFlare, do an audit, and plan pulling those critical certificates in. It makes our maintenance a lot easier, because we don't even have to worry about those certs. Shell scripting, if you are doing…
…your cluster, and this, you know, might sound a little bit trite, but it's true, and it allows you to keep control. Back to that, right: our rollback plan essentially was to keep the Service, right, the one that has an external load balancer attached to it, the Service that points traffic to your ingress controller; run your new ingress controller inside of the pod on different ports than the old one; and then all you have to do to move back or forward onto your new controller is change those ports, right?
Just because you're handling, you know, unencrypted plaintext traffic on part of your ingress doesn't mean it all has to be port 80 after you terminate it, right? So you can change 80 to, say, port 81 for your current controller, and 88, or whatever you want to do, is your new one, and you can get sort of an A/B of your traffic with a quick edit of a Service. So get creative with your port mapping.
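A minimal sketch of that port swap, assuming a hypothetical Service named ingress-lb whose first two ports are HTTP and HTTPS, with the old controller on 8080/8443 and the new one on 8081/8444 (none of these names or numbers come from the talk):

```bash
# Repoint the externally exposed Service at the new controller's container
# ports; rolling back is the same patch with the old ports (8080/8443).
kubectl -n ingress patch service ingress-lb --type=json -p='[
  {"op": "replace", "path": "/spec/ports/0/targetPort", "value": 8081},
  {"op": "replace", "path": "/spec/ports/1/targetPort", "value": 8444}
]'
```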
The other thing that we do, and I've got some configuration snippets here that I'll show you toward the end, is having a split horizon, right. So we actually turn most of the unencrypted traffic around with a 301 redirect to the HTTPS version, because we're full-time encrypted; however, we let some unencrypted traffic, like Let's Encrypt or ACME challenges, come through. So stand up all the pieces that you can before your…
…up cert-manager, point your challenge route to your new controller that has sort of an issuer running with it, right. You can do that before sending all of your production traffic to the new controller. So again, you know, I don't want to cross your eyes, but here are just some examples of some of the code that we have worked up to do this conversion. For instance, this is a script that we ran to help identify; right, I'm using kubectl to get all of the ingress objects that don't contain…
What we basically have is a wildcard temporary URL, right, so live sites, and spit out those rules. I pull in the known IPs for CloudFlare and then use tools like, for instance, grepcidr, right, so I can just run a quick comparison against the known CloudFlare IPs. So this kind of tooling allows us to, you know; it's just an example of how quickly these shell scripts become a tool in your toolbox.
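A rough sketch of that kind of audit script; the temporary-URL suffix is a placeholder, and it assumes kubectl, jq, dig and grepcidr are installed:

```bash
#!/usr/bin/env bash
# Report which ingress hosts (excluding temporary wildcard URLs) resolve
# into CloudFlare's published IP ranges.
set -euo pipefail

curl -s https://www.cloudflare.com/ips-v4 > /tmp/cloudflare-ranges.txt

kubectl get ingress --all-namespaces -o json \
  | jq -r '.items[].spec.rules[]?.host // empty' \
  | grep -v '\.temp\.example\.com$' | sort -u \
  | while read -r host; do
      if dig +short "$host" | grepcidr -f /tmp/cloudflare-ranges.txt >/dev/null; then
        echo "BEHIND CLOUDFLARE: $host"
      else
        echo "DIRECT:            $host"
      fi
    done
```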
You can get really creative with jq in transforming the JSON output. So, you know, many times when we're doing edits on Kubernetes objects we prefer YAML, because it's more easily human-editable, but JSON is really easily digestible by jq. And then, you know, this was a great opportunity; I learned a lot about the JSONPath functionality in kubectl.
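For instance, two equivalent ways of pulling every ingress host out of a cluster, one with kubectl's built-in JSONPath support and one by piping JSON to jq:

```bash
# kubectl's JSONPath output.
kubectl get ingress --all-namespaces \
  -o jsonpath='{range .items[*].spec.rules[*]}{.host}{"\n"}{end}'

# The same list via full JSON output piped through jq.
kubectl get ingress --all-namespaces -o json \
  | jq -r '.items[].spec.rules[]?.host // empty'
```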
So I learned a lot about, you know, sort of more Kubernetes internals doing this, and part of it is because I really spent time working up these shell scripts. And then finally, here, just an example, talking about that split horizon, right. Inside the pod that you're sending all of that extra traffic to, I actually have nginx listening first as a reverse proxy in front of Traefik, our ingress controller. I have it, and there's a few rules…
…in plain text. I also have a quick known path that I can use to, for instance, validate that traffic is even coming to us, right. So if I want to make sure for a client, you know, that something has propagated, I can look for that path, and it replies with a 418 I'm-a-teapot; otherwise we just turn around and redirect the traffic. So again, that's a very quick, that's a quick summary of what we did on our ingress conversion.
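The plain-text side of that split horizon could look something like this; it is a sketch only, with a made-up health path and upstream port rather than the actual config shown in the session:

```bash
# Hypothetical nginx config for port 80 inside the ingress pod: pass ACME
# challenges through, answer a known check path with 418, redirect the rest.
cat > http-split-horizon.conf <<'EOF'
server {
    listen 80 default_server;

    # Let's Encrypt HTTP-01 challenges go through to the ingress controller.
    location /.well-known/acme-challenge/ {
        proxy_pass http://127.0.0.1:8081;
    }

    # Known path to verify that DNS has propagated and traffic reaches us.
    location = /__edge-check {
        return 418;
    }

    # Everything else is turned around to HTTPS.
    location / {
        return 301 https://$host$request_uri;
    }
}
EOF
```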
So we didn't actually migrate the certificates. I validated that we weren't going to be up against any rate limiting and I just had them reissued. Also, we have a subset of sites that we just knew we were going to set up with origin certificates from CloudFlare, so that reduced the number of sites that we actually had to reissue. If that…
Yeah, and the upgrade: the cert-manager documentation is really good, but I did notice that the upgrade documentation is sort of like, yeah, just turn on cert-manager and point it at your ingresses and it'll, like, upgrade, and it's not real clear as to exactly how that works. I tried to read the code and it's sort of opaque, so, you know, it was easy enough to just do a reissue, yeah.
What was harder, more so than sort of choosing how to target the right thing, was rolling up different values. Like, I got much better at jq, sort of saying: okay, pull this value out, pull the host out of all of the different rules, right, and then give me an array of all of those that I have to stick into the TLS stanza. So I'm sure…
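Something in the spirit of that roll-up, with a hypothetical namespace and ingress name:

```bash
# Collect every rule host from an ingress and emit the array to paste into
# the ingress's spec.tls[0].hosts stanza.
kubectl -n client-sites get ingress example-site -o json \
  | jq '{hosts: [.spec.rules[].host] | unique}'
```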
…there. Yeah, the reason I bring up the service mesh, especially if I point to Istio, is that that project is a bit more mature; there are more features available to you. It's Envoy that's backing the instance doing the load balancing, per se, and yeah, that's, to your point, one of those things I do look at when I'm considering a particular product or an option: which one has…
And Traefik, you know, we got served by that. But the other piece of it is that, you know, I could, we could, over-engineer this, right; that's always a risk, and so I'm trying to not over-engineer it and keep this as vanilla as possible, given it all. And there's, you know, there's no right or wrong answer about which ingress controller is best; it's like fifty-fifty, so…
Florian Loretan: Yeah, so actually, I work with Wunder and I'm also CTO there, so it's a situation that has a lot of parallels, different country, different time zone as well, but yeah, I've also done plenty of bash scripting in the past, well, many months, and I'm also using Traefik. But yeah, the thing I want to talk about today is Helm. We actually had some interesting discussion already way back at DrupalCon about using Helm versus not using Helm, but we're actually very happy with it, and I gave this presentation, or, internally, I told a bit more about…
This is really, like, it has all the features. It has Drupal, of course; it has MariaDB, Elasticsearch, Memcached, the different things that we use in combination with Drupal. It also has a few things in terms of providing shell access and in terms of providing reference data for new environments.
Everything that we deploy is entirely built on this, and also quite a few production sites nowadays. And yeah, maybe one point about how we structure things: for us, everything is done through git; that's the only way that the developers interact with the system. And the way that we have things structured is that for each project we have a namespace, so for each git repository we have the namespace, and then this is actually created automatically with our CI setup, with CircleCI, and then for each branch…
…there's a dedicated release, so a Helm release, and in there we have all the various containers that are created by Deployments, StatefulSets, cron jobs and so on. And yeah, that means that we have multiple releases that are created automatically and that are also deleted automatically whenever we delete a branch, and so on.
We also have two clusters: we have one for production, and there's only the production branch there, and we have a non-production cluster, which has pretty much everything that is not production; it's actually bigger, and we have a different resource allocation there. But we actually have quite a few releases there at the same time. And first, well, maybe before we talk about what the changes are for us…
…the main reason for wanting to move to Helm 3 is that we saw there were a few issues that we were getting affected by with Helm 2, and also in terms of timing: since we migrated we've onboarded a few larger clients, specifically moving to production, and we wanted to make sure that this migration was done before that. We didn't have that many environments at that point, so it was a lot easier to make the migration before we created those.
Actually, the main changes I took from the FAQ, the official FAQ, but I can quickly go through what is relevant for us. So, the removal of Tiller: actually, we never really had an issue with Tiller, but yeah, it's definitely good not to have it. And we also noticed that Tiller was sometimes a bottleneck, especially when you have a lot of releases in a cluster, which is the case in our dev cluster.
We saw that it sometimes needed a lot of resources, and not having it is actually a pretty nice thing. So it wasn't a big issue, but it's definitely nice. The other thing that we realized is, with the way that Helm 2 actually does deployments: it takes a look at what it generated, so the templates that it generated for the previous values, and it looks at the templates that it generates for the current values, and makes a diff of that.
It's a bit based on the Osiris project, and works in a similar way, but instead of just taking down one Deployment, it's able to take down a Deployment and a StatefulSet, it stops the cron jobs and things like that, so we're able to scale an entire release, a Drupal release, down to nothing. And we had the issue that we were able to scale down, but then, when you would deploy again, it wouldn't scale back up. That's not exactly what you want, so, I mean, we really needed this.
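This is essentially the scenario in the Helm 3 FAQ on three-way strategic merge patches; the release and deployment names below are made up:

```bash
# A release is scaled down outside of Helm (for example by scale-to-zero tooling).
kubectl -n my-project scale deployment my-release-drupal --replicas=0

# Helm 2 diffed only the previously rendered manifest against the new one;
# with unchanged values the patch was empty, so replicas stayed at 0.
# Helm 3 also considers live state, sees 0 instead of the manifest's count,
# and patches the deployment back up.
helm upgrade my-release ./chart --namespace my-project
```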
Okay, well, that could be any branch, and then with Helm 3 we have the repository name that matches the namespace, and then we generate the release name entirely based on the branch, and so we're pretty much able to generate things directly like that, and that gives us a lot more clarity in the way things are done. And also something that we use quite a bit: validating chart values with JSON schema.
So now it's possible to just drop in a JSON schema for the chart, for the chart values, and whenever you upgrade or install or create a new chart release, it automatically validates whether the values are correct. And you can do a lot of things with JSON schema. So we're doing some basic testing, that we get the right data types and so on, but we can also do some more advanced things.
For example, if the release name is production, or contains production, we guarantee, like, we check, that the various resources are not set to the defaults that we have for development environments. So there's quite a few things that we can handle there, also making sure that in production certain things are protected, and so on. There's a lot of things that we can do there.
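As an illustration of that feature (the fields here are invented, not Wunder's actual schema): a values.schema.json placed next to values.yaml is validated by Helm 3 on install, upgrade and lint.

```bash
# Drop a JSON Schema next to values.yaml; Helm 3 validates values against it.
cat > chart/values.schema.json <<'EOF'
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["environment", "resources"],
  "properties": {
    "environment": {
      "type": "string",
      "enum": ["development", "production"]
    },
    "resources": {
      "type": "object",
      "properties": {
        "memory": { "type": "string", "pattern": "^[0-9]+(Mi|Gi)$" }
      }
    }
  }
}
EOF
```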
Actually, we'd have this in completely the wrong place, with wrong indentation, something like that, and we actually caught quite a few things with that. So yeah, Helm 3 has definitely made our resource definitions more solid. And yeah, library chart support: we're not really using it; that's something we might use in the future, but at this point we don't really use it. Okay, so yeah, it's nice and shiny and all new, but how do we make the switch? There's something that's supported out of the box.
There's this helm 2to3 plugin, and it pretty much leaves all the resources in place and just migrates the metadata about the release from one place to another. It's actually quite nice; it almost worked. One of the disadvantages is that, since it keeps everything in place, it also kept the existing release names, and we kind of wanted to get rid of those. But at the same time, there was no hurry to do that.
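That out-of-the-box path looks roughly like this; the release name is a placeholder:

```bash
# The helm-2to3 plugin migrates release metadata without touching resources.
helm plugin install https://github.com/helm/helm-2to3

helm 2to3 move config                 # copy repos, plugins and local config
helm 2to3 convert my-old-release      # migrate one release's metadata
helm 2to3 cleanup --dry-run           # preview removal of the Helm 2 data
```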
So we would have been happy to keep those older, maybe sometimes cryptic, release names, but the main issue was actually with the database charts. It's something that actually affects both MariaDB and MySQL, and any other database, and the problem is that Helm, well, a lot of charts actually, generate labels, including the release name, that get assigned to all the resources in a release…
…well, to the persistent volume claims that are created by those StatefulSets, and it's pretty much impossible to change a label of an automatically generated persistent volume claim in a way that Kubernetes will not complain about. It's a known issue; there are quite a few comments on the issue on GitHub, but there's really no workaround, at least not in a way that makes this helm 2to3 plugin work.
So that was the first option: okay, that's not gonna work. And we ended up with a second option, which was to just create a new release. Because the release names have a different schema, we were just able to create a new release, and then what we did was create a script that actually migrates the data from the old release to the new release, and in this case it was actually reusing…
…all of the data. Also, the database credentials were the same between the old release and the new release, so we were actually able to migrate the data, well, the database content, directly from one pod to the other. At that point we hadn't yet added the network policies, but yeah, at that point it was still quite easy to get the data from one place to another. And then, when we actually created the new release, we had it initially with a dedicated, well, a different, hostname.
This way we had two ingresses side by side that were pointing to, well, something different, and then once we were able to test that everything worked, we just deleted the old release, and that actually worked quite well. So we did some testing in our development cluster, and then once we had a script that worked well, we just rolled that through everything, and yeah…
We did it manually, one project at a time, just to make sure that if there were any issues, we didn't just take too risky a step, but actually, I think we could have done that and it would have been fine. But yeah, a lot of shell scripting. The thing, though, is that things went relatively well, but there's definitely a few things that did not go exactly as planned. So one of the things that we realized is: persistent volumes are not namespaced.
Well, of course, we knew that, but the problem is that, for common branch names like master, it means that we have multiple projects with a release called master, and previously the persistent volumes were named based on the release name, and suddenly that doesn't work anymore, because then you have multiple volumes that would be called the same thing. In our initial test we only tested on one specific project with a dedicated branch, and it's only when we started deploying this to additional projects that we really ran into this.
So yeah, keep in mind: persistent volumes are not namespaced. The persistent volume claims are, but the persistent volumes and the selectors on them are not, so we needed to; we actually ran into the same issue twice. That was also relatively easy to fix, but it would have been nice to think about it right away. Also, Helm 3 really validates the release names.
It didn't really do that before, but it's actually a nice thing. And so, when we switched to this new structure, we didn't need the shortening logic, because we have a much longer limit, fifty-three characters, and then we realized that we actually have developers who use very, very long branch names in some cases, and yeah, that's not something we were expecting, but it turns out that some people do that.
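A hypothetical CI-side guard for that, assuming CircleCI's CIRCLE_BRANCH variable; the exact sanitization rules are illustrative:

```bash
# Derive a Helm 3 friendly release name (max 53 chars, DNS-label style)
# from the branch name.
branch="${CIRCLE_BRANCH:-feature/a-very-long-descriptive-branch-name}"
release=$(echo "$branch" | tr '[:upper:]' '[:lower:]' \
  | sed 's/[^a-z0-9-]/-/g' | cut -c1-53 | sed 's/-*$//')
echo "Deploying release: $release"
```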
Helm 3 validates that ahead of time, so it will refuse to create a release, instead of just starting the deployment and then erroring out because there are some individual properties that are too long. Also, making changes across a distributed system takes a good amount of planning, and we didn't get it completely right.
What are the intermediate steps? And also, one thing to keep in mind is how the steps stretch out over time, because it could be that somebody is just re-running a deployment that happened a week ago, and then, as with CircleCI, if it reruns the deployment, it still uses the same version of the code and the same version of the deployment process that was defined at the time, in order to guarantee consistent builds. But that also means that you need to plan for…
…somebody will use the new Dockerfile with the old deployment system, and things like that. So nothing major, but a few surprises there that were actually easy to fix. Now, the last one, also a super easy one to fix: Helm 3 doesn't automatically create namespaces. We have a system where the only thing that a developer needs to do to create a new project is create the repository, enable CircleCI and just push, and previously that just worked automatically because Helm 2 would create the namespaces.
And now we had to update our CircleCI setup so that if the namespace doesn't exist, it gets created. Again, nothing major, but something that is good to know about.
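Two common ways to handle that in a pipeline, not necessarily exactly what their CircleCI job does:

```bash
NAMESPACE="my-project"   # placeholder
RELEASE="my-release"     # placeholder

# Create the namespace only if it is missing...
kubectl get namespace "$NAMESPACE" >/dev/null 2>&1 \
  || kubectl create namespace "$NAMESPACE"

# ...or let Helm 3 (3.2+) create it as part of the deploy.
helm upgrade --install "$RELEASE" ./chart \
  --namespace "$NAMESPACE" --create-namespace
```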
And actually, we've been very happy with this. It was a messy migration, but when it was done, it was very, very nice to have it behind us, and we see that there's a lot of benefits to this. So our main takeaway: it's future-proof, and now we can focus on other things, which is just very nice.
We had a brief look at various solutions and realized, well, actually, they do pretty much the same thing, and also Helm has a very nice ecosystem. Whenever we need something, a couple weeks ago, or months ago, or now, somebody just needed a project and we were just able to add it as a subchart, and then five minutes later we were done.
It's nice to be able to make use of that ecosystem and still have something that's very much tailored to your needs. And also, in terms of tooling, we use the helm-unittest plugin pretty heavily, so all the charts are unit tested, and we've seen that this is really, really nice, to be able to make changes and to validate that we're catching the various use cases.
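For reference, the helm-unittest plugin works along these lines; the chart path, values and expected replica count are all invented for the example:

```bash
# Install the plugin (repository location as of this writing) and add a
# small test suite under the chart's tests/ directory.
helm plugin install https://github.com/helm-unittest/helm-unittest

cat > chart/tests/deployment_test.yaml <<'EOF'
suite: drupal deployment
templates:
  - templates/deployment.yaml
tests:
  - it: uses the production replica count when environment is production
    set:
      environment: production
      replicaCount: 3
    asserts:
      - equal:
          path: spec.replicas
          value: 3
EOF

helm unittest chart/
```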
Ellie: Any other questions? Okay, well, thank you very much, Florian, for the presentation; it was an awesome session. Thank you. So, I know at the beginning I completely neglected to go through introductions; I apologize for that, I was a little excited. If anybody would like to introduce themselves now, you are more than welcome to. I'm Ellie, incidentally, and I'm hosting the meeting today because Kevin is on an airplane, and I work on dev and marketing and communications and events, when they're happening, and all that good stuff. So, anyone else want to do a little last-minute intro?