Day 2 Operations of Helm Charts Dex Horthy, Replicated, Inc
Cool, let's talk about charts. There's no WordPress chart in this talk, but there could be. So today I'm going to try to convince you all that Kubernetes configuration is not, as has been purported, code; why this matters; how you all can leverage that as maintainers and as chart consumers; and why that's great for Helm users, great for maintainers, and great for Kubernetes in general. Cool, so: configuration is code. Before we get into that, we kind of want to talk about the different ways to ship code.
How many chart maintainers do we have in the audience today? All right, okay. Any cluster operators here? Anyone running Helm? Helm users? Awesome. Leave your hand up if you're running Helm charts in production. Hey, all right. Chart maintainers, you all rock. Awesome, cool. So the first question, of course, where I'm going with this, is: whose code are we talking about here, and where is it going?
You're probably up here; I am most of the time. But if you use any of this stuff, or hundreds of other charts, you're probably in the bottom row as well, where you're deploying someone else's code to your own servers. And then if you're a chart maintainer you're over here, and, you know, if you're building a package manager or a PaaS or a marketplace, you're kind of down in this other corner. That's where the maintainers are.
In either case, the things we care about today are kind of these first three. So we've got first party; then we've got the chart maintainer, who, when I refer to them, is the OTS maintainer, the off-the-shelf software maintainer; and then, if you're consuming charts, you're kind of the OTS consumer. So why are we making this distinction? Why do I draw the difference between OTS and first party? It gets back to the idea of configuration is code, and Grant went into this a little bit.
So, first party: you've got limited deployment environments, you have direct access to the source configuration, and you probably have direct access to the application experts, plus staging and dev environments. In a lot of cases, if you're doing a full-stack thing, you are the expert. So the consequences are that it's easy to change, and there are fewer changes needed, because there are fewer environments. Third party is kind of the opposite: infinite deployment environments, right? How many places?
How many places are they running Postgres in production? I wouldn't even try to count that. And then, you know, you have limited control over the source, so you've got to make a lot of changes, and it's harder to change: I've got to go get permission from a stranger to update what's being shipped. So if you remember this picture from a long time ago, the DevOps wall: we knocked it down, we sprinkled some DevOps on it.
Now we're all working together. But this is a lot harder for off-the-shelf stuff; this is a lot harder for third-party stuff. Even though you may be doing this over here, there's still a wall over here when you want to use somebody else's code, and this problem kind of predates Kubernetes. You look at tools like Puppet, Salt, Chef, and Ansible: we'd had these tools well before 2013, when Kubernetes and Docker showed up. And yeah, I know some of these are configuration
managers and Kubernetes is really more of a scheduler, but they're all ways to get code up and running, first party or third party. And if you've ever pulled down a Helm chart, sorry, pulled down a Chef recipe, and made a bunch of changes to it to get it working in your stack, you know what I'm talking about. This predates Kubernetes, but what I'm hoping to demonstrate is that Kubernetes is equipped to solve this problem a lot better than its predecessors.
So what does this problem look like in Kubernetes? I'm not going to go super deep here, because I think Grant and Mark gave a really good overview of how this looks, but essentially Kubernetes is its own wall, and it's this kind of application versus runtime config wall. So, on the app side you have environment variables, ConfigMaps, image tags, pull policies: the things that the app maintainer really knows about and is prepared to execute against. And then on the other side you have kind of the things that would be owned by the cluster operator: things like ingress, pod security context, HTTP proxies, all this other runtime stuff. And we can see these fields in a real-life object. I'm omitting a couple of things, because matchLabels just makes it scroll all the way off the page.
But the thing is that the runtime and the app config are kind of mixed together. You can see the app config in green: things like where the secrets come from and what ports are exposed on these containers. And then there's the runtime side of things, like "oh, I want three replicas" or "I need to proxy out through this tunnel." Even in the image field you can see that these two concerns are mixed together in a single YAML field, where the tag of the container that's actually going to work with a version of the chart would be on one side, and the actual repo you want to pull from might be on the other side. So those are well mixed together. And app config, again: finite configurations, the app maintainer knows all the possible combinations, and it's easy to template: perfect for something like Helm. The runtime side is the opposite.
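As an illustration of that interleaving (this is a made-up Deployment, not the exact object from the slide), both kinds of config land in one manifest, and even a single field can carry both concerns:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: better-db                 # hypothetical app name
spec:
  replicas: 3                     # runtime config: the cluster operator decides
  template:
    spec:
      containers:
        - name: better-db
          # one field, two concerns: the registry is a runtime choice,
          # the tag is an app choice made by the maintainer
          image: registry.internal.example.com/better-db:0.3.0
          imagePullPolicy: IfNotPresent      # app config
          ports:
            - containerPort: 5432            # app config: what the app exposes
          env:
            - name: LOG_LEVEL                # app config
              value: info
          envFrom:
            - secretRef:
                name: better-db-secrets      # app config: where secrets come from
```

There is no structural boundary in the object itself that tells you which fields belong to whom; that is exactly the wall the talk is describing.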
You've got infinite options, you've got CRDs, you've got infrastructure; it's independent of the application, and it's really the cluster operator's concern. So you've got these two things, and there's a fun little parallel between app and runtime and first party and third party, in terms of the limits on the number of environments and all that. We'll get a little bit deeper into this, but the consequence of this, and we talked about this a lot yesterday, is that maintaining an OSS chart means accepting a lot of changes to add configurability.
So you look at something like the Prometheus chart: 70% of the commits are "make this one field configurable." And so, you know, as an operator customizing a third-party chart, it's slow: you've got to go through a process, because it's not your code, and it adds complexity. A thousand lines of values.yaml means everyone needs to learn it before they can install the thing, and most of the time the defaults are good, but even when you do want to change something, you've got to kind of grok the whole thing.
So how can I get the speed and simplicity of first party for charts I don't own? You've got two options. You can fork the chart. So let's look at this base chart, where there's one value: the color blue sets an environment variable. We could fork this, we could add replicas to it, and then we can kind of have our runtime and our app config combined.
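A minimal sketch of that fork (the chart contents here are stand-ins for whatever is on the slide): the upstream chart exposes only `color`, and the fork hard-wires a runtime concern straight into the template.

```yaml
# values.yaml in the upstream chart: the single app-config knob
color: blue
---
# templates/deployment.yaml (abridged): in our fork we add the
# `replicas` line, a runtime concern the upstream never exposed,
# and from then on we own the whole file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2                     # <-- added in the fork
  template:
    spec:
      containers:
        - name: example-app
          image: example/app:1.0
          env:
            - name: COLOR
              value: "{{ .Values.color }}"
```

The one added line works, but it makes you the de facto maintainer of everything around it.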
Then, you know, you template this out and it all works. Otherwise, you can template it out and then change it: take the chart, don't modify it, template it out, get your color in, and then just modify the YAML at the end and add the replicas. Neither of these options is great. You get customizability, but you kind of become the owner, and this makes day two really hard. I want to pause here, because we've said the words "day two" a bunch of times so far and haven't really defined them.
So, in my mind, and I'm sure there are extensions to this, day two is kind of all the things that you would do when you deploy your own application: regular updates, monitoring, alerting, logging, canary deploys, security scanning, all this stuff. When you ship your own first-party code, you want to be able to do this stuff. When you ship other people's code, it kind of falls into two categories here.
So there's the observability side, which is kind of more about customizing the manifests, but the deployment side, doing regular updates and continuous delivery, is really what we want to focus on today. So you can think of your workflow as something like: you're writing some code, or you're writing some YAML; you commit it; there's a review; maybe there are some pre-commit steps.
These concerns kind of sit to the right side of this whole diagram. So, in other words, day two, again, is: leverage the same process I use for my own code when I run something else. And to do regular updates and CD and monitoring and log aggregation, you kind of need a few things: you need deployment flexibility, you need deep customization, and then you also need full transparency; you need to be able to see what's actually happening.
So when you do fork charts, you don't quite get all of those things. It makes it really hard to update, because now that you've forked this to add the things you need, whether you templated it or forked it directly, you still own the whole thing, and when the chart maintainer ships an update, you've got to go manually merge that all in, which leads to a fun game
I like to call "choose your merge conflict." So you could fork the chart, and then you could resolve this merge conflict; or you could template-and-modify, and you could resolve this merge conflict. Now, you can argue that this one's a little bit nicer, but it's never fun, and at the end of the day, instead of owning a couple of different values files
you own all these kind of forked charts on top of that. And because it's a manual process, it probably doesn't get updated that often, which is scary, because you need security, you need features, you need fixes. So the question is: how do I get better at this? And the answer is, you've got to embrace the wall. You've got this wall, you've got these two classes of config, and they're all mixed together.
Those of you who've been following the debate know there's a lot of talk of overlays and templating and all these different options. The thing that we want to talk about today, the solution that we're proposing, is something like Helm plus Kustomize. We've all heard this a couple of times: declarative configuration for Kubernetes. It's all based on Brian Grant's white paper; he's the Omega symbol you've seen floating around on GitHub issues and maybe Twitter. It goes deep into the difference between templating and patching, and kind of
why he thinks that patching is the better way to do this. Now, I don't fully buy in, because obviously templating is necessary, and the ytt talk, actually, if any of you caught that, was really great. So if you missed it, go watch the video; it's a really good breakdown of all the differences.
Anyway, template-free configuration means there's actually no code, and this is a lot more structured than things like Chef, where, okay, that looks like something I could put in a JSON object, but then you have variable references and for loops, and it gets really imperative. You can do the same thing with Ansible: good, it's declarative, I have the state that I want, I put it in a list and it works; and then suddenly you're templating desired-state operations over a double-nested for loop, and, I don't know.
It turns out that what we thought was YAML was actually just code pretending to be YAML. So Helm doesn't quite solve this either. It doesn't give you full access to the Kubernetes API, and that's by design, I think: you don't want to give people private variables, sorry, you want to give people public variables, so you can package it in a way that's approachable. But there are still loops; there's still some control flow.
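To illustrate that point (this fragment is my own, not from the talk): once a chart template grows conditionals and loops, the file is a program that emits YAML rather than YAML itself.

```yaml
# templates/deployment.yaml fragment from a hypothetical chart:
# the range/if directives below are Go-template control flow, i.e. code
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          env:
            {{- range $key, $val := .Values.extraEnv }}
            - name: {{ $key }}
              value: {{ $val | quote }}
            {{- end }}
          {{- if .Values.resources }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- end }}
```

You can no longer read the desired state off the file; you have to mentally execute it with a particular values.yaml in hand.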
So how can Kustomize help? The answer that we're going to throw up is: patch, don't fork. So we have our app config and our runtime config, and with Kustomize we'll kind of run through a really quick example. The chart contains the app config; we template it out; and then we write a small patch file, a strategic merge, and you can build these together. So you have your things mixed together at build time, but they're stored separately and built together with Kustomize, which is built into kubectl. And so you have your template and your patch, and you can extend your patch,
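A minimal sketch of that flow, with made-up file names: render the chart once into a base, then let Kustomize layer the runtime config on top.

```yaml
# base/deployment.yaml: produced by `helm template`, never edited by hand
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  template:
    spec:
      containers:
        - name: example-app
          image: example/app:1.0
---
# patch.yaml: a strategic merge patch holding only the runtime concerns
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app               # matched by group/kind/name
spec:
  replicas: 2
---
# kustomization.yaml: ties the two together; `kustomize build .`
# (or `kubectl apply -k .`) emits the merged object
resources:
  - base/deployment.yaml
patchesStrategicMerge:
  - patch.yaml
```

The patch never mentions the fields it doesn't care about, which is what keeps it small and keeps ownership split cleanly.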
If, say, we want to update the image pull policy or something. And even though Kubernetes consumes all these types in a single object, we can store them all separately. This scales really nicely. Do a real app with multiple manifests: you have your base, which is all the application side, and your overlay, which would be all the patches. And even though the chart maintainer didn't make this configurable, I can scale this to multiple environments; I can write a little YAML to make the tweaks that I need.
So, you know: staging, replicas three; production, replicas ten. So what does this have to do with day-two operations? It gets back into the difference between templating and patching. "The user can periodically rebase their base": that's straight out of the white paper, and it's kind of the most important innovation of Kustomize. You can pull down the latest, and you're never resolving git merge conflicts, because there are no templates, because there's no code, because there's no control flow. We can just do this really easy,
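That per-environment split might look like this (directory names are my own, not from the demo):

```yaml
# overlays/staging/kustomization.yaml
resources:
  - ../../base                    # the rendered chart, shared by all envs
patchesStrategicMerge:
  - replicas.yaml                 # sets spec.replicas: 3
---
# overlays/production/kustomization.yaml
resources:
  - ../../base
patchesStrategicMerge:
  - replicas.yaml                 # sets spec.replicas: 10
```

When the maintainer ships a new chart version, you re-render the base and the overlays simply re-apply on top; there is no fork to merge.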
like a structured object merge. It's still a merge; there's still room for conflicts. But we find that in ninety percent of cases you can just pull the new one, and you never, ever get stuck with something like this. Another bonus is that Kustomize emits plain YAML, which means you get greppable diffs. And so, yeah, that's it: patch, don't fork.
Kustomize is easy to update, but that was a lot of steps we just went through, so we've built a tool that we'd like to share with y'all for how to really operationalize this. Replicated KOTS stands for Kubernetes off-the-shelf software. If you're familiar with Replicated Ship, it's kind of the spiritual successor to that project. It's really optimized for managing Helm charts: there's no more forking charts, it streamlines the advanced side of the configuration with Kustomize, and it plugs into your workflow; you can do GitOps and all that fun stuff.
So if you want to do this, you want to merge, and you want to kind of have your chart maintainers on one side and your cluster operators on the advanced config. The patch moves along with the updates, and instead of owning forked charts, you just own some patches. KOTS is built on this idea of midstreams and downstreams. So Kustomize is kind of infinitely recursive:
you can have as many layers here as you want, but this is kind of the formula we've found works for most folks, where the upstream is a chart owned by a maintainer, and the midstream is essentially our organization-wide config for this object. So say today we're going to run 20 different Redises, but they're all going to have the same kind of basic parameters in the base set:
environment variables, memory limits, however we're going to do it. And then the downstreams are your actual deployment environments, so you might have prod-us, prod-us-east, maybe a staging environment. So we can look at this basic workflow, and the idea here is that KOTS, formerly Ship, is going to plug in at the code review stage and let you kind of pipe all of the changes coming from the Helm charts hub or from Monocular into your core first-party workflow.
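In Kustomize terms, that upstream/midstream/downstream stack is just overlays on overlays. A rough sketch, with invented paths and file names:

```yaml
# midstream/kustomization.yaml: org-wide config applied on top of
# the rendered upstream chart
resources:
  - ../upstream                   # output of `helm template` for the chart
patchesStrategicMerge:
  - memory-limits.yaml            # e.g. the same limits for every Redis we run
---
# downstreams/prod-us-east/kustomization.yaml: one per environment
resources:
  - ../../midstream
patchesStrategicMerge:
  - replicas.yaml
  - vault-url.yaml                # environment-specific tweaks
```

Each layer only owns its own small patches; a new upstream release flows through without anyone re-merging a fork.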
Essentially, you have some stuff, things happen, it gets into a repo, and once things are committed to the repo, then it's going to get piped into the cluster. And KOTS is going to own kind of just one side of this. The big goal is that it's going to help you get the right things into your repo, and you bring your own process for the right side of this. So we're going to install an app called BetterDB. This is a very small app, super simple.
It's not actually a database; it's got a small values file. And so we're going to have the upstream, which is the chart; we're going to have a midstream, which is kind of our core config; and then we'll have two downstreams, one for production and one for staging. And KOTS is going to help us, again, get the right thing into the right repo as soon as it's released. And getting back to day-2 operations: as things are pushed into the repo,
this is where you can do things like CVE scanning, integration tests, policy enforcement, managing Docker images, and then even going further to the right, automated monitoring, because on the right side you can do all your canaries and log aggregation: kind of all the normal ways that you would ship your own code that you're managing.
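None of that tooling is prescribed here; as one hedged example, a CI job on the GitOps repo could validate and scan every manifest change before it merges (the workflow layout, directory name, and tool choices below are my own, not part of KOTS):

```yaml
# .github/workflows/manifest-checks.yaml: runs on every PR opened
# against the deploy repo, whether by KOTS or by a human
name: manifest-checks
on: pull_request
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate rendered manifests against API schemas
        run: kubeconform -summary rendered/
      - name: Scan manifests for known misconfigurations
        run: trivy config rendered/
```

This assumes the scanners are installed on the runner; the point is only that the PR is the natural hook for these checks.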
So let's pop into a quick demo, and we'll see what we can do.
This is just, you know, a community chart, and I have my GitOps deploy repo. So this is that repo from the diagram: when things get merged into specific branches of it, they'll go out to specific clusters. So I have a staging branch, I have a production branch, and the assumption is that there's some process that gets that live in those specific downstreams. So we're going to use a tool; this is a hosted version of it that you can log into with GitHub, and it's public right now.
It's also open source, or it will be open source very soon, and I think there's a version of it you can run on-prem in your own cluster, things like that. But essentially, you'd say it's a multi-app management dashboard, and so you can have a bunch of different Helm charts managed in this way. So to kick this off, I'm just going to install a new application here. I'll grab the URL from GitHub; there are a bunch of different URL formats that are supported here.
So let me just make a couple of changes here. You'll see, let me make that bigger: we'll change the replica count to two, and then we'll change the service type to LoadBalancer, let's say I'm running on a cloud provider. So the basic app options get configured by the Helm values, and then it's going to template everything out and give me the option to add additional customizations on top. In this case, the patch we're going to add on is going to be some custom secret management via a Vault init container.
You know, we manage secrets in our own way; we don't want to use environment variables. And so what we're going to do is add an init container that's going to copy secrets out of Vault and drop them into where they need to be. There's kind of an editor here that can help you build patches out. What I'm going to do, actually, is, I just have this patch somewhere, so I'm going to grab it.
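The exact patch from the demo isn't reproduced in the transcript, but a strategic merge patch of that shape could look like this (the container names, image, and paths are invented for illustration):

```yaml
# vault-secrets-patch.yaml: adds an init container that fetches
# secrets from Vault and writes them to a volume the app mounts
apiVersion: apps/v1
kind: Deployment
metadata:
  name: better-db
spec:
  template:
    spec:
      initContainers:
        - name: vault-init
          image: example/vault-fetch:latest
          env:
            - name: VAULT_ADDR
              value: https://vault.staging.example.com
          volumeMounts:
            - name: secrets
              mountPath: /secrets
      containers:
        - name: better-db          # merged by name with the chart's container
          volumeMounts:
            - name: secrets
              mountPath: /etc/better-db/secrets
      volumes:
        - name: secrets
          emptyDir: {}
```

Because container lists merge by `name`, the patch extends the existing container instead of replacing it.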
Cool. So I'll tab over here, and I'm actually just going to paste this in. I can save this, I can see the diff, and then, adding some volume mounts to the original container, we can save through this. And then the option here is, I can just download the YAML that is the result of this.
So in this case, the edit I'm going to make here is essentially just modifying the Vault URL that we're going to use. This is our staging environment, so we'll pop through here, and again, this kind of shows you the more interactive side of this. So I can come in and say, you know, vault-staging.somebigco.com. Cool. So I've got my midstream, which is kind of my standard configuration, and then I've made environment-specific tweaks at the downstream layer.
Cool. And so what this is going to do is create a PR into my staging repo, and I can come here and I can review it. You can see all the YAML being merged in. I'm going to pull this in, and essentially that should show, in a sec, that we've updated the version here. And we can do the same thing for our production environment.
So if we go into the BetterDB chart, let's assume I'm, you know, the CTO of BetterDB. I see there's a PR from a contributor here, and they're bumping the image tag, which is great, and they're adding a parameter for the log level, and I say, great, we should let people set the log level when this thing is running. So I'll come through here and I'll merge this in. So the Helm chart has been updated behind the scenes; maybe it happened in a different time zone while I was asleep as the cluster operator.
Of course, you do this five times, and then you do the live demo and it doesn't quite make it. Okay, cool. So the upstream version is 0.3.0, and if we refresh here, we should see, okay, cool: we've got a PR from the admin dashboard, and you can see just the changes to the Kubernetes YAML that are going to happen. So there's an environment variable for the log level, defaulting to info, and we've got the new image tag. So as the kind of cluster admin, I can also review this.
So I can go ahead and merge this in, and then, if I come back to the dashboard, you know, we're up to date. And then you can do that across multiple environments, with internal processes that happen before and after deploy: you can go to staging but not merge the production one. It's fully auditable, you get good rollbacks, all of that; you don't need me to preach the virtues of GitOps.
So that's essentially the tool. It's kotsadm, and it's, you know, the open source admin console for Kubernetes applications. It's got a Helm chart install, it's multi-app, it does GitOps, it lets you have multiple downstreams, it does stuff around running preflight checks against clusters before installing apps into them, and it's got kind of sophisticated maintainer-defined troubleshooting tools in there. And so in general, as an operator, you get to own less YAML, you get to use your own process, you can change whatever you want, and you get to see exactly
what's going on. As a maintainer, you write mostly YAML and fewer templates, and you only template your app config. Nobody will fork your chart anymore. This makes Helm even more flexible than it already is, and it enables kind of both first-time and advanced users to customize exactly what they want.
So, next steps: you can go check out the KOTS project. You can do kots pull to do this locally, and you can do kots install to work with that admin console; the Helm install is coming soon. We've also got a kubectl plugin called outdated,
so you can scan your cluster for images that are out of date compared to the Docker registry, similar to what Mark's demo showed yesterday. You can find us in the ship channel on the Kubernetes Slack, probably soon to be renamed to kots. Shipping 1.0 of projects is hard. And last of all: embrace the wall, it's not going anywhere. Thanks, everybody! That's day 2 operations of Helm charts.
Oh, and we're out of time, so no time for questions, but we'll be down at the booth. Come grab a hoodie; we'd love to chat more about this.