From YouTube: What's in Helm v3? - Matt Butcher, Microsoft
Description
You’re already familiar with Helm v2 but do you know what’s new in Helm v3 and how it all works? In this introduction to Helm v3, we’ll cover the major changes between Helm v2 and v3 that you’ll want to take note of in your migration including the following:
- Removal of Tiller - v3 uses the Kubernetes API server instead
- Using RBAC to limit access & resources
- Inheriting security controls from kubeconfig
- Updates to chart repositories
- Improved Upgrade Strategy – three-way strategic merge patches
- and more!
A
I'm Matt Butcher, I'm one of the Helm core maintainers. I'm super excited because we have a bunch of Helm core maintainers on here today: Karen, Bridget, Martin, and I will be talking a little bit, and then during the workshop portion you'll get to meet some other Helm core maintainers.

A
So here's the agenda. All the time zones here are Pacific time, which I am not on, so I apologize if I make any mistakes here. We'll kick off now and do this brief welcome deck, and then I'm gonna dive in and talk for about an hour about Helm.

A
What's in Helm 3, how Helm 3 changed — I'll give you some history, I'll tell you some hopefully amusing stories, and give you kind of the conceptual grounding for the next talk. After I'm done, Martin — one of the core maintainers, and the person who was instrumental in building the Helm 2-to-3 migration plugin — will dive in with a really practical talk about how that works and how you do migrations with Helm, and we'll walk through kind of the entire process start to finish. Then we'll take a short break after that, and then the part I'm really excited about is we'll do a hands-on workshop. We've been working to prepare some curriculum to basically make it easy to walk through the process of doing a Helm 2-to-3 migration, and we've got a whole bunch of people standing by to help out. We'll break into some breakout rooms and do that all together — really looking forward to that. So that's our agenda for the day.
B
Awesome, okay. So next, just a few housekeeping items before we get started. Everyone should be muted upon entry, since this is a regular Zoom meeting; we want to give our speakers the time and attention to present first. So with that, if you do have questions, please go ahead and drop them into chat — I'll be going through them, and we'll do them at the end of the presentation.

B
So again, just drop your questions into chat and we'll do them at the end. And then please make sure you adhere to the CNCF code of conduct — this is an official CNCF event, so just be respectful to everyone here. And then, lastly, just a huge thank you to all the people who've helped put on this workshop. The people listed are helping with the hands-on workshop, or doing the presentations, or have just contributed to the event in some way. And also a huge thank you to CNCF for their support.
B
With that, I guess — Matt, do you want to get started a little bit early? Sure? Okay. Well, I'd like to introduce Matt Butcher, a principal software engineer at Microsoft, and he will be talking about what's in Helm.

A
Okay, slides up? Okay, Karen, all right. Okay, so I'm really excited to be here today. What we're really focusing on today is the differences between Helm 2 and Helm 3 and how to migrate from one to the other. So in my session I'm going to talk a lot about how Helm works and then kind of what the design decisions were.

So that's what I'm going to talk about — very focused on sort of the way that things work and the abstract notions that we have guiding Helm. Then I'm going to hand off to Martin, and Martin is going to take a much more practical approach and really focus on what it means to upgrade, how things work, and what to look out for. So that's how we'll break things out in these first couple of sessions today.
A
Well, that's just a little bit — that's about a year after Helm 3 came out, and this happens to be the day that Helm 2 becomes unmaintained. So what does that mean? Well, there are two major components of this that you should understand. The first one is that the piece of software that we call Helm and Tiller will no longer receive any updates at that point. So currently we are in the RC phase of Helm 2.17.0, which means it'll be released within a few days.

This is our last Helm release on the 2.0 branch — our last planned release. For a long time we've been doing only security fixes on Helm 2, but all good things have to come to an end, and in this case, in order to devote full attention to Helm 3 and start the planning for Helm 4, we need to stop supporting Helm 2. So 2.17.0 will be the last release. There will be no additional work on Helm 2 after that; there will be no updates after November 13th, 2020 — not even security updates.
A
So that's one part. The second one, which surprises people a little bit more, is that on November 13th the stable and incubator chart repositories will also stop receiving updates. Right now they're in maintenance-only mode: if you go look at the helm/charts repository on github.com, you'll see that people are still making fixes to things, but they're small fixes — we're not accepting new charts, I believe. But the plan is that those repositories will no longer receive any updates, and from November 13th onward they will be marked as archived — no more security updates, no more new charts being added.
A
Furthermore, as of November 13th, 2020, the Google Cloud Storage bucket that holds a bunch of those charts will no longer be available. So we're working on migrating everything over to charts.helm.sh, which will be the new endpoint for your Helm repositories — but those will just be mirrored, static archives of the Helm incubator and stable repositories.
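For anyone still pointed at the old Google-hosted endpoints, repointing a client is a couple of commands. A minimal sketch, assuming the mirrored archives live at the charts.helm.sh endpoints described above:

```bash
# Swap the retiring GCS-backed repos for the new static mirrors.
helm repo remove stable
helm repo add stable https://charts.helm.sh/stable
helm repo add incubator https://charts.helm.sh/incubator
helm repo update   # refresh the local index cache
```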
A
So, very quickly then: what does that mean as far as how you get charts? Now, here's kind of the deal. When we first started this model for Helm 2, we never expected Helm to grow as quickly and as expansively as it has, and so the process we had for accepting charts was very editorially based: we had a group of people who managed the stable and incubator chart repositories and examined every single chart that came up. Well, it has become an unmaintainable situation.
A
There are just too many good charts out there, and it's too stressful to try and be the sole gatekeeper for the public Helm chart world. So over the last year — and you've probably noticed this — we have been starting to spin off separate chart repositories, and we've been switching to a model closer to npm or CPAN or any of the sort of distributed package manager systems.

A
So now we have Artifact Hub, which is an official CNCF project. If you are looking for charts, you can go to Artifact Hub, search for something in there, and it will give you instructions on which repository to configure and which charts to fetch from there. Behind the scenes, Artifact Hub is aggregating from dozens and dozens — probably hundreds at this point — of package archives for Helm charts, and surfacing those in a nice, easy-to-use search fashion.
A
In Helm 3 we actually have the helm search client wired up to talk to Artifact Hub. So if you do helm search hub wordpress, it goes to Artifact Hub, finds all the WordPress charts on Artifact Hub, and prints them out in your command-line terminal. So there's a lot of integration between these two. But going forward, stable and incubator as centralized chart repositories will be gone, and this more distributed approach — where organizations and individuals self-publish their charts — is the way we'll do things going forward.
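The integration Matt describes is the ordinary search command. A quick sketch — the output columns are abbreviated here, and results will vary:

```bash
# Searches Artifact Hub, not your locally configured repos:
helm search hub wordpress
# URL                                          CHART VERSION  APP VERSION  DESCRIPTION
# https://artifacthub.io/packages/helm/...     ...            ...          Web publishing platform ...

# By contrast, this searches only repos you've added with "helm repo add":
helm search repo wordpress
```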
A
So, in these last couple of weeks before November 13, you can still expect security updates and very, very minor changes on the charts repository. But if you don't upgrade by November 13, 2020, you will be on your own, because the Helm maintainers will officially stop all of our development efforts. So that's something to keep in mind, and that's actually the real driving force for why we wanted to do this webinar right now: so that you would still have ample time to finish up these migrations, and so that we could help you.
A
You know, sort of proactively prepare for this and find any potential gotchas right now, up front, when core maintainers can be here and help you. Okay, so that's kind of the preamble, all right. So what I'm going to cover in this particular talk: I'm going to start by giving you the background story for where Helm came from, how it developed along the particular path that it did, and point out some of the hiccups along the way —

A
one of the really bad assumptions that we made, that we had to correct for with Helm 3. After that, I'm going to talk particularly about the brief life and tragic death of Tiller and what that means in the context of Helm 2 and Helm 3, and then I'm going to spend some time talking about releases.
A
So where did Helm come from? Maybe the story has been told too many times, but I'm gonna tell it again anyway, because it's Helm's birthday and it's fun for me to tell this story. Five years ago I was working for a company called Deis. It had just recently been purchased by Engine Yard, and we had been doing some R&D on this brand-new platform that was called Kubernetes.
A
I think at the time Kubernetes was maybe at version 1.1 — maybe 1.2 by that point — and we had been doing all kinds of crazy stuff, you know, all kinds of experiments and edge-leaning things, trying to figure out what we could do with Kubernetes, and we had become convinced that Kubernetes was going to be the next big thing.

A
So I was asked to present a "Kubernetes in a nutshell" kind of thing — to explain, not just to the other developers that were in my group or in the Deis part of engineering, but to all of Engine Yard, including marketing and communications and the executive team and everything, in 30 minutes, what Kubernetes was. Now, I'm sure some of you are laughing already, because we all know how hard it is to explain to anybody what Kubernetes is and what it does.
A
And I drew the after-lunch slot too, so I knew I was going to be doing this right after lunch. So I came back and arranged my kids' stuffed animals around the house and took pictures of them — a giraffe and a gopher and stuff like that — and wrote this silly PowerPoint presentation called The Illustrated Children's Guide to Kubernetes. Karen and I later partnered up and did the little book based on that. So later that day we decided to kick off a hackathon project.

A
At stake was a $75 Amazon gift card. And so my team — Jack Francis and Rimas and I — decided that we wanted to try something Kubernetes-oriented, because we had just kind of announced that was going to be our big shift. So we came up with this idea to do a package manager for Kubernetes.

A
We called it Kate's Place — k8s — and we thought it was really cute. And we spent the next two days, at every possible moment, between sessions and late at night and stuff like that, hacking together this little demo of a package manager for Kubernetes. So we won the $75 gift card — I know that's what all of you really wanted to hear. Yeah, we won it. We split it three ways. I think I spent mine on food, unsurprisingly — probably coffee, knowing me. And then we thought that was the end of it.
A
Well, the next day, which was a Friday, I got into the office and the phone rang. Yes, the actual phone rang. And I picked it up, and it was the CEO and the CTO, and they said: hey, we were talking last night, and we think that this package manager for Kubernetes thing might be a good idea, so we'd like you and Rimas and Jack to keep going on that.
A
We needed, you know, the right name for it. So Jack and I sat down with a nautical dictionary and flipped through it, tossing words back and forth, until Jack said: hey, wait — how about Helm? And I went: oh, that's great. And then we came up with charts as the metaphor for what the packages would look like, and that was the birth of Helm. So Adam and Michelle Noorali and I really spent the next several weeks just hammering away on code.
A
It was, you know, Go, and that was a new language for some of us, and we were just kind of working and working — really exciting. Various engineers would drop in and help out and then drop out. And at the very first KubeCon, which was in San Francisco, we announced Helm — which was, I think, Helm 0.1 or 0.2 or something like that — and we showed it off. Now, the first KubeCon was probably smaller than this workshop today. I don't remember how many people it was, but I remember being able to make eye contact with pretty much everybody in the audience. So it was pretty small, and we had a great time, and it was a lot of fun, but we didn't really expect it to go anywhere. So that was about, I think, November, maybe early December, of 2015.
A
Well, in January, Google called us and said: hey, do you want to fly out to Google in Seattle and chat with us about Helm? We've been working on this other project; we think the two can collaborate. So Adam and I and Gabe fly out to Google in Seattle, meet with some engineers there, and they proposed that we merge Helm together with their about-to-be-released — or maybe it had already been released — system called Kubernetes Deployment Manager. So we started working on that with them.
A
That became the basis for Helm 2, so Helm 1 never really saw the light of day. I think Helm Classic, which is what we call it now, maybe made it up to 0.13. But because Google had already released what they called their 1.0 release of Deployment Manager, we had to skip to the next number — and that's why we went from Helm Classic to Helm 2 but never really released a Helm 1.

A
So we worked together on that and then released the new version of Helm, and we were shocked when people just started jumping in and contributing code. I think we now have hundreds of companies and thousands of developers who have contributed code to Helm, and probably an order of magnitude more than that who have contributed and developed charts and stuff like that. It's just been phenomenal.
A
It's been fabulous. But as we went, we realized that some of the assumptions we made back in the Kubernetes 1.2-to-1.3-or-1.4 times were incorrect assumptions. And what we really wanted to do was follow semantic versioning, and semantic versioning says, you know, version numbers are major.minor.patch.

So if you fix a bug, you increment the patch release; if you add a feature that doesn't break anything, you increment the minor release; but any time you need to make a breaking change, you have to increment the major release number. So we knew that as soon as we broke something we'd have to go to Helm 3. So we deferred for a really long time and collected all of our breaking changes up, and then started this big development effort to do Helm 3. And then, when we released Helm 3, we knew it had a lot of big changes.
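As a concrete illustration of those rules, with made-up version numbers:

```bash
# SemVer: MAJOR.MINOR.PATCH
# 1.4.2 -> 1.4.3   bug fix only                   (patch bump)
# 1.4.3 -> 1.5.0   backward-compatible feature    (minor bump)
# 1.5.0 -> 2.0.0   breaking change                (major bump)
```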
A
They were going to necessitate things like today's workshop, right? And with Helm 3, you know, again we're following the same pattern: we're not going to introduce any breaking changes during the lifespan of Helm 3. Helm 4 will be the first release where we break anything.

A
But what this meant for us practically, during this sort of history-of-Helm thing, is that we did have to kind of buffer up a lot of changes and wait for quite a while. And then, you know, Adam, who ran the Helm 3 development cycle, said: okay, here we go, everybody get ready. And we started doing patch after patch after patch, breaking change after breaking change, in the Helm 3 branch, and then eventually got that stable and got that released.
A
So, you know, don't worry that you're gonna migrate to Helm 3 tomorrow and then have to migrate to Helm 4 — that's still probably a year to two years out. But we are trying to open up the opportunity now for the developers who have been maintaining Helm 2 and Helm 3 to be able to work on Helm 3 and start buffering stuff up for that Helm 4 bucket. All right. So that should give you a good background of where we've been with Helm and what it's done.
A
But I want to pivot now and talk for a little while about Tiller. I like to call this section "the life and death of Tiller: an overly dramatic telling of the story." So it all starts in that January meeting at Google. We were at the Google campus, having a great time; we took a break at lunchtime and went to the cafeteria all together, and we're standing in line.
A
You know, in the scoop-food-on-your-plate section of the Google cafeteria, just kind of chatting. And I'm chatting with one of the engineers there, and he says: you know, I really like working on Kubernetes, this has been great, but I'm a little concerned, because some people seem to think that we're going to be talking about thousand-node Kubernetes clusters with lots of different development teams working together.

But, you know, look at Kubernetes — we're not going to be able to reach that. He said: Kubernetes is never going to be a multi-tenant system. It's always going to be the case that a development team stands up their own Kubernetes cluster — we have three, maybe five tops, maybe 15 nodes on this thing, and each team has their own. And so that became the topic of lunch, and we discussed this thing the whole of lunch.
A
We had basically convinced ourselves that, yes, indeed, Kubernetes was going to be a single-tenant system, small teams would work on it, and every team would have their own cluster. And that actually became — maybe from that point on unspoken, but — sort of a driving principle for how we developed Helm from then on out. Now, keep in mind, at that point there really weren't any RBAC mechanisms in Kubernetes, admission controllers were not a thing, and Deployments weren't even a thing at that point.
A
So this is how Helm Classic worked. Helm Classic was basically a manifest uploader, if we're really honest — I mean, "package manager" is a very generous way to describe what Helm Classic did. For the most part, it had a Chart.yaml and a bunch of static YAML files that were just Kubernetes resources, and it could bundle them up together and send them all to the Kubernetes API server at the same time, wait for the API server to say "okay, you're good," and then report to the user that it was working. That was about all Helm Classic did. We relied heavily on conventions for labels and things like that for people to be able to figure out what they just installed, and we didn't really have a strong upgrade story or a strong rollback story, or even a team management story, for Helm Classic.
A
But again, it was 0.13, so it was fairly early on. One of the things that a lot of people have not taken a look at is what Deployment Manager did, and how. So we were taking that Helm Classic model and combining it with the Deployment Manager model — and that's where Helm 2 came from. And I've got to be honest: I don't actually remember the Deployment Manager model and all the names correctly, so I've kind of fudged some of the names here.
A
I think the client was named kdm, and I can't remember what we called packages in Deployment Manager — they might have been resource bundles or templates or something like that — but I've just kept "chart" here for the sake of continuity. Essentially, the way it worked was: the client would upload a package to a server running inside of a Kubernetes cluster, and that server was called Deployment Manager. It was a JSON API server; we'd send data up to it.
A
We'd send — we'll keep calling it a chart — and Deployment Manager would unpack this chart, and it would send the templates over a network connection to another service running inside of the cluster called Expandybird. Now, Expandybird was sort of like a multiple-template renderer: it could render Python code, and it could render some template languages — one of the Python template languages,

I forget which one it was. It had experimental support for Jsonnet, but basically its job was to expand templates and then return back YAML. And then DM would store the YAML in MongoDB, upload a copy of the YAML manifest to Kubernetes, wait for it to come back successful, and then return to the client that it had deployed.
A
So this should look a little bit familiar, because parts of this, you know, made their way into Helm. Side note here: on occasion, people ask us to include support for more template languages in Helm, and the main reason we don't — and we've reconsidered this multiple times, but we still kind of stand fast on it — is because our experience with Expandybird indicated that it would get very, very complicated. What we were experiencing was a single render, right? A single chart might pull in, you know, some Python code, some fairly trivial templates, and some Jsonnet, and then it would render something and stick it in the cluster. And you'd go and look at the cluster and go: well, where did this thing come from? And then you'd have to sort through a combination of Python code, curly-braces templates, and Jsonnet, and try to figure out how they interacted together in such a way that they produced that thing. And then, you know, we were talking about adding more languages.
A
So one of the early decisions we made was that we did not think it was right by the user or the chart developer for us to support multiple template languages. Even though that meant that some people would have to learn a new template language, it prevented the case where everyone would have to learn nine or ten different languages just to read a chart, or to look at their install and know what happened.
A
So we took this Deployment Manager model and that earlier Helm Classic model and we combined them: we decided to reduce the complexity of Deployment Manager, increase the complexity from Helm Classic, and kind of meet in the middle. So what was the DM server got replaced with what was Tiller, where Tiller's job was to sit as a gRPC server. The Helm client remained relatively straightforward: it would basically take charts and values and upload them to Tiller.

A
Tiller would then render the templates itself, perform any additional operations, load that stuff into the Kubernetes API server, store a release record of that release inside of Kubernetes, and then return back to the user and say: okay, I deployed your thing. And this worked well, because it introduced for us, over Helm Classic, the ability to do upgrades and then rollbacks — but it was still simpler than the Deployment Manager model, which was definitely on the complex side and difficult to operate.
A
So this seemed to work well, and it met many of our obligations, because we had one single Tiller that was almost like a root user. And since we were working for, you know, single-tenancy clusters, it was a great solution for us, and we really liked that. But over time we realized — and I'll talk about this coming up soon — that Tiller was not holding up to expectation.

A
In fact, where we had assumed Kubernetes would go one direction, Kubernetes really turned a very different direction, and because of that we had to adjust course. And we again waited for a while, but when we did finally do this with Helm 3, what we decided was we needed to remove Tiller.
A
Tiller had actually become more of a hindrance than a help. So really, in effect, we rolled back almost all the way to the Helm Classic model — but with one notable change, and this is the change that we will spend a good chunk of our time today talking about: we kept the notion of storing releases inside of Kubernetes. We just took Tiller out of the equation; Tiller was no longer the sole authority over those releases, but we kept the release objects there.

A
We had to tune them up, and that's what causes this whole migration thing: we made some changes to the release object. So we'll talk about that in just a moment, but before going there, I wanted to talk about why we removed Tiller.
A
There have been speculations, a lot of loud opinions, and stuff like that, but in all honesty it boils down to one thing: authentication and authorization. People will say: oh, well, Tiller was a security nightmare, or Tiller was a stability nightmare, or "I had to run nine of these things in every cluster," or "64 of these in every cluster" — and those are the superficial problems. But the core cause of all of this was actually that we hit a boundary with authentication and authorization, and we could not figure out a way to solve it.
A
Excuse me. So what was the boundary? Well, here's how it goes in Helm 2. You know, I'm Matt, right — my username's matt-b; Martin is martin-h. So we're both working together on a cluster, hypothetically, and I log in to Kubernetes using kubectl.

A
That's not necessarily a bad feature if you're working on a small team of users and the trust model is that everybody on the team is trusted. But as soon as you get into multi-tenancy situations, that's a bad security model to have, right? And so we tried to fix it, and what we wanted to do was to say: okay, I want to tell Tiller, "hey, I'm the Kubernetes user matt-b," and have Tiller say okay.
A
We could not find a way to do this. We tried all kinds of things. We had one branch that lived on Helm 2 for, I think, six months, where we were trying to figure out ways to finagle around and embed stuff in gRPC headers that would authorize users for things. There was no way for a pod to contact the Kubernetes API on behalf of another user — it was always only itself — and we tried and tried.

We tried probably near a dozen different ways, some of which were, you know, ridiculous at face value, and others of which we worked and worked and worked on and then ultimately decided couldn't function the way we needed. So we ended up with a choice: we had two different models to make this work.
A
Model one: we could take Tiller and write our own authentication and authorization system. Users could set up their own permissions, and we could have a Helm/Tiller administrator who managed the user-and-permissions database; that administrator would also manage the permissions on the Kubernetes cluster and would have to manually keep those in sync. Or, model two: we could remove Tiller altogether, use Kubernetes itself for the authentication and authorization, and have it all be negotiated between the client and the Kubernetes API server.

The first one obviously would have been very, very complicated, right? It would have introduced new roles — you'd need a Tiller administrator. It would have introduced massive amounts of new code, and it would have introduced the problem that we would have had to manually synchronize the permissions you had in Kubernetes with the permissions Tiller gave you. We decided that was a non-starter, and so we began the process of tearing out Tiller.

So in the new model, Helm 3 connects to the API server as matt-b or as martin-h, and when I connect to the Kubernetes server, the Kubernetes server says: okay, you only have access to your dev namespace; Martin has access to staging and dev, but you don't. So any time you try to install something, it can only go into dev, and you can only install these particular things. So you use the Kubernetes RBAC system in Helm 3 directly, by contacting and negotiating that directly with the Kubernetes API server, instead of having Tiller as a mid-person that had to receive everything and then act on behalf of someone else. I'm acting on my own behalf instead of relying on Tiller. That's how the new authentication and authorization model works, and it's proven to be, I think, the right step forward.
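Since Helm 3 is just another API client, confining a user the way Matt describes is plain Kubernetes RBAC. A minimal sketch with hypothetical names (the user matt-b, the namespace dev, and Kubernetes' built-in "edit" ClusterRole):

```bash
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: matt-b-can-edit-dev
  namespace: dev            # grants rights in this namespace only
subjects:
  - kind: User
    name: matt-b
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                # built-in aggregate role
  apiGroup: rbac.authorization.k8s.io
EOF

# Helm inherits exactly these rights from the user's kubeconfig:
helm install my-site ./drupal-chart --namespace dev       # allowed
helm install my-site ./drupal-chart --namespace staging   # rejected by the API server
```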
A
I think we did the right thing in adjusting course. Kubernetes is indeed a multi-tenant environment, and in order to facilitate that, we needed to integrate as fully and completely with the Kubernetes way of doing things as we could.

Unfortunately, the upshot of any pivot of that size is that it's going to require some strain as we migrate from one to the other — and that's, you know, what we're here today to talk about, right? And nowhere did this issue show up more clearly than in the concept of a release. Releases have turned out to be the biggest impact on the Helm 2-to-3 migration, because we had to change the way this system worked in order to follow the new Tiller-less structure.
A
I think the introduction of releases — which came about in Helm 2 and wasn't present in either Deployment Manager or Helm Classic — was probably the single best design outcome of Helm. So, what's a release? Well, a release is a record of an installation — a living record that updates each time an installation is updated. So here, let me give you a practical example.
A
So Martin and I are working on the same cluster. We're both working on separate projects, but we both want to use, say, Drupal to have our internal website for these things. So I want my Drupal instance; Martin wants his Drupal instance. We should each be able to install separate versions of Drupal — his for his stuff, mine for my stuff — and then, when I upgrade mine, it shouldn't impact his, and when he changes the values configuration on his, it shouldn't impact mine.
A
So we need to track two separate installations of the same software, and then, as I upgrade mine, we need to track how my version was upgraded and what changes happened between the last time and this time. That's how the release system in Helm works; that's the problem it's designed to solve: tracking those installations over time.
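In Helm 3 terms, the scenario looks something like this — a sketch with hypothetical release and chart names:

```bash
# Two independent releases of the same chart:
helm install matt-drupal ./drupal-chart --namespace dev
helm install martin-drupal ./drupal-chart --namespace staging

# Upgrading one release bumps only its own revision counter:
helm upgrade matt-drupal ./drupal-chart --namespace dev

# The release record remembers every revision:
helm history matt-drupal --namespace dev
# REVISION  STATUS      DESCRIPTION
# 1         superseded  Install complete
# 2         deployed    Upgrade complete
```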
A
So I create a new release. The way that it worked in Helm 2 was that I would send, from my Helm client, the chart and my values files up to Tiller, but Tiller would manage all the release stuff for me. The client could be basically blissfully unaware of how Tiller was managing all of the releases. Now, behind the scenes, Tiller was storing these things inside of the Kubernetes cluster, and it was managing the versioning, checking for race conditions, and doing all kinds of things like that.
A
I've definitely never had this happen to me — I'm sure all of you are saying the same thing, that you have never had this particular incident occur in your organization — but imagine, hypothetically, we had this case where, you know, your colleague Sam is getting ready for a vacation. Now, Sam's got some major deadlines; Sam's got to get this thing out the door right away. Sam works and works and works and works on it — it's getting behind, but they think they can get it out just in time. It's 4:56.

You're wrapping up for the end of the day, looking forward to the weekend and that pizza party you're going to this evening, when all of a sudden the cluster, you know, catches fire. The release that Sam did broke, and now you've got to fix it, because Sam is gone and incommunicado for some amount of time. And so you're looking at everything going: well, what did Sam do? I need to figure out what broke, how it broke, and how I can fix it.
A
So with Helm Classic, there was no workflow at all for this scenario. Basically, you'd have to find Sam's workstation, log in, and see what Sam did from Sam's point of view, because there was no record whatsoever. In Helm 2, then, you could contact Tiller and say: okay, show me what values were uploaded, show me what chart was used, show me what the rendered version of the values was.
A
We needed to preserve that particular behavior, but we needed to do it without Tiller — which meant our release record had to be robust enough that it could handle that particular scenario. But it had to be able to do it without introducing the race condition that happens when, say, Martin and I are working on the same particular release. We can't risk that, if we accidentally both release very close to each other, we corrupt the Kubernetes installation or, worse, we corrupt the release record.
A
So we had kind of the race-condition thing going on on one side, while we were dealing with the Friday release problem on the other, and Helm 3 had to solve all of that while recognizing that there was no central authority — no Tiller — in the middle saying: okay, hang on, Martin, Matt's release is going out; okay, Matt's release is out, your turn, Martin — and stuff like that. So that necessitated making a number of changes to the release object.
A
But along with that, like I said, we buffered up a lot of changes, right? We decided to make some other changes to the way releases work so that we could correct a number of other things. So, in Helm 2, releases were all stored together. You know, Martin installs into his staging namespace, and I install into my dev namespace, but both of our release records go in the same particular place inside of Kubernetes. By default, we were putting them in kube-system.
A
A lot of people are like: why would you have done that? Well, again, let's rewind to the core assumption we made that turned out to be false: we thought we were building a single-tenant system. So it totally made sense to put stuff in kube-system, because kube-system was where you put stuff you didn't want users to look at. And we never thought about the fact that, when you had 100 different development teams all using the same thing, you could end up with tens of thousands of release records inside of kube-system.
A
That introduced a number of management problems, particularly when we took Tiller out of the equation, because Tiller was no longer there. We didn't want to have to give everybody access to the same namespace so that they could store all of their releases in there, and then assume that these development teams wouldn't accidentally stomp on each other's releases.

A
So we needed to fix that. The Helm 2-era fix was to just tell everybody to run lots and lots of instances of Tiller, but that seemed like not a good solution in the long run. And so in Helm 3 we just changed it: in Helm 3, the release records are now stored side by side with the releases that they describe.
A
So when I install WordPress into my dev namespace, the release records — instead of being written to kube-system or somewhere else — are also written into the same namespace: they're written into the dev namespace. And when Martin installs his version into his staging environment, his release records get written into the staging namespace. Now, this has turned out to actually have some great benefits. First of all, you can use RBAC and things like that to limit access.
A
So we didn't worry about necessarily polluting that namespace with things that users could change around when they shouldn't. But moreover, say you get that Friday situation, right, and you have to drop into a namespace you're unfamiliar with. You do kubectl get pods and you see a bazillion things running, and you do kubectl get secrets and you see another bunch of things running, and you're going:

What are all these things, and how do they tie together? Well, now you can just point Helm 3 at that namespace and say: tell me all the things that were released into this namespace — you know, "ls" this namespace — and it'll say: okay, well, here's a Drupal instance, and here's a ChartMuseum instance, and here's, you know, three proxy instances; that's what you're seeing in this cluster. So it really localized the management of your releases into just one namespace, which again really fits the multi-tenant model that we've seen Kubernetes mature into.
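That "ls this namespace" workflow is literally helm list. A quick sketch, with hypothetical release names and abbreviated output:

```bash
# Everything Helm has released into the namespace you just dropped into:
helm list --namespace staging
# NAME           NAMESPACE  REVISION  STATUS    CHART
# his-drupal     staging    3         deployed  drupal-x.y.z
# chartmuseum    staging    1         deployed  chartmuseum-x.y.z
```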
A
So it's been great; we really like that. Really, kind of the only caveat we had to give people in this case was: just don't create multi-tenant namespaces. You don't want all of your teams working in the default namespace. You want to segment things out, and then the built-in security features of Kubernetes and the RBAC system will help you protect each namespace and grant people access to just the Helm releases that they're supposed to be able to see.

A
So we're actually really excited about that particular change in Helm 3, because it really opened up a security model that better fit with the way Kubernetes now works — the way Kubernetes had matured its security model. Then we made one more change to releases that's a notable change.
A
When we wrote Helm 2, we stored our releases inside of ConfigMaps. Why did we do that? Well, there were really two reasons to store them in ConfigMaps at that point. Reason number one was that ConfigMaps were just ever so slightly smaller than Secrets — we had one less base64 pass over them. Ever so slightly smaller: not actually a very good reason. But the second reason is even worse: we used ConfigMaps because ConfigMaps were new and shiny, and we were like, hey, new and shiny —

this must be the way forward; Secrets are old, they're going to go away, we're going to use ConfigMaps. Well, we were really kind of wrong on that account too. And the excuse at the time was: well, a Secret's not really secret at all, it's just a base64-encoded object. But since then, again, things have changed.
A
The security model for Kubernetes has matured, and the usage patterns for Helm and for Kubernetes have changed. Now we're in this situation where it's more important to protect the information in that release than it is to save on size or anything else, and so they really ought to go in Secrets. Furthermore, Secrets in Kubernetes can now be backed by vaults and other storage systems that actually store encrypted secrets. So there is actually a legitimate safety and security story behind storing them in Secrets.
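You can see the storage change directly on a cluster. A sketch, assuming the default storage backends on each side (Helm 2's Tiller labeled its ConfigMaps with OWNER=TILLER; Helm 3 writes Secrets of type helm.sh/release.v1 into the release's own namespace):

```bash
# Helm 2: release records as ConfigMaps in Tiller's namespace (kube-system by default)
kubectl get configmaps --namespace kube-system -l OWNER=TILLER

# Helm 3: release records as Secrets, living next to the workloads they describe
kubectl get secrets --namespace dev --field-selector type=helm.sh/release.v1
```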
A
Okay, so now, if we add all those things up: we've made several big changes to releases. We've changed where they're stored; we've changed some of the formatting of how the record looks; we've changed the kind of object they're stored inside of in Kubernetes. Those are all big deals, and those are all things that, when we move from Helm 2 to Helm 3, have to be renegotiated. We have to actually migrate that data from an old format in an old location to a new format in a new location.
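That data migration is exactly what the helm-2to3 plugin — the subject of Martin's talk and the hands-on workshop — automates. A minimal sketch of the usual sequence, assuming the Helm 3 client is installed alongside Helm 2:

```bash
# Install the migration plugin into the Helm 3 client:
helm plugin install https://github.com/helm/helm-2to3

# Copy Helm 2 client configuration (repos, plugins) into Helm 3's layout:
helm 2to3 move config

# Convert one release's records from the v2 format/location to v3:
helm 2to3 convert my-release

# Once everything is converted, remove Helm 2 data and Tiller:
helm 2to3 cleanup
```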
A
Why releases have really been a focal point is because this is the number one thing that we have to deal with — and deal with very carefully — when we're doing migrations, like later on in the workshop. I did want to cover one more thing as we talk about the differences between Helm 2 and 3, but this one will have less of an impact than you might suppose on the migrations that we're going to do today and on your real-world migrations.
A
When we introduced Helm 3, we also changed the structure of the chart. It was really our biggest change ever to the chart format. So again, another embarrassing story. I've been working on a book with Matt Farina and with Josh Dolitsky about Helm 3 — it'll be out, I think, probably December or something, so we're done writing everything. Have you ever worked on the book-writing process?
A
You write all the chapters, and then you send them to the publisher, and the publisher gets some volunteers from the community and from the technical arenas to read the book and give you an early critique of what was clear, what was unclear, what wasn't covered, what was over-covered — all that kind of stuff. And I was shocked when I got back a technical review that was just, like, angry — we'll just say angry. And the reviewer is going: hey, I don't understand this.
A
I thought you were going to write a book about Helm 3, and instead you're writing a book about Helm 2, and over and over again you're talking about Helm 2 and about how this should work, and I don't even understand why you're bothering to cover any of this, because by the time the book comes out, Helm 2 will be fully deprecated. And I'm going: I only wrote one paragraph about Helm 2, and it was basically "migrate off of Helm 2 by November 13th or else."
A
Then I realized, to my horror, that I had been talking about charts v2, and the reader had thought that charts v2 were part of Helm v2. This is the first time it actually occurred to me: oh no — charts v1 work with Helm v2; charts v2 work with Helm v3; charts v2 don't work with Helm v2; but charts v2 and charts v1 both work with Helm v3. We created a naming nightmare, and I very much apologize for not noticing this until just recently.
A
We changed the version string from version one to version two — that's always a good thing — but we also moved the requirements.yaml content directly into the Chart.yaml file, so there is no more requirements.yaml. We added a crds directory. We added the ability to write JSON schemas that can be used to schematize a values.yaml file.

A
Martin wrote the support for library charts. A library chart is a chart that doesn't actually install anything on its own, but can be used by other charts to provide common tooling. It was a pattern that we saw emerge out of the community's building of charts, but which we thought we could further canonize and make easier to support as a first-class thing.
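Rolled together, those changes look roughly like this in an apiVersion v2 chart — a hypothetical example, written out with a heredoc for illustration:

```bash
cat > Chart.yaml <<'EOF'
apiVersion: v2              # charts v2: the Helm 3 chart format
name: mychart
version: 0.1.0
type: application           # or "library" for a library chart
dependencies:               # formerly a separate requirements.yaml
  - name: postgresql
    version: "~9.1.0"
    repository: https://charts.example.com
EOF
# Alongside Chart.yaml, a crds/ directory holds CRDs, and a
# values.schema.json file can validate the chart's values.yaml.
```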
A
You may have inadvertently run across this when you tried to install a v2 chart with Helm v2, because those are incompatible. But the best part, for this particular context, is that you don't have to actually worry about the v2 versions of charts in order to do this migration: Helm 3 shipped with full support for the older chart format, and so, as we do this migration, we actually don't have to change anything about the charts.

A
The focus is really more on the releases. Now, as you have time, and needs that are better handled by the new version of charts, I'd urge you to migrate, because the new version does have some niceties and some forward-looking features that will make certain other things easier — but there's no rush to take care of that particular thing during your migration.
A
So I think I would like to give a chance for people to ask questions, if we feel like we have enough time for that. Otherwise, we can go straight into Martin's talk. I'll let Bridget kind of call the shots here.
C
Oh hey — we had a few questions in the chat, but I think we are mostly up to date. Perhaps just telling folks, if they are getting started: taking a look at the Helm 3 book, can they start there? What can they start with?
A
Yeah, so it'll come out from O'Reilly in December of this year — and I don't actually know what the exact publication date is. In the meantime, the docs site for Helm covers really a lot of the same material.

The book will be, obviously, bigger — in narrative form, with a lot of explanation of how it works, and more stories about mistakes we made in Helm 2 and how we fixed them, and other things like that. But definitely, for now, the best source is the helm.sh docs, and then, coming in December — you know, it makes a lovely holiday gift. That's probably not true, but that's when it will come out. Did that answer the question?
A
You know, the compatibility level for charts has been very high, from Helm 2 alpha 1 up to the present. But when we wanted to change some things in the format of the chart, we incremented the version number by exactly one, which brought us to charts v2. So charts v2 are for Helm v3, and charts v2 do not work in Helm v2. So again, it was a naming faux pas on our side, and, I guess, with Helm 4 we'll have another shot at it.
A
So that is an interesting question, and I like it, because it is possible — and I don't know of anybody who's tried this before. So here's what you could do: you could write a controller that would observe the release records as they're written to Kubernetes. You basically write a controller that watches for Secrets with the Helm type attached to them, and the release record actually gives a fair amount of information about the current state the release is in.

Many of these things you don't necessarily see surfaced all the way to the client, because they're so quick that the client wouldn't necessarily see them, but you'll see a release go into its pre-install and then its install and then its installed status, or its upgrading status and then its upgraded status.
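A low-tech way to see those transitions without writing a full controller — a sketch, using Helm 3's release-Secret naming scheme with a hypothetical release name:

```bash
# Watch release records change while an install/upgrade runs in another shell:
kubectl get secrets --namespace dev \
  --field-selector type=helm.sh/release.v1 --watch

# Decode one record (the payload is base64-encoded, gzipped JSON):
kubectl get secret sh.helm.release.v1.my-release.v1 --namespace dev \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip
```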
C
Great. So, summarizing: if you're migrating a chart — not a deployment, just the chart — there are only a couple of changes needed, like move the requirements.yaml inside the Chart.yaml and bump the API version. Anything else?
D
Yeah — and I'm going to touch on it in a few minutes — but your apiVersion v1 charts, which were used in Helm v2, are still renderable in Helm v3 without any changes, except around crd-install hooks. What will happen in that situation is that it doesn't install the CRDs if you're using crd-install hooks; and also it won't create a namespace on the fly unless you give it an extra flag. But apart from that, they're still renderable, because we wanted to maintain that capability.
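The namespace half of that is an explicit opt-in in Helm 3 — the extra flag Martin mentions is --create-namespace in current Helm 3 releases:

```bash
# Helm 2 created a missing namespace silently; Helm 3 makes you ask for it:
helm install my-app ./my-chart --namespace new-team --create-namespace
```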
D
But if you want to use new capabilities — like the type field, where you specify whether it's an application or library chart, or the new way to declare dependencies, for example, in the Chart.yaml — then you'd bump it up to apiVersion v2 and make the changes there. And I suppose, going down the line, before we get to Helm 4 — if Helm 4 comes out someday — then you probably would want to have moved up to apiVersion v2. Does that seem right?
C
Yeah, thank you, Martin. And I think we have time for one last question, which is: so Helm 3 doesn't use its own dedicated CRDs, just Secret resources?
A
Yes, yes. We explored the CRD route very carefully. There were a couple of security things that weighed against it, but ultimately, at the end of the day, there is one feature of CRDs that we realized could be so utterly catastrophic that we would not do it.
A
CRDs, by definition, are modifiable by cluster users, whereas Secrets are not. If, for any reason, you delete a CRD, it deletes all the resources of that type — which means, with one accidental typo, you could wipe out all the Helm releases on your cluster. And we went: that's not any problem we ever want to force anybody into. Secrets have actually been perfectly capable of accomplishing what we've needed, and so there wasn't necessarily a big requirement that we move off of Secrets.
A
The security consideration for Secrets was: because Secrets are auto-backed by a vault on some systems, different customers can easily choose which particular security backend they want. So Secrets ended up having some highly desirable features that would make the security model quite a bit stronger. But at the end of the day, it was that scary scenario — in which you could wipe out all your releases — that, ultimately, I think, convinced us that CRDs were the wrong choice. So that's why we chose Secrets and why we didn't choose CRDs.