From YouTube: Kubernetes SIG Apps 20170417
A
So welcome, everyone, to the April 17th, 2017 Kubernetes SIG Apps. My name is Matt Farina, and we may or may not have Michelle today; we'll have to wait and see. So today, let me start by sharing the document where we can record any minutes or discussion, which actually has what we're going to talk about today. I'll share that into chat for anybody who wants to follow along and take notes.
A
You'll see that this week we are a little light on some of the organized topics, which is really going to give us a good opportunity to dig into one of the topics on here, which is the update API comparisons for deployments, things like that. We'll get into that in a minute. Two quick announcements. One: demo suggestions. Does anybody have something they want to demo, or maybe something you would like us to track down, because someone's got something cool and you would like to see it demoed here, but maybe it's not yours?
A
If you've got something, please let us know: reach out on the list, or reach out to Michelle and myself. We would really appreciate knowing about anything else that's out there. We've got some ideas, and we do have some stuff lined up, but since we do so many demos here, we're always looking for more. So please reach out if you've got something, even if it's just a feature on something that exists already and you want to show something new.
A
It really does hit home here, because it's user example documentation, and there's a discussion around finding a good path forward for handling that. Some of the stuff in the examples structure, as it is today, is stale, outdated, or untested, and they're trying to figure out who and what's the best road to handle cleaning that up, to give a better experience for the end users coming in. A lot of the content there predates Helm and a whole bunch of the stuff we have today, and so we need to figure that out.
A
Yep, so the topic, our discussion point, is the deployments, updates for stateful sets, things like that. There's a document here, and I will drop it into the text here. Since this is for deployments, daemon sets, stateful sets, and updates, this isn't my cup of tea: who is here who wants to walk us through this?
C
Okay, so I want to talk about how we handle the rolling update of the main controllers, which is Deployment, DaemonSet, and StatefulSet. We have two main solutions now. One is using a hash of the template; the other one is using template generation. Right now, Deployment is using the hash of the template to do the rolling update, so the hash value is used as a unique identifier.
C
And the template generation is also used, but we plan to use it in a similar way, using it to unify the sub-resources created for history. Right now, DaemonSet is using template generation to differentiate pods of different revisions, but DaemonSet and StatefulSet don't have any history implemented yet. So that's the general idea.
C
The reason why we need this discussion is that right now Deployment has a hash collision issue, because it uses Adler to compute the hash, and we have a customer-reported issue that this hash collides; it's not as stable as other hash functions like FNV. So we are proposing to switch to another hash function. For using hash, Deployment can just switch to a more stable hash algorithm, but if we just switch, then we need to migrate, because the old hash values might conflict with the new hash values.
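A minimal sketch, assuming the pod template has already been serialized to bytes, of the two hash functions being compared here (this is illustrative, not the actual Deployment controller code):

package main

import (
	"fmt"
	"hash/adler32"
	"hash/fnv"
)

// templateHashFNV hashes a serialized pod template with FNV-1a, the more
// collision-resistant alternative being proposed here.
func templateHashFNV(serialized []byte) uint32 {
	h := fnv.New32a()
	h.Write(serialized)
	return h.Sum32()
}

// templateHashAdler is the older Adler-32 approach being moved away from.
func templateHashAdler(serialized []byte) uint32 {
	return adler32.Checksum(serialized)
}

func main() {
	t := []byte(`{"containers":[{"name":"web","image":"nginx:1.11"}]}`)
	fmt.Printf("fnv=%d adler=%d\n", templateHashFNV(t), templateHashAdler(t))
}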
C
So the idempotent part is because the Deployment controller uses the hash, I mean, uses the cache. So there's a chance that it created a ReplicaSet but hasn't seen it yet, and then it might create more than one ReplicaSet with the same template. We have this issue in ReplicaSet and ReplicationController when they create pods: the pods have random names, so they may create more than they meant to, even though they will eventually converge to what's specified in the spec. But there's this issue.
C
On a real collision, we increase that unique identifier, and then the next time it will create a new hash value; or, if it's not due to a name conflict, then it will just retry again with the same unique identifier, so the hash value will be the same. Or it's not a collision at all: it's because you already created a ReplicaSet with the same name, and its template is the same as the one that you want; you just haven't observed this change in the cache, so you can just say you're done, because this is the reference that you want.
C
We use the hash value to create the ReplicaSet: typically we use it in the ReplicaSet name and in the pod-template-hash labels, and you also use that to label your pods. But the problem is that we right now use template generation in DaemonSet pods, so we'll need to deprecate template generation if we want to switch to hash.
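To make the dual use of the hash concrete, a hedged sketch: the pod-template-hash label key is the one Deployments use today, while the helper itself is illustrative.

package main

import "fmt"

// nameAndLabels shows the two places the template hash ends up: the
// ReplicaSet's name and the label stamped onto the ReplicaSet and its pods.
func nameAndLabels(deploymentName, templateHash string) (string, map[string]string) {
	name := fmt.Sprintf("%s-%s", deploymentName, templateHash)
	labels := map[string]string{"pod-template-hash": templateHash}
	return name, labels
}

func main() {
	name, labels := nameAndLabels("frontend", "3764504435")
	fmt.Println(name, labels)
}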
B
For those who are not familiar with it: template generation is just an incremental count. Every time a user updates the pod template of a DaemonSet, there is a generation that is increased, and, based on that, we are planning on having a history of pod templates, or whatever is used to store the history, where the name would be created from the generation. That's what the generation proposal is about. Yeah.
C
So template generation is a number starting from one. Every time the template is updated, the API server will increase the number by one, so only the users, I mean, only the API server can update it; users cannot change this value. So if we use template generation for history, it should be unique.
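A minimal sketch of the templateGeneration semantics just described, where only the API server bumps the counter, and only when the template actually changes (the helper name and byte-level comparison are illustrative assumptions):

package main

import (
	"bytes"
	"fmt"
)

// nextTemplateGeneration mimics what the API server does on update:
// increment the counter by one only when the pod template changed.
// Users never set this field themselves.
func nextTemplateGeneration(gen int64, oldTemplate, newTemplate []byte) int64 {
	if !bytes.Equal(oldTemplate, newTemplate) {
		return gen + 1
	}
	return gen
}

func main() {
	fmt.Println(nextTemplateGeneration(1, []byte("v1"), []byte("v2"))) // 2
	fmt.Println(nextTemplateGeneration(2, []byte("v2"), []byte("v2"))) // 2
}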
C
Okay, so for StatefulSet: if we use the template hash, it will be similar to DaemonSet and also to the current Deployment implementation. But you need to know that StatefulSet doesn't have template generation implemented yet today, so we can use either; there's no need to deprecate.
C
And for template generation: if we want to use template generation for Deployment, then we need to deprecate the template hash.
C
We also need to figure out how to adopt existing ReplicaSets from the previous release. So the idea is that, for the ReplicaSets created within the new release, we just name them suffixed with the template generation, and for the old ReplicaSets, we just relabel them with the ReplicaSet name.
C
So it's guaranteed to be unique, because you already have it.
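A hedged sketch of the naming scheme being proposed: new revisions get the owner's name suffixed with the template generation, which stays unique because only the API server increments it (the names here are illustrative):

package main

import "fmt"

// revisionName builds the history object's name from its owner and the
// template generation, e.g. "web-3" for generation 3 of StatefulSet "web".
func revisionName(ownerName string, templateGeneration int64) string {
	return fmt.Sprintf("%s-%d", ownerName, templateGeneration)
}

func main() {
	fmt.Println(revisionName("web", 3)) // web-3
}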
C
It has a similar naming to revision, and will it confuse users, like, which one is the real one, which one means the real revision?
D
And that's my main concern about the template generation approach: confusion with the revision. So, after a rollback, template generation will increase, but the replica set that's actually used, or, you know, whichever controller we're talking about, the name of that resource and the labels are going to be the old ones; they map to the old template generation, not the new template generation or the new revision. Yeah. With a hash, there is no such confusion, because it just looks like, you know, something that's not supposed to matter, which in fact is what this is supposed to be.
D
It's just supposed to be something that's not supposed to matter. It's not something that's supposed to be used for matching or anything else; it's just a uniquification mechanism, and the hash makes that more obvious. And over time we can converge the approaches for all the controllers, including replica sets and how they create pods: move to a more deterministic but similar-looking mechanism, like, you know, make them all the same in terms of format and in how they're generated. Whereas with template generation, I think there's pretty much...
F
The discussion for StatefulSet, at least at this point, for upgrade, is actually along the lines that, when the template generation increases, the controller has to detect the equivalence between the new template and previous revisions of its history, and then relabel its children in order to make them match the appropriate current template generation. So...
F
No, so for template generation for StatefulSet, the idea was to embed the template generation as a label on the pod itself, so you could relabel it, but it wasn't meant to be selectable; it was meant as an identifier. Okay, effectively you're using it as a label, and you could select on it, but it's no different from an annotation in terms of how it's actually being used.
B
I think the primitives would still, like, continue using deep equality, right? And we can replace that with equality. The name, in effect, shouldn't really matter, because it's something that is controlled by the Deployment: I have the Deployment; I don't care about anything else.
D
Mostly, though, other items: we want to be able to see things like, are the logs of this particular revision behaving correctly or not? In which case it's useful to be able to have some sort of identifiers that they can understand in terms of some concepts. So I think you will want to consider perhaps labeling pods with the revision, but not selecting on that. We do need a unique identifier for selection, and I agree that the unique identifier should serve no purpose other than just as a uniquifier for the resource name and the label.
E
To rehash: I think hash will almost always work, almost always. So, assuming 10 deployments per namespace and 10 revisions of history per deployment, the probability of collision with a good 32-bit hash should be about one in a million. So I think we can accept one-in-a-million failures per namespace, but...
A
Okay, right, you bring up an interesting point on the, you know, changing algorithms. Have we looked at something where, or today, I haven't looked at how we store it: do we actually store the algorithm, you know, to signify it, and then, like, a separator like a colon, and then the hash? Or just the hash itself?
D
The collision avoidance mechanism: I think it's not much work, and it can be a status field that users don't have to care about at all, and it will make it robust against these kinds of things. Like, I don't think we need to do any work to migrate pre-existing ReplicaSets generated by Deployments in currently running clusters, if we have this collision avoidance mechanism, because...
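A hedged sketch of that collision-avoidance idea: keep a counter, for example in status, and mix it into the hash input, so a genuine collision can be retried deterministically without migrating old objects (the field and function names here are assumptions, not the shipped API):

package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// hashWithCollisionCount mixes an optional collision counter into the
// template hash; bumping the counter yields a fresh name on a real collision,
// while a zero counter reproduces the plain template hash.
func hashWithCollisionCount(template []byte, collisionCount uint32) uint32 {
	h := fnv.New32a()
	h.Write(template)
	if collisionCount > 0 {
		var buf [4]byte
		binary.LittleEndian.PutUint32(buf[:], collisionCount)
		h.Write(buf[:])
	}
	return h.Sum32()
}

func main() {
	t := []byte(`{"containers":[{"name":"web","image":"nginx:1.11"}]}`)
	fmt.Println(hashWithCollisionCount(t, 0), hashWithCollisionCount(t, 1))
}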
F
Do you want to jump in? Well, the only issue: do we ever want the user to be able to view the history corresponding to a particular revision by name? Because with the template generation ID, right, the way I'm proposing, concatenating the name together via the StatefulSet name and the generation allows a user to view a StatefulSet pod template as part of the history by just saying: I want this pod template, with this name, at this generation.
G
What I want, I think, in the long run, is that there should be additional systems outside that help make history, like real history of an object, more of a generic problem, and that we should have it unfolded, but transformed, as a proxy on the server side. But I don't know that we have to... like, I don't know that the client should necessarily start hard-coding that assumption.
F
The rollback would look like: the template generation goes forward, and then, for StatefulSet, you relabel the pods that have the previous equivalent template to have the new generation as their label. So they're selectable by the new template generation, and the template generation is consistent with the current template of the StatefulSet.
G
I will say, though, that the ReplicaSet-and-Deployment mechanism is a complex mechanism. We don't have a ton of controllers driving controllers today, and if we did have controllers driving controllers, like federated StatefulSets, they are going to have the same problem when they map down from StatefulSets in a federation to StatefulSets in the cluster; depending on how that design proceeds, it's going to be harder. We should make sure that, within the StatefulSet itself, it manages the templates reasonably.
F
So then that's another question with this entire approach. For StatefulSet, currently using pod templates as the method of storing history is a proposal: do we have actual concurrence that that's what we want to do? Have we considered using a sub-resource inside of the status, for instance, or the spec, or an entirely separate sub-resource for another object?
G
So, in the original design discussions, it can't be a sub-resource, because there's too much data to store: it would mean we would effectively divide the size of a reasonable pod template by a third, so, you know, to keep the current, the next, and the previous. So I think we have to deal with a separate object. We can always transform a separate object as a sub-resource, as in the history sub-resource proposal, and whether it's a pod template or not, I think, is the question.
F
If we do a separate object as, basically, a referenced sub-resource inside of the StatefulSet, right, a separate object as a referenced resource inside of the StatefulSet, then you get the uniqueness benefit, right? So I'm not going to have collisions between DaemonSets and StatefulSets in the same namespace with the same name, in terms of their history, if we take similar approaches. And, I mean, in terms of making it more complicated or less complicated than pod template, I don't think there's any extra complication there.
G
For things that aren't deployments, which have to manage controllers that themselves are stateful objects, I think there's a reasonable statement that we can make: the whole point of a StatefulSet is to give you a target that you can update, and then it's done, bringing it into convergence as much as possible. The history is primarily there so that a user can identify and roll back to a previous state.
G
We had intended PodTemplate for a different purpose, and I think that we're at the point where we're trying to decide whether we give up on the original purpose of PodTemplate, which was to keep controllers from being able to create any object in the system, just by being able to run a pod, to be able to take over the system, because a controller that can create pods is probably the most powerful thing in the universe right now. Yeah.
G
We start talking about volume claim templates, and I'm not positive, but I feel somewhat confident in saying: if I update a StatefulSet, change the volume claim template, and totally blow away my StatefulSet, and the new StatefulSet doesn't work, I kind of want to be able to roll back to the previous version of both my volume claim templates and all that. Whether or not, you know, whether we include it... but I do feel like there's at least some hedge that I would make there...
D
So, actually, I want to expand on the delete-and-recreate point a little bit, since we're getting kind of off the original topic about template generation. Another thing I do want to be able to do is enable users to delete and recreate resources as an escape hatch. This has come up many, many times, for example, when we make incompatible changes to resources, or we change the name, like with StatefulSet, or, you know, basically for every single one of the key controllers we've been developing post-v1 API.
D
There have been cases where it has been useful to be able to delete and recreate, and there are still examples of that, for services, or if you want to do some out-of-band orchestration and not have the controller fight with you. It's a useful thing, since we don't have a way of freezing the controllers, generally. Yeah, so I want to preserve that ability. So, if we did use template generation, I would want a way to be able to auto-detect what template generation should be used.
G
So I do want to, though, just rewinding back: I mean, the argument, I think, for everything that's not a Deployment, is that the history object is mostly for kubectl and for the controller to have some atomic record that it can use, in the absence of a more general multi-revision store of StatefulSets. So we should bias towards that, and then we should assume in the future that there will be a generic history mechanism for people who want to go back and reconstruct full version history. But I can see there's...
D
A stronger reason, which is, and maybe you think it's different for replica sets versus pod templates, but I don't think of it as different: if you're doing any kind of rolling update, you need multiple versions of the configuration live at the same time, yeah, and you need some way of managing the lifecycles of those resources.
D
For example, if your pod specs have references to persistent volume claims or config maps or secrets or whatever it is, you need those resources to stay alive as long as the pods that are consuming them, or as long as you want to roll back to a template and create pods using that same configuration again. So, but...
G
And whether it is the history of the pod template, or a StatefulSet history, or a StatefulSet pod template, or a StatefulSet historical revision, or a StatefulSet revision, or whatever it is, you're right: if we want to hang references off of them, then the objects that exist in the system need to form that referential structure. But...
G
There has to be an object; it does not have to be PodTemplate. Brian made the case that PodTemplate, in its initial purpose, is more important than it used to be, because we are now starting to reach the boundaries of where that security inversion becomes very dangerous, as well as the tooling coordination, yeah, like kubectl set, for example. Exactly.
F
So, okay, as was just said: if we move toward the idea that we have this object, and the object could be referenced as a sub-resource but created separately, that stores all the history, and we're already going to have to do strong equality, like deep-equality testing, to determine if the pod template matches anything inside of the history, then do we even need a template generation? Is that even a necessary thing anymore?
D
Well, so, first of all, there's this issue that Clayton mentioned, which is: the pod templates themselves today are big, and they continue to get bigger, and I expect them to continue getting bigger, and resources are bounded in size. And second, it is going to prevent you from providing flexibility in terms of how much history is preserved. This...
G
I mean, I will say that, on the order of deltas: typically, the vast majority of changes are small. However, I mean, to Brian's point, yeah, assuming the existence of an external system that can store infinite deltas and infinite history, it becomes less of an issue to have those be live. Because if you have 30 live things holding references, you have another problem, which is...
G
People are starting to put very large environment variables into pod templates, which might be nested configs, a very reasonable use of the system. The risk there, Eric, is that, even with a small change, we can't do sub-field deltas efficiently enough to not break somebody who has something that's okay today. But...
G
You know, a 32K environment variable stored as a config, inline in the config, is a very reasonable way to use the system today. 32K is right at the balance of: you change that three or four times and you're starting to push up against, like, that's going to cause significant problems in the rest of the system. So...
D
We have some limits today, not enough limits, for sure, which is going to be hard to fix, but we have some limits today. But they're designed for pods, or pod templates, to just have one per etcd key. Changing those limits breaks backward compatibility; at some point you may need to impose new limits and break backward compatibility, but it can be painful.
G
I mean, like, the de-uniquification, like, it is annoying; it is not insoluble. But the problem is, it's unstable with regards to unanticipated changes in the system, which we've already demonstrated several times. So, is it worth it for us to build the correct de-uniquification the first time? And say: do we believe that we can anticipate all future changes, based on our current experience over the last two or three years on this? The...
G
It's interesting, because I think about something like the etcd operator, which doesn't look like a pod template at all, and we're not seeing an explosion of the number... I don't anticipate an explosion in the number of controllers that create pods in the near term. It's probably much more domain-specific third-party resources, Elasticsearch, Prometheus, alertmanager, etcetera, etcetera, the number of operators people are starting to create, and those tend to be much more specific resources. Do we believe that we need a solution that assists those as well, or would they use the uniquification?
E
Like, Elasticsearch doesn't create pods directly; it creates StatefulSets and Deployments. And I expect to see a lot more operators, not all, but many operators, where the things they're going to create are generic things, and they'll use them. But so that means... that means that, like, we're now going to have, like, a StatefulSet template and a Deployment template and a DaemonSet template problem. So I think we should make sure that we don't just focus on the fact that, yeah, we have a pod template object.
F
If the higher-level operators can't use StatefulSet as a primitive to build their stateful applications, then I don't... I think we're not doing something right. If, like, everybody is working around StatefulSet and doing something custom, then we haven't hit that target. That sweet spot is when we've covered a large number of applications with it.
G
Agreed. I've mostly been probing that angle for: what would the etcd operator, hypothetically, need to use? What are the things that it would need to do to properly handle rollbacks? Would it need to read the limited sub-resource for history on the StatefulSet, or would it need to touch the raw objects? Would it need its own uniquification? Would it also need to hold on to garbage references, etc.?
E
So it's not even clear to me that it would necessarily surface, like... I don't think that the Elasticsearch operator surfaces the pod templates for the individual parts to users; I think they can just set a select subset of fields. So it's almost a different problem that needs to be solved for history for that thing: there, you have to solve the "here are the five fields I let you change; let me have a history of those values," or maybe I'm not going to do rollback. I would...
F
For StatefulSet and DaemonSet, to go back to the HDFS example as a reference: if I'm updating my data nodes, I'm performing a configuration update on the DaemonSet of data nodes that are in my cluster, and the rollout of that configuration change, or, even if it's bits moving as well, that binary update, is going to be handled by the DaemonSet itself. And saying that I'm going to upgrade the configuration on my name nodes, or, if I'm using journal nodes for a quorum journal...
F
Those StatefulSets would handle the configuration for me. So, for the operator, I'm more concerned with encoding application-specific logic and application consistency. To put it in an example: if I was actually going to do an update for HDFS, I'd have to do one update to the name nodes to begin with, do periodic iterative updates to the daemon set to update the data nodes, and then do another update to the name nodes to finalize that upgrade for HDFS, right? And we're...
F
In the same way, if I was going to try to do YARN or something like that, same thing: I'd have multiple updates that constitute one application-wide actual update. And that, I feel, is the domain of the operator, but we need to provide the operator with the right primitives, so it can do that easily. Well...
G
Because we could certainly abuse annotations on that for controllers that needed it, for, in effect, adding things to a historical controller reference, like deltas for things that aren't directly captured. Like, say the historical controller reference included a pod template, hypothetically: we could certainly leave affordances for future addition of, say, a volume claim template delta, all the information that the StatefulSet needs to reconstruct.
G
I mean, we have a strong case for StatefulSet, as Ken pointed out. I don't have a strong case for StatefulSet needing volume claim templates. I am... I know that stateful pods are coupled, tied to their data; the data is tied to labels; labels are going to be live on pods, so that, even if we have delete-and-recreate, there's going to be some level of... we may need to track that, and I want to leave it open for the future.
E
So, let me ask a question: when users say they want to roll back today, do they mean "I want to roll back to a previous pod template," or do they mean "I want to undo the last operation I did to this object," to whatever fields they were changing? And maybe it just happens that the pod template is currently the only thing that's updatable on the object, but there might be fields in the non-template spec that are updatable in the future. So we'd end up needing history of the non-pod-template fields too.
D
You can say full rollback capability means users just store their configs in version control and roll back that way. And, as another thing: using the actual resources that are going to be used by the system as part of the history is, you know, necessary when we have multiple versions in flight, for example, two config maps, or secrets, and so on; it could be needed for the lifecycle management of those resources. In the case of the ReplicaSets generated by Deployment, it's needed because they're actually managing live pods. But for general rollback of other properties...
D
I don't necessarily know that we need to solve that problem, and there is at least one case where I believe, in general, we don't want to roll it back, which is the replica count in Deployment, for example, which people often want to control operationally, independent of other rollouts, or which may even be controlled by an autoscaler, which hopefully will become a very typical case, or at least a frequent case, in the future. So that's deliberately not rolled back when someone says roll back. I'm...
G
Going to add something. So this is kind of like... this is stuff that we hit in OpenShift deployment configs, because we added hooks, and hooks roughly look something like a very small pod template. But the end state of a hook is: well, why can't I do all the things I can do in pods? Because the hook is really just a simplified job. There was some overlap there, and I'd like to avoid making that same mistake twice, for the same reason that pod templates can't grow in the same object: you can't have, like, three copies.
E
So: we said that serious, heavy-duty users should store their config in revision control and have the rollback happen at the revision-control level. We also said that we can't have the full state history stored in an object, because of our users who have very large environment variables and who require a predictable number of rollbacks. Are we sure that those aren't...?
E
Is it possible that those are already handled by the same case, in the sense that the people that have the big environment variables and the people that need a predictable rollback are the same people, I mean, they should be using revision control, and we can just focus on the casual or adopting user with the built-in history? Well...
D
So another reason for the history is: it can bound the number of different versions currently in flight. Maybe that's not an issue with StatefulSets, but with Deployment I definitely want it to be the case that, if I have a deployment of a thousand pods, I can have ten rollouts in flight, like, I can just keep pushing changes, so that I don't have to worry about the case that, oh, a rollout is already in flight, so I can't actually push another change.
A
Yeah, hey folks, we've got three minutes left in SIG Apps, and so I'm just doing a time check here. I see that this is a very hearty conversation going on, and so I feel bad jumping in and calling time on this. So I just want to ask: what's the next step that we should take here to continue hashing this out, to tie off or continue conversations or whatnot, because we're coming to a close. We...