From YouTube: Kubernetes SIG Apps 20180521
B: Part of that is, we wanted to make a deal with our application developers: we expect their application to be deployed to multiple clusters in multiple regions and multiple data centers, such that, as the operators of the clusters, we can take any one of them down for maintenance at any given time, or because we made a bad configuration change, or because etcd is sad, or whatever operational stuff. To make that more palatable, and not just push the burden onto the application devs, we built a deployment tool to handle that for them.
B: So what you're looking at in this diagram is overwhelming; I probably should have waited to share it, but that's okay. The gist of it is: we have this control plane / data plane separation, where we have a dedicated Kubernetes cluster where we run our deployment operator, called Shipper, and the set of CRDs that Shipper operates with, and then the controller takes those CRDs and orchestrates a bunch of application clusters on your behalf.
B: You have an Application object in your namespace in the management cluster, which has a template that corresponds to an individual release of that application. So when you edit the template of the Application, it creates a new release, just like when you edit a Deployment object it creates a new ReplicaSet; it's one-to-one. So if you want to go do a thing, you create or edit an Application object in the management cluster, and we orchestrate that change out to all the clusters on your behalf.
B: What I'm going to do is create an Application resource in the management cluster. Actually, all of my kubectl interactions will be with the management cluster; I'm not going to touch any of these four application clusters directly. Then we'll see what happens by looking at the workloads dashboard along the way.
B: Okey dokey. So, just to give you a quick overview of what an Application object looks like: this is what I'm going to create in the management cluster, in a namespace called demo. Namespaces in the management cluster are one-to-one with namespaces in the application clusters, so if you have permission to do something in a namespace in the management cluster, the same permissions apply to you in the application clusters.
B: The Application object consists of a template, kind of like a Deployment, where we have a chart, which is the actual stuff we're going to deploy; a set of cluster selectors to target where you want to put it; and a strategy which defines how you will do this rollout. This is where you can decide: do you want to do something that feels like blue/green, or something that feels like canary?
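As a rough sketch of the kind of Application object being described (the API group and field names here are illustrative approximations of what the demo shows, not the exact Shipper schema; the strategy field is sketched separately below):

    apiVersion: shipper.example/v1alpha1      # illustrative group/version
    kind: Application
    metadata:
      name: demo-app
      namespace: demo
    spec:
      template:
        chart:                                  # the actual stuff that gets deployed
          name: demo-app
          version: "0.0.2"
          repoUrl: https://charts.example.test  # e.g. a ChartMuseum instance
        clusterSelectors:                       # target where the release should go
          - region: eu-west
          - region: us-east
        values:                                 # chart values, as with Helm
          replicaCount: 3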
B: So, for capacity: this represents the percentage of the full, final state for one of these identities, either the incumbent (the old release) or the contender (the new release). What we're saying is that in step zero of my strategy, my old release should be at a hundred percent; it should still be doing all of the work. The new release, the contender, should have a single pod, or one percent of its capacity, and this is something that you might like.
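A sketch of what a rollout strategy along these lines might look like, again with illustrative field names rather than the exact schema: step zero keeps the incumbent at 100% of capacity and gives the contender a single pod's worth, roughly 1%, before any traffic shifts.

    strategy:
      steps:
        - name: staging
          capacity: {incumbent: 100, contender: 1}   # old release still does all the work
          traffic:  {incumbent: 100, contender: 0}
        - name: fifty-fifty
          capacity: {incumbent: 50, contender: 50}
          traffic:  {incumbent: 50, contender: 50}
        - name: full-on
          capacity: {incumbent: 0, contender: 100}
          traffic:  {incumbent: 0, contender: 100}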
B: One note I want to explain here is that we're doing traffic splitting with only vanilla Kubernetes Services and labels on pods. We're not doing any service mesh stuff here; we wanted to make sure that anyone could use this without having a particular service mesh installed. We know we're going to extend into doing more service mesh things later on, but for now it just does the vanilla thing, kind of like what you see in the Kubernetes docs for doing a canary deployment.
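That pattern, a single Service selecting the pods of both releases by a shared label so traffic splits roughly in proportion to pod counts, looks like this in plain Kubernetes (names are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: demo-app
      namespace: demo
    spec:
      selector:
        app: demo-app     # matches pods of both the incumbent and the contender;
                          # each release adds its own release-specific label on top
      ports:
        - port: 80
          targetPort: 8080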
B: Of course it worked five times before I started. So I've installed a bunch of Deployments into a bunch of clusters, and I'm expecting my multi-cluster orchestrator to scale them up to the position they should be at in the first step of the strategy, so that should happen shortly, and it should also create a bunch of Services for me. Great.
B: And then we can look at our Application object and see exactly what I created there, but also some status fields. So, for example: am I in the middle of aborting a rollout, do I have a release corresponding to my template, what releases do I have, am I in the middle of a rollout, that kind of thing. If I look at the release...
B: Okay, wonderful, thank you. Sorry about the jumping back and forth. So I snapshotted that application template into my release object; I'm looking at a Release object here, and we can go down and look at the conditions and the status and see what's going on with this thing. It says I'm at the first step, I'm trying to converge on the first step of my strategy, that is actually where I am right now, and I'm waiting for... I've got the capacity I wanted, and I want the installation status.
B: So we can see, thank goodness, all of the pods are in place. This is my step-zero position, so I'm going to go ahead and tell it: yes, please advance, because I don't have an old release; if I try to send traffic to any of the services, I won't get anything. So I'll say: go ahead to step, whoops, one, not R, which will tell it you should be at the state described by step one in the strategy, and if we go back to Google we'll see things happening.
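Advancing is itself a declarative edit: you set the desired step on the release-level object and the controllers converge on it. A hypothetical sketch (the field name is illustrative, not necessarily the real one):

    # e.g. kubectl -n demo patch <release-name> --type merge \
    #        -p '{"spec": {"targetStep": 1}}'
    spec:
      targetStep: 1   # "be at the state described by step 1 of the strategy"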
B: So I'll do that and then start sending traffic. Cool, I'm hitting a thing. And let's say I just discovered that, oh hey, actually after the slash I wanted the pod name, but I forgot to put in my chart that it should inject that environment variable, so I have a bug in my configuration. So I'm going to finish the release that we have by going to the final step. Oops, excuse me.
B: So the bug here is in the chart, so I have a different version of the chart with the bug fixed; let's say it's version 3. I'm pulling this chart from a ChartMuseum that I have deployed into the same management cluster, but it could be anywhere. So let's go ahead and do that. I'm not going to change anything else: I'm not changing the image, I'm not changing the values or anything like that.
B: And that will get things to scale up and also add things to the load balancer in a 50/50 split. So now, if we start watching our traffic, we'll see that we're load balancing evenly between the two, assuming the networks do not get clogged with garbage. So we're balancing approximately equally between the two different sets of pods, and that's cool. One nifty thing about this approach is that our rollout process is totally declarative. So if I advance, I add my pods to traffic, and then I decide: actually, whoa, this is no good.
B: I can go backwards to step zero, and that will remove things from the load balancer and scale them back down, and so we should see our traffic going back to only the first release; I've decided this was a bad rollout and I want to bail out. But actually this is a good release; we would like it to be full on, we want it to be in the final position of our strategy, so we'll change the target step to two for the controllers.
B: We're rolling it out internally right now, and one of the reasons we didn't just immediately release it is that we're just starting to put production cycles on it internally, and we want to work out the normal bumps that come with something of this criticality developing live, before we throw it at the world. I see a question in the chat about looking at the single-cluster case; that was one of my big goals at KubeCon.
A: I mean, it may not be as flexible as Federation v2, but if you're looking for feedback as to whether there would be value in getting community contributions and potentially open sourcing it, I can only speak for myself, but I would say: yeah, go for it. Multi-cluster deployments are something that people are definitely struggling with in the open-source community. So even if it's not as fully featured, and even if it's not Federation v2 or a full multi-cluster implementation, it provides the capability now, right, that people could actually use.
A: Double-clicked. So thanks again for the awesome demo. If there are no other questions, then we'll move on to project updates, if anybody wants to give any.
A: Okay, so I can give a little bit of an update on workloads. There are a few new bugs open against Job and CronJob, and a few against Deployment, that we're seeing, and we're going through and just doing a bug scrub. There was one patch that got put in for Job backoff, and that should get backported in the next 1.10-series release.
A: Okay, all right. So if we don't have...

E: I guess I'll give one on Helm real quick. So last week we noted that Helm put in a PR proposal to become a CNCF project. We actually presented to the CNCF TOC last Tuesday, after this meeting, to answer a bunch of questions. It'll be a little bit before they vote and decide, and they ask questions and we figure out where the right place for it to live is. This might be one of those things where, for something that is very Kubernetes-targeted, where does it belong within the CNCF: under Kubernetes or not?
E: We're going to have some interesting conversations about it. We're pushing that issue; it has now been presented to them, though. And then I'll say, on the community charts: I know there are folks out there who might be a little frustrated at the speed things are being merged. One of the things I'll say is we're working on it; the human way that we do things now has not scaled.
D: Some of the bigger things that are coming up and being worked on right now are Helm components, and what we're going to do is actually make it so that you can consume Helm charts inside of ksonnet. So that would be the only update. We have a demo that was going to be next week, and I just pinged Matt to say that next week is Memorial Day in the United States and I will not be working, so we'll see that demo, hopefully, the week after that.
A: Okay, all right, so moving on to discussion topics. I didn't have a lot for today. There was one thing that was discussed pretty heavily at KubeCon that I wanted to bring up, just to get a feel from other members of SIG Apps about where they're at with it and what they think about it.
A: So, I don't know if everyone's familiar with OpenShift, but OpenShift DeploymentConfigs have a notion of lifecycle hooks, and lifecycle hooks let you do things like: at a particular point in your deployment's rollout, you can launch a job that would, for instance, trigger a schema evolution for a database. That's a pretty common one. And there's a proposal open for deployment lifecycle hooks in the community, so it was something that I've literally heard requested, and Kargakis was the one who initially opened up the proposal.
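For reference, an OpenShift DeploymentConfig lifecycle hook looks roughly like this; the hook command is only an illustration:

    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    metadata:
      name: my-app
    spec:
      strategy:
        type: Rolling
        rollingParams:
          pre:                          # runs before the new pods roll out
            failurePolicy: Abort
            execNewPod:
              containerName: my-app
              command: ["/bin/sh", "-c", "run-schema-migration.sh"]  # e.g. a DB schema evolution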
A: I just wanted to see if there are any thoughts on whether others feel it would be a useful feature, or whether you think we have enough in terms of the core controllers right now and it's something you'd rather build on top as an extension. I just want to get a feeling for what people think about it.
E: How do operators approach and cover this? And then, you know, lifecycle is one of those things we talked about in Helm, because it can get so application specific: how do you have something generic enough? You're not a lifecycle management tool; how much into that do you want to be, versus giving other things the ability to do it? I think it's a hard space.
E: I mean, a really simple one is: before I install my thing, I want to make sure that the CRD is installed, and at an acceptable version that my thing will work with, right. There's a really simple lifecycle case at install time, and then how much further past those kinds of use cases do you go before we say we draw the line, you've got to use something else? Maybe if we had an idea of that, it might help codify how much should be in there.
A: So I guess the reason I wanted to bring this up is because I'm not entirely sure this is something you can do well with a third-party custom controller, unless you don't use the existing workloads primitives, right. There's no way to really pause the deployment, then launch something, and then start it again. I mean, there are tricks you could do, there's hackery you could use to get around it: maybe you mutate a pod so that the readiness check fails, and that blocks the deployment's rolling update. You could delete the image from a pod; that would probably just cause a block, but it's going to cause error conditions, and you really probably don't want to do it that way. So one thing that we want is for people to be able to leverage the existing workloads primitives to build higher-level orchestration on top of them: operators, custom controllers and so forth. So I guess that's my primary concern.
G: I think, first, I'll second what Ben said, and actually, to answer your question: there is a reasonable option where you create your own CRD that will do the initial initialization that you care about and then just kick off the deployment. But the problem with this approach is that you need to know how to write the controller, because basically you will end up writing one, even a very simple one. It has some...
G: It requires some knowledge about the API, and additionally it requires you to deploy your controller onto the cluster, which means you are deploying your controller as well as deploying your actual application, and this is all supposed to be serving the simple use case of invoking a simple two-line, or one-liner, script that just initializes a database or performs some initial configuration, once or, eventually, every now and then. So I don't see any easy way these days, other than something like lifecycle hooks. There are options with which you can solve this.
G: If you have access to a limited set of resources, for example on OpenShift Online, or you have a dedicated piece of a cluster for yourself, you cannot deploy CRDs, because you need to have a cluster-scoped resource to be able to do so, and we don't have anything that will allow creating namespace-scoped CRDs. I know there was interest in having something like that, but these days we don't have anything like it, which means in managed clusters you are not allowing people to do simple stuff.
E: Oh, good; if somebody else has something first, they can go. I was just going to jump in and say, you know, from a practical standpoint, Helm actually added hooks that fire at certain points; they deal with install, delete, upgrade and rollback, and there's even a special case for CRDs, to deal with the kinds of pragmatic things that folks have had to deal with. Because when you're installing something or upgrading it, there are some real-world cases where you get into light lifecycle...
E: ...those couple-of-line things. And Helm has had this for quite some time, because it's needed to do certain application things, and for Helm it makes sense, because you find this inside of other package managers; if you're inside of apt and some of these other things, these kinds of events can occur where you want to do stuff, and so it was approached that way. But the only way I know of to do this without something like Helm today is to use an imperative tool.
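A minimal example of the Helm hooks being referred to: a chart template can annotate a Job so it runs at a particular point in the release lifecycle (the Job body here is illustrative):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: "{{ .Release.Name }}-db-init"
      annotations:
        "helm.sh/hook": pre-install,pre-upgrade   # run before install and before upgrade
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: db-init
              image: example/db-init:1.0          # illustrative image
              command: ["/bin/sh", "-c", "init-db.sh"]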
E: But it's very much, you know: install this thing, then probe to make sure it's there, and once it's there you can go to the next steps, and that's just because of how OpenStack installation is; you've got to require certain things to be there before you can get to the next things. You can't really just give it an imperative... there isn't a tool to which you give an imperative or, I'm sorry, declarative config and it all just happens; it happens step by step, and you have to control that. That's what we do today.
G: This is supposed to be simple, one-off type stuff. You can compare it to, for example, CRDs versus API servers. CRDs are supposed to be simple; it's easy to write CRDs, and you have a custom resource definition for yourself. If you want some more advanced features around the type, you can go and write your own API server. That is the kind of use case and distinction we're trying to address with the lifecycle hooks.
G: We want to provide simple primitives to work with the current objects, the current resources that we have. If you want something more complicated, fine, these are the other options that you can go with; go ahead, feel free. But if you're looking for some one-off that will initialize that DB schema, which is probably the most common use case, this is something that is perfect for those lifecycle hooks.
E: Yeah, yeah. In this case I was actually more or less just getting at: if we do this, we need to communicate where it should be used and where it shouldn't, so people know when to go to other tools; it's a communications thing, right. And then just to share examples of what Helm did, because it's a practical problem, so it's already been done elsewhere. I'm not just...
G: ...sharing, yeah. But I agree with that, definitely outlining that this is only addressing the simple use case. If you want something more complicated, then there's either Kubeflow, there are CRDs, or whatever else we can come up with, and we point people there so that they know where to look, so that they connect the dots easily without, you know, trying to reinvent half of Kubernetes on their own.
A: Cool. And then I guess my only other thought was that if we did this, do we want to do it uniformly? Does every workload controller get lifecycle hooks? My understanding would be that if we're going to do this in one place in the API, we would want to do it for Deployment and at least StatefulSet. I'm not sure if lifecycle hooks for DaemonSets make sense, but that's something we would also have to consider.
G: Yeah, definitely worth starting with Deployments and adding it for StatefulSets. For the remaining ones, I don't see any reason to add it to Jobs; I barely even know what that would look like. Yeah, no, I don't think that makes sense; you can embed whatever logic you have in the Job itself, so it's pointless over there. For DaemonSets, I would say have an option for it, but we don't have to implement it until somebody asks for it. But definitely StatefulSets and Deployments are the place to start with these.
A: The thing about DaemonSets: StatefulSets and Deployments both go for some target number of replicas, with a different type of lifecycle management for the pods themselves and different guarantees. But a DaemonSet is sensitive to the number of nodes selected by its node selector, right. So it's kind of hard; it's not clear what the end state looks like for a DaemonSet, right. Nodes can come... well, depending on, I guess, in general...
A: Nodes can join and leave the cluster at any time, and if the selector selects a new node that comes in, all of a sudden you're deploying a new DaemonSet pod. So there's no notion of 'the DaemonSet is done when it's this large'; it's done when it has deployed the pods it needs to across the set of nodes that it selects. So I just, conceptually, don't know what that lifecycle looks like.
E: I have a question building on this, though, if you don't mind. If I'm writing stuff that's going to go do this, and I'm writing it in a generic sense, say I'm going to write something and then share it with others, and they're going to be on varied versions of Kubernetes: how do we detect whether a cluster has this in it yet or not? Do we just have to have knowledge of the versions, or will there be something around feature detection?
E: This just came into my mind because now we're talking about iterating on stable APIs and the features they have. Do we now need to know the features per version of Kubernetes and build that into our thinking and distribution, or do feature detection on top of that, being able to figure out whether this feature is here or not within a given controller? I don't know, this just occurred to me, so I was throwing the thought out there for others, because that's a problem when you have to share and people are across versions, I mean.
A: Generally speaking, if you're going to roll features into a stable API, you'd want to release them at various degrees of stability themselves, right. So you're managing a feature; usually you would do this with feature gate flags, so you would have to know whether the feature is enabled in your version of Kubernetes based on the flag. When it rolls out as alpha, it's going to be disabled by default on almost all distributions.
A: Unless that distribution says, we feel that the alpha feature is stable enough to configure it as on by default; in practice I haven't seen that very prevalently. When it goes beta and it's on by default, then effectively it's enabled. The questions there really become around rolling forward and rolling back, in the event that you roll out something with a new feature and then you feel that particular release is unstable and you want to roll back your distribution of Kubernetes, and now that feature has been disabled.
E: Yeah, but think about it this way: you're a tool deploying something into Kubernetes. How do you have knowledge about what's enabled or not in that cluster, to detect it and then maybe throw a linting error, right, to say your cluster doesn't have this feature? Say you're dealing with Deployments and now there's a beta feature in there; is it on, is it off, is it available? How do you deal with that as something that's deploying into it?
A: A discovery API for features, per se... you can only discover resources, and what resources are available, in the discovery API. That's more of a meta question: should we enable a way to label a feature? I mean, I guess you could detect it; I think you actually can get the feature flags out and determine if one...
A: You can use the OpenAPI spec to look at the actual resource definition and say, is this field there? But yeah, you'd be encoding... that's tricky, parsing the API from the discovery API into your tool, and I don't know if I would want to do that. My general approach is to do it based on Kubernetes version, so I know that on 1.10 it's available, and to use only stable features that are released at a particular version and code against that version. Don't use things that you feel are too unstable to include, or that might be deprecated.
G: I remember a discussion, I think with Brian, some time ago: we don't want to rely, at least internally in kube, on version names in particular. That's why we have discovery, that's why you have the OpenAPI descriptions, so that whenever we're actually checking whether something exists in the cluster, we are verifying the functionality's availability through a resource, maybe not necessarily through a field (I don't recall any checks like that), but through verifying that a resource is present.
E: I was just going to add that the complication comes in with this idea that core Kubernetes is looking to move slowly and not change very quickly, wanting to layer things on via the ecosystem now. There have been a number of times where SIG Architecture has said: let's not put this in Kubernetes; we're starting to have the tools, with CRDs and custom controllers, to do that as an ecosystem thing, and then clusters can opt in to have it. So, features per version...
E: Right, there's the core of Kubernetes, but many of the things going forward are going to be ecosystem things that are installed in that cluster. So when it comes to feature detection, and knowing what version of what is available and how it works going forward, it may not just be as simple as saying, well, this version of Kubernetes has this, so we can trust these APIs. It can now also be about: how do we know which CRDs and controllers are installed in the cluster?
A: Detecting a new resource, a custom resource definition that's been installed in your cluster, is actually fairly easy, and it's a little bit different from trying to dynamically detect, using the discovery API, whether a behavioral feature, one based on the presence of a field, is available in a particular version of Kubernetes. For the CRD, all you have to check is: is the resource installed; if it's installed, then you know it's available.
A: I think what you brought up in terms of doing things slowly in the core APIs makes a lot of sense, right. So if you're adding a custom resource definition, or an extension API server, to extend the functionality of a cluster, that's a use case going forward that we're working with and expect to be more prevalent, but it doesn't mean we're not going to do anything in the existing resources.
A: So, while we're not generally adding a lot of new resources into the core API, adding modifications and new capabilities where necessary to the core API is something that we're going to do. So it's not like we're not doing any features on the core APIs whatsoever, and I guess the standard way that people have been dealing with it is primarily doing version skew detection: just knowing that I don't build against this until it's beta, and once it's beta it's stable, and I need to be at version 1.10 or 1.11 of Kubernetes.
G: Especially since it has proven to work okay. There are so many changes that have happened in the core, and by core I mean the stuff that has lived since the beginning, like Pods or ReplicationControllers; there have been changes to every single one of those APIs, and they have been stable and GA for the past several years. And I remember, basically, that we did modify a lot, in the pod spec and in the pod definition itself included.
G: Security-related or whatever. So, like I said, it will happen, we will evolve those APIs, and as long as you're not breaking backwards compatibility, it's fine. The worst thing that could happen, if we started introducing new API versions of the resources every single time we try to add something to them, is that we would end up in a hell of managing that many different versions of those API groups. We already have quite a significant number of groups.
A: Speaking of which, as a topic we should discuss at some point in the future: deprecating the various workloads APIs. ReplicationController, well, the v1 ReplicationController is the one that's been around forever, but I was thinking more along the lines of extensions/v1beta1 Deployment, extensions/v1beta1 DaemonSet, apps/v1beta1 StatefulSet, apps/v1beta2 StatefulSet and DaemonSet. Yes.
G: Let's start with extensions/v1beta1. I had a goal to get rid of the extensions/v1beta1 API by the end of last year, which unfortunately did not happen, but I think it's doable for this year. I'm not sure how much other stuff is in there, but I feel pretty confident that all the workloads that live in extensions/v1beta1 should be removed in the next version. Well...
A: Storage version migration is definitely a technical problem that needs to be solved, but we probably want to disable it by default, so that you actually have to take action on your cluster in order to continue using it. That's a very strong signal, as you upgrade, that this is going away soon. So it's a way of unambiguously communicating our intentions to the rest of the community, to make sure that they're doing the right things and they don't get burnt, ending up in a place where: oh wait, I didn't do my storage migration.
G: We changed the cohabitation in such a way that previously all the resources were saved in etcd as extensions/v1beta1, and nowadays it's apps/v1 by default there, yes, exactly, which means every single resource you touch in 1.11 will get re-encoded and stored with the new version. This opens the path to disabling the API in 1.12 and completely removing it in 1.13. That's it.
C: My name is Jan, I'm new here, by the way; I'm from Alibaba. So we have one feature request, and I want to hear from you whether it makes sense or not. The request is that we want to upgrade a pod in place; basically, we don't want to tear down the pod while upgrading it.
C: That's what we want: we want to change the container image, but without tearing down the pod. You can do that, yeah, we can do that. So we want to see whether it makes sense to add it at the deployment controller level, like adding a new rolling update strategy, something like that, say an update without tearing the pod down.
A: We can't do in-place network reprogramming without tearing it down, I don't think; I mean, in theory we could, but we don't. So about the only thing you really can change is the container image. Adding a rolling update strategy for all the workloads controllers to just touch the image would mean you'd have to try to detect image skew. I mean, it's doable, but it's probably non-trivial, like you...
A: You'd also have to take into account which hash the image is at, right, because you can't just do it based on a string comparison; that might not achieve the desired result. Which is why I kind of think image streams might be the way you want to do it in practice. But I mean, if you want to open a proposal, we'd definitely, as a community, be willing to look at it. Yeah.
C: So we have use cases which require this, and we want it at the controller level, but we don't want to add it for every controller, so just the Deployment controller or something that makes sense. So this could support some of our use cases. And the image streams you're talking about, I'm not quite familiar with that.
A: OpenShift has a feature called image streams, which allows you to automatically detect the update of an image and change the image of a container, right. So instead of updating the deployment manually, you detect that, let's say, you're at the tag latest and latest actually changes value, and it'll roll out a new image for your containers, and that might be a more...
G: ...lifecycle action for the deployment. And the reason for triggering your deployment is that we are injecting a full pull spec, meaning not the human-readable, nice pull spec, but rather one that contains the specific ID of the image at the point in time that you are targeting when running your applications, so that the underlying replicas, or replication controllers, are pointing to a specific Docker image by ID. So that's... and the...
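For context, an OpenShift image change trigger looks roughly like this: the DeploymentConfig points at an ImageStreamTag, and when the tag's underlying image changes, a new rollout starts with the resolved image ID (names here are illustrative):

    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    metadata:
      name: my-app
    spec:
      triggers:
        - type: ImageChange
          imageChangeParams:
            automatic: true
            containerNames: ["my-app"]
            from:
              kind: ImageStreamTag
              name: my-app:latest   # when "latest" moves, a new rollout is triggered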
A: But the typical reasons I've seen are for very large images, and even then it doesn't really matter... it would be good. So, I would say this could run on for a long time and we have to finish up, but definitely, if you're interested in doing this work, open a proposal in the community so that we can have a more thorough discussion, have it recorded in a KEP, and go from there. It's not something we're saying we definitely won't do, but I think it's not something that's trivial enough that we can just say yeah.
A: All right, we're running a little over. So thanks everybody, thanks Ben for the awesome demo, and we'll see you... actually, I believe... are we going to cancel next week? Two more today, so, last announcement: we will not be meeting next week. It's an American, or US, holiday, Memorial Day. We will resume the following week, in June.