From YouTube: SIG Cluster Lifecycle 2021-02-09
B
Hi, yeah, I put a few slides together to make it easier too, but if you want to click, or I can share my screen, or I can just talk about it really briefly. A couple folks I know who are in the call right now know a little bit about this, either because I have talked to you about it before, I saw you at the Cluster API office hours a little while ago, or you commented on a PR that I worked on a while ago.
B
So hello again, everybody. For those of you who don't know, though: I'm from SIG Multicluster, and I'm working on a KEP proposal about cluster ID.
B
So this is, I think, something that has been talked about in the community beyond SIG Multicluster a lot before, but we do need to solve it in SIG Multicluster for some of our purposes.
B
So I just wanted to say hi and talk a little bit about it, and then make myself available, either here, or if you want to catch me on Slack, or talk on the pull request, just so that everybody here knows that this is going on. And in particular, on the last slide: the things we want to learn from other SIGs and subprojects, in particular, are whether there are other user stories or use cases outside of what we have already articulated in our KEP draft.
B
Whether it differs or is the same, we would love to hear about it. If you can click to slide four, actually, I would love it. Thank you.
B
So this first one: if you have a need for this, or think it shouldn't be solved, I guess, too, I'm curious to hear your thoughts. If you're interested in collaborating or getting updates, I would love to know what the best way to do so with this SIG or with other projects is. And then, it so happens, we also have a survey out for what we should name the CRD that we propose, so consider putting your two cents in. Yes, exactly: there's a SurveyMonkey where you can give us all of your feelings.
C
I'd recommend reaching out to the Cluster API subproject; they're going to be the ones that would have the most opinions and thoughts and comments. That's a very large group. I don't know if you've had time to chat with them or get on their agenda.
B
Yes, I went to their office hours, I think two or three weeks ago; I don't know, it's all fading together. But I have connected with some of them, and I'm hanging out with them in Slack too. I did want to come to this broader group, though, just in case there were some other subprojects or other connections people could make.
A
I will try to complete the survey, I guess, and I suggest that others do the same. I've pretty much already read the KEP. Like Tim is saying, the Cluster API group is probably the one who should comment, because they're affected by this. But on this call, I think the survey is a good way for people to provide feedback.
D
So, I like this KEP a lot, I guess, and I think the other thing which we might have to figure out, in terms of cluster lifecycle projects, is whether our various toolings, like kOps or kubeadm or Cluster API, when they create a cluster, should be seeding these values. I don't know whether we have a view on that yet, Laura.
B
We're shepherding that through SIG Multicluster meetings, mainly, and then also in conversation on the PR, so if you have feelings, those are probably the number one places to go. And I'm happy to also ping in Slack when there are updates, so that people know without needing to watch it.
A
Okay, yeah, Fabrizio is asking in chat that we should put the survey in the meeting notes. I will do that. I can do it too. Oh, you got it.
F
Hello? Somebody? Hello: this is Joey. Hi everyone, I'm from the SIG Storage team. So yeah, I think we do have a topic I wanted to mention, which is regarding CSI migration. So, for some background for folks that do not know much about CSI migration:
F
CSI migration is an effort that SIG Storage is trying to drive. Once enabled, it redirects all the in-tree storage plugin traffic to the CSI drivers, so it will help us migrate all the in-tree plugins to CSI.
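To make "redirecting in-tree traffic" concrete, here is a minimal sketch using the AWS EBS migration as the example (the StorageClass name and parameters are illustrative, not from the call): an existing StorageClass keeps referencing the legacy in-tree provisioner, and with the migration gates on, its provision/attach/mount operations are handled by the external ebs.csi.aws.com driver instead.

```yaml
# Pre-existing StorageClass; nothing here changes when migration is enabled.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs   # in-tree plugin name; with CSIMigrationAWS
parameters:                          # enabled, calls are translated to ebs.csi.aws.com
  type: gp2
```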
F
It's actually been beta since 1.17, and we're now trying to, well, even though it's already beta, it's not turned on by default, and there are a couple of other issues that we are trying to fix. So the current status is that we're probably going to turn it on by default for a lot of plugins in 1.22, and I just wanted to give you guys a heads up, because turning on CSI migration does need one important component, the CSI driver, to be installed by default.
G
What is important, also from the installer perspective, is to get a proper understanding of the upgrade story. So, what happens on an existing cluster?
F
Yeah, a good question. So I think CSI migration is a feature that is controlled by a feature gate. It indeed requires a certain component, the CSI driver, to be installed, so that all the in-tree traffic can be redirected. For example, let's take AWS EBS as an example.
F
So yeah, I mean, the CSI driver is a very strong dependency that we need to take care of, and that should be taken care of when we upgrade the cluster to that certain version.
F
So I mean, if we upgrade to a version where we turn CSI migration on, then the CSI driver will be required on the cluster.
F
Yeah, so CSI migration has, like, two phases. So for the first phase, there are two parts of it.
F
The feature flag, the CSI migration feature flag, can be turned on on the master, which is the kube-controller-manager side, and it also needs to be turned on on the kubelet side. So let's say the feature gate is turned on only on the master side, and the node, on the kubelet, doesn't turn on this CSI migration feature gate: then all the traffic will still go through the in-tree plugin, which is the in-tree AWS storage plugin traffic.
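A minimal kubeadm-flavored sketch of what "both sides" means, again using the AWS gates (gate names and API versions are as of roughly the 1.20 era and should be treated as assumptions; check the release you target):

```yaml
# Control plane side: feature gates on the kube-controller-manager.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    feature-gates: "CSIMigration=true,CSIMigrationAWS=true"
---
# Node side: every kubelet must enable the same gates, or its traffic
# keeps going through the in-tree plugin, exactly as described above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIMigration: true
  CSIMigrationAWS: true
```

Because the kubelet gate is per-node, a rolling upgrade can legitimately have some nodes migrated and some not, which is why this two-sided behavior matters for the upgrade story.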
C
So, I'm a little surprised that someone from Red Hat didn't kind of mention that at the storage SIG, but if they're okay with it, then I'm not going to really pipe up too much. But I think my previous request, for a set of scenarios, particularly the upgrade scenarios, as well as the sequence and order of operations, would be hugely useful for us to know what changes we need to make, because we're going to have to stage this.
F
Yeah, so I think the main change is that, for all the non-Kubernetes distributions, we will have to install the CSI driver. So when I said "bundle with the cloud provider" here, I'm more talking about: for AWS it should bundle the EBS driver, for GCP it should bundle the PD CSI driver, and for Azure...
F
It should bundle the Azure drivers, I mean. But for other clouds, if they have their own different requirements, it might be different, or they can have their own installation story, as long as the users are happy with that. I think it should be...
F
...fine, just nice. So I think the main issue here is that I want to let the folks from the SIG Cluster Lifecycle team know that this is coming up, and we don't want to surprise you that you have to install CSI drivers to use the storage plugins, etc.
F
So if there's anything that we can do before this happens, to prepare for the upcoming change, I'm happy to work with you guys to figure out what we need to do on this.
C
So, for stuff like this that will affect multiple projects, I think having an umbrella tracking issue and then talking with the other subprojects to link in, so, having an issue inside the kubeadm repo, an issue inside the kOps repo, an issue inside the Cluster API repo, and then linking them through some umbrella issue, would be highly beneficial. Because I have the suspicion that, even though you're talking with us today, that stuff's going to get lost, and then somebody's going to get the next release and they're going to freak out.
F
Yeah, that I think totally makes sense. I can create three issues, in kOps, kubeadm, and Kubespray, if those are the main distribution tools that I know of. I'm wondering if you have other repos that I need.
F
Okay, yeah, sure. So one thing I'm also curious about: since there is another effort in the SIG Cloud Provider team, where they want to extract the code from the in-tree cloud provider and install the cloud controller manager, etc., eventually the in-tree cloud provider will be removed, right? So I think it's kind of ideally the same as what we're doing on the storage side. So is there any possibility that we can, I'm not saying we...
D
I was going to suggest: one of the things we can work on is building an add-on operator. The value of that would be, one, we can continue to ramp up people using it; two, we can make very explicit the sequence of operations that has to happen, for at least one case, and we can sort of get it under testing and see that that one case works reliably. And I think we can...
D
I suggest we can have one which does the cloud provider and the storage provider together, and if people want to replace one or the other, which is something I actually wasn't aware of, then it will at least be a starting point for them. Hopefully we can make it so that they could replace either or both of the manifests, or turn them off independently and do one at a time. But I do think we should make sure that both upgrades work.
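A purely hypothetical sketch of the shape such an add-on operator's API could take; the group (addons.example.dev), kind, and fields below are invented for illustration and are not an existing project:

```yaml
apiVersion: addons.example.dev/v1alpha1   # hypothetical API group
kind: CloudAddons
metadata:
  name: aws
spec:
  cloudControllerManager:
    enabled: true    # out-of-tree cloud provider manifests
  storageDriver:
    enabled: true    # EBS CSI driver manifests; set false to bring your own
  # The operator would apply these in one explicit, tested order,
  # which is the sequencing guarantee being discussed here.
```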
A
Yeah, yep. I just want to delegate this to an operator, right? kubeadm, I don't think, needs this at all: it's just a bootstrapper.
A
If users have to install this separate driver, they should just install the operator; this is how I see it. Also, please provide some links in your agenda topic, because, I mean, I think I found it. Is this the CSI migration KEP?
F
Yeah, sure, I can. I will post it here afterwards, offline, yeah.
A
Yeah. So, like I said, for kubeadm we probably don't need anything. For Cluster API, kOps, and Kubespray, I guess we need one issue for those projects.
F
Yeah. And also one other thing I wanted to mention: as I said, CSI migration has two phases, and this is the first phase.
F
There is a second phase, which is that, if somehow the distribution doesn't want to maintain the in-tree plugin at all, there is another feature gate that they can turn on, which will just unregister the in-tree plugin, so that the user will have to use the CSI driver instead. If that makes your life easier, I think that's also a way we can do it.
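A sketch of that second phase, continuing the AWS example. Around this time the gate was called CSIMigrationAWSComplete (it was later superseded by per-plugin unregister gates), so treat the exact name as an assumption tied to this era:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIMigration: true
  CSIMigrationAWS: true
  CSIMigrationAWSComplete: true   # phase two: unregister the in-tree AWS EBS plugin
```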
F
Correct, yeah; it's a feature gate.
A
Yeah, okay. Basically this comes as a bit of a surprise to us. If you can give us more links and log the issues, yeah.
F
Yeah, I'm also a little bit surprised, because I thought this should have been communicated to you guys earlier. But yeah, I'm happy to provide some more information on this, and feel free to ping me on Slack for any related topics. I'll definitely put more reading and background in the doc.
A
Thank you. So, we have some project updates now. I'll read something for kubeadm quickly. Last office hours we had a meeting discussing the machine attestation (security, node identity) feature that we want to enhance kubeadm with. We did some gathering of ideas, but I think we concluded that we need a detailed spec to look at and decide what to do, because the proposal is basically a bit sparse at the moment.
A
So I think the proposal is going to be scheduled for this Wednesday, but, you know, I'm not sure if they will be able to create it by then. The second thing I wanted to mention: we did the issue triage for 1.21; no major features incubating this cycle, nothing particularly interesting.
A
One question I have for the group, which is technically a Kubernetes topic, but I want to get some opinions: currently, when we deploy the KCM and scheduler, we bind them to localhost only. This means that you cannot gather metrics from them externally. So the linked issue was basically created by some Prometheus users who say that we should expose these components on the node IP, the same way we expose the API server on the control plane nodes.
A
They ask us to expose the KCM and scheduler. And I also wanted to mention that we already enabled only the secure port, so they are serving only securely. So the question here is: should we do this? It changes the topology of Kubernetes a little bit, and obviously we're not going to backport this feature. So, what do people think?
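For reference, a sketch of the default under discussion, as it appears in kubeadm's static pod manifests (excerpt abbreviated; flag and port per the upstream component docs):

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --bind-address=127.0.0.1   # secure port 10257 (incl. /metrics), localhost only
# kube-scheduler is analogous: --bind-address=127.0.0.1, secure port 10259.
```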
C
So, there's a couple of ways we could handle this. I believe that topology question was solved with OpenShift as well, because OpenShift did the same thing, so we could look there as an example of how they solved that endpoint scraping. But they don't allow random external endpoint scraping; they have this proxy thing to enable it. So that might be the way we go. But there's a reason why it was localhost.
G
Yeah, I want to add to the list of reasons.
G
We recently changed this, probably one or two Kubernetes versions ago, because basically having the controller manager and the scheduler pointing at the same API server on localhost keeps everything on the host internally consistent, version-wise, and there is less risk, or no risk, of having unsupported version skew while doing immutable upgrades. One example is that recently some API was removed from the kube-apiserver, and without this topology that we are using now, we were getting requests from an older kube-scheduler redirected through the load balancer to a new API server, and things were failing.
A
Okay, but, maybe I should clarify that these components, when they serve...
D
I just want to say, on the first topic: you mean tomorrow it's going to be discussed, probably in the office hours, the Cluster API office hours?
D
Cool, I'll try to show up to that, because we do have something in kOps. I don't think we love it, but it can maybe help. And on a similar note, we actually added a proxy in kOps for the healthz endpoint of the API server. So we have something we could also contribute or collaborate on somewhere, for, you know, some sort of very limited-scope proxy that can enforce some of this stuff; it sounds like OpenShift might have something similar.
D
I don't know about GKE; I think GKE might use Envoy, which is probably a little heavy for what we want. So I'll try to attend tomorrow, but in general there are, I think, proxies that we have elsewhere, and if a proxy is helpful, I'm, you know, happy to collaborate.
A
I see. So, yeah, I can see that we can do this with something like a proxy. So basically what I'm gathering from the meeting is that we probably shouldn't do this just because of the metrics; it's not that much of a security hardening argument. It's just: if somebody wants it, they should use something like a proxy, and that's it.
D
The other nice thing about a proxy, sorry, the other nice thing about a proxy is we could probably run it as a pod and have it appear as a Kubernetes service. I don't know if people are typically running Prometheus in-cluster or out of cluster, and I don't know if there's a trick to get that to happen anyway, but depending on the use case, a pod proxy might have an advantage.
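A minimal sketch of that pod-proxy idea, assuming kube-rbac-proxy (the image tag, listen port, labels, and node selector are illustrative): a host-network pod on each control plane node forwards an authenticated external port to the localhost-bound KCM.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kcm-metrics-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels: {app: kcm-metrics-proxy}
  template:
    metadata:
      labels: {app: kcm-metrics-proxy}
    spec:
      hostNetwork: true                      # needed to reach 127.0.0.1:10257
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: proxy
        image: quay.io/brancz/kube-rbac-proxy:v0.8.0   # illustrative tag
        args:
        - --secure-listen-address=0.0.0.0:8443         # scrape this port instead
        - --upstream=https://127.0.0.1:10257/          # localhost-only KCM
```

A Service selecting these pods would then give an in-cluster Prometheus a stable scrape target without changing the component defaults.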
A
Yeah. And to be clear, something that I outlined in the issue is that kubeadm already supports configuration for this, so you can potentially use patches to bind to the node IP. So it's like a...
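Concretely, a minimal sketch of that opt-in, using the existing extraArgs knobs rather than patches (0.0.0.0 shown; a specific node IP works too, and this stays the user's choice rather than a new default):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    bind-address: 0.0.0.0   # expose KCM metrics on :10257 beyond localhost
scheduler:
  extraArgs:
    bind-address: 0.0.0.0   # expose scheduler metrics on :10259
```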
A
If nobody's saying that we should just do it, I think we should just say that people have so many alternatives, and we shouldn't do this by default.
C
Yeah, I was just thinking; I recalled how we did this in the past. You can change your deployment to basically install a proxy, you know, if I'm deploying Prometheus and you want to grab that stuff. So there are multiple ways you can do this that don't require changing our default deployment.
C
Just because there's limited benefit for a non-Kubernetes-developer user in grabbing the KCM and scheduler metrics. For a Kubernetes developer it's highly useful, right? So when we did this stuff in the past, we would install those proxy agents because we would run profiling on it. But when you're actually a consumer, the API server endpoint metrics are totally still accessible, and you will get the vast majority of the better information to diagnose most user-specific errors.
A
I agree completely. It feels like probably more than 80 percent of the user base will not care about metrics for these components.
A
So the feature request is a bit of a stretch, and there are alternatives. I'm going to mention that we discussed this during this meeting, and I can close the ticket with this argument.
D
That's kubeadm. Okay, moving to kOps: yes, nothing particularly of cross-project interest. We did release our 1.19.0 release. We always intend to fast-follow with a 1.19.1, and it sounds like we're going to need that; we missed a couple of things, nothing critical, but certainly things that justify it. And we're going to try to do 1.20 much faster than we did 1.19 and, basically, you know, catch up, or come closer to being on our preferred delay again.
G
A quick update about Cluster API: the main heads up for the broader community is that we are trying to close the proposals for v1alpha4.
G
There is a lot of work going on in parallel, and if you are interested in catching up, please take a look at the meeting notes.
A
Cool, this is all we have for today. Any final topics or questions for the group?
A
All right, thanks everybody, see you again in a couple weeks. Bye.