From YouTube: SIG Cluster Lifecycle 2021-09-07
A
One potential group topic is the kubelet configuration migration options when a new API version is released, for instance when we want v1beta2, but I think we can delay this discussion until that actually happens, and I'm assuming that v1beta1 is not going to be removed right away.
B
Yeah, I have only one comment on these common ComponentConfig things. I think that we should raise the topic again with SIG Architecture, because we have two layers of problems. One is what the future of this feature is, and then, of course, we have to find the resources to make it happen. But for the first part, in terms of where we are going: how do we support conversion? How are we bringing the users along?
C
So let me rephrase this, Fabrizio, a little bit: there was a working group, the working group disbanded, and each SIG is picking up and maintaining whatever ComponentConfig they are using, so SIG Node is doing whatever it needs to do. For the SIG Node related ComponentConfig, the KEP machinery is a place that will help.
A
Basically, the rule itself to not expose converters publicly is also one of their own; that's their idea, and I guess we basically have to convince them that there has to be a way to up-convert users. If you look at kubectl convert, the tool basically does that, but it only works with core APIs.
A
The other one is the KEP to replace certain legacy naming that kubeadm uses to name its ConfigMaps for the kubelet configuration.
A
Basically, when kubeadm creates a cluster, it stores the kubelet configuration that is currently shared between nodes in a certain ConfigMap, and for some reason we decided in the past that it was a good idea to include the version of the Kubernetes cluster in the ConfigMap name.
A
So the x.y in this case is basically the major.minor version, and this KEP is a follow-up on our discussion in the past, where we basically wanted to stop doing that and pretty much only use this as the source of truth. The KEP is still under review; I think the deadline is in a couple of days, but I plan to update it today and ping the reviewers again for more feedback.
B
Yeah, just a quick update on the ongoing discussions in our activities. We have ongoing work on ClusterClass and managed topologies, and yesterday Stefan created a PR with an amendment to the proposal, introducing patches that basically allow customizing templates on a per-cluster basis.
B
The last bit of the story is that we are calling these "inline" patches because, let me say, this is the first option that we are going to support, which is a kind of no-code option that is supported by the API. But we are also starting to discuss providing a more powerful option, based on an extension model, an external component, which is currently being discussed. And that brings me to the next topic for Cluster API.
B
There are discussions about how to improve the integration between Cluster API and the autoscaler. The discussion is driven by the Red Hat folks, and there are, let me say, two sides of the discussion that we are trying to understand if we can merge. The first one is to provide the autoscaler the information needed for scaling up from zero, which is a kind of corner case, because it requires providing the autoscaler some information about the type of machines that you have even before having any machines.
B
And so this is a kind of corner case that we are trying to define. The second problem, which again requires providing some information about machines to the autoscaler, is basically related to GPU computing, and getting the autoscaler working with machines with GPUs.
B
The
background,
let
me
say
problem
is
the
same,
provide
the
machine
information
to
the
auto
scaler,
and
this
is
why
we
are
trying
to
reconvene
this
to
topic
if
you
have
interested
in
cluster
api
and
auto
scaler
integration,
please
chime
in
on
on
the
linked
proposal
or
in
the
upstream
discussion
and
reach
out
for
mike,
which
is
driving.
This
effort.
D
Yeah, just a small update: there's support now to deploy etcd in a static pod, something that is being used by the work on the externally managed etcd proposal for Cluster API, so that helps move that along. That's it, I think. I don't know if you wanted to add anything, Justin.
E
No, I think that's a good update, thank you. I'm just thinking about the previous proposal, the ClusterClass one, because that's just a totally different pattern from anything we have seen before. So I want to try to think about that. But for now, thank you.
B
I think that it would be super nice to upgrade this tutorial to use etcdadm, or to have a variant of this tutorial that uses etcdadm, so we start giving visibility to the tool inside the Kubernetes docs website.
D
Yeah, actually, I don't think I'd seen that doc yet. I'll take a look; that would be nice to have.
D
I'll create an issue, thanks.
B
I'll defer the comment on the maturity, but I think it is important that sooner or later we start giving visibility to the tool. Otherwise we are kind of stuck in a loop, because without visibility the tool won't get feedback and things like that. So if we are not sure, we can add the document as a separate one, with the disclaimers that we think are worth adding. But let's try to give some exposure to the tool.
A
I wanted to see how you implemented the switch between systemd and static pods: are you using a CLI flag?
D
Yeah, sorry, it's an init-system flag.
D
And by default it's the systemd one. One second, let me... oh, I'm gonna add the link to the...
E
Thank you. If I recall, it's a little messy, but I think it's fine. The challenge was that we were using a different go build directive: Go 1.17 has a new //go:build form in place of the old // +build directives, which is not supported in Go 1.16, and gofmt switches to the new one, so our gofmt check, or whichever one it was, was failing.
E
But
of
course,
if
we
were
to
switch
it,
we
would
break
go
116
and
earlier
ie
I'd,
say
everyone,
so
that
was
that
was
awkward,
but
we
were
only
using
in
one
little
place
so
yeah.
That's!
Why
that's?
How
we
do
this.
E
Yes, yes, the .0 releases are often a little...
A
Yeah
right,
I
guess
this
is
the
important
change
for
hcdm.
If
somebody
is
interested,
you
can
have
a
look
any
other
questions
or
topics
around
hddm.