Description
Meeting notes https://docs.google.com/document/d/1ushaVqAKYnZ2VN_aa3GyKlS4kEd6bSug13xaXOakAQI/edit#
A
All right, welcome everyone. Today is Wednesday the 8th of February 2023, and this is the Cluster API community project meeting. Cluster API is a sub-project of SIG Cluster Lifecycle in the Kubernetes world, and as such we adhere to the Kubernetes SIGs code of conduct, which basically means: if you'd like to talk, please raise your hand, and in general, please be kind to each other.
A
So let's get moving here. I was going to try to open the chat so I could see it, but I'm having some difficulty finding it... there we go, all right, cool. So at the beginning of our meetings we like to welcome new attendees or new members of the Cluster API community. So I will mute myself for a second here, and if you would like to, please raise your hand or just unmute yourself and introduce yourself.
A
So let's move on to our open proposals. I don't know, do we have any open proposals at the moment? Looks like not. Is there anyone who'd like to talk about the stale proposals or open documents that we have?
C
I think that, yeah, we can give an update on how the implementation work is going. We are implementing my proposal about node label propagation; Yuvaraj and Stefan are leading the work, so we are making progress. We have a bunch of PRs out. We are trying to drive them to a state where we can ask for feedback from a wider audience, but I think we are still on track to get all the work about node label propagation completed by the end of the release. As well, we made two amendments to existing proposals; one is KCP remediation, with a PR out.

C
No, this is, yeah... those are proposals that were already merged, so they are not in the document anymore. So the work for KCP remediation is in line for the release, as well as the work for supporting variable discovery for ClusterClass. So things are proceeding. If someone is interested in those efforts that we discussed in the past, please reach out to the team. I don't know if Stefan or other folks want to add something.
D
I think you probably said it wrong, so: we're working on the propagation. The node-level part is already merged, but we're working mainly on MachineDeployment to MachineSet to Machine propagation and KCP to Machine (mostly Yuvaraj, actually), and I'm on the variable discovery part, which is roughly half implemented.

D
We hope to have something close to final in the next one or two weeks for variable discovery. And one thing worth mentioning at the end: we merged a PR which extends the defaulting of the MachineDeployment replicas field, to make it easier to use together with the autoscaler. I think I brought it up one or two weeks ago, so there should be a link in the meeting notes from back then.
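The replicas-defaulting change mentioned above can be illustrated with a small model. This is a hedged Python sketch, not the merged Go implementation: the min-size annotation key is the one the cluster autoscaler's Cluster API provider documents, and the exact defaulting rules here are an assumption for illustration only.

```python
# Rough model of defaulting the MachineDeployment replicas field so it
# plays well with the cluster autoscaler. ASSUMPTION: the defaulting
# keys off the autoscaler min-size annotation; this is an illustration,
# not the actual Cluster API webhook code.

MIN_SIZE = "cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size"

def default_replicas(replicas, annotations):
    """Return the replicas value to use for a new MachineDeployment."""
    if replicas is not None:
        return replicas                    # user set it explicitly: keep it
    if MIN_SIZE in annotations:
        return int(annotations[MIN_SIZE])  # let the autoscaler own the field
    return 1                               # plain default when nothing is set

print(default_replicas(None, {MIN_SIZE: "3"}))  # 3
print(default_replicas(5, {MIN_SIZE: "3"}))     # 5
```

The point of the change, as described in the meeting, is that an unset replicas field no longer fights with the autoscaler's own scaling decisions.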
A
Okay, cool. Thanks, Stefan and Fabrizio. I guess... I don't know if it's worth putting links to that stuff in here afterwards, but yeah, thanks for mentioning it.
D
I can add some links to them, like the higher-level issues. Okay.
A
Awesome, thank you. So we'll go now to our discussion topics, and it looks like Cigar and Jacob have the first one up. Are you folks around?
E
The repo that I've added here is the in-cluster IPAM provider that we have built, which we use in the Cluster API world to basically leverage IPAM for node IP addresses. So currently this particular repo lives in the Telekom GitHub organization. Considering that this project is something that is very closely related to the Cluster API family of projects, I was hoping that maybe we could find a better organization for this repo to live in.

E
I'm not sure what folks have in mind. Are there any suggestions about where this repo should be living?
E
My thought was that maybe we could move this to the kubernetes-sigs... or, sorry, the same organization that all of the CAPI provider repos live in, just because since the last 1.3 release we have introduced IPAM as a different sort of provider when using clusterctl. So it probably makes sense for this repo to live under that organization.
A
Yeah, no, it's a great question, Cigar, and I see Fabrizio's raising his hand. So please, Fabrizio.
C
So I think that there are a couple of considerations. When you bring a provider inside the community, it will be, you know, driven by the community, and so I think the vision is to have many companies contribute to the same provider, making it grow organically. I know that does not always happen, but I think that should be the ambition for people that move a provider in.

C
Technically, let me say, if I think about this from a Cluster API perspective, I think this is a good idea. It would be a showcase of our implementation of the IPAM provider, and it is a good idea to have it under the SIG umbrella, but yeah. This is something that, if you are interested in doing it, you have to come to the SIG Cluster Lifecycle meeting and basically make the case for opening it, because the SIG leaders have to plus-one the request. I have also another consideration, which is the final one, and I don't have a strong opinion here: sometimes it is a good idea, sometimes not. But look at this from the side of the operational cost of handling a repository: cutting versions, looking after end-to-end tests, etc., etc.

C
So the maintenance of a new project is quite demanding. Depending on how big the community is and how you want to manage this, I think it could also be an option to figure out whether hosting this together with some other project could work; it could be CAPV, it could be CAPI as well. But yeah, managing a repository means monitoring, end-to-end testing, all the automation, Makefiles...
E
So from what I'm hearing, it is interesting that there is, or could be, a potential other option wherein we could fold this repository under, say, CAPI, or for that matter CAPV as well, and that could be one thing that we could probably look at.

E
And that's what Fabrizio mentioned: that could probably also be one solution or one option that we could look at eventually. I'm not suggesting we go with that other option, but it's just something that Jacob and I can think about, and we can probably come up with some sort of list of what each approach brings to the table.
C
Maybe it is because in CAPI we already have three or four providers in it, but let me say, I think that first of all, as a maintainer of this project (and then I'll let Jacob speak), you have to think: okay, how much strength, how many people do we have to maintain a separate repository? Can we do it, etc.? And then, according to that, we can steer the discussion in the SIG Cluster Lifecycle meeting.
F
You still have a comment... yeah, I just wanted to add a bit. So on the idea of adding it to some other repo: I mean, for me it only makes sense to do it in the CAPI repo itself, because it's not really strictly related to CAPV. Yeah, it's related, but it's also intended to be used by other infrastructure providers, and "hiding" it, in quotation marks, in the CAPV repo might not be ideal from our perspective, or from the Telekom perspective. It's also not about visibility.

F
I think I also proposed, during the proposal of the IPAM contract in general, to just move this, once it's ready, to kubernetes-sigs or maybe somewhere else. But also, VMware is currently contributing so much that I think it doesn't take long until they overtake me, because I just don't have the resources right now. So it's also not really fair to have it live in the Telekom repository. And, as a third point, the initial intention for it was to serve as a reference implementation: we're actually not using it, and we don't need it.
F
It's just there to have something for providers to test with, a very basic implementation of an IPAM provider. We're actually building another provider that integrates with Infoblox, an external IP address management tool. It's not very big, so code-wise it's not that complex: it's two custom resources, just one global and one namespaced, but both do the same thing.
A
So if I'm hearing this right, it sounds like the next steps out of here would be: maybe, Cigar, take a look at documenting the various options, and then the next step would be going to the SIG Cluster Lifecycle meeting and having the discussion there about whether it should be a repo of its own, or whether maybe we could just add it to one of the other Cluster API repos.
D
Yeah, I just want to bring up, just in case you want to move it into core CAPI: we definitely need a dedicated set of maintainers for it if you put it there, because looking at the current code and looking at who's doing the triage, we are already at a limit, or already over it. So it's probably a separate discussion whether it fits into the repo or not.

D
We definitely need separate or dedicated maintainers for that, because we just can't take on that work on top of what we already do today. I think that's one problem of the core repo: we kind of put more and more stuff into it, but we're not necessarily getting more reviewers or maintainers for all that stuff, and today it's already super hard to understand and review and maintain everything that we have in there.
A
All right, cool. Furkat, you have your hand up.
B
Yeah folks, it's right here. I just wanted to bring up that as the CAPM3 provider we are also interested in this, and we have had discussions with Jacob, since we have our CAPM3-specific IPAM solution. But we are really interested in integrating with the CAPI-provided IPAM, let's say, so we would be happy to contribute and even be on the list of reviewers there.
F
I think we'll just consider the options. One thing also to think about is release management, because if you have one repository hosting multiple things, it's hard to do individual releases. With the CAPI repository it's not really a problem, because most of the things are so tightly interlinked that it doesn't really matter, but maybe for the IPAM provider...

F
You don't need releases as frequently, or it would maybe be good to have them decoupled from CAPI itself, because right now CAPI hosts the core, the bootstrap stuff for kubeadm, some testing, and the Docker provider, basically. I think that's all, and having tight relationships between their versions is probably okay there, but for other things, like for example the IPAM provider, that might not be ideal.
A
Okay, I'm not seeing any hands going up, so let's move on to the next one. Hiromu has some use cases for the revision history annotation, so please, Hiromu.
G
Yes, thank you.

G
My comment is basically about the discussion on the comment that Fabrizio gave me on this PR. I think he gave me the comment that maybe we have a choice between using the revision history or a rolling-back strategy. But what was unclear to me is that currently the rollback command is just using the desired revision number; maybe we can extend that feature to the revision history, which would mean we can roll back to any of the versions included in the revision history, and that is not implemented in the current code.
C
Yeah, so I was reviewing this PR basically in parallel with the work we are doing on label propagation, and we kind of figured out that the revision is actually still a feature used in Cluster API. Just to give a little bit of context: whenever you change a MachineDeployment, a new revision is created.

C
But if you change back to one of the previous states of the MachineDeployment, an old MachineSet is reused: instead of creating a new MachineSet, it kind of reuses the old MachineSet, just updates its revision number, and keeps track of the positions this MachineSet held in the history using a revision history annotation. So it's kind of complicated, but it is there.

C
So when I was reviewing this PR, my comment was: in this PR, which is showing the revision history, we have to take this annotation into account too, because otherwise we are basically showing a revision history with some holes in it, so it is not the correct history of what the cluster has been doing. Okay, and then there was this comment: maybe it is a good time for the discussion, and I'm not against it, but discussing and deprecating a feature, or a field in the API, or a behavior, is a longer discussion.
A
All right, I have to admit I'm a little confused by exactly what's going on here, but it sounds like the revision history is a field that we keep, and you're saying you can pull revision numbers out, and then if you try to reset to that revision number, it doesn't actually pull that old record; it just sets everything on the current record to the way that revision was at that number, I guess. And I see Yuvaraj has his hand up; maybe he has a good explanation for this.
H
Okay, awesome. So yeah, I can briefly answer Mike's question and then expand on the comment there. So imagine this case where we have MachineSet 1 and MachineSet 2; MachineSet 2 is the current MachineSet, and MachineSet 1 was just something we came from, right? If the user now rolls back to MachineSet 1, what would happen is that MachineSet 1's revision number will be changed to 3, and in the revision history annotation we will store 1.
H
So it's basically saying: MachineSet 1 is revision 3, but it also used to be revision 1. That's the idea. Now, coming to this particular PR and the comment there: I believe the idea behind this PR is to be able to list all the revisions that a user can roll back to. And right now, if you have a MachineSet that has revision number 3 and revision history of 1, when you are rolling back you can only specify a rollback to revision 3; you cannot specify a rollback to revision 1, even though they are basically the same MachineSet. That's just how our clusterctl command is written.

H
So since that is the case, the idea was to only list the revisions that you can actually provide as an argument when you're trying to roll back, but maybe also add a column or some other way of representing that you can roll back to revision 3, which is also technically revision 1. So have individual row items for the revision numbers that you can actually roll back to, but maybe have an additional column in the table that says something like "duplicate revision" or "matching revisions" or whatever, and then lists all the numbers that you find in the revision history annotation. That's the brief summary of my comment, and I hope that helps.
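The two annotations described above can be modeled in a few lines. This is a Python sketch for illustration; the annotation keys mirror the ones Cluster API uses on MachineSets, but verify them against the codebase:

```python
# Model of a MachineSet's revision bookkeeping: a current-revision
# annotation plus a comma-separated history of older revision numbers.
# Keys are assumed to match Cluster API's; this is not the actual code.
REVISION = "machinedeployment.clusters.x-k8s.io/revision"
REVISION_HISTORY = "machinedeployment.clusters.x-k8s.io/revision-history"

def revisions_of(machine_set: dict) -> list:
    """Every revision number this MachineSet has represented: the ones
    recorded in the history annotation plus its current revision."""
    ann = machine_set.get("annotations", {})
    revs = [int(r) for r in ann.get(REVISION_HISTORY, "").split(",") if r]
    if REVISION in ann:
        revs.append(int(ann[REVISION]))
    return revs

# The scenario from the discussion: MachineSet 1 used to be revision 1;
# after a rollback it became revision 3, with "1" kept in the history.
ms1 = {"name": "ms-1", "annotations": {REVISION: "3", REVISION_HISTORY: "1"}}
print(revisions_of(ms1))  # [1, 3]
```

Listing the output of `revisions_of` per MachineSet is essentially what the "duplicate revisions" column idea above would surface to the user.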
D
Okay, so if I understood this correctly: essentially you're saying the clusterctl rollout command currently only supports rolling back to the revisions which are actually stored in the revision annotation; it just doesn't look at the revision history annotation. And because of that current limitation of the rollout command, we should also write the history command so that it actually fits together, and we don't show stuff that the rollout command can't go back to.

D
But what we could also do is write the history command so that it shows all revisions, and also adjust the rollout command so that it can deal with revisions which are only stored as history. It should be relatively easy, because as you're iterating over the MachineSets, you just have to look at both annotations and then say: yeah, I don't care if you roll back to 1 or 3, it's just the same thing, so let's do the same thing.
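Stefan's suggestion above (have the rollout command consult both annotations when resolving the revision the user asked for) could look roughly like this. It is a Python sketch rather than the actual clusterctl Go code, and the annotation keys are assumptions for illustration:

```python
# Sketch of resolving a user-requested revision to a MachineSet by
# checking both the revision annotation and the revision-history
# annotation, so "rollback to 1" and "rollback to 3" resolve to the
# same MachineSet. Not taken from the clusterctl source.
REVISION = "machinedeployment.clusters.x-k8s.io/revision"
REVISION_HISTORY = "machinedeployment.clusters.x-k8s.io/revision-history"

def find_machine_set(machine_sets, target):
    """Return the MachineSet matching `target` as current revision or
    as one of its historical revisions, else None."""
    for ms in machine_sets:
        ann = ms.get("annotations", {})
        current = ann.get(REVISION)
        history = [int(r) for r in ann.get(REVISION_HISTORY, "").split(",") if r]
        if (current is not None and int(current) == target) or target in history:
            return ms
    return None

ms1 = {"name": "ms-1", "annotations": {REVISION: "3", REVISION_HISTORY: "1"}}
ms2 = {"name": "ms-2", "annotations": {REVISION: "2"}}
# Revisions 1 and 3 both resolve to the same MachineSet:
print(find_machine_set([ms1, ms2], 1)["name"])  # ms-1
print(find_machine_set([ms1, ms2], 3)["name"])  # ms-1
```

With a lookup like this, the history command and the rollout command stay consistent: every revision the history lists is one the rollback can resolve.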
C
I'm basically saying what Stefan is saying: if today rollback is not considering this, it is more a bug than a feature. So we should get the revision history to work well, possibly solving this complexity for the user: instead of showing "three, which was one", let's just show one, two, three, let the user pick one, and then basically do the rollback, applying the spec of that MachineSet back to the MachineDeployment. So in other words, it is complex.

C
The implementation is complex, I agree, but when we implement the clusterctl command, we should make it simple for the user.
A
Okay, that makes sense to me. Hiromu, please.
G
Maybe I don't get the whole point. I just don't want to make this discussion specific to this PR; I didn't intend that. I think using the revision history in the actual logic is a little bit complex, so maybe I felt it's not a good idea to use that as a rollback strategy. So I think we have to confirm the use cases first. Is that in line with your thoughts?
C
If I look at the feature, what I do expect is that, as a user, I change the MachineDeployment and this change gets tracked in the revision history; let's forget about how this is implemented. So for a certain number of changes, which is my revision history number, they get tracked, and I should be able to go back to any of them. That is the user's goal. Okay, now looking at this part of the code:

C
If you change back to a MachineSet that already exists, today that is not recorded in the revision history, which is bad, okay? Or, say the revision history limit is eight and you start using the revision history like we were discussing: you've already seen before that you can actually store more than eight versions. So there are a couple of bugs in the implementation. But if I look at it from the user perspective, really the requirement is simple: every time I do a change, I want a revision, and I want to be able to roll back. So what we can discuss is how to make the implementation better, but I think that the goal is pretty clear. Dropping the revision history because it's complex goes, in my opinion, in the wrong direction, because then we are not reaching the goal of allowing the user to go back in history.

C
So if we can find a better solution, I agree; we can fix the bugs that are in this feature, which we are still learning about, because it is the first time that we have dug into this part of the code. But I think that the original goal of the feature is to allow rollback, and then it is up to us to find a good implementation to solve the problem. I don't know if that makes sense as an answer to the comment.
D
I just want to say I agree. Essentially, if you would ignore the revision history, it would mean that we get arbitrary holes in our history, depending on whether certain MachineSets look the same as some other previous MachineSets, and that would be pretty awkward for users. So I think that's essentially the use case represented by the history, which makes sense: you don't get arbitrary holes in the middle just because of the way the MachineDeployment controller is tracking the history. And I think it's comparatively easy to deal with the annotation in clusterctl; it's way, way harder to keep that stuff working in the MachineDeployment controller.

D
If you're concerned about the things you have to do in clusterctl specifically, I think we can follow up on the PR and just give some hints. I mean, I don't know the clusterctl code, to be honest, but based on what I know about the annotations and how clusterctl probably works, I think it shouldn't be too complicated to handle the revision history on top of handling revisions. So we can give very specific hints on what we think has to be changed to handle that as well. Hope that helps.
A
We have an idea of what the goal is, what we want the user to be able to do, but the implementation is kind of complicated, and right now what we need to do is focus on the PR and work out the details there. So I'm trusting that Stefan and Fabrizio and Hiromu will all continue to follow up on the PR. Does that help, Hiromu?
A
Okay, cool. If there are no other comments on that: Hiromu, you've got the next one, about the upgrade strategy.
G
Oh yes, this is basically just a question about the current implementation. I think this is also about rollback, and I think the current code only supports the situation where we upgrade the worker nodes. So is there any strategy for upgrading control plane nodes, or maybe we have to expand the scope to the management cluster itself? That was the first question; I'll get to the second one later.
A
Okay, so I guess: does anyone have an answer for the update strategies for the control plane nodes?
C
I can try to answer; I'm trying to understand the question, but I think that when it comes to the control plane, point number one is that there are different control plane implementations. So there are different control plane providers, and each one can decide how to implement this.

C
If we talk about the kubeadm control plane provider: it has been designed to always prefer stability over speed, and so basically, when it has to upgrade, it upgrades nodes in sequential order, one at a time. So say we have three: start from the first one, delete, recreate with the new version.
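The one-node-at-a-time strategy described above can be sketched as a toy loop. This is an illustrative Python model of the strategy only, not the actual KubeadmControlPlane controller logic, which is far more involved:

```python
# Toy model of a rolling control-plane upgrade that prefers stability
# over speed: replace one machine at a time, waiting for each new
# machine to become healthy before touching the next.

def rolling_upgrade(machines, new_version, wait_until_healthy):
    """Upgrade `machines` (a list of dicts) sequentially, in place."""
    for i, machine in enumerate(machines):
        if machine["version"] == new_version:
            continue                      # already on the new version
        replacement = {"name": machine["name"], "version": new_version}
        wait_until_healthy(replacement)   # block until the new node is ready
        machines[i] = replacement         # old machine gone, new one in

machines = [{"name": f"cp-{i}", "version": "v1.25.0"} for i in range(3)]
rolling_upgrade(machines, "v1.26.0", wait_until_healthy=lambda m: None)
print([m["version"] for m in machines])
# ['v1.26.0', 'v1.26.0', 'v1.26.0']
```

The key property is that at most one control plane machine is in flight at any moment, which is the "stability over speed" trade-off mentioned in the meeting.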
G
Sorry, that one is gone already; I meant the rollback, the rollout command. I mean, if we use the rollout undo command or anything else, the only things we can control are the worker nodes, I think. So there are no tools to control the rollout of the control plane right now; that was my question.
C
There is the control plane contract that can be used, and the difference between a control plane and a MachineDeployment is that, for instance, the control plane does not have any notion of history, but it has a field which is rolloutAfter. If you set rolloutAfter to something in the past, it will immediately start a rollout.
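That rolloutAfter behavior can be modeled in a few lines. This is a Python sketch based only on the description above, treating rolloutAfter as a timestamp; the real logic lives in the KubeadmControlPlane controller, and this reading of the field is an assumption:

```python
from datetime import datetime, timezone

# Model of the rolloutAfter contract as described in the meeting: a
# machine needs a rollout if it was created before the rolloutAfter
# timestamp, and a rolloutAfter in the future does nothing yet.

def needs_rollout(machine_created, rollout_after, now):
    if rollout_after > now:            # future timestamp: no effect yet
        return False
    return machine_created < rollout_after

now = datetime(2023, 2, 8, tzinfo=timezone.utc)
created = datetime(2023, 1, 1, tzinfo=timezone.utc)
# Setting rolloutAfter to "something in the past" (but after the
# machine was created) immediately triggers a rollout:
print(needs_rollout(created, datetime(2023, 2, 1, tzinfo=timezone.utc), now))  # True
```

So unlike the MachineDeployment revision history, this is a forward-only trigger: it can force machines to be recreated, but it carries no record of previous states to return to.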
G
Okay, sorry, maybe I asked that because you once mentioned that the current rollout command is incomplete, and that there is a limitation when we use it for downgrading the Kubernetes version or something like that. I want to make that clear: what is the limitation for the current rollout command? I guess maybe this is one of the limitations you mentioned; I don't remember.
C
Okay, I think this is another topic, so let me go back. Each control plane provider is different from the others, but the issue here is that, for instance, if you use kubeadm and you try to roll back a control plane to a previous Kubernetes version, kubeadm does not support downgrades. So there is a risk in rolling back.

C
So now we're talking about rolling back, not rollout. Rollout just means rotating the machines; rolling back would be: oh, we were at this version of the control plane and we want to go back to the previous version. Okay, there are a couple of problems. First of all, if you look at KCP, there is no notion of history: KCP does not track history like you do in MachineDeployment with MachineSets. The previous versions of KCP are not stored anywhere; that's problem number one. Problem number two is that this is the control plane, so we have to be careful, and even if we think about a mechanism to keep track of the versions of the object and to roll back, we have to make this mechanism bulletproof. That means that, for instance, for the kubeadm control plane, we cannot roll back to a previous version if that implies rolling back the Kubernetes version, because that is not supported.
G
Yeah, I understand your point, and that was all about the control plane. So, regarding the current rolling-back commands, I mean the undo command or something like that: are those out of scope? How can we close that gap? If we're focusing on the worker nodes, I think the current implementation is almost complete, but if we want to consider it for the control plane as well, we have to keep working on it continuously.
C
Well, honestly, I don't have a formed opinion. This effort was driven by another contributor, and it started from an issue, not from a design proposal that we agreed upon. So if you want, I can take a look and follow up on the issue, make up my mind and try to follow up. But yeah, given that there was no design, it was something that the contributor was leading and that we were reviewing step by step, like we're doing now.
G
Yeah, this is also about upgrade strategies. I can see a difference in the strategies for upgrading the Kubernetes version between CAPD and the other providers; I have only tried CAPD and CAPO so far. When we use CAPD, if we want to upgrade the Kubernetes version, all we have to do is change the Kubernetes version field, but in CAPO we have to build an image for that version and change the image field in the MachineDeployment, which is completely different. And in CAPO, I think, the version field doesn't actually do anything; it's just like an annotation, it means nothing. So I think CAPI has to show a clear policy for using that version field.
G
For the ease of use for a user, for the usability, I'd like to hear the community's opinion about that topic.
A
So I do want to note that we're getting down to the last 10 minutes of the meeting, and we still have a few topics here. I appreciate this is a really complicated topic, Hiromu. I think we need to keep the discussion going on this issue, if we can. Or is there another issue, maybe, that you would want to open, or is that the proper one to continue the discussion?
A
No, no, I meant to follow up on these questions about the rollout. You know, you're asking some deeper questions about the rollout mechanics here, and I'm just wondering: is this issue, the 34-39, the best place to continue the discussion about your questions? Because I want to make sure that we, as a community, can answer your questions, but it sounds like, with the time we have left in this meeting, I'm not sure we'll be able to answer them.
A
Then I would say: maybe add your follow-up questions to the issue, and then we can continue the discussion there. Okay, thanks. And you've got the next topic too, about the... I can never remember these names... the CAAPH topic, so please go ahead.
G
Yes, but this is for CAAPH, not for CAPI, so I couldn't find any good place to raise the topic for CAAPH. But I heard that this meeting is, you know, held together with the CAPI one, so I just put the topic here.
A
Okay, so maybe ask this question again next week when the Microsoft folks are around, or maybe ping them in the CAPI channel as well.
A
All right, so we've got a couple of minutes left here for some provider updates, and it looks like, Richard, you've got both of them, so please take it away.
D
Yeah, so essentially just to bring up that there are mentorships with applications open until the 14th of February: one for AWS and one for GCP, the first one obviously about managing prerequisites in clusterawsadm, and the other about observability, tracing and so on. So applications are open until the 14th of February, and then it starts on the 1st of March for three months. If you know anyone who wants to apply, or if you have any channels to spread the word, feel free to tell everyone.
A
Yeah, thanks to both you and Richard for bringing this up. So anybody who is here, or maybe watching the recording later: there are some mentorships starting up, and these two issues here, I think, are where you can go to find the information and ask questions. Or, I'm not sure, would they do the application through here, or does it have more information there if someone wants to follow up?
D
I guess through that link: if you click on "open for applications", there is a link there.
A
Okay, so that brings us to the end of the meeting, or the end of the agenda, I should say. Are there any other last-minute topics or questions that people have before we wrap up here?
A
Okay, I'm not seeing any hands going up, so: three, two, one. All right, thanks everyone, and we'll see you next week.