A: Okay, so let's go. This is the Kubernetes SIG Cluster Lifecycle Cluster API office hours on the 19th of January. Some words first: we are bound to the CNCF Code of Conduct, so please be nice to each other. Please use the raise-hand feature if you want to say something about a topic. If you have anything you want to talk about, please just add a new topic at the bottom of the agenda.
A: Okay, then, Fabrizio, you first.
B: Hi everyone. A first note with the SIG Cluster Lifecycle tech lead hat on: in Kubernetes, every year we do an annual survey, and each SIG is required to provide feedback, and this feedback should be a summary of the feedback of the subprojects. That means that each Cluster API provider, and Cluster API itself, should provide this feedback to the SIG leads, in order to do this in a quick and async way.
B: Okay, I will do it. Thank you for noticing this. And the link to the form is... is it the one below? This is the link to the email thread on the SIG Cluster Lifecycle list?
B: We are planning an RC for next week, and the RC will be cut from a release branch that we are going to create a little later this week. The actual release, we plan, will be two weeks from now.
A: Okay, and then I'll continue with the next topic, just a short PSA. We did a short code walkthrough last week, and we have a YouTube video and a HackMD. So if you want to learn more about API conversions, or you have some PRs open and you don't know how to fix the tests, there's a HackMD which explains why we need conversions, how it works, and also explains what you have to do. The most complicated case here is the kubeadm bootstrap provider, which essentially needs the implementation twice.
C: Oh yeah, let's see. Yes, hi, hello everyone. So I just wanted to bring up this proposal and, first of all, thank you to all the people that reviewed it.
C: So, the first question. This proposal is about synchronizing labels that might be placed on a machine deployment, that a user might want placed on their workload clusters' worker nodes, basically. So one question was: should we support arbitrary prefixes versus predefined prefixes?
C: I think, for the most part, everybody kind of agreed that we should do predefined prefixes, and these include these really important restricted prefixes, such as node-role, so you can actually assign a worker a particular role, and also a certain set of prefixes that we would define that are very much Cluster API specific.
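To make the predefined-prefix idea concrete, here is a minimal Go sketch of the kind of filter such a sync could use; the exact prefix set (node-role.kubernetes.io, plus a hypothetical Cluster-API-owned node-restriction.cluster.x-k8s.io domain) is an assumption for illustration, not what the proposal finally settled on.

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative allow-list of label domains the machine controller
// would be willing to sync down to nodes. The concrete set is an
// assumption, not the proposal's final list.
var syncedPrefixes = []string{
	"node-role.kubernetes.io",
	"node-restriction.kubernetes.io",
	"node-restriction.cluster.x-k8s.io", // hypothetical CAPI-owned domain
}

// shouldSyncLabel reports whether a label key falls under one of
// the predefined prefixes and should therefore be propagated.
func shouldSyncLabel(key string) bool {
	domain := key
	if i := strings.IndexByte(key, '/'); i >= 0 {
		domain = key[:i]
	}
	for _, p := range syncedPrefixes {
		if domain == p || strings.HasSuffix(domain, "."+p) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shouldSyncLabel("node-role.kubernetes.io/worker")) // true
	fmt.Println(shouldSyncLabel("app.kubernetes.io/name"))         // false
}
```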
C: Then, two points I was hoping we'd have some sort of a discussion on. One of the questions is, you know: okay, I, as a user, want to place certain labels that match these prefixes — let's say at the machine deployment level, at that type, or at the machine type level — versus just having these labels be like all other labels that people set.
D: Yep, good. So, my opinions slash preferences: for me — I haven't thought this through since a couple of years ago — having a single prefix that we would sync is probably more in line, because the prefix would be in our own domain. So, I think Fabrizio proposed node-restriction.cluster.x-k8s.io, or something like that, which would be in line with what Kubernetes expects as well, and node restriction is kind of a nice word.
D: The word tells you what it's doing: it's a label prefix for potentially node restrictions, or taints and tolerations, things like that, as well — or node selectors. And we could take ownership over this prefix altogether. We could say that the machine is ultimately the owner for this prefix, and if we find extra labels within this prefix, we'll remove them, or we can add them, so it will always be a one-way thing, rather than thinking about two-way sync or whatever. What about if you set node labels within kubeadm, for example? We could just make the rule, the contract, that we'll take ownership over that domain.
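A hedged sketch of this one-way, machine-owns-the-prefix contract, reusing the shouldSyncLabel filter from the earlier sketch; this illustrates the semantics, not the actual controller code.

```go
// computeNodeLabels returns the node's new label set under the
// one-way contract: for every key under a managed prefix, the
// machine is authoritative -- missing labels are added, extra ones
// are removed. Labels outside the managed prefixes are untouched.
func computeNodeLabels(machineLabels, nodeLabels map[string]string) map[string]string {
	result := make(map[string]string, len(nodeLabels))

	// Keep node labels that are not under a managed prefix.
	for k, v := range nodeLabels {
		if !shouldSyncLabel(k) {
			result[k] = v
		}
	}
	// The machine is the single source of truth for managed labels,
	// so managed labels missing from the machine are dropped.
	for k, v := range machineLabels {
		if shouldSyncLabel(k) {
			result[k] = v
		}
	}
	return result
}
```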
C: Yeah, yeah, definitely, I think that was one of the points in the proposal. Definitely we could have a cluster-api.io or some prefix that we define, and only we synchronize it.
C: There was also the second point: there are restricted labels within the kubernetes.io prefix namespace. I think most people agreed that we should support a couple of those as well.
D: So yeah, on the other two questions: if we do put it on the template, we need to make sure that the machine deployment doesn't roll it out, which we'll have to test, if I remember correctly. So we'll have to think about it; it should be something that's synced in place, in this case.
F: Yeah, so, two things. The first one: I agree with what Vince said, especially that, even if we do it at the machine controller level, we just have to be wary of not including those labels in the logic of the rollout. In the past, the kubelet had the same issue with containers, where, if something changed in the spec inadvertently, they could roll out with unintended effects. So I think, yeah —
F: If we're introducing some special cases, we really need to be wary of and account for those. And the second one, regarding the restricted labels: yeah, I agree there's definitely a use case there for Kubernetes cluster operators.
F: We just have to be careful about the way we do that, because if we decide to do it not through the machine controllers, directly on the Kubernetes node, but through the kubelet, there might be some restrictions, because the kubelet is not able to do everything by itself. So, depending on the place where we do things, we might face restrictions or not.
C: At least in my mind, everything that falls under the template is prone to rollout, right? If the user comes along and they want to add a new label to it, all of a sudden your template has changed. At least, that's my understanding.
C: Is there a precedent where we allow certain fields to be changed in place within the template, spec.template? Because that's effectively what we would need. We definitely don't want a rollout; we just want the user to be able to arbitrarily change labels in place, and nothing changes except that the synchronization logic syncs the labels accordingly.
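One way to express this in-place requirement is to leave the synced labels out of the template comparison that decides whether a rollout is needed. A sketch, assuming Cluster API's v1beta1 Go types and the shouldSyncLabel filter from above; the helper names are hypothetical:

```go
import (
	"reflect"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// templatesMatchIgnoringSyncedLabels is a hypothetical comparison:
// two machine templates count as equal for rollout purposes even if
// labels under the managed prefixes differ, so changing those labels
// is applied in place instead of replacing machines.
func templatesMatchIgnoringSyncedLabels(a, b clusterv1.MachineTemplateSpec) bool {
	return reflect.DeepEqual(stripSyncedLabels(a), stripSyncedLabels(b))
}

// stripSyncedLabels drops managed-prefix labels from a copy of the
// template before comparison.
func stripSyncedLabels(t clusterv1.MachineTemplateSpec) clusterv1.MachineTemplateSpec {
	filtered := map[string]string{}
	for k, v := range t.ObjectMeta.Labels {
		if !shouldSyncLabel(k) {
			filtered[k] = v
		}
	}
	t.ObjectMeta.Labels = filtered
	return t
}
```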
D: Something else that we might want to think about is, instead of a new field, we just reuse the same fields, but if a label is prefixed, we sync it, right?
D: The benefit of that is that those node-restriction labels will show up both on the machine and the node, which could be useful to just, you know, link them together as well, when you go look at the objects. But it could also be confusing when only some set of the labels is synced. Just throwing it out there; I don't have a preference either way.
C: Yeah, that's what the second point here was — sorry if it's not clear enough — but yeah, we could just mix it in with all the other labels that are on the machine. But yeah, like you said, it's not a very clean user experience.
B: Yeah, with regards to arbitrary versus predefined: I think that predefined is a good way to start, and I really would like to get this moving so we can eventually write the code; we can add arbitrary prefixes later, to make it easier. But I would start with predefined and just close the set. With regard to where to put the labels: yeah, this is where I'm mostly divided, because mixing them with the others...
B: From the other side, we are pushing down only a subset, and so it is kind of opaque to see which ones, so...
C: Okay, yeah. So maybe, as a starting point, we can just mix in the labels — go with option two for now — and we can revisit this.
G: Hey there, yeah, this is my two cents here. My feeling, from a UX point of view, is: if I wasn't at all familiar with CAPI and I looked at the API, I think having this exposed in the template as a new field — let's call it nodeLabels, whatever — would be easier, at least to understand when looking at the API. But other than that, I think anything would work. And then, in terms of reconciliation, I wonder...
G: Okay, we know that we want to propagate the labels at creation: when creating a machine deployment and scaling up machines, we want to propagate the labels onto the nodes. But is there a real use case for keeping them in sync, like, all the time? After the node is created, do we have a real use case out there for changing the labels?
C: Yeah, that's a good question. I mean, at least to my mind, that initial sync is important, and the fact that, if you remove a label from the machine, the label is actually removed off the node as well, is valuable.
G: So, most of the use cases that I'm aware of for this feature are for cluster admins that want to, you know, set labels on a pool of nodes for targeting particular workloads, right? So you only want to do that at creation, basically. So I was just wondering if maybe there is no value in reconciling this all the time — just thinking out loud.
C: Keeping them in sync may not be a whole lot more work, right? And, at the very least, it provides this consistency: the user knows that if the label exists on the machine, then it also exists on the corresponding node, and, you know, CAPI will ensure that that continues to be the case, even if, let's say, the user accidentally removed it on the node itself.
F: Yeah, I think there are at least two values I see in keeping them in sync. The first one is if you actually want to do enforcement: so, if someone is operating the clusters and creating the clusters, and wants to enforce labels and topologies,
F: but someone changes something from the workload cluster, then, yeah, I'd assume that they'd want to still enforce those. And the other one is: even if it wasn't that case, I assume that the operator would still want to interact with Cluster API, rather than getting the kubeconfig and then making changes. So yeah, I think that handling the change case makes sense in those two cases.
A: Okay, then Tim.
E: Yeah — hey, I'm a PM, kind of, in this space. When I was talking to customers about some of the input here, around some of the use cases that they envisioned, that workload placement thing was a big one, right? And I think the other one that jumps out to me is, with workload placement:
E: I have some customers that want to label sets of nodes as, like, ingress, right? Like, where we want to run our ingress workloads. And so that's interesting to me from a security perspective, in terms of what that entails — ensuring that we can satisfy the use case of scheduling, and then ensuring that nodes that were labeled are appropriately,
E: I guess, relabeled. That seems to be something that this sort of pushes us towards. But I just wanted to provide that other little bit of detail, around ingress-specific nodes as being one of the targets for this.
A: Okay, sure.
H: From the other side of that same argument, actually: ingress nodes, or specific hardware that you might need to have running — you know, you have workloads running on specific hardware — there are good reasons to want to remove a label temporarily.
H: You know, for instance, if on your ingress node something has gone wrong with the network: I don't want traffic to go to this ingress node, so I want to remove that label for now. I don't want a controller sticking it back on there arbitrarily; I'm removing it for a reason — hardware maintenance. You know, you might want to hot-swap...
H: ...a GPU. There are all kinds of good reasons to want to remove nodes, and remove labels from specific nodes, without worrying that things are going to get undone.
F: Yeah, I think I agree those are very valid use cases. I think the question we would want to ask is: how do we expect users to drive changes into their workload cluster? Is it through Cluster API, or is it through interacting directly with the cluster? Because even if we enforce those, technically, if you remove it from the Cluster API objects, then there's no controller to put it back, because it's basically gone.
C: The machine controller should take ownership of actually doing the label sync, but then that brings up the natural question: if I want these labels to be specified at the machine deployment level, then, you know, I need to somehow propagate them down to the machine, where the label sync is running. And two approaches were mentioned. One is just, you know,
C: the machine controller looks up the associated machine deployment that the machine belongs to and finds the labels there; and the other approach would be to somehow just propagate them down, so that the machine deployment passes them along to the machine set, and the machine set passes them down to the machines that it's carving out. Any opinions on that, or any pitfalls you might see in either of those approaches?
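For the first approach, a hedged sketch of what that lookup could look like, assuming controller-runtime and Cluster API's v1beta1 types; the function is hypothetical, not existing controller code:

```go
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// labelsFromOwningMachineDeployment sketches approach one: the machine
// controller walks the owner references (Machine -> MachineSet ->
// MachineDeployment) and reads the labels at the source, instead of
// relying on them being copied down object by object.
func labelsFromOwningMachineDeployment(ctx context.Context, c client.Client, m *clusterv1.Machine) (map[string]string, error) {
	msRef := metav1.GetControllerOf(m)
	if msRef == nil || msRef.Kind != "MachineSet" {
		return nil, nil // stand-alone machine: nothing to look up
	}
	ms := &clusterv1.MachineSet{}
	if err := c.Get(ctx, client.ObjectKey{Namespace: m.Namespace, Name: msRef.Name}, ms); err != nil {
		return nil, err
	}
	mdRef := metav1.GetControllerOf(ms)
	if mdRef == nil || mdRef.Kind != "MachineDeployment" {
		return nil, nil
	}
	md := &clusterv1.MachineDeployment{}
	if err := c.Get(ctx, client.ObjectKey{Namespace: m.Namespace, Name: mdRef.Name}, md); err != nil {
		return nil, err
	}
	return md.Spec.Template.ObjectMeta.Labels, nil
}
```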
D: I have to drop in a second, but my personal preference would be to use spec.template.metadata.labels, which is already synced with the machine itself. The only thing we need to test is if that causes a rollout, which I believe it shouldn't — but, you know, we should definitely, like, 100% make sure that it doesn't before relying on that. And then the machine will just get those labels.
D: We can check that. If it doesn't go this way, we could just add, like, another field to the template, to sync onto the machine itself — I mean, so that it's in place.
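For reference, this is where that field sits on the API type — a minimal sketch using Cluster API's v1beta1 Go types; the label key and value are made up for illustration:

```go
import clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"

// A MachineDeployment whose spec.template.metadata.labels would,
// under the proposal, be synced to its machines and on to the nodes.
var md = clusterv1.MachineDeployment{
	Spec: clusterv1.MachineDeploymentSpec{
		Template: clusterv1.MachineTemplateSpec{
			ObjectMeta: clusterv1.ObjectMeta{
				Labels: map[string]string{
					// Hypothetical managed-prefix label.
					"node-restriction.cluster.x-k8s.io/pool": "ingress",
				},
			},
		},
	},
}
```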
B: Yeah, I have added a link to a doc in our book that explains metadata propagation, and also noted that, in cluster topology, we are treating changing labels as a special case, and we avoid the rollout when it's not necessary.
C: Okay, let me take a look at that.
C: So, I mean, it sounds like we should utilize this field, spec.template.metadata.labels.
C: Yeah, let me read into this a bit more. Thank you for the links and the information.
A: Perfect, done, yeah.
F: Yeah, just a quick PSA: I'm stepping down as the CAPV tech lead, and Sagar is nominated. We have a lazy consensus that is going to end this Thursday, and yeah, I'll also still be around, being involved with multiple providers and Cluster API.
A: Questions, comments, concerns? Okay, yeah, so I think then we're at the end. Any last-minute topics?