A
Okay, hello everyone. Today is June 15, and this is the Cluster API office hours meeting. Before starting, a quick note about the meeting etiquette: please use the raise-hand feature in Zoom.
In order to get edit access to the document, you have to be subscribed to the sig-cluster-lifecycle mailing list.
The last point is that this meeting abides by the CNCF code of conduct, so be kind to each other.
So let's get started with today's agenda. First of all, let's welcome new attendees. If there is any new attendee who wants to speak, feel free to raise your hand; I'll pause for a second.
B
Looks like Jack's out this week. Actually, I was out last week, but okay, yeah. Just a quick reminder, I guess: if you haven't seen the demo we gave of the prototype, I can drop a link in the meeting notes later.
A
Oh yes, this is a nice idea, and please add the link also below the proposal at the top of the document. Yeah, let me say, as far as I'm concerned, this week I've seen only some comments and some work on the Cluster API add-on orchestration proposal. Basically, we are moving from the first draft of the document to something that better summarizes what has been worked on in the prototype, but yeah, it is still work in progress.
B
Yeah, and if you're confused about the proposal, I think the demo should help clarify some things about the direction we intend to go.
A
Thank you, Jonathan. So, since there are no comments on the proposal, we can move to today's discussion topics. Oh, and please, everyone, add your name to the attendee list so we keep track of who participated.
C
Yeah, so I put a couple of things on there, both requests for ideas and input. The first one is, well, I've explained it there. We have a user for whom the default failure domain spreading behavior of CAPI on the control plane is actively problematic, and I suspect this is relatively common in OpenStack deployments.
So basically we're looking for a way to disable it. We do not want CAPI to distribute control plane nodes across failure domains; we just want to leave that to the OpenStack scheduler, and we can do that.
The way we can do that, or the best way we currently have of doing that, is basically adding something to the OpenStackCluster spec to say: disable control plane failure domains, and then we just won't populate any. We can do that and that'll work.
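A minimal sketch of what such an opt-out might look like on the CAPO side; the field name and placement here are assumptions for illustration, not an existing or agreed API:

```go
// Hypothetical sketch only: CAPO does not currently define this field.
type OpenStackClusterSpec struct {
	// ... existing fields elided ...

	// DisableControlPlaneFailureDomains, if true, would stop the provider from
	// reporting failure domains in its status, so KubeadmControlPlane has
	// nothing to spread control-plane machines across and placement is left to
	// the OpenStack scheduler.
	// +optional
	DisableControlPlaneFailureDomains bool `json:"disableControlPlaneFailureDomains,omitempty"`
}
```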
C
My question is: is there a better way of doing that? Are there any downsides to doing that? Are there any other clouds which might not want the explicit failure domain spreading behavior of CAPI? Because it's an API change, I'm looking for input before going with the most obvious thing, which is to do it locally. Yeah, any thoughts?
A
Thank you. Before asking for answers, let me just add a little bit of context. So in Cluster API we have failure domains, which are defined on the cluster object.
So basically, what we are saying is that in most of your deployments today in the Cluster API provider for OpenStack, you are okay with this behavior, but for some customers, or in some cases, you don't want this behavior, and basically you want a way to stop populating the failure domains. That's the question.
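For reference, roughly how the failure domain types look in the core Cluster API v1beta1 Go API (paraphrased; check the upstream types for the authoritative definition):

```go
// Paraphrased from sigs.k8s.io/cluster-api/api/v1beta1: the infrastructure
// provider reports failure domains in its cluster status, they are copied to
// the core Cluster object, and KubeadmControlPlane spreads control-plane
// machines across the ones marked as suitable for the control plane.
type FailureDomains map[string]FailureDomainSpec

type FailureDomainSpec struct {
	// ControlPlane determines whether this failure domain is suitable for
	// hosting control-plane machines.
	ControlPlane bool `json:"controlPlane,omitempty"`

	// Attributes are optional provider-specific key/value pairs.
	Attributes map[string]string `json:"attributes,omitempty"`
}
```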
C
Well, I suspect that most clouds won't care, because it's not a feature that's widely used in on-prem clouds. At least, you frequently find clouds that only have a single failure domain anyway, so the feature is kind of silently doing nothing.
The problem is, when the cloud does implement failure domains, there are other resources, machine resources, which can also be dependent on the failure domain. So, for example, different flavors may be valid only in some availability zones, because they reference resources which are only available in some availability zones.
So then it becomes kind of a landmine for users. They then have to make sure that they explicitly request only certain availability zones, that they use the right flavors for the right availability zones, and what have you. Or we can just not do availability zones, just specify the other things and let OpenStack do its thing.
But yes, the availability zones are returned by our provider, and our current best thought is to simply not return them.
A
Well, I can give an opinion, but it's my opinion, so from a customer point of view, let me see. I'm personally open to considering an improvement to KCP as well, which could require API changes and some discussion, but I really would like to have feedback about how much this use case can be generalized outside of OpenStack, which is a valuable point. So, if people don't want to speak now, I kind of invite people from the other providers to comment on the issue, which is linked in the document.
C
Yes, please, I would very much appreciate that. And, like I say, it's not as if we can't do this; we can do this. I would just like to do it cleanly, in a way that is going to create the minimum amount of API in CAPO. And if it spawns another discussion about the failure domain behavior of KCP, then that is also interesting.
C
Thank you very much. The other thing I wanted to bring up, and this isn't an immediate ask, is that we have an OpenShift customer, so a Red Hat customer, who wants to deploy their control plane across separate L2 domains, so their control plane nodes are going to be connected to different physical networks.
A
Yeah, also in this case, my first reaction is that in Cluster API, if you look at core Cluster API, we don't really have a notion of network; the network is an infrastructure provider detail in the current API model. And so, given these caveats, if you use a thing like KCP to manage your control plane, KCP basically takes a template, stamps it out and duplicates it.
So it makes it difficult to have a different network for each machine under the same cluster managed by the same KCP object. There is an alternative, which is to manage the control plane without KCP, but I would say that currently the use case is not supported.
C
Is there anybody else who is interested in this? It's kind of related to the data center concept of spine-leaf networks as well. It's not directly a requirement of spine-leaf; however, if you're looking to do that kind of separation of network fabric, I mean, it's a reasonably logical ask. I wonder if anybody else has similar requirements, but this might be more of an on-prem thing, so maybe VMware.
A
Personally, I'm not aware of any, but I'm not really working much on the provider side. Yeah, I mean, maybe folks working on providers can comment on this. Is there an issue for this one? I don't see it linked.
C
No, no, there isn't. I'm currently at the stage with this where I don't really know where to go with it. So I was hoping to find somebody else, somebody saying "oh, somebody's already done this", or "there was this other thing, don't look at that", or "you should start by writing a proposal", or something like that. That's really the level I'm at with this.
A
Okay, so yeah, let's hope that someone chimes in. If you create an issue just explaining the problem or the idea and you link it in the document, people have a place where they can work on it.
A
Just before moving on, I saw a comment in the chat from Vince about the previous point on failure domains, which is kind of interesting. So in Cluster API, when you create a failure domain, you have a flag you can set to specify whether that failure domain is relevant for the control plane. So, if you want, you can basically use this flag to specify failure domains without having KCP spread across them.
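A rough sketch of how a provider could use that flag, assuming the standard core types (clusterv1 = sigs.k8s.io/cluster-api/api/v1beta1); the helper name is illustrative:

```go
package failuredomains

import clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"

// nonControlPlaneFailureDomains builds the failure domain map an infrastructure
// provider would report in its cluster status: the availability zones are still
// listed (so workers can target them explicitly), but marked as not suitable
// for the control plane, so KubeadmControlPlane does not spread machines
// across them.
func nonControlPlaneFailureDomains(azs []string) clusterv1.FailureDomains {
	fds := clusterv1.FailureDomains{}
	for _, az := range azs {
		fds[az] = clusterv1.FailureDomainSpec{ControlPlane: false}
	}
	return fds
}
```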
C
Yeah, we were aware of that one, but that's kind of... I think the user who reported it, Matt, commented that it's essentially a frequent source of error, because the user has to manually ensure that they're matching up the right AZs with the right flavors, and if they get it wrong, then it fails. Whereas the behavior that they're actually trying to achieve is that there is no failure domain spreading.
So yes, that is another way we could do it, okay, but again, we were looking for a slightly easier user experience.
A
Okay, yeah, my best suggestion is to try to rally people around the issue. Maybe you can ask in the CAPV channel or in the Metal3 channel to see if other folks are interested, and point them to the issue.
The TL;DR is that we are planning to cut the first alpha or beta tag probably Monday, and the most important thing is that, as soon as the tag is out, we really need feedback from providers, mostly on the changes that we made to ClusterClass for server side apply. Giving a little bit more detail on this:
Some detail about release planning: we plan to cut the tag Monday, immediately after creating the release branch, then start adding all the new end-to-end tests, et cetera, and work on stabilizing the release branch, doing selective cherry-picking.
So we cherry-pick only small changes, fixes to the documentation and stuff like that; basically we cool down the influx of PRs into the release branch. The goal is to cut the 1.2 release in three weeks. This is usually what we do, but it really depends on provider feedback, so again, it is very important that providers help us to validate the new release. Next, what is in the release? It is another huge Cluster API release.
The biggest changes in this release are the Runtime SDK, so the extensibility model with lifecycle hooks and topology mutation hooks. These changes are almost all in already, or rather we are looking at the last two or three PRs that we plan to get merged this week, including also the end-to-end tests. The only exception with regard to the proposal is that we are not going to have the BeforeClusterDelete hook, because we discovered some things that require a little bit of thinking.
We are still planning to have it in the release, but it won't be in the first beta. And yes, you should expect a demo soon about the Runtime SDK, lifecycle hooks and all these new extensibility points. In the document you can see there are links to the tracking issues where we are keeping track of all the implementation work.
Another change, which is pretty common when we cut a minor: we have bumped all the dependencies as usual. We are still waiting for a controller-runtime patch release, because we discovered a bug; the fix has already been merged in controller-runtime, but yeah, we have to wait for the maintainers to cut a release. We also need a fix for controller-tools, because we discovered basically a bug when it generates CRDs.
We really need these fixes. We also discovered some issues about how controller-tools manages server side apply markers; if we can get this merged, it could simplify the work in the providers for adopting server side apply. Talking about server side apply:
We tried to make things as clear as possible for all the changes, yeah, and we are looking for feedback. Also, one last point: the release also contains some initial work on structured logging. This one is documented as well.
Oh, I don't see any hands... oh, wait, there is one.
F
Hi, yeah, I have a question. Since this server side apply fix is important, is it possible to backport it? I know there are lots of dependency bumps and it would be super difficult, but I just wanted to understand if there's a possibility.
G
Hey, so just a quick question on the documentation; this is awesome. It looks like maybe CAPA has already gone through testing and gone through this process; is that correct?
Or, I guess a better way to say this is: have there been any providers that have tested this and gone through this process, and were there any gotchas that popped up that may not be in this documentation yet? Does anybody know?
E
So we basically encountered two issues. One is the one mentioned earlier about controller-tools having problems generating the markers, which is tracked there, with other issues to follow, and I know that Christian is working on fixing this now. So that's one issue. And we basically need to add the list type map and the list map key with some unique key, and for CAPA the problem is that in the subnet spec we don't have any required and unique field.
So I actually have an issue open. Basically you need a field that's required and it also needs to be unique, and I think for Azure, CAPZ, they have a name field, but for CAPA we don't have it. So we will probably need to add a field, I don't know, a UUID or whatever fits. So there's an issue right below that, about adding a required field to the subnet spec; if people have an opinion, feel free to jump onto this issue and comment.
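A minimal sketch of the kind of change being discussed, with a hypothetical required id key and the controller-gen markers server side apply needs for map-typed lists (field and type names are illustrative, not the agreed CAPA design):

```go
// Sketch only: shows the markers needed for a map-typed list, keyed on a
// hypothetical required, unique "id" field.
type NetworkSpec struct {
	// Subnets is merged per-item by server side apply, matching items on "id".
	// +listType=map
	// +listMapKey=id
	// +optional
	Subnets []SubnetSpec `json:"subnets,omitempty"`
}

type SubnetSpec struct {
	// ID must be required, unique and non-empty so server side apply can
	// match list items between different appliers.
	ID string `json:"id"`

	// ... other, optional fields elided ...
}
```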
A
Thank you, Winnie.
H
Yeah, so I think we're definitely going to need some guidance here for providers that have co-authored slices but have only optional fields.
So there's probably going to be some nuance there, because we're going to have to see if we need to have the field present at all times, or if waiting for the field to be filled back in by the controller is enough, because, I believe, at least in CAPA, when we're doing creation...
A
Thank you. So, going quickly back to the question about the migration guide: how do we write this migration guide? We write it while developing the new PRs, and usually the first provider to implement these changes is the CAPD provider that we use for our CI, but CAPD is really a test provider, so it provides a good signal, but not the best signal that we can have. So, TL;DR:
Those documents document, let's say, the best that we can do at this stage. If, when adopting the release, you find that the documents are not clear enough, or...
I tried to answer Justine and give a little bit more context. So, what do we do with server side apply? There is documentation about server side apply on the Kubernetes website.
Server side apply basically provides more granular field management, and it leverages the managed fields information to basically keep track of which controller manages each field in every object, and then it has its own
policies to manage conflicts, stuff like that. What is really left to the controllers is to define the merge strategy. So basically, when a controller defines its own API, it has the opportunity to choose if a list has to be treated as an atomic list, which is the default,
so basically it does not allow two different owners to co-own it, or if it has to be treated as a set or as a map. Okay, in case we use listType=map, server side apply basically requires that you define a list map key, at least one, that basically, when you compare the lists, allows identifying the same item. And, according to the specification, those values must be scalar, and as far as we have tested, they cannot be empty.
But, let me say, these are not Cluster API requirements; this is the behavior of server side apply. You can test it with kubectl by applying any YAML file.
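As a rough sketch of the controller side of this, a server side apply patch via controller-runtime with an explicit field owner (the owner name and object type are just examples):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// applyDesiredState sends the desired object as a server side apply patch.
// The field owner ("example-controller" is just a placeholder) is what the
// API server records in managedFields for every field this controller sets,
// which is how per-field ownership and conflicts are tracked.
// Note: desired must have TypeMeta (apiVersion/kind), name and namespace set.
func applyDesiredState(ctx context.Context, c client.Client, desired *corev1.ConfigMap) error {
	return c.Patch(ctx, desired, client.Apply,
		client.FieldOwner("example-controller"),
		client.ForceOwnership,
	)
}
```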
H
Yeah, I'm mostly worried, to be honest, about cases where we're not doing bring-your-own-infrastructure, meaning cases where CAPA is actually creating the network constructs and a bunch of these things that are usually a slice.
In that case, I think in CAPA, Winnie, correct me if I'm wrong, the whole network spec is optional. So if we can't know in advance how many elements we need to put into the slice and how many keys we need to auto-generate, I think it can be a bit challenging there also.