From YouTube: 2021-01-13 Cluster API Office Hours
A
Hello, everyone. Today is Wednesday, January 13th, and this is the Cluster API office hours. As always, please follow the CNCF code of conduct. If you can, please add your name to the attendee list so we can keep track of who's attending. We'll start with PSAs. Fabrizio, I think you have two, so go ahead.
B
Thank you, Cecile. Good morning and good evening, everyone. Following up from last week, I'm trying to organize an end-to-end walkthrough for Cluster API. I opened a Doodle asking for preferences, and the time that got the most votes is tomorrow, at the same time as this meeting, so the walkthrough will happen tomorrow. Thank you for the great feedback so far; I'm looking forward to the meeting. The Zoom link is there.
A
Thanks, it's great that you're doing this. Any questions for Fabrizio about that?
B
Yeah, yesterday or the day before, a new command was merged into clusterctl: clusterctl describe cluster. This command basically generates a tree of the cluster objects, with the conditions shown next to each object. It's the view I shared some time ago in this meeting, where I got really great feedback, so now it is implemented as a command in clusterctl. Please test it, break it, provide feedback; I'm looking forward to that.
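For context on what that view contains: the tree places each object's conditions next to it. Purely as an illustration, here is a minimal sketch (not the clusterctl implementation; the API version, namespace, and cluster name are assumptions) of reading those same conditions directly with controller-runtime:

```go
// Sketch only: this is not the clusterctl implementation. It just reads one
// Cluster object and prints its status conditions, i.e. the same information
// "clusterctl describe cluster" renders next to each object in its tree.
// The namespace "default" and name "my-cluster" are assumptions.
package main

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	c, err := client.New(ctrl.GetConfigOrDie(), client.Options{})
	if err != nil {
		panic(err)
	}

	cluster := &unstructured.Unstructured{}
	cluster.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   "cluster.x-k8s.io",
		Version: "v1alpha3", // assumed API version
		Kind:    "Cluster",
	})
	if err := c.Get(context.TODO(),
		client.ObjectKey{Namespace: "default", Name: "my-cluster"}, cluster); err != nil {
		panic(err)
	}

	// Each condition carries a type, status, and reason; the describe view
	// places this next to every object in the ownership tree.
	conditions, _, _ := unstructured.NestedSlice(cluster.Object, "status", "conditions")
	for _, item := range conditions {
		if cond, ok := item.(map[string]interface{}); ok {
			fmt.Printf("%v=%v (%v)\n", cond["type"], cond["status"], cond["reason"])
		}
	}
}
```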
A
Awesome, and this will be available in v0.4, correct?
B
Yeah, for now it is in master. To be honest, it should not be difficult to backport, because it lives in a separate part of the code base, so it should not touch anything else. But it is a new feature, so we have to agree on whether we want to backport it or not. Let's start by seeing if it works and whether people appreciate it in master.
A
Sounds good. Yeah, let's have people try it, and if you try it and think it's really cool and want to see it in 0.3, give us a shout and we can see if we can do a backport.
A
All right, thanks Fabrizio. Seeing no hands raised. Vince says plus one to backport in the chat.
A
All right, let's move on to discussion topics. The first one is about backporting the KCP spec mutation to 0.3. Cheyenne?
C
Yeah, so recently we merged a PR to make some of the KCP spec fields mutable, like the API server, controller manager, and scheduler configuration. We thought it might be useful for the 0.3.x releases as well. There are no major breaking changes, except that fields that were previously immutable will now become mutable. So I just want to get the community's feedback on whether it's okay to backport it to 0.3.x.
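For illustration of what the change allows, here is a minimal sketch (not taken from the PR; the API version, object name, and the specific field edited are assumptions) of the kind of in-place edit to a KubeadmControlPlane that becomes possible once those fields are mutable:

```go
// Sketch only, not taken from the PR: with the mutability change, edits like
// this to a KubeadmControlPlane are accepted by the validating webhook instead
// of being rejected. The API version, object name, and the field chosen here
// ("profiling" under apiServer extraArgs) are assumptions for illustration.
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	c, err := client.New(ctrl.GetConfigOrDie(), client.Options{})
	if err != nil {
		panic(err)
	}

	kcp := &unstructured.Unstructured{}
	kcp.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   "controlplane.cluster.x-k8s.io",
		Version: "v1alpha3", // assumed version on the 0.3.x branch
		Kind:    "KubeadmControlPlane",
	})
	if err := c.Get(context.TODO(),
		client.ObjectKey{Namespace: "default", Name: "my-kcp"}, kcp); err != nil {
		panic(err)
	}

	// Update one of the previously-immutable fields (apiServer configuration).
	if err := unstructured.SetNestedField(kcp.Object, "false",
		"spec", "kubeadmConfigSpec", "clusterConfiguration",
		"apiServer", "extraArgs", "profiling"); err != nil {
		panic(err)
	}

	if err := c.Update(context.TODO(), kcp); err != nil {
		panic(err)
	}
}
```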
A
So we recently updated the backporting policy/guidelines, and the current policy is that something should be backported if it's a bug fix or a small feature that doesn't create any behavioral breaking changes or API breaking changes, and isn't a significant refactor.
D
Yeah, so technically this should in theory be okay, but we should definitely test it out. The reason I'm saying that is that while it was immutable before, if you had, let's say, conflicting opinions with what's in the spec today, that would have been an error; the validation webhook would kick in.
D
So this will only allow modifications going forward. It's not really a behavioral change in terms of the controller behavior; it's more a behavioral change in the sense that we now accept these modifications and make some corresponding changes in the back end, but the user has to opt in to those changes.
A
Has
anyone
specifically
requested
this
change
in
zero,
three
who's
using
cluster
api
and
it's
blocked
by
not
having
this
available.
A
Oh,
we
have
another
plus
one.
Two
back
port
zach
says
he
would
like
it:
okay,
yeah.
I
think
what
ben
said
makes
sense.
Definitely
we
need
to
do
some
testing
to
make
sure
it's
not
breaking
backwards
compatibility
and
if
it's
not
a
breaking
change
like
I
don't
see
any
reasons
not
to
backward.
A
It
is
an
extra
large
pr
or
extra
extra
large
pr,
so
just
want
to
make.
I
think
most
of
it
should
be
tests,
but
I
want
to
make
sure
we're
not
like
introducing
super
hard
chair
like
conflicts
that
make
it
harder
to
cherubic
but
yeah.
I
think
the
next
step
would
probably
be
to
open
a
pr
to
back
porch
I
am,
and
then
we
can
have.
Everyone
review
and
yeah
looks
like
there's
a
lot
of
interest
for
this
feature.
So.
D
If
you
are
gonna
review
it
the
best
part
that,
like
you,
should
review,
I
think
the
most
complicated
one
as
well.
It
would
be
the
one
that's
like
now
changing
also
to
cubitium
config
map
as
well.
That
is
the
part
that
could
change
the
behavioral
changes
but
like
when
I
went
through
it
like.
I
did
not
detect
any
any
behavioral
changes
in
there
because
it
would
just
represent
what
we
the
state
of
right
now
with
the
state.
That's
in
the
cluster.
D
You could end up with that ConfigMap outdated, although we do say that when you're under KCP we take ownership of it. We can put gates in place if we want; we could enable this just as a feature gate. That would be one way.
D
Oh
yeah,
I
think
I
think
I
already
did
just
saying
like
when
we
work
the
backboard.
Maybe
we
want
to
add
a
feature
gate
to
avoid
that
but
yeah,
and
also
for,
like
a
damn
feature,
gate
like
we
could
just
enable
it
by
default.
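A minimal sketch of what gating this behind a feature gate could look like, using the generic Kubernetes featuregate helper; the gate name KCPMutableSpec and the wiring shown are hypothetical, not Cluster API's actual implementation:

```go
// Sketch only: guarding newly-mutable behavior behind a feature gate using the
// generic Kubernetes featuregate helper. The gate name "KCPMutableSpec" is
// hypothetical; Cluster API's actual wiring may differ.
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

const KCPMutableSpec featuregate.Feature = "KCPMutableSpec"

func main() {
	gates := featuregate.NewFeatureGate()
	if err := gates.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		KCPMutableSpec: {Default: false, PreRelease: featuregate.Alpha},
	}); err != nil {
		panic(err)
	}

	// Normally wired to a --feature-gates flag; set directly for the sketch.
	if err := gates.Set("KCPMutableSpec=true"); err != nil {
		panic(err)
	}

	if gates.Enabled(KCPMutableSpec) {
		fmt.Println("mutable KCP spec handling enabled")
	}
}
```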
A
People are doing this anyway, so might as well do it in a safer way. Are there any other questions, or notes of approval or disapproval, on this topic?
E
I spent some time looking at the open questions section and put in my thoughts on how to move forward. That's kind of the place to look: if you have already reviewed it, just take a look at that open questions section, give it a final look, and maybe give it a thumbs up. That's all.
D
There is one thing that the maintainers should decide on for those proposals. I only see me and you, though, and I guess Jason is not here, but I'm going to actually ask everyone. There was a question brought up last week from a community member who would like support for multiple controllers in the operator, although I have not yet seen those folks actually put a proposal in place; they just said they would like this feature from their perspective.
D
The feature of supporting multiple provider instances per management cluster adds a lot of complexity that we're not ready to take on, in terms of the work that needs to be done to support it. So we kind of need to decide: do we think this is a blocker or not?
F
Can you hear me? Thanks. I turned my mic off because it was loud here. I just wanted to ask a clarifying question, just to make sure I understood this: is this so you could have one management cluster that can operate on several different cloud providers using the same set of primitives? Was that what this issue was about?
D
The issue was that some folks want to install multiple provider controllers that manage the same cloud provider, for example multiple AWS instances, multiple Azure instances, or multiple vSphere instances, each watching only specific namespaces. As part of the provider operator, we're proposing to move to a much simpler model that only runs one instance of each controller, watching all namespaces. Instead, we're suggesting that folks who do want that separation use different management clusters, because there are also some higher-level concerns; for example, credentials are going to be global objects in the future (there is an AWS proposal for that), and in terms of managing those resources it's going to be hard. So yeah, that is what Jack is asking about supporting.
F
I mean, thank you for the clarification; it makes sense, and I can see why I was confused.
G
Thank you. So, first of all, just to reiterate, I'm completely fine with going forward like it is. I don't want to be the cause of more complexity being added.
G
I'm completely fine to go forward like this; just not blocking this behavior would be nice, whether we end up not using the operator or later on contribute to make this possible and take on that complexity. That's up for discussion, obviously, and we would like to help with it, but I don't want to be the guy blocking this for any reason. As for why we want to do this: the main reason for us is that running one management cluster for each minor version of the operator would be extremely difficult for us currently, and we are afraid of breaking changes in minor releases of the core Cluster API operators as well as the provider operators. We recently found another example where a race condition with KCP was introduced by a minor release.
G
I know the KCP issue was resolved; I'm just naming it as an example because the question of examples came up multiple times. For that case, we currently operate by running multiple minor versions of the operators in parallel and simply shifting the CRs between them, without needing more management clusters, or a lot more management clusters. I hope that explains it. Once again, I don't want to block this, and I see other people raising their hands.
B
So, according to the issue that we have, my expectation is that we are going to remove the support for running multiple instances of the same provider from clusterctl.
B
With
the
controller,
he
is
the
that's
fine
with
with
or
with
the
goal
to
enable
this
use
case
or
not.
A
Warren, I guess you had the same question; I see you lowered your hand. So my question is: are you already doing this today, Marcel? Is this something you're already doing with your own controllers?
G
So we are already doing this; we're just not doing it with the upstream controllers. We want to switch to the upstream controllers: we're using the upstream types, but our own controller implementations. This comes from the fact that we initially wanted to adopt Cluster API at v1alpha1, but the changes from v1alpha1 to v1alpha2 were so drastic that we as an organization couldn't go along with them.
G
So we implemented those controllers on our own, so we basically have duplicate controllers, and those already run the way I described and act the same as the upstream controllers. Now we would like to use the upstream controllers. That means we're not reliant on any operator to install them, but a webhook that would deny us from doing this would, yeah, be really bad for us. I don't know how else to say it.
A
Right
vince
go
ahead.
D
So
there's
gonna
be
no
web,
so
we're
going
to
keep
the
namespace
flag,
but
so
that
you
can
specify
that
namespace
to
watch.
But
what
for
brixton
brought
up
about
the
web
box?
It
still
stands.
So
I
guess
I
assume,
like
you,
might
have
this
problem
today.
D
D
That's
why
we
moved
the
web
books
before
in
a
different
deployment
and
name
space
is
because,
like
the
web
books,
like
can
only
talk
to
one
service,
like
you
can't
just
say
like
hey
like
this,
the
cid
version
has
to
talk
to
that
to
that
specific
web
book,
and
this
erp
version
should
talk
to
this
other
web
book.
All
of
them
have
to
go
to
one
service,
so
I
guess
you
would
have
to
handle
that
on
your
side.
D
G
Yes, keeping the namespace flag on the one hand. The other thing we still kind of want to do, and I don't want to completely derail this, is: why are we only filtering by namespace? Why are we not allowing filtering by label? Because we already have predicates, I think they're called, in all the controllers. Allowing more flexibility on the predicates would be pretty awesome for us, but this is not something I want to blow up right now, here; it's just a general thought either way.
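A minimal sketch of the kind of label filtering being suggested, using a controller-runtime event filter; the label key, the reconciler, and the import paths/versions are assumptions, not an existing Cluster API feature:

```go
// Sketch only: filters a controller's watch by a label using a
// controller-runtime predicate. The label key "cluster.example.com/watch",
// the reconciler, and the exact API versions are assumptions for illustration.
package main

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/predicate"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

type ClusterReconciler struct {
	client.Client
}

func (r *ClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Real reconcile logic would go here; only objects passing the label
	// filter below ever reach this point.
	return ctrl.Result{}, nil
}

// hasWatchLabel only lets events through for objects carrying the filter label.
func hasWatchLabel(o client.Object) bool {
	_, ok := o.GetLabels()["cluster.example.com/watch"]
	return ok
}

func (r *ClusterReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&clusterv1.Cluster{}).
		WithEventFilter(predicate.NewPredicateFuncs(hasWatchLabel)).
		Complete(r)
}
```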
G
This is fine for now for us. We have to discuss internally how we continue anyway, and we have a meeting for that at Giant Swarm tomorrow on how we can get to adopting the upstream controllers right now.
G
And just for clarification: if there would be acceptance for label filtering, I can provide that upstream. I already prepared a pull request for myself and have been looking into it, so I'm happy to provide that as a pull request.
E
Yeah, regarding the CAEP proposal, like what was brought up, I'd like to document all of this, because in the future, you know, one month down the line, I just want to make sure that in order to enable some of these use cases we're not taking away guard rails and allowing other users to shoot themselves in the foot.
E
Those are more like details or implementation concerns, but I'm trying to put the pieces together in my head, and I just want to make sure that when we say we're going to allow certain things but take certain other things out, the proposal still provides a consistent story. It's been a while since this proposal went up, so I'll try and document some of this.
A
Yeah, I agree. The operator is not creating this new rule or this new way of seeing things; it's just reinforcing what's already there, and the proposal clearly states that we want one instance of the CAPI controllers to run per management cluster.
A
All right, cool. And yeah, if you could open an issue for the labels, Marcel, that'd be great.
A
Awesome. Could you link it here if possible, so we can have it? Thanks. All right, anything else on this topic?
A
Nope. All right, well, hope you all have a good rest of the day, and thanks for taking the time to be here. See you next time.