From YouTube: 20200217 Cluster API Office Hours
A: All right, welcome everybody, we've started recording. This is the Cluster API project office hours. We do have some meeting etiquette, so please keep it in mind: if you want to speak up, you can raise your hand using the raise-hand feature of Zoom and we'll call on you.
A: We do have an agenda in the document that I'm sharing here below. Please add any items you have there, and I'm going to try to keep us on track by encouraging you all to follow that meeting etiquette. This is a CNCF meeting and we do have a code of conduct, so please be sure to follow that as well.
A: Okay, so I'm James; I've been in the meetings a few times here. Let's see what we've got: we have public service announcements. Joel, looks like you have the first one.
B: Thank you. So there's an externally managed cluster infrastructure proposal open at the moment. It was actually raised by Alberto, but he's not here today, I don't think. We had a meeting about it last week; it was very productive and we decided on some various changes. Basically this change is a contract change for the providers, the idea being that we'll have an annotation that will prevent cluster infrastructure reconciling from happening in any of the providers. Because this obviously is going to affect all provider implementations, I'd like to encourage anyone who maintains any of those, or is interested in this, to take a look at the proposal, as it's now been updated with the feedback from last week's discussion.
A: Cool. Anybody have any comments or thoughts on that? Cecil, go ahead.
C: I don't think there was anything decided after that meeting. Do you know which ones you're referring to, in particular?
B: Okay, I was under the impression we sort of agreed on the annotation approach, so the proposal's been updated to go down the annotation approach, but I'll take that out and double-check, and maybe come back to this at the end of the meeting if I can work it out.
C: Okay, I think Vince wrote both down in his comments after the meeting.
A: Cool, any other comments? All right, so we'll go on to the discussion topics here. Fabrizio, you've got the first one, on a CAEP for the kubeadm types.
D: Thank you, James. So a few days ago I filed the PR for a new proposal, which is about the kubeadm types, so I would like to describe this initiative a little bit to the community. The first point is that this is a required activity, because the kubeadm v1beta1 types are being removed from kubeadm, so it is something that we have to do as soon as possible.
D: Second, the proposal is organized around the goal of removing a constraint that we have today, which basically requires that the kubeadm types in Cluster API are equal to the kubeadm types that are supported on the machine. This constraint is particularly bad because it requires that all the kubeadm versions in our skew have to support the same API, which is technically very, very difficult.
D: So, in order to overcome this limitation, we are introducing in Cluster API conversion between the kubeadm types which are hosted in Cluster API and the right version of the kubeadm types which are expected on the machine. At the end of this part, Cluster API will own its own copy of the kubeadm types. In step two, which is future work not included in this proposal, we will basically start cleaning up these types, because by then they are different and we have conversion, and do things like simplifying the UX. For instance, we can remove the control plane endpoint from the kubeadm type, or the networking configuration, because those values are already defined at the cluster level; basically, today they are defined in two places and the user can eventually even shoot themselves in the foot. So that's more or less the background on the proposal, and if there are questions I'm happy to answer.
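The two-step idea described above (Cluster API owning its own copy of the kubeadm types, plus conversion to the version a given machine expects) can be sketched roughly as below. The type and field names are hypothetical stand-ins for illustration, not the real kubeadm or Cluster API types:

```go
package main

import "fmt"

// CAPIKubeadmClusterConfiguration stands in for the copy of the kubeadm
// ClusterConfiguration type that would be hosted inside Cluster API
// (hypothetical fields, for illustration only).
type CAPIKubeadmClusterConfiguration struct {
	ClusterName       string
	KubernetesVersion string
}

// MachineV1Beta1ClusterConfiguration stands in for the kubeadm API version
// that the kubeadm binary on a particular machine actually understands.
type MachineV1Beta1ClusterConfiguration struct {
	ClusterName       string
	KubernetesVersion string
}

// convertForMachine converts the Cluster API-hosted type into the version
// expected on the machine. With conversions like this in place, the type
// stored in Cluster API no longer has to be identical to the one supported
// by every kubeadm version in the skew.
func convertForMachine(in CAPIKubeadmClusterConfiguration) MachineV1Beta1ClusterConfiguration {
	return MachineV1Beta1ClusterConfiguration{
		ClusterName:       in.ClusterName,
		KubernetesVersion: in.KubernetesVersion,
	}
}

func main() {
	out := convertForMachine(CAPIKubeadmClusterConfiguration{
		ClusterName:       "example",
		KubernetesVersion: "v1.17.3",
	})
	fmt.Println(out.ClusterName, out.KubernetesVersion)
}
```

Once conversions own the version skew, the stored copy can diverge from the machine-side types, which is what makes the step-two cleanup (dropping duplicated fields) possible.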
E: Yeah, I think the question that I have is: what is the plan for unknown versions of Kubernetes? Or is this going to basically mean that you have to run a version of Cluster API that knows about the version of Kubernetes that you're going to deploy on a host, with this proposal?
D: I think that the answer is yes, but (this was also one of the comments on the proposal) we still have a certain, let me say, tolerance: whenever kubeadm introduces a new API, it still has to respect one year of deprecation before removing the previous one, and this gives Cluster API a certain amount of time to catch up with the unknown versions that pop up in the future.
E: Yeah, I just wanted to make sure that we weren't going to lock into, you know, having to know about new versions of Kubernetes for a particular version of Cluster API, as long as we can create a compatible kubeadm type.
A: Great. Any other comments or questions? Do we have anybody maybe able to take notes? Anybody want to volunteer for that? I just noticed we don't have a notetaker.
A: Okay, yep, go ahead, Fabrizio. You've got the second one here too.
D: Okay, so this is an implementation detail, but I would like to share it with the community. Today in clusterctl we have two types: one is the provider type, which is basically managing the inventory of providers that you are installing, and the second one is the metadata type, which most of the infrastructure providers know because they are adding this file to their artifacts. Today these types are v1alpha3, and the original idea was to use the operator work in order to make a clean conversion from the current types in clusterctl to the target ones, which are the ones defined in the operator.
D: Given that the operator work most probably is not happening this cycle, or we don't know yet where we will get, my proposal is to keep the provider and metadata types in clusterctl (these two types which are clusterctl-specific) at v1alpha3 for this cycle, and basically wait for the operator to do a proper conversion.
F: Thanks, yeah. I mean, it sounds to me like it would be the least hassle for users to wait until we have the operator before doing that conversion, assuming people are okay with continuing to use v1alpha3 for now.
A: Do we maybe have an issue open for this, where we could potentially just document that decision?
D: The point came up because I created a PR going with the alternative approach, and then basically I found it not nice, and so I had a quick alignment with Vince and we decided to raise the topic. But it definitely makes sense to take the decision. Okay, awesome.
A: All right. Otherwise, Joel, we'll give it back to you for following up on a couple of the items from earlier in the meeting.
B: Okay, yeah. So, as I mentioned, there were a couple of sort of unanswered questions that we didn't get to during the call last week. One of these was whether to use an annotation to signal the externally managed cluster infrastructure, or whether to use a spec field.
B: There was some suggestion that it would be easier to provide utilities for making the providers stop reconciling these objects if it was annotation-based, so the proposal's been updated to lean towards the annotation. There isn't too much on an alternative for that, so I can add another bit to the proposal that says an alternative is to use a spec field, and I'll try to add a section to weigh up the pros and cons of that.
B: The second part was about how the cluster infrastructure becomes ready. Part of this proposal is to reuse the existing cluster infrastructure resource, like an AWSCluster, for example. As part of the Cluster API contract, before any machines can be created, something has to mark the status of that resource to say that the infrastructure is ready.
B: Initially, it's proposed that you'll have to write something yourself, like some sort of controller, that will mark that as ready for you when your external provisioning is done. There was also a question like: what if I'm using something like Terraform and I don't have the ability to write a controller, could I have some tooling to mark that ready manually? That has been documented as a future piece of work: adding some sort of clusterctl extension, or kubectl extension, or some other tooling that would allow someone to do that manually. That's all discussed in future work.
C: It's not directly related: what I'm doing is mostly for bootstrap providers, so it's a contract for the bootstrap providers, and then it's up to infrastructure providers to leverage it or not. It's completely optional. The signal is there: they can consume it if they want, or they can not consume it. So in this case, if you're bringing your own infrastructure, you could have something that does that check, or not.
A: Does somebody else have any thoughts? Fabrizio?
D: Okay, a little bit on this. My personal opinion is that I don't have strong preferences with regard to a field versus an annotation. What I would prefer is that we are not, let me say, embedding into all the infrastructure providers the handling of an external provisioning system. Our infrastructure providers, in my opinion, should remain as simple as possible.
B: Yes, to sort of add to that: the proposal at the moment states that if this annotation is there, then the provider's infrastructure cluster controller should just ignore it completely.
B: So, no reconciliation whatsoever. Whatever is managing this external infrastructure should be doing that, and doing any checks that are necessary to make sure the infrastructure is actually ready. So, in terms of the changes to the providers, it should just be a case of: if you see this annotation (or spec field, whatever), just ignore it. It should add very little extra complication to the current implementations.
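That "just ignore it" contract change can be sketched in a few lines of Go. The annotation key below is a placeholder for illustration; the actual key is whatever the proposal settles on:

```go
package main

import "fmt"

// Placeholder annotation key for illustration; the real key is defined by
// the externally managed cluster infrastructure proposal.
const externallyManagedAnnotation = "cluster.x-k8s.io/managed-by"

// isExternallyManaged reports whether the infrastructure cluster object
// carries the externally-managed marker.
func isExternallyManaged(annotations map[string]string) bool {
	_, ok := annotations[externallyManagedAnnotation]
	return ok
}

// reconcileCluster sketches the only change a provider needs: skip all
// reconciliation when the object is externally managed.
func reconcileCluster(annotations map[string]string) string {
	if isExternallyManaged(annotations) {
		// Whatever manages the external infrastructure is responsible for
		// provisioning it and for marking the resource ready.
		return "skipped"
	}
	return "reconciled"
}

func main() {
	fmt.Println(reconcileCluster(map[string]string{externallyManagedAnnotation: "external"}))
	fmt.Println(reconcileCluster(map[string]string{}))
}
```

The early return is the whole change: providers keep their existing reconcile logic untouched for normally managed clusters.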
A: Okay, so I think there's nothing else there; we're at the end of the agenda here, unless anybody wants to add anything at the end.
B: It would fulfill the spec of the AWSCluster resource, in this case, with the details from the infrastructure that it provided, and then that is basically used as a point for the machine controller to consume that data. So we're not actually using that resource as a "this is my desired state, something is going to reconcile it" declaration, although theoretically you could, using this model. It's more of an informational resource, using the existing interface that we already have. I don't know if that helps answer the question.
B: So there aren't really two different things; we're trying to consolidate this so that we keep that same interface and that same contract. So the AWSCluster resource, the custom resource, at this point just becomes a data structure. In terms of, like, if you're using something like kOps: there's no controller running kOps that's watching this.
B: Something just populates that data in there at the beginning. In the normal case, where you're using the kubeadm control plane provisioning, when the cluster infrastructure provider creates the VPC, it sets the VPC ID into that spec field; in this case, it would just be coming from whatever kOps has created. But we're still going to use the same resource, so there's no duplication of the data structures, at least.
B: Yeah, so, the status: to be usable for creating machines, the cluster infrastructure has to have a ready condition on the status. Because it's very hard to update the status of resources using kubectl or something, we're going to need something to do that long term. So, in the first implementation, you would have to write your own controller.
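The essential step of that user-written controller can be sketched with simplified stand-in types (the real AWSCluster spec and status carry many more fields): once the out-of-band tooling (Terraform, kOps, and so on) has provisioned the infrastructure, the controller copies the details the machine controller will consume into the spec and flips the ready status.

```go
package main

import "fmt"

// Simplified stand-ins for the provider resource; the real AWSCluster types
// live in cluster-api-provider-aws and have many more fields.
type AWSClusterSpec struct {
	VPCID string // normally set by the infrastructure provider; here set externally
}

type AWSClusterStatus struct {
	Ready bool // Cluster API waits for this before creating machines
}

type AWSCluster struct {
	Spec   AWSClusterSpec
	Status AWSClusterStatus
}

// markReady is the essential step of the user-written controller: copy the
// externally created infrastructure details into the spec, then mark the
// status ready so machine creation can proceed.
func markReady(c *AWSCluster, vpcID string) {
	c.Spec.VPCID = vpcID
	c.Status.Ready = true
}

func main() {
	c := &AWSCluster{}
	// e.g. the VPC ID reported by a Terraform output or by kOps.
	markReady(c, "vpc-0123456789abcdef0")
	fmt.Println(c.Status.Ready, c.Spec.VPCID)
}
```

In a real controller this update would go through the status subresource of the API server rather than a plain struct write, which is why plain kubectl is awkward for it.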
A: No? Okay. Thanks to Naadir for taking some notes (sorry if I got your name wrong there), and I think I saw a couple of other folks doing it as well, so thank you. Otherwise, have a great day; I think that's the end of the meeting today.