A
Welcome. Today is Wednesday, the 24th of February 2021, and this is the Cluster API project community meeting. Cluster API is a Kubernetes SIG project, and we are following the community guidelines for this meeting. If you'd like to talk, please raise your hand and, as always, treat everyone as you would expect to be treated; in case it's not explicit, we mean kindly. So to start with, it looks like we have some PSAs.
A
I don't know who added these. Does anyone want to take credit for that?
B
Yeah, this is a follow-up from last time. Given that we haven't gotten all of the backlog for 0.4.0 in place yet, we're running a little bit behind, so the new proposed date for breaking changes would be March 1st. So by Monday next week we should have either issues or a CAEP proposal open, and we keep track of the Google Doc CAEPs in here, usually at the top of the doc.
B
So if you do have a breakage that you would like to propose, please open an issue before then, and then we'll put it to a vote, I guess, on whether we want to make it release blocking or not. For new features it's kind of along the same lines, if you have a CAEP proposal with non-breaking changes.
A
Okay, great. So I guess the March 14th one seems a little easier to hit. If anybody out there has ideas for a breaking change that you haven't yet shared with the rest of the group, you probably want to do that relatively soon, because I think March 1st is next Monday or something. So yeah, if anybody's got those out there, raise something in chat, or open an issue, or share your docs somehow.
A
All right, anything else on that, Vince? Okay, cool, so we'll move on to discussion topics. I've got the first one here. There's been some internal work at Red Hat to build some tooling around CAPI, and the teams working on that ran into some issues when trying to vendor in certain pieces of the API. They asked us if there was any plan in the upstream, you know, in the community here, to maybe make those API types more modular, or put them in a way that would be easier for a third party to then include in their Go project. Part of the issue was that they hit some snags around dependency loops when they were trying to bring in the API from the CAPI repo, and then also trying to bring in pieces of some of the individual providers and whatnot.
A
So I just wanted to bring the question here to see if maybe there was any history on this from the group, or if anybody had been thinking about trying to isolate the API types in a way that they could be more easily imported. So yeah, I don't know if anybody's thought about this before or raised this.
A
Or something else? So, as I understand it, there was a team building an experimental component, and they wanted to vendor the Cluster API types into the controller they were building, and then they also wanted to vendor in, I think, specifically the AWS provider code or something like that. They had built a binary that was bringing these two things together, and what the person doing the work said was: first, they ran into a cyclical dependency between the AWS provider and the main CAPI repo that they were bringing in. I'll have to gather the details on that.
A
I was still trying to understand exactly what they were doing, but that was one problem they had, and that kind of led him to ask me: was there ever an intention from the community to maybe generate the API types from an OpenAPI spec, or even make the API package an actual Go module of its own that could be imported, you know, just on its own?
B
Yeah, it sounds like a misconfiguration with Go modules. As Maduro pointed out, if you're using the latest commits from both main branches, you can't do that, because CAPA is still using the v1alpha3 types while CAPI is on v1alpha4, and Go modules will try to import both and things will break. So yeah, the versions have to match. I'm not 100% sure, but you should be able to import both if the version tags do match, especially for the API types, and especially because we keep most of the dependencies lined up.
B
For example, controller-runtime will be the same for the same series, if that makes sense. But yeah, we do have examples: internally at VMware, downstream, they are able to import the code and it works. Those types do work. In terms of making the API a separate module, that would be really, really challenging.
B
Yeah, I think things will get better, as Davis mentions. Once we get to v1, things should definitely be more stable. But yeah, if that's not working, let's try to get down to why, but it should definitely work. The repositories are meant to work together, or at least there's a matrix of versions that are meant to work together.
A
Okay, so it sounds like there's not really much to do or change here. Perhaps we need to get a little better internally at how we vendor these projects in, and mind the versioning a little bit, but really there's nothing we can do to make the code more modular, is what I'm hearing, aside from breaking it out into a separate, you know, API package, which would be kind of crazy.
B
Yeah, the only other thing I'm thinking of is that controller-runtime also has to match the version. So if you're trying to use some Operator SDK versions, I think those will be too new for CAPI v1alpha3. So yeah, you need to kind of figure that out, but I'm happy to take this offline.
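To make the version-matching point concrete, here is a minimal, hypothetical go.mod for a downstream tool vendoring both CAPI and the AWS provider. The module path and the version numbers are illustrative assumptions, not a recommendation; the real compatibility matrix should be taken from each repository's own go.mod.

```
// Hypothetical go.mod sketch. The key idea from the discussion above: both
// modules must come from the same compatibility series (here the v1alpha3
// line), never a mix of main-branch commits, and controller-runtime must be
// the version that series expects.
module example.com/capi-tooling

go 1.15

require (
	sigs.k8s.io/cluster-api v0.3.14 // illustrative: a v1alpha3-series tag
	sigs.k8s.io/cluster-api-provider-aws v0.6.4 // illustrative: built against CAPI v0.3.x
	sigs.k8s.io/controller-runtime v0.5.14 // illustrative: must match what both expect
)
```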
A
Okay, cool. Yeah, maybe I'll try and get a few more details internally, and then maybe we can meet up sometime, you know, later next week or something, to talk about it.
A
Okay, I guess, if there's nothing else to add on that topic: looks like Fabrizio, you've got a couple of topics here, starting with cert-manager. So why don't you take it away.
D
A
Okay, great. I guess, yeah, if people are interested in that topic, check out the linked issues there.
A
Okay, I'm not seeing any hands or questions on those, so I guess we'll move on to the next topic: Nadir, talking about some new CAEPs that are coming.
C
Hi, so yeah, we filed two new CAEPs. They deal with two sort of ongoing problems that we've had, like the reliance on cloud-init. Obviously not everyone is using cloud-init; Red Hat is using Ignition, as is Flatcar Linux, which means you're not able to use some of the core Cluster API controllers, such as the Kubeadm control plane and Kubeadm bootstrap providers.
C
So there's a proposal to basically have a binary shim where we don't care how it runs, as in you can execute it using Ignition or cloud-init, and then it does some of these core bootstrap requirements. It's designed to be pluggable, so it's got kind of a data model, and you can add your own things to it. I guess it kind of looks like some configuration management systems of the past.
C
It also provides a mechanism to secure user data by putting it into cloud storage, downloading it, and doing encryption. I've had some comments from Mosh; thanks a lot for pointing out that SOPS is now in Go, so we can consume that directly. The second one I'll let my co-author talk about. They're both quite broad, and there's a lot of detail that still needs to go into them.
C
So we still need to add a lot more fine-grained detail, but please start having a look at the broad idea and see if the approach is the way we want to go, and we'll continue adding details as we get towards March 14th. I'll hand over for the second one.
E
The second proposal is basically to enable secure node attestation. It's going to be done through the same bootstrapper that Nadir was talking about earlier, and the idea is that, through this bootstrapper, we'd do a CSR with some attestation data. Then, on the validation side, Cluster API would propose a generic controller that can be vendored by infrastructure providers.
E
This controller is going to have some generic bits already to do the CSR validation (some of these validations are already linked in the proposal), and it's going to be up to the infrastructure provider to implement a Go interface to basically validate the attestation data that is provided. So yeah, this is still ongoing work; we still need to add some details. Please have a look and see if it addresses some of your use cases in terms of security, and some of your concerns.
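As a rough illustration of the vendorable-controller idea described above, a provider-implemented validation hook might look something like the sketch below. All names here are assumptions for illustration only, not the proposal's actual API.

```go
// Hypothetical sketch: the generic, vendorable controller would own the CSR
// plumbing, and each infrastructure provider would plug in a validator.
package attestation

import "context"

// Validator is the kind of Go interface an infrastructure provider might
// implement to check the provider-specific attestation data carried with a
// node's CSR.
type Validator interface {
	// Validate returns an error if the attestation data does not prove that
	// the named node is a machine the provider actually created.
	Validate(ctx context.Context, nodeName string, attestationData []byte) error
}
```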
F
Yeah, I have a quick question on the OS-independent machine bootstrapper. Just briefly, did you check out the pull request by Kinvolk about adding support for generating bootstrap data in Ignition to the kubeadm bootstrap provider? Because it seems like the goals are kind of similar: the OS-independent machine bootstrapper wants to use Ignition and wants to enable that, while this pull request also does the same, in a different way.
C
Yeah, I remember the previous pull request. We still have the additional problem of securing user data across different infrastructure providers. For control plane secrets, you basically don't want the private key material of the CAs visible in, say, the EC2 user data, or Azure user data, or VMware guest info. So we kind of need a mechanism to solve that as well, one that works whether or not you're using cloud-init or Ignition.
F
Okay, I'm just asking because I would find it interesting, or desirable, if we can find a solution that is kind of consistent, so we don't implement similar things in different places with the same goal in mind, and then suddenly have two ways to do something that's already highly specific, and that are then kind of competing.
C
I totally agree. I think I did share a version of this doc the other week with someone at Kinvolk, so it's basically a matter of converting this into GitHub. But I totally agree with you. So the idea is to have a new bootstrapper that's separate from the kubeadm bootstrap provider, and to make sure that we have good contracts with the infrastructure providers. Right now, what Cluster API AWS is doing for cloud-init is extremely hacky and fragile, and it won't work for Ignition at all.
C
So if we just add Ignition to the kubeadm bootstrap provider, we sort of defeat some of the security stuff that we got through the hackery in cloud-init.
F
Okay, okay, seems fine to me. Let's just keep the discussion going, and we can keep talking to Kinvolk as well and then see if we find alignment. Yeah.
E
Yeah, just to add one more thing: in the past we've had some issues with cloud-init, and this proposal is also going to enable us to narrow down specifically what we're depending on from cloud-init, Ignition, and any other OS bootstrapper, which is basically the ability to drop specific files into specific locations. Cluster API would then be able to take care of the rest.
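To illustrate how narrow that contract could be, here is a hypothetical sketch of a file-drop data model. The names and fields are assumptions for illustration, not the proposal's actual schema.

```go
// Hypothetical sketch: if the only thing required of cloud-init, Ignition,
// or any other OS bootstrapper is "place these files at these paths",
// the shared contract can be as small as this.
package bootstrap

// File is one unit of bootstrap data to materialize on the node.
type File struct {
	Path        string // absolute path on the host, e.g. "/etc/kubernetes/pki/ca.crt"
	Permissions string // octal file mode, e.g. "0600"
	Content     []byte // raw file contents
}

// Writer is what each substrate (cloud-init, Ignition, ...) would implement.
type Writer interface {
	WriteFiles(files []File) error
}
```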
B
So the proposal for the kubelet attestation suggests a vendorable controller, and I would like to share some experience that we had downstream with vendorable controllers: that might become a pain very quickly, because each time there is a bug in the controller, and because all providers need to vendor it, we will need to update dependencies across each provider. And my question is: how will this affect the release lifecycle?
E
So the difference here is: if there's an issue in the generic controller, then yes, we're going to have to do a security release, as for any other security-related issue. The only extra step is that infrastructure providers would have to update to the next CAPI version, and frankly, at least for some of the providers, that's what we do already.
E
Usually, when there is an issue or a security issue, we tend to bump the version of CAPI that we vendor right away. So the extra step here, or the cost of having a vendorable controller, is the fact that infrastructure providers have to, you know, update their dependency when they want to pull in a new version of that controller.
A
So yeah, some new CAEPs for folks who are interested in getting more involved with the project. It sounds like these CAEPs could use some eyeballs and some suggestions, or at least comments. And it looks like we have one more topic for today: Cecile, talking about some CAPZ types here.
G
Yeah, this isn't specifically CAPI, but I just wanted to call this out quickly here.
G
Since, you know, the audience might be interested: we're very close to merging the v1alpha4 types in CAPZ, so I just want to say thanks to Nader for all his hard work there. If you're interested in the CAPZ provider, we're going to be accepting breaking changes after this PR merges into the main branch, and we're going to ask all the current PRs to re-target their changes to the v1alpha4 types if they're ongoing against the v1alpha3 types. So yeah, and also thank you to everyone who participated in updating the developer docs in CAPI for the v1alpha3-to-v1alpha4 provider documentation.
A
So I guess, any questions or comments on that topic? I don't see any hands. Are there any other topics people would like to bring up? We've reached the end of the meeting quite early, so we can either take the time back or, if folks have ad hoc things they'd like to bring up, now's the time.