A: Hello everyone! Today is Wednesday, March 3rd, and this is the Cluster API office hours. Hope you're all having a good week so far. I would just like to remind you — or, if you're new to the meeting — that we have a meeting etiquette, so please make sure you follow it. If you'd like to speak, raise your hand using the Raise Hand feature in Zoom, and if you want to bring anything up in this meeting, please add it to the agenda; it's right here. And, yeah, I'll make sure to watch the participant list.
A: So I can call on you if you raise your hand. Okay, we'd like to start with a few PSAs today. The first one is that the Cluster API main branch is now using Go 1.16.
A: So if you are developing on the main branch, make sure that you update your Go version locally. The controller-runtime version has also been updated, to 0.9. That has not been released yet, but if you are a provider using the main branch or nightly builds, that will come to you pretty soon. We also have the Kubernetes dependency, which got updated to 1.21 — also not released yet — so that should only affect infra providers developing on the main branch. And, yeah.
A: I think that's it. And there was a question in chat: Cluster API uses go-bindata — should we switch to the new Go 1.16 embed feature? Vince says yes, and I assume that's not done yet. Vince, did you want to comment on that?
B: No, we still need to get around to that, but yeah — huge plus-one to removing go-bindata, so that we can remove those generated files, because Go will do that at compile time. Awesome.
A: Cool. Fabrizio, you want to take the next one?
C: [inaudible]
A: So, just to clarify: which releases are affected?
C: The 0.3 range, I guess — starting from v0.3.13, which is the first one with KCP remediation. I can check it and add it.
A: And it's not a regression — it was always broken for KCP remediation, ever since it was added. Okay, thanks. And yes, thanks to them for finding and fixing that. All right, I don't see any hands raised, so we're going to keep moving. Let's move on to discussion topics! We have folks wanting to tell us about — ah yes — Flatcar and Ignition support.
D: Sure, I'll start off. I put it on the agenda largely just because it's been something that I've worked on with a few folks for a while now. At this point, rather than just having had discussion topics over the last six months or so, we now have PRs opened.
D: Mateusz, who is on this call as well, has PRs open for Cluster API, and then we're working our way into some of the providers. Particularly the one that, last we spoke with Nadir and others, was about a way to handle Ignition as well as secrets, and we discussed whether or not to make it a CAEP. So, in Cluster API, it's pull request #4172 — I'll paste it into the chat here. Yeah.
D: So this is the one that is about Ignition. Mateusz can go through more details, but it's effectively working through the bootstrap provider, so that any kind of conversions that need to happen from Ignition or otherwise happen there. It also avoids some of the secrets handoff that was a point of contention with the other pull requests.
D: It's actually getting a fair amount of reviewers at this point, so I'm not worried about that. There have been no major pushbacks or otherwise. This is really just raising it to attention: since we've started working on it, even folks that are not using Flatcar, but are interested in Cluster API support for Ignition, have jumped in. So it's actually gotten a lot of really nice, productive feedback, and we're iterating. This is just raising it up.
D: If anybody has questions on implementation choices that we made — ones that aren't so architecturally different as to need a full CAEP process.
A: Awesome. Does anyone have any questions or comments? All right, Marcel.
E: Yeah, hi, just real quickly. I reviewed this pull request as well, I looked at several of the PRs with Mateusz, and we also talked about it separately. The thing that's still a little bit unclear to me — and maybe this is more a question towards the VMware folks —
E: — is this: there's a CAEP, the machineadm bootstrapper proposal, which I also reviewed, and I don't know if the person who created it is in here, but I would like to know if there was any more thought put into it now, also with the open pull request from Mateusz, because in my opinion they address kind of similar concerns around using Ignition. I'm fine with going for either solution; I'm just curious whether there was more discussion.
F: Yeah, so there's some overlap between the two, but the new bootstrapper doesn't only solve this. There are two things that it does. First, it abstracts which OS bootstrapper you're going to use, whether Ignition or cloud-init, and it relies only on a specific subset that is implemented by both — that is, placing files into the right locations — and it hands off to Cluster API.
A: Yes. So this PR adds support for Ignition to the current kubeadm bootstrap provider — it modifies the kubeadm bootstrap provider to support Ignition — whereas that proposal is proposing a whole new bootstrap provider that kind of supersedes the kubeadm bootstrap provider. For now they'll live alongside each other, and the new bootstrap provider allows solving other problems that we've had, like securing the bootstrap secrets. But I think we want both to support Ignition, because if we do eventually remove the kubeadm bootstrap provider, we want the new one to also have Ignition in mind.
G: That's precisely it. Another issue is that some of the hackiness we have around pulling in secrets is not great in cloud-init, and it's broken a number of times: Canonical have — rightly — made changes to cloud-init, while we are using it in unexpected ways, and that's led to breaking changes in Ubuntu that then had to be hot-fixed and feature-flagged to reproduce the older behavior we rely on.
G: So we need to move away from that, and yeah, it's precisely that: whatever we move to has to support both Ignition and cloud-init, so that we don't end up in the same situation where we only support one or the other. And if everyone is happy with making those changes so that the current bootstrap provider supports Ignition, then I'm quite happy for us to proceed with that, because I think the machineadm bootstrapper is going to be a long-term migration process.
E: Okay, it makes sense to me, and I understand the reasoning. The reason I'm asking is that, especially from the machineadm bootstrapper CAEP, this didn't really become clear to me. So it would be really nice if we could add a little bit more meat to the CAEP to explain the reasoning — because I also reviewed it, and it's still a little bit rough around the edges, in my opinion. So that would be nice. Cool, thank you.
A: Thanks for bringing this up. Any other questions from anyone?
D: No, no, that was very productive. Basically, on our side, we got a good feeling from the reviews that are happening — and, you know, it's kind of fun to see it become a party of like nine reviewers tagged on it — but so far we haven't had any major pushback. So learning about its development in parallel with the broader machineadm conversation is good, and we can make sure that the Ignition support is ironed out once that's in the works as well.
A: Thanks. All right, let's move on to the next topic. Marcel, you want to tell us about your experience with focusing on CAPI? I'm very curious to hear this.
E: Yeah, so I kind of want to make this more of a PSA, because we're just getting started. As you know, I've been in this sync now for roughly two months — or maybe more, time flies — and we at Giant Swarm are now focusing more heavily on Cluster API, to the extent that we have started a month-long internal hackathon of sorts, where around 25 developers are focusing purely on Cluster API and on switching us to Cluster API.
E: Why I want to make this a bit of a PSA here is for maintainers — not only of Cluster API, the main repository, but also of the providers — to be aware that there might be more pull requests coming up in the near future. I know, Cecile, you have already spoken to, for example, Nikola or some other people from our side who are working on Azure, and that's going to intensify in the next month, as we are now getting into it on a broader scale and adopting the Cluster API controllers.
E: And not only the types. I just wanted to raise a little bit of awareness that this might happen, and in case there are any problems or anything unclear, feel free to ping us and we can sort it out.
A: [inaudible]

E: We started on a broader scale, so it's currently, yeah, 25 people — 25 developers now working on it.
A
So
welcome
any
questions
from
giant
swarm
or
any
thoughts.
E: There are lots of thoughts, but they're not refined enough to bring up here. We are hitting a whole bunch of issues on several levels — some of which we can solve in a short amount of time, and some of which will obviously take quite a bit longer. Roughly: PKI is probably our biggest headache right now, because we use Vault, and obviously our root CAs are involved there, but that doesn't quite mesh with how Cluster API works right now.
A: No? All right, pretty self-explanatory. So, I think this was the last topic — unless anyone has a last-minute topic they want to squeeze in; if so, please raise your hand so I can know. But I think this will be a shorter meeting. All right, no hands raised, I'm going to call it. Thank you everyone, and I'll see y'all next week.