From YouTube: 20200506 Cluster API Office Hours
A
All right, welcome everyone. Sorry for the delay there; we had some trouble getting the Zoom started. Today is May 6, 2020, and this is the Cluster API office hours. Cluster API is a subproject of SIG Cluster Lifecycle. We are following the meeting etiquette that you can find in this Google Doc here. Please use the raise-hand feature if you want to participate, and go ahead and add your name to the attendee list and any topics you want to discuss to the agenda. So, we always start with welcomes and introductions.
A
All right, if that's all, we'll move on. Before we move on to PSAs, I just want to point out, for anyone who's new or who hasn't been here in the last few weeks, that we have this general questions section now. It's right below the demos and PoCs, and it's really intended to help anyone who's implementing a provider or who has any newcomer questions about the project. So feel free to add any questions throughout the meeting; we'll answer them async and take the time to go through them at the end.
B
Thank you. So, we just did the 0.3.5 release, following up from 0.3.4 last week, to get some bug fixes out, particularly in clusterctl init and clusterctl move. Feel free to look at the release notes; there were two bugs that were affecting users. And the most important thing here is the Kubernetes 1.18 support for clusters: we're now able to actually init or upgrade to 1.18 clusters. For Cluster API, I think this will be the minimum version.
B
Okay, proceeding with the other PSAs: we identified a bug yesterday. If you have v1alpha2 and v1alpha3 controllers running at the same time, with the conversion webhook in place, there is an infinite-loop bug, which was actually kind of interesting. There's a full walkthrough in the PR description of how this happened and what we did to solve it.
B
So we need to cut a new release of v1alpha2, which is going to be 0.2.11, and which is also going to be the new minimum version that you will need in order to upgrade to v1alpha3. It will also need a 0.3.6 release, which also fixes another issue in the webhooks that we have. There are a lot more details in the PRs, so if you're interested, please take a look.
A
Right, okay. I don't see any demos for today, so we'll move on to questions. Okay, we have a first one; I hope that's how you pronounce your name. What is the reason behind making the kubeadm commands and files fields of the KubeadmControlPlane spec immutable? This is related to this issue.
G
So, for example, we most recently allowed you to change the image repository that is used in the kubeadm configuration, and we had to write a bunch of code to get that data synced over to a config map in the workload cluster, and do some other manipulations to make sure that it works. So the short answer is: everything started out immutable, and we are totally open to entertaining the idea of evaluating various fields to allow them to be mutable as well.
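For context, here is a minimal sketch of the kind of spec being discussed. The resource names are illustrative, and the comments reflect the behavior described above, where most kubeadmConfigSpec fields were immutable and imageRepository had recently been made mutable:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane   # illustrative name
spec:
  replicas: 3
  version: v1.18.2
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: DockerMachineTemplate
    name: my-cluster-control-plane
  kubeadmConfigSpec:
    clusterConfiguration:
      imageRepository: k8s.gcr.io  # recently made mutable, per the discussion
    preKubeadmCommands:            # the commands and files fields asked about: immutable after creation
      - echo "runs before kubeadm init/join"
    files:
      - path: /etc/example.conf
        content: "example"
```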
G
So, we need the provider ID so that we can ultimately associate a node in a workload cluster with a machine in the management cluster, and we do this in large part to be able to check the node's health. For MachineDeployments and MachineSets, we're able to tell how many ready versus unready replicas we have, and for MachineHealthCheck, we're able to evaluate the state of the node and decide whether its associated machine is healthy or not.
G
So when we're looking at machine health, or MachineSet and MachineDeployment replica status, we're going to map between the provider ID on the Machine and the provider ID on the Node. We could look at the infrastructure machine's provider ID, but we've had it on the Machine pretty much from the beginning, I think.
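As a sketch of that mapping, assuming an AWS-style provider ID (all values here are made up): the controllers match the Machine's spec.providerID in the management cluster against the Node's spec.providerID in the workload cluster.

```yaml
# Machine in the management cluster (excerpt)
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: worker-abc                # illustrative name
spec:
  providerID: aws:///us-east-1a/i-0123456789abcdef0
---
# Node in the workload cluster (excerpt); the matching providerID is what
# lets Cluster API associate this Node with the Machine above.
apiVersion: v1
kind: Node
metadata:
  name: ip-10-0-0-1.ec2.internal
spec:
  providerID: aws:///us-east-1a/i-0123456789abcdef0
```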
G
So the Machine controller is responsible for copying the value from the infrastructure machine to the generic Machine, and as far as I'm aware, that's the only controller that's looking at the infrastructure machine's provider ID. The kubeadm control plane, KCP, or anything that needs to do a nodeRef match, is going to do it against the Machine, actually. I'll pause there. Yeah, I mean, basically, if we need to evaluate the provider ID anywhere, and I apologize, I don't remember if we have other places in the code that are looking at it, they're going to use the Machine.
H
Okay, it sounds like I'll file an issue about this, because I haven't come across the copying behavior; I've just come across the rejecting behavior that says: oh, you don't have a provider ID. And if it can look in the... I mean, to me, the provider-specific code is where it should be. So if the Machine controller does look there, I think it should just always look there. But that's two different issues, I guess.
B
Yeah, I was double-checking: it's not required on the Machine type, so on the Machine spec it is actually optional. It is synced there to provide a way for others to use it as well without inspecting the infrastructure objects, which you'd have to read as unstructured, since they're not typed and things like that. So we do it by contract, but it's not required on the Machine spec, yeah.
B
So for that check, it actually means it's missing, but for the reconcile, if you can open that, yes: that check pretty much says, I don't have a nodeRef yet and I don't have a provider ID, sorry, so I cannot assign a node reference. Which means the infrastructure provider hasn't set it yet, and we were not able to sync it back.
B
It's not required from an API point of view. It's that, after a while, the Machine controller will try to reconcile the node reference, so we're going to query the workload cluster and say: do I have a matching Kubernetes node, and can I link it back to this Machine? But without the information coming from the infrastructure provider, we don't know how to make that association, and so we wait for that data to be ready before syncing it back. Yeah.
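In other words, once the infrastructure provider sets the provider ID and it is synced to the Machine, the Machine controller can find the matching Node and record it. A sketch of the resulting Machine, with made-up names:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: worker-abc
spec:
  # Copied from the infrastructure machine by contract; optional in the API,
  # but the nodeRef cannot be reconciled until it is set.
  providerID: aws:///us-east-1a/i-0123456789abcdef0
status:
  # Set by the Machine controller after querying the workload cluster for a
  # Node whose spec.providerID matches.
  nodeRef:
    apiVersion: v1
    kind: Node
    name: ip-10-0-0-1.ec2.internal
```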
F
Sorry, I didn't have time to add a link to this, but in the GCR registry there is the 0.2 version, which kept the image, but the 0.3.0 has not been pushed, and it's probably due to the fact that there has been a move: the provider was previously in its own repo, and now it's in the cluster-api repo. So I don't know.
I
Yeah, if you're trying to use the Docker provider, the experience has changed slightly. There is documentation in the Cluster API book; there's a quick-start page. It would be great if you could open it, though. Thank you. On the quick-start page for Docker, I think there should be a...
A
So I opened a Google Doc proposal. It was initially motivated by wanting to move the etcd data directory to an external disk, not the OS disk, in CAPZ, and I know CAPA has a similar issue, to improve performance. This actually requires some changes to upstream Cluster API, because we need to be able to mount the disk, which requires some changes to the cloud-init we generate.
A
So this proposal is, I guess, twofold. There's one part about implementing a general way to set up partitions and file systems and mount them in cloud-init, as part of the KubeadmConfig spec, and so this part doesn't have any knowledge of etcd; it's supposed to be generic and reusable for other purposes. And then there's the second part, which is how to actually leverage that to put the etcd data on the disk in CAPZ. There is a prototype; I think I put the link. I have a prototype working for both Cluster API and CAPZ, and someone, I don't know if they're in the meeting, has been helping me review, because they also have a prototype for CAPA, so we're making sure that this works for both. But yeah, please review, and I'm going to move this to a PR next Monday, since I've already got a lot of comments, but I'll let it sit for the rest of the week so other people can take a look.
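For a sense of the shape this could take, here is a sketch of a KubeadmConfig using cloud-init style disk_setup and mounts, along the lines the proposal describes. The exact field names and layout were still under review at this point, so treat this as illustrative only:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfig
metadata:
  name: etcd-disk-example        # illustrative name
spec:
  diskSetup:                     # proposed: generic partition/filesystem setup
    partitions:
      - device: /dev/disk/azure/scsi1/lun0
        layout: true
        tableType: gpt
    filesystems:
      - device: /dev/disk/azure/scsi1/lun0
        filesystem: ext4
        label: etcd_disk
  mounts:                        # proposed: mount the labeled disk where etcd keeps its data
    - - LABEL=etcd_disk
      - /var/lib/etcddisk
```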
A
Alright, well, I'm seeing a few plus-ones, so it seems great, yeah. One thing I want to say is: there is a bit of a difficulty about knowing which device, or which disk, the etcd data ends up on. It is an infrastructure-specific problem, though, so I think that shouldn't affect what we do for CAPI in terms of supporting generic file system mounting in the kubeadm bootstrapper.
B
I think, for bits that are not on the agenda, what I would want is some more eyes on the conditions proposal. I think it has already moved to a PR as well, but if folks have time to review it, that would be great, on the PR. And yeah, we'll probably proceed with, I guess, another week of comments, so until next week, and then try to merge it as provisional and start implementing it. Awesome.
B
So for PRs, in the past we did a one-week kind of timeout if there were no comments, or a very low flow of comments. For the Google Doc, it's up to the author to decide: okay, I don't have enough comments anymore, it's time to open up a PR. So that one is more "use your best judgment"; for the PR, it's a one-week time frame. Okay.
J
Yep, so this one is specific to the AWS provider, but I know other people are interested in doing the same for their providers. We had some discussions on Friday with people who are interested in this issue across AWS and Azure, and basically the idea is, to balance concerns around RBAC and so on, to have cluster-scoped principals that can then be delegated to namespaces for use.
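A purely hypothetical sketch of that idea, since no concrete API was shown in the meeting: a cluster-scoped identity resource whose use is delegated to specific namespaces. The kind and field names here are invented for illustration:

```yaml
# Hypothetical resource: kind and fields are assumptions, not a confirmed API.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSClusterRoleIdentity          # cluster-scoped principal (hypothetical)
metadata:
  name: team-a-identity
spec:
  roleARN: arn:aws:iam::123456789012:role/team-a   # made-up account and role
  allowedNamespaces:                  # namespaces allowed to reference this identity
    - team-a
    - team-a-staging
```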
L
Thanks, this is Neal, hi everyone. So I've updated the issue over here, but basically we've decided, well, I've decided, and if you have any other ideas, let me know, that we want to first allow providers to PR docs into their respective repos, and then use a config file to basically pull all those docs into CAPI.
L
So we want users to land in the same place, which would be the CAPI book, whether it's for specific features like machine pools and how to implement them in your own cluster, etc., or networking. And so, with that, we want to create a separate section within the CAPI book for providers, and create basically common doc structures, so it's easier for people to write these docs, for example for machine pool support, etc. And yeah, I think that's it. The biggest request I have is for the maintainers of CAPI.
L
If you could send an email out to the CNCF Service Desk: they have support for docs, and I'm totally fine working closely with them, but I think it would be really nice to get an actual docs person as a resource, someone that knows how to structure docs really well for this, and the CNCF should be providing that support. I've worked with them on other open source projects before. So yeah, that's my request. You can +1 the issue and it should easily go through; feel free to CC me if you need help. Yeah, any comments?
A
That sounds really good. I like the idea of having one place for users to see documents. Mike, you have your hand raised.
M
Yeah, I just wanted to say that I think the proposal sounds great. I guess, from the experiences I've had in the past with this kind of thing, it seems like we could create some templates that could live in the main repo, and then we could kind of give instructions on how people could copy those and make...
L
The actual markdown documents would most likely live in the providers, and so we would basically pull those in. So people would PR their own docs within each provider repo, and we would have this config file to pull all those docs into the website, so they would actually live somewhere else. Yeah.
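Something like the following, purely as a hypothetical sketch of that config file; the repo URLs are real, but everything else is invented for illustration:

```yaml
# Hypothetical aggregation config: the book build would clone each repo and
# pull the listed docs paths into a providers section of the CAPI book.
providers:
  - name: cluster-api-provider-aws
    repo: https://github.com/kubernetes-sigs/cluster-api-provider-aws
    docsPath: docs/book/src
  - name: cluster-api-provider-azure
    repo: https://github.com/kubernetes-sigs/cluster-api-provider-azure
    docsPath: docs/book/src
```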
M
I just want to make a comment on the technical issues. I know there are several people in this community who have also worked in the OpenStack community, and it might be worth looking at the way they assemble their docs, because they did something very similar: the individual project teams could control the documentation, and then the centralization of the docs took place through a publishing mechanism that basically pulled it all in. So there might be some good ideas to kind of crib from there. Okay.
A
Right. Do we have any other discussion topics, anything that you forgot to put on the list?
A
All right, thanks everyone. All right, I guess we'll move on to issue triage now, my favorite part of this meeting that I still don't know how to do. All right, so, to issues.
A
All right, so we have a few test framework ones. Feel free to raise your hand at any point if you want to add anything about triage. Yeah, okay, so this one is about adding a machine helper to wait and validate the status of a machine.
A
This one is about modifying kubeadm commands during an upgrade. I think NZ brought up some reasons for why this is the way it is at the moment; maybe you could summarize? You'd have to do that check, and then, I don't know, do we even want to put that in the milestone? I'm not sure.
J
I just asked for it to be made. So it's to do with the experimental control plane join: because of how the pre-flight checks operate, you need to have the encryption provider config available in the static pod manifest for the API server to come up successfully on a join, I think. So it just gets blocked and you can't use KMS. I think it needs a bit of investigation.
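For background on what the API server expects here (this file is not shown in the meeting): kube-apiserver is started with --encryption-provider-config pointing at a file like the sketch below, and that file has to exist on a joining control-plane node before its API server can come up.

```yaml
# Standard Kubernetes EncryptionConfiguration (apiserver.config.k8s.io/v1);
# the key material below is a placeholder.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}   # fallback: reads unencrypted data
```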
A
Okay, this one: clusterctl ignores later files defining the kubeconfig variable. Seems like that should be in the current milestone. Oh, I started to type, actually, but sure enough it already got removed; I guess it didn't work. I can't triage it, I don't hold most of the permissions, but it sounds like it's already been triaged. Yep.