From YouTube: 20200115 - Cluster API Office Hours
A
Hello, today is Wednesday, January 15, 2020. This is the Cluster API office hours meeting. Cluster API is a subproject of SIG Cluster Lifecycle. We do have an agenda document, which I'm sharing right now. Included in that is meeting etiquette, which basically boils down to: be kind to each other, and please use the raise-hand feature in Zoom if you'd like to talk about something, and I will call on you.
We also have an attendee list in the agenda document.
A
Please add your name if you are here, and if you have any particular discussion topics, demos, POCs, PSAs, etc., please add them to the appropriate section in the agenda. First up, something we started doing recently is a welcome and introduction for new attendees. So if this is your first time joining, welcome, and if you're interested and want to introduce yourself, I'll give you all a minute to do so, but no obligations. I'll pause for a few seconds to see if anybody wants to say hi.
B
C
D
So it could be like an infrastructure template or a virtual template. And then we have CAPA v0.4.8, which now includes a new feature that passes the EC2 tags through to the instance volumes as well, and a few bug fixes, one of which is a nil pointer exception that would happen when we couldn't get the root device IDs from the API. Yes, so that usually, I think, happened on delete, you know, if there was a race condition when you create a machine and the API returned nil for that.
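The nil-pointer fix just mentioned can be sketched as a simple defensive check. This is a hypothetical illustration (the type and helper names are made up; the real CAPA code works against the AWS SDK types), showing the general pattern of never dereferencing an API-returned pointer without a guard:

```go
package main

import "fmt"

// Instance models the subset of an EC2 instance description we care about.
// In the real provider this comes from the AWS SDK; here it is a stub.
type Instance struct {
	RootDeviceID *string // may be nil if the API races with instance creation
}

// rootDeviceID safely extracts the root device ID, returning ok=false
// instead of panicking when the API returned nil.
func rootDeviceID(i *Instance) (string, bool) {
	if i == nil || i.RootDeviceID == nil {
		return "", false
	}
	return *i.RootDeviceID, true
}

func main() {
	id := "/dev/xvda"
	if v, ok := rootDeviceID(&Instance{RootDeviceID: &id}); ok {
		fmt.Println(v)
	}
	if _, ok := rootDeviceID(&Instance{}); !ok {
		fmt.Println("no root device yet, requeue")
	}
}
```

The caller can then requeue the reconcile instead of crashing the controller.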
D
So we fixed that as well, and we also added Tilt support, if you're interested in developing locally, both for alpha 3 and alpha 2. We also have a PSA, slash a question for everybody here: we're trying to do more frequent releases, and one thing that we have also been working on is, I don't mean a million different release processes, but just the same set, keeping in mind for everybody to target, like setting an RC1 date. And I've been thinking next month, so like February 4th.
A
E
Here, it's basically that the CAPI controller needs eyes for review, and we want a few more tests, but yeah, I wanted to give a heads-up for reviews. And then I think the next steps for us would probably be to implement this provider further on the CAPZ side. So that would be something we will probably be looking at soon once we get that version. Thanks.
F
And maybe you've already done this, and so it's not a big deal, but I'm wondering: is there prescribed guidance yet by CAPI to update CAPD and its tests to account for changes that occur in Cluster API, so that there is coverage? Much like, if you were to do a PR that affected code that should be documented in the book, you would ideally update that in the same PR. So I'm curious.
F
A
E
E
E
A
Yeah, CAPD is meant to ensure that the code is compiling. So if we change something in one of the library methods within CAPI and it breaks something in the CAPD code, we obviously want to fix that. Behavioral testing is important too; I don't know that it can cover everything, though, and machine pools are generally a behavior that's specific to cloud providers, where they have a primitive that does what a machine pool is supposed to be doing. Chuck, yeah.
G
F
Chuck, I'll ping you on this, and I was actually going to open an issue. I don't have anything working yet, but I started on some type of very lightweight driver for kind to use with CAPD. I wasn't thinking specifically of this PR; I just identified that as a gap, where it would be useful to be able to put that in place. Cool.
G
I
A
G
G
Oh, I was going to say that it doesn't have a direct correlation to the same concept of machine pools in other cloud providers. I think we could probably make one like what Andrew was talking about, but as far as this PR goes, I don't think we need to block on that testing in CAPD right now.
J
A
A
K
Hey everyone, yeah, I wanted to talk about the documentation. I just updated our CAPZ documentation for Azure, and I had to make sure it worked with the Cluster API book, and then we also have a development guide. So the docs that I wrote were hopefully for first-time users who want to pick up Cluster API and use it with Azure. But I was wondering if we had any standard on what exactly goes into the Cluster API book versus what goes into our provider repos.
K
And where do we expect users to come in first? Do we expect them to find the Cluster API book and then come to our docs? Because this is the flow right now: ideally, they would find the Cluster API book before finding the provider. They would realize that to get the management cluster, they needed the Azure docs. They would go to the Azure docs and then go back to the Cluster API book to create the workload clusters. So it's a little...
K
It's a lot of jumping, from my perspective, so I wanted to discuss that: like, should all the Azure docs be folded into the Cluster API book, which would mean we should have the cluster setup and all the credentials material in the Cluster API book too? Or do we expect people to kind of jump back and forth, as it is right now?
K
K
A
D
So you can have the source live in two different places, but then, when we serve it, it can be in one place. As long as it's in GitHub, and I know it's a big ask for it to be in GitHub, it could be served in one place. So if you're worried about Azure, I would say: let's try to make that into one page.
D
Let's try to bring in as much information as possible to make sure that a user doesn't have to jump back and forth. I know I would be frustrated if I had to jump back and forth. And if you have any other ideas to make it better, open issues, or we can work in Slack together. I'm happy to hand the mdBook off to someone else.
F
Thanks. So I pointed someone at this earlier this week, a person at VMware; I don't believe they're on the call. I mentioned this to a colleague, Stewart Clemens; he's in our doc group and he's working on some documentation for something else, but it's going to involve a lot of the FOSS products, or FOSS projects. And I said, you know, looking at the docs, there are gaps there, and I think that's what you highlighted, and I said it would be great if we could get some VMware or other folks on it.
F
F
I can reach back out to him, Andy, if you haven't engaged him yet, but this is exactly the kind of thing that I was talking about with him, and he agreed. So I know of at least one doc resource, but it would be great to get other doc resources from other companies involved as well, other contributors.
A
A
B
Yes, hello everybody again. So we are currently reimplementing part of our control plane management, and as we want to kind of implement Cluster API and move our implementation closer to upstream, the question for us is whether we can already use something. We've seen that there is a KubeadmControlPlane spec, and one of the goals, is it to be a generic control plane provider template or spec? For now it seems to be very specific to kubeadm.
H
A
A
J
So yes, we have a specification on the master branch, in the docs book, under source/architecture/controllers/control-plane.md, so I think that's what you were talking about. It has a description of how a generic control plane provider should work, with some discussion of how it works in the kubeadm-specific case. In terms of building your own, the one in the repo is kubebuilder-based.
J
A
So +1 to what you just said. Indeed, this document is sort of half generic and maybe half, or a portion, kubeadm-specific. We are going to be moving the generic portion of this content that's under the controllers section to the provider implementers section. You can see, for example, if you wanted to write a machine infrastructure provider, it talks about the requirements for the data types, and it talks about the required behavior. All of this is entirely generic; there's nothing that's specific to Azure or vSphere or AWS or anything.
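As a rough illustration of that provider contract, a machine infrastructure resource boils down to a few well-known fields, regardless of cloud. This is a sketch only; the `ExampleMachine` kind is hypothetical, while `spec.providerID` and `status.ready` follow the documented Cluster API conventions:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: ExampleMachine            # hypothetical infrastructure kind
metadata:
  name: my-machine-0
spec:
  providerID: example://i-0abc123   # set by the provider once the VM exists
status:
  ready: true                       # required: signals the infra is provisioned
```

Everything else on the resource is provider-specific and opaque to the core controllers.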
A
So we want to try and do something very similar for the control plane; we just haven't gotten it in there yet. In terms of the kubeadm control plane specifically, that is a non-generic, very specific type that is meant to do machine-based control planes using the kubeadm bootstrapper that's part of Cluster API.
A
So if you have your own bootstrapper, or you have your own way of managing the control plane, then what you would want to do is take a look at what's in here in terms of what a generic control plane controller is supposed to do and what its behaviors should be, and work on implementing that. But it doesn't have to be kubeadm-based, and it doesn't have to be machine-based. It's really up to you; whatever you do, you'll need things like a ready field on the status.
A
B
Yeah, thank you, it helps. I wasn't aware of this document. I was reading the kubeadm control plane spec in the specs section of the repo, and, maybe it's a leftover from another story, but one of the goals says something like "provide a generic implementation". So I was totally confused by the language there.
A
A
I would say, just as in the book, and talking about it in a bit more detail: there's not much in the way of interaction that's expected between a generic control plane and the cluster. There's basically a ready field, and there's a control plane endpoint field that gets set on the cluster object, and those two together basically give you access to a control plane with a URL.
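Those two pieces of the generic contract can be sketched roughly like this (a simplified fragment; field placement follows Cluster API v1alpha3 conventions, and the host/port values are placeholders):

```yaml
# On the control plane object (whatever its kind), the provider reports readiness:
status:
  ready: true
---
# On the Cluster object, the control plane endpoint gives consumers a URL:
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: my-cluster
spec:
  controlPlaneEndpoint:
    host: 203.0.113.10
    port: 6443
```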
A
There's also some behavior around the kubeconfig secret; the control plane provider is supposed to be generating that, if you're using one. So in terms of all of the things that the kubeadm control plane code is doing, a good chunk of it is kubeadm- and machine-specific, and the parts that I just mentioned are the generic ones that you would need to take care of. We will be trying to make the documentation better, but it's not there yet. Okay.
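For reference, the kubeconfig secret being discussed conventionally looks something like the following in Cluster API (the name follows the usual `<cluster-name>-kubeconfig` pattern with the data under a `value` key; the base64 payload is elided here):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-kubeconfig     # <cluster-name>-kubeconfig
  namespace: default
data:
  value: <base64-encoded kubeconfig>   # generated by the control plane provider
```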
L
Yes, I've been tracking the implementation of the kubeadm control plane, and as far as I understand it, it's implemented up through delete, and the next part, the big part, is the actual orchestrated upgrade of the control plane, or downscaling of the control plane. Anyone, feel free to correct me. Yes, cool. So from what I see, there is a list of health checks that may be performed, so I'm trying to understand it; could someone post a link to it? Yeah.
L
So to me, the hard part about the orchestrated upgrade is the health check. If you perform an upgrade and you're upgrading to a new control plane that is broken, you bring down your cluster and you can't recover it. So I'm trying to understand a little bit about what we're going to do here, because I can potentially contribute some of it, yeah. So, just any comments on this.
J
Yeah, yeah, so it's going to be based on the cluster-api-upgrade-tool. One of the changes being made is that we're going to be doing a bit more interaction with etcd, to monitor its status and ensure that it stays healthy. So basically, the crux of the matter is anything that's etcd-related: making sure that's healthy as we scale up and down during the upgrades.
J
So there's a couple of PRs that have gone in to essentially let you wire up to etcd, with a client that can do the health checks on etcd and connect to it, creating a proxy through the API server. But there's going to be some additional logic that we need to put in around retries, which we will borrow from the cluster-api-upgrade-tool.
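A pre-upgrade health check along those lines might be sketched as below. This is hypothetical code (the `etcdClient` interface and check names are invented for illustration); the real PRs use an actual etcd client proxied through the API server:

```go
package main

import (
	"errors"
	"fmt"
)

// etcdClient abstracts the subset of an etcd client we need; in a real
// controller this would be backed by a connection proxied via the API server.
type etcdClient interface {
	MemberIDs() ([]uint64, error) // list cluster members
	IsAlarmed() (bool, error)     // any active alarms (e.g. NOSPACE)
}

// checkEtcdHealth returns an error describing why the etcd cluster is not
// considered healthy, or nil if it looks safe to proceed with an upgrade.
func checkEtcdHealth(c etcdClient, expectedMembers int) error {
	ids, err := c.MemberIDs()
	if err != nil {
		return fmt.Errorf("listing members: %w", err)
	}
	if len(ids) != expectedMembers {
		return fmt.Errorf("expected %d members, found %d", expectedMembers, len(ids))
	}
	alarmed, err := c.IsAlarmed()
	if err != nil {
		return fmt.Errorf("checking alarms: %w", err)
	}
	if alarmed {
		return errors.New("etcd cluster has active alarms")
	}
	return nil
}

// fakeClient lets us exercise the check without a real etcd cluster.
type fakeClient struct {
	ids     []uint64
	alarmed bool
}

func (f fakeClient) MemberIDs() ([]uint64, error) { return f.ids, nil }
func (f fakeClient) IsAlarmed() (bool, error)     { return f.alarmed, nil }

func main() {
	fmt.Println(checkEtcdHealth(fakeClient{ids: []uint64{1, 2, 3}}, 3))
	fmt.Println(checkEtcdHealth(fakeClient{ids: []uint64{1, 2}}, 3))
}
```

The retry logic mentioned above would wrap a call like this, refusing to continue the rollout until the check passes.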
A
C
J
J
J
J
Actually, etcdadm itself is pretty early; it doesn't currently work in the way that kubeadm manages etcd control planes, so we're keeping those two concerns completely separate for now. Then, as etcdadm progresses, there might be some convergence of kubeadm and etcdadm, but I don't want to really mix the streams at this stage.
L
L
Ah, no, so we're reimplementing that via the API calls, yes.
J
So, I was chatting with Andy about this earlier. I would love to consume kubeadm as a library, but it's sitting in kubernetes/kubernetes at the moment, which is going to cause Go module hell, and a lot of the functions in it are not directly consumable as library types. So there is work, I think, upstream in Kubernetes to break kubeadm out into a separate repo, but that's not going to be happening soon, and it won't be ready on any reasonable timescale for us.
G
Yeah, just one comment about etcdadm: Cluster API does support having etcd hosted externally. It will not manage the etcd cluster whatsoever, but if you want to use etcdadm, you can set up an etcd cluster with it outside of Kubernetes and then fill in the appropriate fields as described in the documentation, and you should be good to go there.
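The external-etcd wiring referred to here is done through the kubeadm `ClusterConfiguration`; roughly like this (a fragment only, with placeholder endpoints and the standard kubeadm certificate paths):

```yaml
# Fragment of a KubeadmConfig / control plane spec pointing kubeadm at an
# externally managed etcd cluster instead of a stacked one:
clusterConfiguration:
  etcd:
    external:
      endpoints:
      - https://203.0.113.20:2379
      caFile: /etc/kubernetes/pki/etcd/ca.crt
      certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
      keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```

With this set, Cluster API leaves the etcd cluster's lifecycle entirely to you.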
G
A
Okay, let me get to the right place here. There we go, just refreshing real quick. All right, so we have five open issues that don't have a milestone, and we will start with the oldest one and work our way up. This one's from Daniel: move some functions that fetch and filter machines, or make the functions that fetch and filter machines consistent with one another. Looks like you wanted a long-term priority on here, which tells me this goes into the next milestone. Does that work for you, Daniel? Yeah.
I
A
J
A
All right, moving on. Then we have something from Jay about getting a cluster up and running and the kubeadm bootstrapper. This is alpha 2: the kubeadm bootstrapper just seemed to stop processing things, and as soon as he restarted the pod, the kubeadm bootstrapper immediately started processing the KubeadmConfigs that were sitting around.
F
If Jay's on: you mentioned this to me, and I basically said, look, this is good to know, but without, at the very minimum, logs, there's really not much to be done about it, especially since it's v1alpha2. It's also the case, I told him, that I don't think anyone else has raised this, so it could be related to a watch issue, specifically with CAPD, and restarting the pod fixed it. So, like Jay said, I'd be okay closing it.
F
A
That works for me, so I'm just going to mark it "awaiting more evidence". Sounds good, thank you. Thanks, Andrew. Okay, then we have one from Michael about the quick start documentation not telling you how to determine if the control plane is up and running. I think, priority-wise, this would be nice to have sooner rather than later, and I'll...
A
F
I have an issue that's not filed here, but it's related to CAPI; it was closed on CAPV but seemed relevant, maybe as a milestone issue, so I pinged you on it. It was regarding there being no event metadata around these CAPV machines, and I said that's something I think they're trying to work out in CAPI. I think it's from Jay as well, but I could post it in chat.
F
A
You know, what I would expect and hope for is that during the alpha 4 development cycle, we will properly create conditions in the status portions of all of our custom resources, and while we may keep events or get rid of them, the important information will be persisted in conditions. We do have a very vague open issue for this that basically says: let's figure out conditions. I would love it if somebody would take a stab at looking at anywhere that we have events; CAPA has a ton of them.
A
So that might be a good case study, even if you're not super familiar with what goes on in AWS land. There's a ton of events that CAPA emits, and trying to figure out a way to convert them to status conditions on the AWSCluster and AWSMachine would be a good thing. Basically, anything around brainstorming on how we should handle it.
F
F
A
You can see there's a lot of TBDs in here, so I think, yeah, it needs a proposal, ideally with a case study. Take a look at CAPA, take a look at CAPV, take a look at something where we have some events and some state information, and see what it should look like, or propose what it could look like. It's...