From YouTube: 20200122 - Cluster API Office Hours
A
Hello, today is Wednesday, January 22nd, 2020. This is the Cluster API office hours meeting. Cluster API is a subproject of SIG Cluster Lifecycle, and this meeting is being recorded. We do have meeting etiquette, which boils down to: let's all be kind to each other, and if you've got something to say, please use the raise hand feature of Zoom. And finally, we do have this agenda document. If you have discussion topics, demos, PSAs, etc., please add them to the document below. Before I get started, let's make sure that we all add our names to the attending list, please. And with that, one thing that we've started doing recently is saying welcome to new attendees. So if this is your first meeting, thanks for joining us, and if anyone is new and interested in saying hi with a brief introduction, feel free to unmute and introduce yourself. If not, we'll move on to the discussion topics.
B
Yeah, thanks Andy. Actually, I just wanted to get a clarification on the interaction, or the sequence of actions, involved when we use the various controllers, like the CAPI controller, the CABPK controller, and the infrastructure controller. What is the sequence of actions that is involved? I was looking at this link for the controller collaboration, and specifically the cluster provisioning process step. I can see that here we have the explanation of the sequence of actions that happen, but I wanted to know more.
A
Sure, that's a really good question, so I'm going to hop over to the documentation for our master branch of Cluster API, because that is the most up to date for what's in development right now and what we're working on for v1alpha3, and I'll point you to a couple of places. I do think that it would be useful to try and consolidate what we have in section 3.3 here, under Controllers, with 3.4, Provider Implementers, and I know...
A
We had a topic last week with Ria about trying to evaluate our documentation organization, so I won't get into that right now. But I will point you, for example, at the cluster controller: it does have a flow diagram here for what the expectations are for the cluster controller, what it interacts with, and what it's waiting on. Some of these diagrams may be slightly out of date based on changes that we've made on the master branch.
A
But if you combine something like the cluster controller with the cluster infrastructure provider implementer specification, you'll see some behavior here as well for a provider implementer. So, for example, one thing you'll see here is that when a cluster infrastructure provider, such as the Azure cluster provider, the AWS cluster provider, the vSphere cluster provider, and so on, sees a new, updated, or deleted resource, it first checks to see if it's deleted.
A
If it's not deleted, then it looks to see if it's owned by a Cluster, and if it's not owned by a Cluster, you follow this arrow and it basically says: I'm done, I just need to keep waiting. And then if you go back and look at the cluster controller, you'll see... well, unfortunately, this is a bad example, because this doesn't show it.
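The flow just described, check for deletion first, then for a Cluster owner, can be sketched roughly as follows. This is a simplified illustration of the documented expectations, not the actual provider code; the function name and return values are made up for the example:

```go
package main

import "fmt"

// reconcileAction mirrors, in simplified form, the checks a cluster
// infrastructure provider's reconciler makes on each event: handle
// deletion first, then wait until a Cluster owns the resource, and
// only then reconcile the external infrastructure.
func reconcileAction(beingDeleted, ownedByCluster bool) string {
	if beingDeleted {
		// Tear down the external infrastructure, then remove the finalizer.
		return "delete infrastructure"
	}
	if !ownedByCluster {
		// Nothing to do yet; keep waiting for an owner reference.
		return "wait for owner"
	}
	// Normal path: create or update the external infrastructure.
	return "reconcile infrastructure"
}

func main() {
	fmt.Println(reconcileAction(true, false))
	fmt.Println(reconcileAction(false, false))
	fmt.Println(reconcileAction(false, true))
}
```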
A
But we should update the documentation so that it shows how the cluster controller interacts with the infrastructure resources. We do have some of it listed in here, but in terms of the flow it's not fully put together. I would say the information in the Controllers section here is slightly out of date in places; Provider Implementers is up to date for what we're doing in v1alpha3, or should be, because it was written more recently.
A
Yes, so the bootstrap controller section does talk about what's going on. This came from the proposal, so this is all we have for the kubeadm bootstrapper; we potentially could do more. And then if you were trying to write a bootstrap provider, be it for kubeadm or something else... oh no, the diagram doesn't render on this page, I need to fix that. We do have a diagram for this, once we get it rendering, and it has the same kind of documentation for the spec for writing a bootstrap provider.
A
This is the master branch, which is what I'm displaying here, so you can see the URL is master.cluster-api and so on and so forth. If you just go to the Cluster API book that doesn't have "master" at the front of the URL, that's for v1alpha2, and you'll see that the Provider Implementers section is not there. There's nothing in there about the control plane provider, because that doesn't exist in v1alpha2. So there are different versions of the documentation, depending on which one you're looking at.
B
Yeah, yeah, I will do that. But I mean, more specifically, I was trying to understand the interaction between the CAPI, the CABPK controller, and the infrastructure controller, how they work in sequence, all together. I was trying to get a consolidated view of all of these together. Okay.
A
Sure, I can give an overview. The Cluster resource is meant to give you infrastructure that supports running nodes as a cluster; it doesn't give you the servers that make up the cluster. So for the AWS provider, for example, that will create a VPC, an internet gateway, a NAT gateway, security groups, firewall rules, and so on and so forth.
A
If everything goes well, you will have a one-node control plane that is accessible at the load balancer that the AWS provider set up, and similar things would happen with Azure or OpenStack or vSphere. There may be some slightly different approaches for load balancers and HA and whatnot, but generally speaking, that's how the pieces fit together.
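For reference, the split described above, a provider-neutral Cluster that delegates to a provider-specific infrastructure resource, looks roughly like this in v1alpha3-era manifests. The names and region are placeholder values, and exact fields vary by provider and API version:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: example            # placeholder name
spec:
  infrastructureRef:       # delegates VPC, load balancer, etc. to the provider
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSCluster
    name: example
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSCluster
metadata:
  name: example
spec:
  region: us-east-1        # placeholder region
```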
D
I think this is just something, you know, as we're working on the control plane management for v1alpha3, something that occurred to me: users might be running their own workloads on the control plane machines. So I just wanted to consider whether we should support those workloads, or take them into consideration, when we're scaling up, scaling down, deleting, etc. It sounds like, you know...
D
As far as I can tell from the comments so far, we'll just continue to focus on the control plane itself, and if there happen to be user workloads, that's fine, but we're not going to take them into special consideration. And I think that sounds fine to me.
D
I'm going to, you know, make the docs change in the PR somewhere, just to make that very clear to end users, and then I think we can close this out. But if there are any comments here in the meeting that haven't been made on the issue, I'd love to get that feedback there.
A
Okay, any other topics? Anything anybody wants to talk about before I move on to looking at the couple of issues that we have that are not in a milestone yet? All right, well, if anybody thinks of anything while I'm going through these, let me know. Let's see, let's take a look at the oldest one here: worker node management only.
A
Priority-wise, I think it's probably mostly documentation, in terms of what the workflow is; a little documentation and maybe some code changes pending, if there are any roadblocks. So I would just say probably backlog at this point, given that we are focusing on features for the foreseeable future where Cluster API is fully in charge of things, but we certainly won't stop this from being possible.
H
Sorry, yeah, I just happened to notice this; I don't know if it is expected or if the labels are correct, but it seems like the labels for these metrics Services just say control-plane and controller-manager, and are not really specific, like cluster-api or kubeadm bootstrap controller manager. So I just created an issue to bring it to attention. I don't know if this is the correct behavior or not.
A
So that is it for issues that don't have a milestone. In terms of v1alpha3, one thing that we did mention at last week's meeting, and I think it's worth repeating, is that we're going to try and cut, whether you call it an alpha or beta or a release candidate, we're going to try and tag a release, a pre-release of v1alpha3 for Cluster API, around Valentine's Day.
A
So February 14th to 17th; the 14th is a Friday, the 17th is a Monday. And ultimately we would still like to maintain our early March release of 0.3, so I think it would be useful to try and do some burndown meetings. If we go look at the issues for Cluster API that are in the milestone, for example, there are 54. A lot of these are old, long-standing issues, like documenting MachineSets, and then several of them are newer and based on work for the kubeadm control plane or clusterctl.
A
So we could do it now, if people want to stick around and do it, or we could try and set up another time. I know it's not particularly exciting to go through 54 or more issues, but given that we're trying to get a pre-release out the door in less than a month, I think it would be useful to take a look at what's out there, whether you do it with us or independently, and see if there are any areas where you have time and can help out.
A
Okay, I got a plus one from Vince, so let's start getting a burndown going now. I hope you'll stay and see what's here, but if you don't want to, I understand. I'm going to start at the bottom, oldest ones first. So we have: document MachineSet and MachineDeployment. I do think this would be nice; if you look at our documentation, it doesn't really go into too much detail about how they work.
A
So, sorry, Prakash, Michael, to answer your question: these are not sorted, so the approach you suggested may be filtering issues by area. I can do that; we could filter. I could do them by priority, but it would be unsorted. So we could look at all of the critical ones, all of the important-soons, all of the important-longterms together, but within an individual priority, GitHub doesn't give us any ability to say that one issue is more important than another one. Yeah.
A
Can you get it in before the end of the week? Okay, can you make sure that your PR has "fixes #1525" in it? Yes? Thanks. Okay: document how to upgrade. We know we need to do that, and I definitely think this has to be written before we release. Add a documentation guide for how to implement a bootstrap provider: we did have, I know, Liz had added...
A
I think it would be easier, for tracking purposes, not to have to look at this every time we do a burndown like this. They can certainly continue to work on the documentation, and if they get it done in time it can come into the milestone, but I am leaning towards just bumping it to Next so that we don't look at it anymore unless there's a PR. Does...
A
Yeah, I mean, this one is basically that you can kind of get stuck. If you summarize it: you can have a cluster infrastructure reference that points to a kind that doesn't exist, or that points to a resource that doesn't exist, or where there's some sort of error, in a couple of different situations. I think we might need to tackle them individually, but I will check in again.
A
Okay, what's next? Yeah, it has a tiny example, right? I would say for now we can probably close it.
A
Right: add a spec doc for the control plane provider. With this one, I'm trying to get what was initially added for the control plane provider moved into the Provider Implementers section, and I would like to see this done; it's on my plate. So keep this one. The tracking issue for clusterctl v2 definitely stays, and check the e2e testing framework and implementation tracking issue... all right, next things. Yeah, yeah.
A
Document approaches for infrastructure providers to consider for securing sensitive bootstrap data: I do think this is still useful. We have a work-in-progress pull request for CAPA to encrypt the bootstrap data so that it's less easily accessible. I think getting some documentation in here would be useful, but I'm going to put this on the backlog unless anybody feels strongly that it needs to stay at a higher priority.
A
You
will
stay
this
one
that
came
around
or
came
from
a
while
ago
on,
if
you
have
custom
images
for
kubernetes
at
CD
or
DNS,
that
there's
a
lot
of
work,
that
you
have
to
do
by
hand
to
generate
a
QB,
DM
config
that
will
use
those,
and
this
was
initially
saying
a
really
nice
if
our
images
could
just
kind
of
figure
that
out
and
set
a
bunch
of
things.
Where
did
we
end
up
on
this?
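For context, the by-hand work mentioned here is filling in kubeadm's ClusterConfiguration image fields yourself, something like the following, where the registry names and tags are placeholder values for illustration:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.2
imageRepository: registry.example.com/k8s    # placeholder custom registry
etcd:
  local:
    imageRepository: registry.example.com/etcd
    imageTag: 3.4.3-0
dns:
  type: CoreDNS
  imageRepository: registry.example.com/coredns
  imageTag: 1.6.5
```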
A
And maybe this is a decent question for the group; you don't need to answer it right now. We're trying to figure out what port to use for the liveness and readiness probes, and at least for core Cluster API, I believe we're using 8080 for metrics. So we have this one pull request that uses 9440, just picking one, and then we have another pull request...
A
That
is
a
follow-up.
Neither
of
these
have
merged
to
change
Kappas
metrics
from
eighty
eighty
to
ninety
ninety,
but
we
already
have
cluster
API
on
eighty
eighty,
although
I
think
we
need
to
revisit
every
single
metric,
we
have
I,
don't
know
they
provide
too
much
value
at
this
point,
so
food
for
dot.
If
you
have
a
preference
for
metric
support,
let
us
know.
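As a sketch of what is being decided, a manager Deployment could expose the health endpoint on a port separate from metrics. The port numbers below just echo the ones mentioned in the discussion, and the paths are common controller-runtime conventions, not settled choices:

```yaml
ports:
- containerPort: 8080    # metrics, the current core Cluster API port
  name: metrics
- containerPort: 9440    # health endpoint, the value one open PR picked
  name: healthz
livenessProbe:
  httpGet:
    path: /healthz
    port: healthz
readinessProbe:
  httpGet:
    path: /readyz
    port: healthz
```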
A
Michael, are you still here? There you are. Michael, or anybody from your team: I know there are a couple of PRs out which I have yet to have time to review, but if anybody else has time, that would be awesome. Y'all are still looking like you're on track for sometime in the next few weeks for this, I imagine? Yeah.
A
So in alpha 2, totally fair point; in alpha 3, our answers for v1alpha2 versus v1alpha3 may be different if you're using the kubeadm control plane. Although, generically, you can use, I don't know, kubectl version or something along those lines to check and see how the server is doing, how the control plane is doing.
A
Okay, so that is all of the important-soons, and we have now got it down to 20 by closing some and moving some to a different priority, so that's better. Let's now look at the long-terms. We have about nine minutes left; let's see what we've got here: 15. So, all right, let's make a decision on what to do with this thing. This has been around for a long time; I know I talked with Vince about it a while ago, and basically there was some code that was changed during a major refactor in...
A
...October 2018, and it doesn't appear to have any adverse effects, but somebody really just needs to go through and track down: are we okay, and can we close this, or do we need to fix it? I know I had assigned it to myself, but I do not have time to look at this, so if anybody's interested, that would be really helpful; otherwise I think we can just close it. But it's lingering, and I'm tired of just kicking it down the road.
A
Let's say there are three places where it's invoked. I think two of the three just don't care about the first return value, and I think there was one that maybe did, but I didn't have enough time to see if we could ever get into a situation where this would return nil and it would be a bad thing. It could be that you get down in here and it never returns, or it never goes through this code path, but I don't know; I didn't have time to fully investigate it.
D
Okay, I left a comment, particularly about end users, because I think, you know, someone that is not interested in developing Cluster API, but just wants to understand it, wants a demo, right, to kick the tires, so to speak, without having to deploy resources that cost money. Having the Docker provider is a great way; it's just a fast way, and there are fewer...
H
I wanted to echo Daniel's point. As a new user coming onto the Cluster API project, if I didn't have any AWS accounts or anything like that, the CAPD documentation was very helpful to get started and to kind of understand what components are coming up and down. I also kind of +1 moving it to the developer docs, but having sort of the same kind of quickstart format.
D
Daniel, if we get inundated by end users saying, hey, I'm trying to do this and it's not working for me... I haven't seen those in the channel, but if that is impacting us, or if we think it's a really bad experience for end users, then somebody who wants to keep this available or discoverable by end users, somebody like me, right...
D
If I'm not volunteering to sort of improve that, then, Daniel, you know, let's get it out of there. But it seems, for now, I haven't seen any problems; like I said, if there are, I'm happy to try to improve it, and if I can't, then I'm fine, or if nobody else is interested in trying to address those, then let's move it out. Does that make sense? Yeah.
A
That's
there
I
updated
my
comment
just
to
say
it's
nice
to
keep
the
examples.
We
can
debate
the
location
of
the
docks
over
time.
We
are
a
minute
over,
but
Jason
even
have
a
question
about
cap
D
on
a
Mac
and
Warren
says:
there's
some
documentation
in
there
and
I
think
we
should
probably
wrap
and
we
can
resume
looking
at
another
burndown
next
week
and
we
can
do
it
sooner
Adam
if
people
are
interested
as
well.
Thanks
everybody
for
joining
and
see
you
next
week.