From YouTube: 2020-03-25 - Cluster API Office Hours
A: Hello, today is Wednesday, March 25th, 2020. This is the Cluster API office hours meeting. This meeting is being recorded and will be posted on YouTube once it's available. Cluster API is a subproject of SIG Cluster Lifecycle. We do have meeting etiquette: please use the raise-hand feature in Zoom, and we do follow the Kubernetes code of conduct, which basically means let's all be nice to each other.
B: Thank you. So we released 0.3.2 last week, and we're trying to release 0.3.3 with potentially a lot of bug fixes. The most notable one is the retry in kubeadm join for the control plane. We have had quite a few issues where kubeadm join for control plane machines would fail sporadically, especially in CAPV; there was one issue in CAPZ.
A: There, here we go. So basically what we're trying to do is split kubeadm join apart into all of its phases, so that we can have more finely grained control over each step. For example, what we've frequently seen is that we get through all of these portions of join (preflight, getting the certs, doing the kubeconfig, getting the control plane going, starting the kubelet, getting the etcd join in place), and then either updating the status or updating the ConfigMap for kubeadm fails, or applying the taints and labels that mark it as a control plane node fails. If you make it all the way past etcd and you fail in either updating the status or marking the control plane, you've made it pretty much all the way, and these are things that we can retry. So, as Vince was mentioning,
this is just a stopgap until we can get this logic into kubeadm itself, but we found that it has been very helpful in testing, especially in an environment where we have very variable latency with disk I/O, which affects etcd. So please take a look if you're interested. (And you said this is going to be behind a feature flag on the bootstrap provider, right? Yeah.)
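As a rough illustration of the retry shape being described (not the actual bootstrap provider implementation), here is a minimal sketch that wraps the idempotent tail-end join steps in apimachinery's exponential backoff; markControlPlane is a hypothetical stand-in:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// markControlPlane is a hypothetical stand-in for the tail-end join steps
// (updating the kubeadm ConfigMap, applying the control-plane taint/label)
// that were observed to fail transiently.
func markControlPlane() error {
	// ...talk to the API server here...
	return nil
}

func main() {
	// Retry the idempotent final steps with exponential backoff instead of
	// failing the whole join on one transient API error.
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first retry delay
		Factor:   2.0,                    // double the delay each step
		Steps:    5,                      // give up after 5 attempts
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := markControlPlane(); err != nil {
			fmt.Println("transient failure, will retry:", err)
			return false, nil // not done; retry after backoff
		}
		return true, nil // success
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}
```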
A: So if you had a MachineDeployment that referenced an infrastructure template, which is typical, but for whatever reason the MachineDeployment controller couldn't find the template (either it wasn't in the cache yet, or it hadn't actually been created yet), then what happened is the MachineDeployment controller would log that it failed to reconcile the MachineDeployment and then return nil for the error.
A: So this change makes it so that if it can't find the template, or there's any other failure trying to reconcile the MachineDeployment, it'll go into exponential backoff. Especially if it's a caching issue, or the infrastructure template just can't be found for a very brief period of time, this helps significantly. Prakash?
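A minimal sketch of that pattern, assuming a controller-runtime reconciler of that era; the reconciler type and the getInfrastructureTemplate helper are hypothetical, not the project's actual code:

```go
package controllers

import (
	"context"

	"github.com/pkg/errors"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// MachineDeploymentReconciler is a minimal stand-in for the real controller.
type MachineDeploymentReconciler struct {
	Client client.Client
}

// getInfrastructureTemplate is a hypothetical helper that resolves the
// infrastructure template referenced by the MachineDeployment.
func (r *MachineDeploymentReconciler) getInfrastructureTemplate(ctx context.Context, key client.ObjectKey) error {
	// ...r.Client.Get(...) on the referenced template...
	return nil
}

func (r *MachineDeploymentReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	ctx := context.Background()
	if err := r.getInfrastructureTemplate(ctx, req.NamespacedName); err != nil {
		// Before the fix: the failure was logged and nil returned, so the
		// request was never requeued. Returning the error instead puts the
		// object into controller-runtime's exponential backoff, which covers
		// the window where the template is briefly missing from the cache.
		return ctrl.Result{}, errors.Wrap(err, "failed to reconcile MachineDeployment")
	}
	return ctrl.Result{}, nil
}
```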
B: Last one, yeah. Well, there was the KCP scale-down bug fix, so if anybody is actually using KCP with failure domains: there was an issue that was known and is now fixed, and the fix will come in the next release. It only affects scale-down behavior, and we found it while adding end-to-end tests for KCP. Any questions on this one?
B: All right, so this morning 2775 was filed, an issue that popped up from Metal³. In our contract definition we said that we wanted to have a label that captures the contract versions a provider respects, but a label value cannot contain commas, and this was an oversight. So we need to change that separator to something else; an underscore or something like that is probably fine, and yeah, it's not going to look good.
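For reference, apimachinery's label validation shows why the comma-separated value is rejected; a small sketch, with illustrative example values:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	// Label values allow alphanumerics plus '-', '_' and '.', so a
	// comma-separated version list fails validation while an
	// underscore-separated one passes.
	for _, v := range []string{"v1alpha2,v1alpha3", "v1alpha2_v1alpha3"} {
		if errs := validation.IsValidLabelValue(v); len(errs) > 0 {
			fmt.Printf("%q is invalid: %v\n", v, errs)
		} else {
			fmt.Printf("%q is valid\n", v)
		}
	}
}
```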
B: There was also another issue that we found in one of the util.go mapper functions, mostly relevant to infrastructure providers: when mapping the infrastructure object, we didn't drop the version from the comparison, so you might miss reconciliation requests. This is only an issue if you have multiple providers running that satisfy the same contract version, so we'll change that and make sure the release notes are very clear.
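A sketch of the version-insensitive comparison being described; refMatchesGVK and the example objects are illustrative, not the actual util.go code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// refMatchesGVK reports whether an object reference points at the given kind,
// ignoring the version: two providers (or two versions of one provider) can
// serve the same contract under different API versions, so comparing the full
// apiVersion would drop reconciliation requests.
func refMatchesGVK(ref *corev1.ObjectReference, gvk schema.GroupVersionKind) bool {
	gv, err := schema.ParseGroupVersion(ref.APIVersion)
	if err != nil {
		return false
	}
	return gv.Group == gvk.Group && ref.Kind == gvk.Kind // version dropped on purpose
}

func main() {
	ref := &corev1.ObjectReference{
		APIVersion: "infrastructure.cluster.x-k8s.io/v1alpha2",
		Kind:       "DockerMachine",
	}
	gvk := schema.GroupVersionKind{
		Group: "infrastructure.cluster.x-k8s.io", Version: "v1alpha3", Kind: "DockerMachine",
	}
	fmt.Println(refMatchesGVK(ref, gvk)) // true: same group/kind, different version
}
```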
E: So what the team asked me is: can we connect with somebody who can attend there? I found that Vince is here, so I had sent him an email asking whether there is a way we can invite him to the March 31st video meeting. Normally in OpenStack (open infrastructure), Airship is part of that, and we usually have a PTG; they call it the Project Technical Gathering. Because of COVID-19, the PTG is not feasible.
E: So we decided to do a virtual gathering, which is March 31st, a meeting where we wanted some representation from the Cluster API control plane side. Specifically, Vince was working on it, so I requested that he make his time available to tell us what you are doing in the control plane with respect to load balancing and HAProxy, and what the roadmap is going forward, whether for v1alpha4 or for v1alpha3, which you have just completed, I believe. So that's the ask from the Airship community to you.
E: So let me know if anybody can do it. If Vince can, we will be happy; if not, if somebody else can present, that's fine, as long as we have some representation on the asks, which I think Rodolfo Pacheco from AT&T and a colleague are the two people asking about. So: role of CoreDNS, role of the load balancer, role of etcd, and whatever else is on the KCP control plane roadmap. That's the ask: somebody coming to present for maybe 45 minutes to an hour, to help us understand.
E: We assume that it is MetalKube (Metal³), which is currently moving to CNCF as a project, but in the meantime what we have is that we have tied in our own version of it. We call it our 2.0, and it is running on some 30 to 40 nodes at AT&T, but this one is specific to what we are trying to experiment with, to find out:
E: How can we get it more production-worthy with the new changes? The new changes involve a new airshipctl. Earlier we were going to follow clusterctl, but clusterctl was not ready, so we started with airshipctl to obtain control and then work with Cluster API. So that's the context, and in that context we want to make it production-ready. So: what are the elements which are available to us? That means understanding first, before we ask for what we want.
E: So that's the take from AT&T. I'm just an intermediary between the two, working for Dell, plus I am on the open infrastructure board. So the only reason I am prodding is: okay, this is the ask; what can we do to collaborate? You're saying you bring it, they are saying you take it. Now I am in between, saying: tell me what best we can do, because if we don't understand what you have and what is missing, we can't ask you for it, I think.
A: Not seeing anybody. Okay, does anybody... I know we talked at the last meeting about doing away with the provider implementers office hours and trying to take any of those sorts of questions here. I apologize for not remembering to do that at the beginning of the meeting, but if anybody has any general questions about Cluster API, or about implementing a provider, or just anything in general, please feel free to speak up.
A: We need to have a template up at the top that's got all these sections, and have one for questions up above. Yeah, all right. Let's look at the issues with no milestone; we'll go bottom to top. "clusterctl: log/warn when using overrides for clusterctl." All right, so let's talk about priority and milestone on this one. Looks like we had somebody who wants to work on it.
I: Hey Andy, can you hear me? Yes? Hi, it's Andrew. I was looking for the raise-hand feature, but it looks like it was removed from Zoom, or, I don't know.
I: Anyway, it's been a while since I've been on the meeting; we last caught up around v1alpha2. Basically, I wanted to confirm that it was possible to create control planes without necessarily associating them with a virtual machine or a physical machine, and I just wanted to get a sense of whether that's still possible in v1alpha3. I'm assuming it is: creating some kind of control plane provider that does not necessarily associate with a machine, yeah.
A: There is no hard requirement that a control plane provider use Machines in Cluster API. The kubeadm control plane provider does, and it requires them, but there's actually very little from a contract perspective around control planes and their requirements. The expectation is: if you create the control plane provider object and you reference that from your cluster, you will get a control plane. But beyond that, the definition is one I don't know that we ever really spelled out. We do have...
A: We have an open issue where we need to move some things around. So when you look in the provider implementers section of the documentation, you'll see we have cluster infrastructure, machine infrastructure, and bootstrap. We need to also have control plane. We have it up here under controllers, which is supposed to be more about the technical details of what these actual controllers are doing.
A: The documentation here is somewhat generic and not necessarily specific to kubeadm control planes, so I think we probably just need to move it, and if there are any updates that need to happen, we'll make those. But basically, the control plane... oh, we did spec out this stuff, so we have required control plane services, optional services, and prohibited services.
A: If there's anything in here that causes problems, please open an issue; if you're trying to do a control plane provider and this doesn't work for you, we want to know. We do talk about contracts: every control plane resource has to have spec.replicas and a scale subresource. We have required status fields, so you have to have initialized and ready. If you're using replicas, we have required fields in status related to those, and then we have some optional fields, although I'm guessing these probably need to be made required.
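To make the contract concrete, here is a minimal sketch of a custom control plane type satisfying the fields just listed; the package and type names are illustrative, and only the field names come from the v1alpha3 contract:

```go
package v1alpha3

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:subresource:scale:specpath=.spec.replicas,statuspath=.status.replicas,selectorpath=.status.selector

// MyControlPlane manages a control plane without using Machines.
type MyControlPlane struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MyControlPlaneSpec   `json:"spec,omitempty"`
	Status MyControlPlaneStatus `json:"status,omitempty"`
}

type MyControlPlaneSpec struct {
	// Replicas is required by the contract when the scale subresource is used.
	Replicas *int32 `json:"replicas,omitempty"`
}

type MyControlPlaneStatus struct {
	// Initialized is true once the control plane has been bootstrapped.
	Initialized bool `json:"initialized"`

	// Ready denotes that the control plane is serving requests.
	Ready bool `json:"ready"`

	// Replica-related fields are required when spec.replicas is used.
	Replicas        int32  `json:"replicas,omitempty"`
	ReadyReplicas   int32  `json:"readyReplicas,omitempty"`
	UpdatedReplicas int32  `json:"updatedReplicas,omitempty"`
	Selector        string `json:"selector,omitempty"`
}
```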
A: But yeah, basically everything that I just went over: there's no hard requirement on kubeadm, and there's no requirement on Machines. You basically need to provide these things, although I would say etcd is potentially an optional one, given that we do support external etcd; it just depends on what you're implementing. But you definitely need an API server, a controller manager, and a scheduler.
A: You probably are going to want to have DNS and a service proxy, and maybe a cloud controller manager as well. But if you've got these things, plus ideally etcd, you've got your required control plane services. How you spec out your control plane resource is totally up to you, and this ties back into the cluster, because the cluster has a controlPlaneRef on its spec, and there are some status fields that the cluster itself has related to control planes.
I: Cool, and one more question: can machines still be referenced via a kubeconfig secret, for the sake of the machine deployment?
A: Sure. What do you mean, sorry?
I: A kubeconfig secret. Basically, Vince implemented something almost a year ago now which allows machine deployments to check node health via a kubeconfig secret, and I'm guessing that's still in there.
A: Yes.
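For context, a sketch of how that kubeconfig is conventionally looked up: Cluster API stores it in a Secret named "<cluster-name>-kubeconfig" with the data under the "value" key; the helper name here is illustrative:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getWorkloadKubeconfig fetches the kubeconfig secret that Cluster API keeps
// for each workload cluster, which controllers can use to reach the workload
// API server and, for example, check node health.
func getWorkloadKubeconfig(ctx context.Context, c client.Client, namespace, clusterName string) ([]byte, error) {
	var secret corev1.Secret
	key := client.ObjectKey{Namespace: namespace, Name: clusterName + "-kubeconfig"}
	if err := c.Get(ctx, key, &secret); err != nil {
		return nil, fmt.Errorf("fetching kubeconfig secret: %w", err)
	}
	return secret.Data["value"], nil
}
```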
A: Thanks for asking. Okay, all right, back to the issue triage. So we're looking for a priority and milestone on this one. Unless anybody objects, I'm going to go with... it sounds like probably somewhere between long-term and soon. I know that we've had several people get tripped up using the overrides: they forget that the overrides are in place and don't realize they're being applied when using clusterctl. So I'm inclined to say soon over long-term.
A: All right, I don't see any objections, and for the milestone we'll put this in 0.3.x so that we get it done on the sooner side. Okay, next up we have another one from Cecile. Thank you for filing these things; that means we know people are using them. "Move to target cluster fails at an init step." It looks like you did a little bit of debugging here; did you get anywhere on this?
G: For the sake of my own sanity here: there are a number of scenarios in the phased workflow for clusterctl, where we're talking directly with the API server and the controller providers, that could go through a set of retry operations, and that would probably be beneficial across the board as it walks through its individual phases. We did this for some of the phases, for convenience, but didn't do all of them. Maybe it makes general sense to apply that same pattern across things.
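A sketch of what wrapping one such step could look like, using client-go's retry helper; createTargetObject stands in for a single pipeline step and is hypothetical:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/retry"
)

// createTargetObject is a hypothetical stand-in for one step of the move
// pipeline, e.g. creating an object in the target cluster.
func createTargetObject() error { return nil }

func main() {
	// Wrap each pipeline step in a bounded retry so a target API server that
	// is still settling (webhooks registering, caches warming) does not fail
	// the whole move.
	backoff := wait.Backoff{Steps: 4, Duration: 250 * time.Millisecond, Factor: 2.0}
	err := retry.OnError(backoff, func(error) bool { return true }, createTargetObject)
	if err != nil {
		fmt.Println("move step failed after retries:", err)
	}
}
```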
A: If you have enough memory, you can usually run it on your laptop or desktop. It does run multiple Docker containers, so that's where memory can be an issue, but it ideally acts as an infrastructure provider that can help suss out the bulk of any sort of bugs, race conditions, or timing issues with control planes and machine deployments and whatnot. So it's designed to be a self-contained Cluster API provider.
K: Hey, thanks. I just wanted to add: I talked to Chuck a little bit online about this, and I think it's a really cool idea. I think it'd be neat to have some different backends for this. Right now it's just Docker, but you could imagine in the future maybe Podman or other OCI-compliant backends for it. So I guess I don't have anything really valuable to add, but I just think this is cool. Thanks, Mike.
A: And thank you for pasting the link to our glossary in the Zoom chat; let me just pull that up real quick. If something's not in here, then please either open a pull request, talk to us on Slack, or file an issue. But if we go here, we have a whole slew of acronyms that all start with CAP: you can see we've got core Cluster API, AWS, bootstrapping with kubeadm, Docker, Google, IBM, and so on. So please take a look.
F: There were a few ideas that Stephan brought up, and I think there are some good ideas there about maybe not worrying about the TTL as much when we create the token, and rather, with some of the other work that we're doing to watch the nodes on the workload clusters, using that to actually revoke the bootstrap tokens after they've been used. But we've got to be careful that we don't mess up the machine pool workflow if we go down that route.
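For reference, a sketch of what a kubeadm bootstrap token actually is on the wire: a Secret in kube-system whose expiration field carries the TTL, which is what the revoke-after-join idea would manipulate. The helper below is illustrative, not the bootstrap provider's code:

```go
package main

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newBootstrapTokenSecret builds a kubeadm bootstrap token Secret. Deleting
// the Secret (or letting "expiration" pass) revokes the token, which is the
// lever behind revoking tokens once the node they bootstrapped has joined.
func newBootstrapTokenSecret(tokenID, tokenSecret string, ttl time.Duration) *corev1.Secret {
	return &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "bootstrap-token-" + tokenID,
			Namespace: "kube-system",
		},
		Type: corev1.SecretTypeBootstrapToken,
		StringData: map[string]string{
			"token-id":                       tokenID,
			"token-secret":                   tokenSecret,
			"expiration":                     time.Now().Add(ttl).Format(time.RFC3339),
			"usage-bootstrap-authentication": "true",
			"usage-bootstrap-signing":        "true",
		},
	}
}
```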
F: I worried a little bit because, obviously, this was an extenuating circumstance that we hit this in; but if we start looking at some of the issues that we've seen, potentially around VM infrastructure with slow, contentious storage, this is something that might also crop up in those situations as well, and that concerns me.
C: Not yet, but I think there are three main items on this one. The first one is that basically we can see that we are reporting unavailable replicas when we already have ready replicas; for most of the KCPs that I had, there were three replicas and only one was up to date. The second issue, aside from the unavailable replicas, was the semantics of the ready column that we are reporting. From looking at it,
C: at first sight it seems like it is indicating that the control plane is provisioned and ready, meaning all replicas are provisioned and have joined the cluster, which is not the semantics that the field conveys. So we might want to clarify this. And finally, the third one was that we don't actually output the desired number of replicas anywhere when we do kubectl get kcp.
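One way the missing columns could be surfaced is with kubebuilder printcolumn markers on the KubeadmControlPlane type; this is a sketch of the mechanism, not the column set the project actually settled on:

```go
package v1alpha3

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// +kubebuilder:object:root=true
// +kubebuilder:printcolumn:name="Desired",type=integer,JSONPath=".spec.replicas"
// +kubebuilder:printcolumn:name="Replicas",type=integer,JSONPath=".status.replicas"
// +kubebuilder:printcolumn:name="Ready",type=integer,JSONPath=".status.readyReplicas"
// +kubebuilder:printcolumn:name="Updated",type=integer,JSONPath=".status.updatedReplicas"
// +kubebuilder:printcolumn:name="Unavailable",type=integer,JSONPath=".status.unavailableReplicas"

// KubeadmControlPlane with spec and status elided; the markers above drive
// the columns shown by `kubectl get kcp`.
type KubeadmControlPlane struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
}
```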
A: We have a couple of new ones. So, I filed this one a little while ago: we have a use case where we'd like to be able to change the image repository that is used for all of the control plane images after the fact. So after you've deployed a control plane, it would be nice to be able to change the image repository and then do a rolling update, whether it's just a replacement or an actual upgrade.
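To show where the knob lives, here is a sketch that patches the imageRepository on a v1alpha3 KubeadmControlPlane via unstructured access; whether KCP then rolls the machines is exactly the open question in the issue, and the field path is my reading of the v1alpha3 API:

```go
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// setImageRepository updates the image repository embedded in the KCP's
// kubeadm ClusterConfiguration, the field the issue wants to change after
// the control plane has been deployed.
func setImageRepository(ctx context.Context, c client.Client, namespace, name, repo string) error {
	kcp := &unstructured.Unstructured{}
	kcp.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   "controlplane.cluster.x-k8s.io",
		Version: "v1alpha3",
		Kind:    "KubeadmControlPlane",
	})
	if err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, kcp); err != nil {
		return err
	}
	if err := unstructured.SetNestedField(kcp.Object, repo,
		"spec", "kubeadmConfigSpec", "clusterConfiguration", "imageRepository"); err != nil {
		return err
	}
	return c.Update(ctx, kcp)
}
```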
B: I was able, and "able" is a strong word, to spin out issues: one was to actually map out the areas where we're missing unit tests, and this is just to give a direction of where we should focus. This is also a good time, if you want to contribute and kind of learn the code base: you can start by writing some unit tests.
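If you want a starting shape for such a contribution, tests in this code base are typically table-driven; a self-contained sketch, where the trimVersionPrefix helper is hypothetical:

```go
package util

import (
	"errors"
	"strings"
	"testing"
)

// trimVersionPrefix is a hypothetical helper under test.
func trimVersionPrefix(v string) (string, error) {
	if v == "" {
		return "", errors.New("empty version")
	}
	return strings.TrimPrefix(v, "v"), nil
}

// TestTrimVersionPrefix shows the usual table-driven layout: one row per
// case, run as a named subtest.
func TestTrimVersionPrefix(t *testing.T) {
	tests := []struct {
		name    string
		in      string
		want    string
		wantErr bool
	}{
		{name: "plain version", in: "v1.17.4", want: "1.17.4"},
		{name: "empty input", in: "", wantErr: true},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := trimVersionPrefix(tt.in)
			if (err != nil) != tt.wantErr {
				t.Fatalf("unexpected error state: %v", err)
			}
			if got != tt.want {
				t.Errorf("got %q, want %q", got, tt.want)
			}
		})
	}
}
```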
G: A few minutes early, I just wanted to give a PSA for those of you who have been talking about bare-metal provisioning for a while: Tinkerbell, which is Packet's internal system that they use for doing OS provisioning, has finally been open sourced. I don't know if anybody's been in this game for as long as I have, but we've been talking with them for six months now, trying to help them along where possible; after being acquired by Equinix they had a long tail, but it is now open sourced.
G: It has a well-defined API you can directly connect to, so you can actually do provisioning in a secure fashion, which is awesome sauce; most experimental provisioning systems don't give you all that capability. And it's all well factored into a microservice deployment, which is also awesome sauce. If you're interested in that, please take a look, and hopefully in time we'll see a Tinkerbell provider.