From YouTube: 20191211 - Cluster API Office Hours
Description
No description was provided for this meeting.
A
Hello, today is Wednesday, December 11th. This is the Cluster API office hours meeting. Cluster API is a sub-project of SIG Cluster Lifecycle, and we do have an agenda document with our meeting etiquette, which includes using the raise-hand feature of Zoom if you have anything you'd like to say. Please add yourself to the attendee list, and finally, if you do have discussion topics or PSAs or demos, feel free to add those to the agenda. This meeting is being recorded.
A
First up, we do have something that I had wanted to do last week and didn't realize until about midway through the meeting: if there's anybody who is new and who is interested in introducing yourself, we would love to have you say hello. It's not required, so if you don't feel like doing it, that's fine too, but I'll give it a few seconds to see if anybody's interested. In the meantime, I see... is it Slammer? Is that how it's pronounced? Close enough, I'm sure, more or less. It's somewhat.
D
I had added this on... whenever. I mean, that is a heads-up, especially for folks who are involved with the downstream infrastructure providers, because this does introduce some changes that need to be made on the provider side. Bootstrap data is now generated as a secret, rather than just being inline text on the Machine resource.
D
So it improves our security, but it does create an additional burden on the providers, now that they need to retrieve that secret. They also need to make sure that, if they're passing it into cloud-init, like we did in the AWS provider, they may need to actually base64 encode the data after retrieving it from the secret, because client-go is nice and automatically base64 decodes the secret data for you.
D
I should have said that it is now exclusively a secret. There is one catch: if a v1alpha2 type is updated to v1alpha3, the data is still present in the bootstrap data field, just to be able to support backwards conversions for those upgraded resources. If it's created as a v1alpha3 resource, you will only have the data in the secret.
A
Besides the data field, there's also dataSecretName, and the Cluster API controllers (the Machine controller) are now looking for dataSecretName. The kubeadm bootstrap provider has been updated to support migrating from the inline data string field to a secret, as a one-way forward conversion from alpha2 to alpha3, and CAPA is the first provider we've updated to support this change: CAPA has been updated so it only works with the secret on master now.
H
So, about part of this procedure for release notes generation: this sounds like an "action required". I would love to have release notes auto-generated for a release, so that we could have a full description with details as part of the PR. So when a PR goes in, they have the details, you know, that these are the action-required items for v1alpha3. Yes.
A
In this document, this is not a user-facing change from an action standpoint, because we migrate automatically if you're using the kubeadm bootstrap provider. So assuming you upgrade from alpha2 to alpha3, and everything's deployed and running, and you're using the kubeadm bootstrap provider, then things will just work. But it is down here at the bottom, in this section here.
A
I mean, I think that we are going to need to curate the release notes for alpha2 to alpha3 when we come out with the first 0.3 release. And, like I said, if you're using the kubeadm bootstrap provider, there's nothing you have to do. If you are writing a bootstrap provider (I see Spencer's here: Talos, for example) and you want to support auto-migration, you would want to do something similar to what the kubeadm bootstrapper does, which is migrate.
D
I just wanted to bring attention to folks that we are well on the way to implementing the control plane (KCP) now, and the PR linked there in the notes is the actual first start of the real implementation. Everything so far has been scaffolding, but this actually enables the ability to stand up the initial instance for a kubeadm-based control plane. So, yeah.
A
So this particular GitHub issue was really a question: should we have a roadmap? And then, assuming the answer is yes, I didn't really have anything super concrete in mind. I have a list of items that I'm interested in seeing in the next couple of releases, and I could stick them in a Google Doc, I could put them in a pull request, I could put them in an issue: whatever you think makes sense.
H
One of the things that this sub-project has suffered from is that shared awareness gets into this sort of deadlocked state, where people aren't aware of a long-standing issue, and then they want to chime in because they feel like they've been involved. But usually we should have a backlog. If we have the roadmap, and these high-level issues kind of spread out over time, it should be a fait accompli for the next release: like, people XYZ have committed to a feature.
A
We have not had any discussions about what comes in the next minor release after this one, and that's more what I'm focused on, in terms of trying not to have endless debate on this, that, or the other thing, and more just trying to focus on: here's something we are interested in doing in 0.4. I don't know, just not going around in circles like we've done in the past on what should be in a release or out. Since you've got your hand up, yeah.
G
Like, everybody can PR and we can put it at the top: if you want to add something that you want to work on, please open up a PR for it. And then, when people open up a PR for it, there is some kind of expectation that someone's going to work on it; otherwise it's going to drop out. But I would like to make that clear.
I
It's a proposal to create an abstraction using the Lease object and the coordination API. It's currently used for, like, the node lease, which it updates as a heartbeat or whatever. So, similarly, we could use a similar object, you know, totally opt-in, and things could look to see if something has that lease and, if not, acquire it, do their thing, release the lease, etc. I've got a couple of use-case scenarios in there.
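The acquire/do/release flow described above can be sketched roughly like this (an in-memory stand-in for the coordination.k8s.io Lease API, not a real client: `leaseStore`, `Acquire`, and `Release` are invented names, and only `holderIdentity` mirrors an actual Lease spec field):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// leaseStore is an in-memory stand-in for the Lease API: it maps a
// lease name to its current holderIdentity.
type leaseStore struct {
	mu     sync.Mutex
	holder map[string]string
}

var errHeld = errors.New("lease already held")

// Acquire takes the lease if it is free or already held by us.
func (s *leaseStore) Acquire(name, who string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if h, ok := s.holder[name]; ok && h != who {
		return errHeld
	}
	s.holder[name] = who
	return nil
}

// Release frees the lease so other components can act on the object.
func (s *leaseStore) Release(name string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.holder, name)
}

func main() {
	s := &leaseStore{holder: map[string]string{}}
	fmt.Println(s.Acquire("node-1", "drainer"))        // succeeds
	fmt.Println(s.Acquire("node-1", "config-updater")) // blocked
	s.Release("node-1")
	fmt.Println(s.Acquire("node-1", "config-updater")) // succeeds
}
```

Because it is opt-in, a component that knows nothing about leases is unaffected; one that participates simply checks for the lease before touching the node, exactly as the proposal describes.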
I
I think there's a couple of different routes we could go, but I'd like it to live in Kubernetes proper. It's totally optional behavior, and I probably just need to get feedback from everyone to see what they think is best. One possibility is get-or-create, if the lease doesn't exist; the other possibility is Kubernetes starts creating it at some point, and, well, I see both of those scenarios working together.
E
Okay, Andrew, is this entirely different than the proposal, the issue, that I submitted a while back about maintenance? I mean, that was specific to CAPI components, but it doesn't have to be. And the same basic underlying desired state is the same: that nothing bothers those resources, no controller touches those resources, while some bit is flipped, until that bit is flipped back.
I
Yeah, this went around in my head a lot, and downstream we went around a lot about where this thing should live. We were talking about maybe making a server-side drain and taking drain away from the machine controller entirely, and we have this other thing called the machine config operator, which basically updates stuff on disk, and having to coordinate all that stuff with this third party... Maybe that would be a good primitive to build on top of, something like this. It's very opinionated, and so we just needed a place where everybody can coordinate, because not everybody is going to be using the Machine API, and they're going to want to have these other automated components that do things to nodes generally, and they don't want to have to update those components to worry about stuff like a Machine API they're not using. But yeah, I think it's coming from a common standpoint; I think we're probably all thinking the same thing.
A
Yeah, I think there's a slight difference, just in terms of scope, which maybe isn't particularly meaningful. But the proposal you have, Michael, is about nodes, and the issue that Andrew opened is about Cluster API resources, like Machines and AWSMachines and VSphereMachines. So I don't know if it would make sense to try and have a generic type of facility where you could essentially grab a lease on any type of Kubernetes resource, or if it makes sense to treat them differently.
A
There's
a
death
thing
for
if
we
need
to
move
so
we
have
a
management
cluster,
that's
got
cluster
API
bits
in
it
and
we
need
to
move
those
to
a
different
management
master.
That's
essentially
why
there's
issue
is
supporting
so
I
do
think
that
there's,
but
there
are
different
use
cases
here,
alright,
anyways
Tim!
You
want
us
to
move
on
well.
A
All right, next up: MachinePool lazy consensus. I don't have the link to this right here, but it's been a week and this has been open for a long time, so I figured we could probably go ahead and do this together. I haven't seen any blocking comments, and we do have some approvals here, so I think, unless there are any last-minute objections, we're just gonna LGTM this. Michael, you know? Yeah.
E
Just a quick question: as we start to update CAPV for v1alpha3 and we need to start doing testing, we thought, well, we can still use a static machine to do a non-load-balancer deployment while we're doing some testing. And then it occurred to me only this morning, so maybe this is really the answer here, but does clusterctl currently on master...?
E
Okay, so I guess then, would it be worth having a clusterctl (I could compile one myself) that's basically just the clusterctl from v1alpha2, but with updated types? Because the clusterctl v2 work is not going to be ready for a bit, right? And so it would be nice if we could continue to use clusterctl the way we have while we're doing the v1alpha3 work on our own branches, sorry, on our own providers.
E
I'd say, Andy, that coming from some of the projects on which I'm working, this is actually a high priority. I think I tagged you in a Jira from VMware's internal tracker saying this is high priority, but that was, like, as the meeting was starting, so I don't think you would have seen that. But maybe we can sync offline about this and see. Yeah, yeah.
A
Document logging standards: yes, I think we should do this. What we have, I guess, not necessarily discovered, but come to realize after lots and lots of debugging, is that if you're writing a controller and you return an error, or even if you log an error, it's not anything that's visible to the user. And if you are just returning an error without logging it where it occurs, it bubbles back to a single log line in some of the controller-runtime code, and it will log the error message.
E
Right, to keep jumping in: I posted an issue in chat. I filed this issue a while back, basically saying it would be nice to be able to return an error that doesn't make something enter the backoff queue. I need to kind of follow up on this: with Go 1.13 allowing you to wrap the error, or even print stack traces with line numbers, it may be nice if we could push this into controller-runtime.
I
Yeah, I think we're doing something similar, maybe a little different. We're detecting some things, particularly in the machine controller and the actuators, that we consider a failure, like the failed state. So, for instance, if you specify an invalid AZ, we're gonna log that, and that's all we're ever gonna do; there's nothing more for that machine to do, you've got to delete it. And so we don't actually return an error to controller-runtime, we just return, like, an empty result or whatever, and it stops requeueing it. Yeah.
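The pattern being described, log a permanent failure once and report success so the controller stops retrying, can be sketched like this (simplified stand-ins throughout: `result` mimics the shape of a reconcile result but is not the controller-runtime type, and the AZ validation is invented for the example):

```go
package main

import "fmt"

// result mimics the shape of a controller-runtime reconcile result.
type result struct{ requeue bool }

// reconcileMachine sketches the pattern: a failure that retrying can
// never fix (an invalid AZ) is logged once and reported as success,
// so the machine is not re-entered into the backoff queue. The user
// has to delete and recreate the machine to recover.
func reconcileMachine(az string) (result, error) {
	valid := map[string]bool{"us-east-1a": true, "us-east-1b": true}
	if !valid[az] {
		fmt.Printf("machine failed permanently: invalid AZ %q\n", az)
		return result{}, nil // no error: controller stops retrying
	}
	// Normal provisioning would continue here.
	return result{}, nil
}

func main() {
	r, err := reconcileMachine("us-moon-1x")
	fmt.Println(r.requeue, err)
}
```

Returning an error here instead would put the machine into controller-runtime's backoff queue forever, burning reconcile cycles on a state that can never heal.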
A
Allowing the content of a file to come from a ConfigMap or Secret, for the kubeadm bootstrap provider, for the files that are written to disk on the machines. There's an open PR for this; we're still talking about the data model. So if you all are interested in the API here and the proposed changes, one thing that you'll see is a discussion about how to deal with modeling this, and I would recommend taking a look at it. I'm going to... I'm gonna.
A
Okay, yeah, let's at least remove it, since we're not doing a full pivot, but they're all the same thing, yeah. Let me put that on... Actually, would you mind commenting on 1860 about needing to take that into account? Yes. I'm sorry, Dr. Coleman? Thank you. Alright, so back to this one about supporting clusters that don't have a load balancer or stable endpoint in alpha3: I probably need to amend or adjust my comment here, based on some discussions with Andrew, where I think there's probably nothing stopping an infrastructure provider in alpha3 from doing this.
E
We can close it. I would... let me do it, because what I think I'll do is take that diagram there and update it to what we expect it to look like for v1alpha3. So people will know: this is what it should look like for v1alpha3 if you're trying to do this at all. I'll close it, and then, if we do encounter issues, I can open a new one. Sounds...
D
Part of it is that the object refs that we're using are basically saying that we should reference this as a specific type, and right now in the code we're not actually using the API group and version info from there, so it's almost kind of misleading. So maybe we do need to go to just string references for these, and then we wouldn't even have to deal with any type of conversion around these. Well.
D
Well,
I
mean
we
would
still
need
I
mean
this
is
where
it
gets
tricky,
because
yeah
we
do
need
to
have
the
specific
kind
in
there
to
look
it
up,
but
it's
particular
for
this
particular
case
as
we
update
the
cube
ABM
config
at
some
point,
we're
gonna
be
leaving
in
an
object
reference
in
there.
That
is
no
longer.
A
Worst case, it's a documentation issue. I don't anticipate that we... I don't really think we need to worry about the longevity of some of these resources, given that we're still alpha. I do recognize that it's important to do as much as we can, but to go from alpha2 to alpha3, if we give people a script or we include code to do the changes, it's kind of best effort. When we move to beta and GA, it's much more important.
A
All right: kubeconfig secrets not cleaned up when the control plane machine is deleted. I created a cluster, got the five secrets, deleted the control plane machine, and four of the five secrets went away. The kubeconfig secret is created and owned by the cluster, so that's why it's not going away when the control plane machine is deleted. I know we talked about moving this ownership around in the past; I think, going forward with the kubeadm control plane, it's gonna be owned by the control plane resource. Right, Jason? That's...
I
So the kubectl drain library will just always fail due to timeout, because it's waiting for the pods to go away, and they never go. So this option is to allow exiting that wait for those pods, because if a pod has a deletion timestamp, that means PDBs have already been verified and it's been marked for deletion; it's just never going to go away, because the kubelet's unhealthy. And so this is a pretty simple change: add a new option. I just put it there for tracking, so anybody can pick it up. Okay.
A
Yeah, I saw Michael's comment earlier about doing this per provider, which I think may be the best thing in the short term. I haven't really thought about this in detail, in terms of how it would impact MachineSets and MachineDeployments, or how you could even use them and what that would look like. But I think it's worth continuing the discussion, and seeing if we find out that it makes sense to do it per provider.
A
Let me file an issue for it, or find one, and thank you for bringing that up. I think probably the easiest way for you to get a sense, or anybody to get a sense, of what the releases look like would be to hop on Zoom with one of us when we do one. What time zone are you in? London? Okay, because we're gonna do a Cluster API release soon. We may do it today, but I realize it's late for you at this point.