From YouTube: 20200701 Cluster API Office Hours
A: Hello everyone, and welcome to the July 1st Cluster API office hours. Cluster API is a project of SIG Cluster Lifecycle. We have meeting etiquette: if you want to speak up, use the raise-hand feature, which you can find under the participants list in Zoom. I'll post a link to the agenda document in the chat. If you don't have access to edit the document, join the SIG Cluster Lifecycle mailing list, and definitely feel free to add your name to the attendees list.
A: For folks that don't know, we're trying to push the 0.3.7 release candidate through next week and then cut the release. 0.3.7 is shaping up to be a pretty big release; we have a huge list of changes on the roadmap, so feel free to take a look if you want to get a better sense of what we're doing and what will be released as features. Not everything in there will actually land in 0.3.7, but a lot of these items will be in the release.
C: So, a little bit of background: we want to add some hooks at points in the deletion lifecycle, and potentially, later down the road, at other points in the lifecycle of a machine. It allows third-party or custom components to take some action while we stop some of the things from happening in the machine controller, such as draining the node or having the instance deleted. So we want to pause at those points and allow other things to take some action if necessary. On the machine controller implementation side:
C: If someone or something sets an annotation that starts with these strings, reconciliation will basically stop at these predefined points: one just prior to draining, and one just prior to having the instance removed from the cloud. It'll return basically nil, no error, and that just tells the machine controller: hey, you don't need to do anything right now, somebody else is probably going to work on this. And that's pretty much it; that other component would then be in charge of removing its corresponding annotation.
C: Exactly, the idea is that we want to be able to have one or more hooks at a given lifecycle point, and there are several different ways we could accomplish that. The way that I think is probably the cleanest, and easiest for administrators to wrap their heads around, is if everybody standardizes on this prefix. The actual long form is covered in the CAEP.
C: But basically it's the prefix, slash, whatever you're intending to do. So for the pre-drain hook I might write this prefix, slash, custom-drain, and then I have another controller that says: okay, I see this machine is being deleted, and I see this annotation on the machine that I know about. That tells me I'm going to go do whatever this custom drain thing is.
C: That's this other component, and when it's done, it removes that annotation. You can obviously have any number of annotations with that prefix, and you can get down into ordering and stuff like that; that's covered in the CAEP as well. There's no ordering enforced by the machine controller; the ordering would be enforced by the controllers implementing the hooks. So let's say I have two things that are supposed to happen pre-drain: I have my custom drain component, and then I need something that runs right after the custom drain component.
C: The thing that's supposed to run right after it might wait until the custom drain annotation is removed before it starts its logic. So dependency ordering can be enforced by whoever cares about it; if you don't care about it and you want to run at the same time, then your component doesn't have to wait.
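To make the mechanism concrete, here is a rough sketch of what a Machine with two pre-drain hooks might look like. The annotation prefix follows the deletion-hooks proposal as I understand it, and the hook names and owner values are made up for illustration:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: worker-0
  annotations:
    # Reconciliation pauses just before draining until the owning
    # controller removes this annotation.
    pre-drain.delete.hook.machine.cluster.x-k8s.io/custom-drain: my-drain-controller
    # A second hook at the same lifecycle point; any ordering between the
    # two is coordinated by the hook owners, not by the machine controller.
    pre-drain.delete.hook.machine.cluster.x-k8s.io/after-custom-drain: another-controller
spec:
  clusterName: my-cluster  # other required Machine fields omitted
```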
C: Does this make sense?

A: Oh, I like that; it's a very simple solution, but effective. Okay, let's give it a little bit, a few days. It seems like Andrew has some time this week or early next to review it, and it'll probably be the same timing for me. I just have one question: what is pre-terminate, pre-termination?
C: So you kubectl delete a machine: well, what does that mean? We reconcile down this delete pathway, and there are a lot of things that happen there: we drain the node, we remove the instance from the cloud, we delete the node object, and then finally we remove the finalizer, which results in the machine object actually being deleted. So there was some question about naming: originally it was pre-delete, but what does pre-delete mean, right? Pre-terminate is the point just prior to the instance being removed from the cloud.
A: Going once, twice, three times. Alright, got a plus-one from Nico, who's also going to take a look at it this week. So thank you all, and Michael Gugino, you've got the next one as well.
C: So this is something I've brought up here before; I don't know if directly, but I know I've brought it up at some point. I have this upstream Kubernetes enhancement that is mostly driven by the needs of the Cluster API project, but our needs, I think, are very similar to many other people's needs. Basically, we have multiple things doing things to nodes that you could consider disruptive.
C: For one instance, in OpenShift we have this thing called the machine config operator. We do upgrades in place, so we upgrade the instance's on-disk configuration and then we reboot it. If you couple that with machine health checks, well, sometimes, like on bare metal, it might take several minutes, 15-plus minutes, for a host just to reboot, because that's how long POST takes. We don't want the machine health checker going out and saying, hey, this thing is broken, and then deleting something in the middle of an upgrade.
C: That would be bad. Same thing with the machine health checker in another case: let's say I'm investigating some issue on the host and I need to stop the kubelet for some reason; I'm troubleshooting this thing, and I don't want the machine health checker to then delete that machine and make it go away while I'm doing stuff to it. Then we've got other proposals about adding power start/stop, that kind of stuff, to the Machine API, and potentially to Cluster API machines. So, more broadly:
C: What I think is needed is some way for various components, whether they're Cluster API or other components, to have a central point of reference to say: hey, there's something I want to do to this node, is it an okay time to do this right now? That's the idea of the maintenance lease. So, for instance, before the machine health checker decides, oh, I see this node is unhealthy, it can check whether anybody currently holds a maintenance lease on it before deciding to take an action.
C: It would see that the lease is held and say: okay, this is under maintenance, I'm not going to do anything, I'm just going to ignore this node for now. So you can kind of see why it's related to Cluster API directly, but also, the point of coordination I think should be the node, so other things that want to interact with this functionality don't necessarily need to know about the Cluster API pieces; they just need to know about this lease that's related to the node.
C: That's the background, anyway. Previously I've run this proposal by SIG Node; they seem to like it and suggested that SIG Cluster Lifecycle should own it. So recently I went to the SIG Cluster Lifecycle upstream meeting and I got some positive feedback there; they seem to generally think it's a good idea.
C
They
also
think
sig
node
should
own
it.
The
feedback
I
got
from
there
was
hey,
let's
schedule
this
for
the
next
sig
arch
meeting
and
we
can
hash
out
some
of
these
ownership
details,
maybe
some
of
the
implementation
details
and
that
is
scheduled
to
take
place.
That
discussion
is
on
the
agenda
for
tomorrow
SiC
arch
meeting.
So
if
this
is
something
that
you're
interested
in
I'd
love
feedback
and
we'll
be
discussing
it
I
think
tomorrow.
D: We're going to try and keep backwards support for those, but we're planning, hopefully sometime in the future, to remove support for variables that have spaces between the curly braces, and variables that have any sort of other meta characters, like dollar signs and whatnot, in them. So if you have provider templates or custom templates that obviously have variables in them, please take a look at this.
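As a hedged illustration of the variable forms in question (the variable and resource names here are placeholders, not from any real template):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  # Supported: plain ${VAR_NAME} substitution.
  name: ${CLUSTER_NAME}
  namespace: ${NAMESPACE}
  # Forms mentioned as slated for removal:
  #   ${ CLUSTER_NAME }   <- spaces between the curly braces
  #   ${CLUSTER$NAME}     <- other meta characters such as '$' in the name
```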
C: Sure, yeah, sorry, Zoom freezes up on me, so I can hear you, but then my PC doesn't let me do anything and I'm powerless. So yeah, as far as timeouts, that's pretty much all covered in the KEP. The Lease object itself doesn't have any concept of a timeout; there's not any kind of timeout control or anything. So a lot of this is predicated on well-behaved clients, and I've outlined some of what I think, from a high level, makes good sense for using this lease object.
C
I
would
suggested
to
me
to
use
this
lease
object
as
it
already
exists
as
far
as
like
the
administrator
taking
control
that
is
also
accounted
for.
Basically,
there's
like
an
owner
field
or
a
holder
feel
every
or
what's
called
off
top
my
head
and
the
way
that
I've
written
the
logic
is
again.
This
is
all
dependent
on
the
clients
doing
what
they're
supposed
to
do,
because
everybody's.
C
This
thing
this
thing's
not
really
prohibiting
anyone
from
doing
anything,
but
if
you're
gonna
have
the
lease
for
more
than
an
hour,
then
you
should
set
the
time
period
for
expiry,
because
there
is
there's
like
a
lease
duration.
You
should
set
it
for
no
more
than
an
hour,
and
you
should
then
periodically
update
the
renew
time,
which
basically
says
one
last
acquired
us.
So
there's
an
idea
of
duration
seconds
and
renewed
time.
So
you
you
have
the
lease
for
whatever
the
time
is
at
the
review
time,
plus
those
number
of
seconds.
C
So
if
you
conform
to
the
rules,
then
anything
that
suddenly
like
stops
running
or
whatever
should
have
a
relatively
small
window.
You
don't
want
to
set
a
lease
second
duration
of
like
three
days
and
then
never
update
there
a
new
time.
That
would
be
bad,
but
on
top
of
that
for
the
administrator,
because
the
renew
time
is
like,
like
a
time
that
time
field
or
micro
time
field,
that's
kind
of
like
not
ideal,
for
an
administrator
to
take
control.
C: The reason I said to put a prefix in that one field is that it might be helpful if you administer the cluster with other individuals. I might set the holder to, say, a kube-admin prefix plus a name, you know, Michael Gugino, and then you know that, okay, hey, this thing's on lease and I should ask this person about it. But as far as automated components go, the other controllers and such will just see the kube-admin prefix and know that, okay, this lease is being held and I'm not allowed to do anything with it.
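A minimal sketch of what such a maintenance lease might look like, using the fields of the existing coordination.k8s.io Lease API; the lease name, namespace, and holder-prefix convention are illustrative, since the proposal was still being discussed:

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  # Hypothetical convention: one maintenance lease per node.
  name: worker-0-maintenance
  namespace: kube-system
spec:
  # Automated components only need to see that the lease is held; the
  # prefix tells human operators who to ask about it.
  holderIdentity: "kube-admin:michael"
  # The lease is held for renewTime plus this many seconds; keep it short
  # (no more than an hour) and renew periodically while work continues.
  leaseDurationSeconds: 3600
  acquireTime: "2020-07-01T16:00:00.000000Z"
  renewTime: "2020-07-01T16:30:00.000000Z"
```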
E: All righty, thanks, Vince. I just want to encourage everyone to be thinking about where you might see us taking Cluster API in the future, in terms of feature requests or any issues that you have. I know we're not shy about filing issues in GitHub, but just a renewed call for anything that you think would be helpful for the project going forward. We're probably going to start v1alpha4 roadmap planning sometime in the next few weeks.
E: So I know that you had several that were closed. I think having them open is fine, so we can reopen them, and there's going to be some amount of debate that happens on the GitHub issue, and then we'll transition to a CAEP if needed and proceed from there. Does that work for you? Oh yeah.
F: Yeah, are we cool to move on to the next feature request thing? Are we done with that prior topic? Not to jump in on anyone; I'll just start talking. So one of the things that we're thinking about, and this might be a little bit AWS specific, is that flexibility in machine pools is going to be a thing that matters to us, and by flexibility I mean, like, behind the scenes.
F: Yeah, I think one thing that I'm interested in is that the cluster autoscaler, for example, understands all of the individual auto scaling group implementations really well, and so I'm curious: are we going to go down a path of implementing that for a machine pool, for example, or is there a way that we can just defer to the cluster autoscaler's knowledge of how those provider-specific items work? Because, you know, the cluster autoscaler is going to...
G: We have a few different challenges there, specifically around what information is available to the cluster autoscaler based on how the cluster autoscaler is integrating, and I know this is something that Justin Garrison brought up in Slack as well, where, in the case of machine pool backed resources, we may want to use the native cluster autoscaler features. So we've just got to figure out how we want those pieces to interact and go from there, I think.
H: So I just want to back up a little bit to what Jason was saying there. Yeah, the provider implementations on the cluster autoscaler side look a little different, and I think there's room for us to create extra functionality through the CAPI provider, because, in my opinion, the CAPI provider does things a little bit differently with the way it uses machine sets and machine deployments, and we actually have a channel through there.
H: I mean, we have a loose channel through the annotations to pass information back and forth between the way the cluster autoscaler is operating and the way the CAPI back end is operating. So for some of these cases, where we say, well, it might be more efficient to have the AWS provider use the auto scaling groups instead of trying to auto scale them ourselves, I think there might be some options, or some wiggle room, where we could actually add some functionality in this area.
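For reference, the "loose channel" being described is roughly annotations like the following on a MachineDeployment or MachineSet, which the autoscaler's Cluster API provider reads to learn the scaling bounds of a node group (annotation keys as I recall them; check the provider documentation for the exact strings):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: workers
  annotations:
    # Minimum and maximum sizes the cluster autoscaler may scale this
    # group between.
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
spec:
  clusterName: my-cluster
  replicas: 3
```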
C: I think the distributed architecture should look like this: the cluster autoscaler is simply making the decision to scale, and making the decision of which group to scale, and the implementation of actually scaling, whatever that is, should be totally separate. So I think the ASGs could be abstracted into that, or it could be some other scalable resource. I think I brought up on one of the issues somewhere in this project that we shouldn't be integrating the autoscaler with Cluster API directly.
C
Simply
the
autoscaler
should
provide
some
CRD
and
we're
writing
a
controller
implementation
to
reconcile
the
state
that
the
autoscaler
is
giving
us.
So
in
this
case
it
might
be
like
today
we
use
machine,
auto,
scalars,
basically
map
to
Machine
sets,
so
it
creates
these
things
for
any
kind
of
cloud
provider
and
it
scales
these
things
and
then
some
other
controller
should
be
reconciling
those.
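For comparison, the existing pattern being referenced is roughly the OpenShift MachineAutoscaler resource, which points the autoscaler at a MachineSet and gives it bounds (shown from memory; the exact API group and version may differ):

```yaml
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: workers
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 10
  # The scalable resource the autoscaler is allowed to resize.
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: workers
```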
G: Independent of the cloud provider, that makes a lot of sense. But then there's the use case where a specific cloud provider may provide different mechanisms that can help in making scaling decisions that aren't very generic, and in those cases deferring to the cluster autoscaler to manage those makes more sense. So I think we've got to kind of suss out the use cases that we care about and want to support, and then the implementation goes from there.
H: That makes sense to me, Jason. Back to kind of what Mike was saying there, which I find to be a really interesting concept, and I know he and I have talked about this in a couple of separate channels, but again, to me the CAPI integration into the autoscaler actually gives us a really interesting window to experiment in this area, because the way we've written it now, it's very easy to change the resource, the group version resource,
H: that goes with the machine sets and machine deployments. So I have a feeling that it probably wouldn't take a ton of work to build a proof of concept where you could have some stub CRD that sits in the middle between CAPI and the autoscaler, and that's where this generic kind of "I'd like to scale up, I'd like to scale down" type activity could happen, and then some controller could be watching that on the back end to make changes in a different way.
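Purely as a thought experiment on the "stub CRD in the middle" idea, such an intermediate resource might look something like this; it is entirely hypothetical, nothing like it exists in the project, and the group, kind, and field names are invented for illustration:

```yaml
# Hypothetical intermediate resource: the autoscaler only records its
# scaling decision here; a separate controller reconciles it against
# whatever backs the group (MachineSet, MachinePool, ASG, ...).
apiVersion: autoscaling.example.io/v1alpha1
kind: ScalableNodeGroup
metadata:
  name: workers
spec:
  desiredReplicas: 5
  scaleTargetRef:
    apiVersion: cluster.x-k8s.io/v1alpha3
    kind: MachineDeployment
    name: workers
status:
  currentReplicas: 3
```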
F: I'm just kind of doubling down on Jason's point about collecting use cases; I think that's something we should do. In the New Relic case, we definitely have a use case, not super specific, but a use case that we're looking to get out of this, that maybe we could share at some point in the near future, but it has a lot to do with, like, well...
E: If you're on AWS, we have a tool that you can run to set up some IAM permissions that CAPA needs, and so basically the idea was that we could have essentially hooks at various parts of the clusterctl lifecycle, or new commands, I don't know what it would be, since we already have an init. But, you know, an ability to plug in provider-specific code into the workflow so that you don't have to go out to another tool necessarily to set up things like IAM.