From YouTube: 20200902 Cluster API Office Hours
A
All right, thanks. Hi everyone, today is Wednesday, September 2nd, and this is the Cluster API office hours. Cluster API is a subproject of SIG Cluster Lifecycle. Please follow the CNCF code of conduct, respect everyone, and if you'd like to speak, please use the raised hand feature on Zoom.
A
B
Yes, so we had, kind of like, a release. It was quite something to get it out the door. The big warning here is that 0.3.9 does not support upgrading 1.18 clusters to 1.19.0. We identified a bug there. So, if you have components in your control plane, you have the API server, you have the controller manager, and all the other components.
B
What happened during an upgrade was that the API server stayed on one of the 1.18 nodes, the controller manager was elected on the new 1.19.0 node, and the new APIs for certificate signing had moved to v1. So the controller manager was asking for v1 certificates, but the API server responded that it didn't have that API version, and so the upgrade stalled and KCP couldn't move forward.
So what ended up in the release was that we now disallow upgrading entirely to 1.19.0.
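For context, here is a minimal sketch (not from the meeting) of how one could check for the API-group mismatch being described, using client-go's discovery API to ask whether the API server serves certificates.k8s.io/v1. The surrounding program and the kubeconfig path are assumptions for illustration only.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig pointing at the workload cluster's API server.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// During a 1.18 -> 1.19 rolling upgrade, an API server still running
	// 1.18 does not serve certificates.k8s.io/v1, while a 1.19 controller
	// manager requests v1 CSRs; that mismatch is what stalled KCP.
	if _, err := dc.ServerResourcesForGroupVersion("certificates.k8s.io/v1"); err != nil {
		fmt.Println("certificates.k8s.io/v1 is not served yet:", err)
		return
	}
	fmt.Println("certificates.k8s.io/v1 is served by this API server")
}
```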
B
Yes, that is the current timeline, but yeah, we should definitely have the fix, and people should be able to upgrade at that point.
C
A
Okay, thank you. And thank you to everyone who huddled to debug and investigate what was going on and get to the root cause of this problem.
A
Cool, so we had said that 0.3.9 was going to be the last, you know, big 0.3 release. What's the plan for, you know, the big feature PRs that are currently in the queue? Are we trying to get those into 0.3.10, or are we moving everything to 0.4?
B
There are a few things in the backlog there for 0.3.10, although they have already been discussed and, you know, they're good additions. The code is there, it just needs some fixing, so yeah, we can definitely merge those; they're pretty small. It's a matter of adding conditions here and there. So yeah, 0.3.10, hopefully it's going to be our last release, but we'll probably release more.
B
If, you know, there are other bugs that we find along the way. So just today, this morning, I created the release-0.3 branch and moved the book to point to the release-0.3 branch as well. So now, to actually make a release, we will merge the bug fixes into the main branch, then backport them, and then release. So it's a little bit more work, but yeah, it should work just fine.
A
B
Yeah, they won't be backported. Yeah, great, okay. I think there was one exception that we mentioned, which was KCP remediation, but that depends on how it goes and how we feel about the changes to backport; if they're too invasive we might not want to. But yeah, that was the only exception that was raised, I think, last week or two weeks ago.
A
B
One was the runtime metrics, which have been completely removed from alpha three, given that, one, they were not really useful, and two, they caused high memory usage because of the cardinality of the sets involved: the more clusters you had, the more metrics, and I think it was also actually based on machine names, which is a pretty large cardinality set. And the other one was a bug.
B
A memory leak in KCP: the etcd connections were not being properly closed. So definitely upgrade to 0.3.9 as soon as possible, if you can.
A
Great, okay, thank you. All right, so moving on to discussion topics. James, you have the first one.
D
Hi, yeah. So Kalya presented the CAPI Windows proposal last week. Thanks to everybody who commented on it; I just wanted to give a reminder to take another look. I didn't see any blocking issues or concerns, so I was planning on opening up the PR next week, but please take a look if you get a chance.
A
Sounds great, thank you. Any questions about the Windows proposal?
A
Nope. All right, so please take a look when you have a chance. Oh, the next one's mine, actually. So I was just going to bring up: I saw, Vince, you opened a PR this morning to write up, or kind of document, our compatibility effort in terms of what breaking changes are acceptable and not currently in the project. And I was just going to propose, and I think he actually documented that, but I think in general we should strive to always mark breaking changes as, you know, the warning type of commits, so they're clearly documented in the release notes, even if we accept those breaking changes. Because, you know, we're in alpha and some areas of the project are going to have breaking changes, especially if we don't expect them to impact anyone because no one is using that particular function. I think, as reviewers, we should still always make sure that if the API diff job is failing, the change comes with a warning commit message.
B
You've got my plus one. I think I covered this in a few places. The only exception I think that was in place was for CAPD, but all the others, yeah, it should, because mostly that's what we have been doing today. But if we want to change that as well, I'm completely fine, as long as it's documented.
A
Cool, any other questions? I don't see anyone raising their hand, so I'm going to assume everyone agrees.
A
C
Thank you. So, okay, basically after investigating the problem around 1.19 upgrades, it was really evident that we are lacking some functionality in our end-to-end tests, and one of the most important pieces is the ability for our end-to-end tests to collect logs from the workload clusters. There was already an old issue or discussion around this, and I opened a PR a few hours ago in order to extend the end-to-end test framework to support a pluggable log collector. There is an implementation of the log collector that works with CAPD, but every provider should be able to inject their own log collector, so at the end of each end-to-end test the log collector is triggered and you have the chance to implement your own mechanism, your provider-specific mechanism, to connect to the machine and fetch files. If you have a chance, please take a look, because this is something which is part of the framework, and everyone who is using the framework could benefit from it.
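For context, a rough Go sketch of what such a pluggable log collector hook could look like. This is an illustration based on the discussion above, not the actual interface from the PR; the names ClusterLogCollector and CollectMachineLog are assumptions.

```go
package framework

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ClusterLogCollector is a hypothetical interface each provider could
// implement so the e2e framework can fetch logs from the machines of a
// workload cluster at the end of a test run.
type ClusterLogCollector interface {
	// CollectMachineLog connects to the given machine (for example over
	// SSH, or via "docker cp" for the Docker provider) and writes its
	// logs under outputPath.
	CollectMachineLog(ctx context.Context, managementClusterClient client.Client, m *clusterv1.Machine, outputPath string) error
}
```

A provider-specific implementation would then be handed to the framework when the e2e suite is set up, so the same tests can gather logs regardless of the infrastructure.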
E
Yeah, Fabrizio, I'm very interested in this, as in CAPV we are setting up a plan for our end-to-end tests and we're trying to increase our coverage, so yeah, happy to help on this.
A
F
Thanks. Yeah, the first one, I just wanted to say thanks to Beto for making CAPI easier to use. That's, for me personally, a really great improvement, so I just want to shout that out, and everybody who reviewed that PR and gave feedback. Yeah, it's really nice. The other thing, I just wanted to mention this again, or start a conversation, if others are interested.
F
We've talked about in-place upgrades before at times, so yeah, I just wanted to, you know, raise the balloon, or whatever, yet again.
F
This time I just wanted to mention, you know, possible motivations, and wanted to get some feedback. For example, if you have workloads with in-memory state that you want to preserve, you want to avoid draining. That might be a motivation to do, you know, in-place upgrades; that is, where the machine isn't removed and the node isn't even drained.
F
I know that upgrading without draining is not even officially supported, but again, I just wanted to see if that's a valid motivation. If others have this use case, I'd love to talk about it and see what can or should change. Then the other motivation might be, you know, if you have a larger cluster and you want to patch a CVE.
F
Ordinarily, maybe the upgrade process takes a long time, but you want to be able to patch quickly. That might be another reason. I'd also love to hear other motivations or use cases. So that's basically it; if you want to sync offline in Slack, that would be great too. And that's it, thanks.
G
Yeah, hi. So in the Cluster API book it states that machines are immutable, so I think you're actually going two steps here: you're saying not only am I going to change the machine, but I'm even going to avoid draining it. I'm just interested whether you considered, or what anyone thinks of, actually changing just the first step, which is allowing mutability on machines.
F
Yeah, I mean, there might also be ways to, you know, work around that, where you actually end up changing the machine but maybe preserving the node. I just want to see if others are interested and think these are valid use cases that I've listed, or have other use cases; I think it's great to continue the conversation.
F
You know, notwithstanding the assumptions that we've been making thus far. And yeah, I agree, machines are immutable, and we've been working under that assumption; all our controllers work under that assumption.
F
But if there are these use cases, it would be great to just think about what, or how, we can support them. Maybe, if machines do remain immutable, there are alternatives like live pod migration; maybe that's a viable alternative. And yeah, I'm also interested in, I guess, others' experiences with patching large numbers of nodes for CVEs. So I'm not trying to make any claims, you know, about where the project should go or what changes we should make. I really just want to get a conversation started, and that's it.
I
Yeah, we do something similar in OpenShift, not exactly what you're proposing, though.
D
I
You talked about updating different elements on disk without rebooting. I guess technically it's feasible, but in our model the system configuration management is separated from the image deployment. So, you know, we layer RHEL or Fedora CoreOS, and then that gets its configuration from a centralized system.
I
That's the model we have. But as far as, like, do I think it's a good idea? Not really, in this case, to just update, like, a kubelet version outside of all your configuration.
I
As for breaking workloads that are running, pod disruption budgets are your answer there. That's why we introduced drain to the machine controller, and that gives users and administrators a lot of flexibility over how a machine actually gets replaced. So you can set a PDB of zero disruptions if it's, like, a batch job; when that thing stops, drain will complete and then the machine will go away.
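A minimal sketch (not from the meeting; the names and namespace are hypothetical) of the pattern described above: a PodDisruptionBudget that allows zero voluntary disruptions, so a drain triggered by the machine controller blocks until the selected pods finish on their own.

```go
package main

import (
	"context"
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	zero := intstr.FromInt(0)
	pdb := &policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "batch-job-pdb", Namespace: "default"},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			// No voluntary evictions are allowed while pods matching this
			// selector are running, so a node drain waits for the job to end.
			MaxUnavailable: &zero,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "my-batch-job"},
			},
		},
	}

	created, err := cs.PolicyV1beta1().PodDisruptionBudgets("default").
		Create(context.TODO(), pdb, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created PDB:", created.Name)
}
```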
K
Hello, thank you for bringing this up. Actually, I work for AT&T and we have a very similar use case. For example, we use Metal3 quite a bit, we have a lot of bare metal clusters, and not only do we want to maybe change, let's say, the kubelet parameters, but oftentimes we want to do more general host configuration.
K
So we were actually thinking of a slightly different approach to doing this, and I have a proposal in the works right now that I'll definitely share with you guys. But the general idea is that, while machines are immutable, recently there was discussion about making in-place changes possible at the machine template level.
K
The machine template can actually contain, for example, certain fields that can be mutated by the user. The question would be then, if, let's say, you had a particular host configuration that you wanted mutated, and that was part of the machine template and the user changed it, you would need some kind of mechanism to, first of all, map that machine template to the actual machines that were created from it, and then you would need some kind of an executor.
K
For example, we had a POC where we were using something called the Ansible operator, which could run various Ansible scripts, just as an example. So this executor, for example, would be told, hey, the following template has changed and these were the machines that were based on it; go ahead and make those changes in place on those machines. The executor would do its own magic; it's completely outside of the Cluster API or Metal3 world.
K
L
I
Yes, yes, we have a similar system in OpenShift; we call it the machine config operator, machine config daemon, and the naming runs on from there.
I
B
Yeah, just to provide some background: this conversation came up in the past, as Daniel mentioned. I'm glad that you mentioned we're not trying to necessarily change our current project goals, but it seems like, you know, when we talked about this in the past, there are definitely some ways to do this outside of Cluster API. It doesn't have to be part of our API and support matrix, and yeah, maybe we should just document it like that.
B
This question comes up probably every six months, but yeah, maybe we should definitely put up more documentation, like: hey, if you have these use cases, there are these tools out there, like the machine config one, or the one that Arvinder was proposing, that could and should live outside of the Cluster API project, so that we don't impact our roadmap to beta and GA, but that could fulfill other people's needs. And then we can think more about how we make it more approachable to do these mutable changes.
B
Maybe it's just a matter of thinking through the changes to Cluster API with an external controller, and things like that. And I also wanted to mention there's also the kubeadm operator effort as well that, like, should do this. So if folks want to be involved: I think Fabrizio and Lubomir are interested in pushing that forward at some point next year. But yeah.
F
So thanks, Vince. Yeah, I guess, yes, we have had this conversation before, and yes, I've heard that there are ways to do this outside of CAPI. But, and forgive me if I'm being dense here, I haven't really figured out how I would, for example, patch, you know, the kubelet version.
F
Or rather, how I would patch that on the machine without also upgrading, you know, the CAPI resources. But anyway, I'm happy to take that offline. If we can write some documentation, I would be very happy to drive that, and, you know, if others have ideas, then I'm happy to put those down on paper, as it were.
A
I
I've brought this up previously, but my whole thing is community versus project. So there's the Cluster API project, that's the code, and there's the community, that's all the people here and all the people that are using it.
G
I
So I think, like, we should try to support people that want to do these other things inside of this kind of Cluster API world, and we should have a place for people that are interested in those things to collaborate on them, but it doesn't necessarily have to be part of what lives in GitHub at the moment.
I
H
Yeah, I think that's a really good point, Michael. I think that we don't really have a single established place for people to come together and coordinate on use cases of Cluster API plus other things, something that's more than, or different from, this meeting, or the GitHub repositories, or the SIG Cluster Lifecycle Google group, or the discuss forum. I don't know what it would be, given that those tend to be the tools that we have at our disposal, but yeah, it would be nice to see some experiments and documentation.
H
I know how Cluster API is designed, and I'm interested in changing some file on my VMs, or what Arvinder was mentioning with sysctls, and doing those live, and trying to put together components that are distinct but can work together with Cluster API to do these sorts of things. So yeah, if there's a group of folks that want to try and get together and brainstorm how to collaborate, and then start to pull in people from within this community and potentially outside, that would be great.
A
M
So I just wanted to offer the observation that, from my point of view and experience, it seems like upgrade is less of an intrinsic Cluster API gesture and more of a sort of opinionated way of using a bunch of Cluster API primitives in certain ways, according to certain orders, and having certain failure scenarios and dependencies; it's that kind of a thing. Trying to figure out a single upgrade that would work for all customers and all environments would seem really hard, and I actually don't know enough about cluster upgrade.
M
I guess I just mean all the stuff that we've been discussing, a bunch of different varieties of upgrades: upgrade the Kubernetes version, maybe upgrade the underlying kernel or OS, upgrade some other configuration management thing on a particular node or pool of nodes. And then we've also been talking about how that impacts the idea that machines are immutable, and so that makes things slightly complicated. Like Daniel said, I don't even know how I would do an out-of-band...
M
F
Sort of, yeah. It was just that the resource would no longer actually hold the right, I guess, desired state. I don't think the controller would actually notice that the version has changed on the corresponding node.
M
B
I'm sorry, I see exactly what you're saying now, and I think you've got this exactly correct. We have had the same exact problem in defining this behavior in KubeadmControlPlane, because we have a field that's called upgradeAfter, but it's a little bit misleading, because it's actually just saying the spec is not up to date and I want to make it kind of up to date and matching. So yeah, I think that's a good observation; it might be good to actually add this concept to the book.
M
Yeah, if you're being super academic about it, I think there's an argument that cluster upgrade is, like, out of scope for Cluster API. Cluster API exposes primitives that you can compose together to do things like, quote unquote, upgrade, but that's really, like, a CLI or orchestration responsibility, and there's no upgrade verb that's part of the Cluster API spec.
G
N
This came up recently in CAPA, where people are working on EKS support, like being able to run EKS clusters through CAPI, and there we have this problem that node images get removed by AWS when they, you know, I think they rev, like, the OS, you know, when they make OS patches.
N
A
All right, thanks for bringing this up, Daniel. Definitely lots of people with lots of ideas, so it's good to have the conversation. I don't see any other hands raised, so I think that's it on this topic. Anyone have anything else that they want to add, any questions?
A
H
Thanks. So I did want to circle back to something we talked about a couple of meetings ago around the next minor version of Cluster API, v1alpha4 for the API version, and how long we'll keep main open for 0.3 changes, and when we will open it up for breaking changes.
H
A
I'll raise my hand. I think we should at least wait a week after the release to, you know, give us time to uncover any issues or critical problems, if any.
H
And I know that for Kubernetes, for example, there is a feature proposal time period where the community can propose what features they'd like to have included in the upcoming milestone, and then there's a feature freeze date, and, for the Kubernetes project, if the SIGs that are appropriate for a given feature haven't approved it, then it won't necessarily make the milestone; it might get kicked out. So we don't exactly have that, but we do, or we could, put together something around getting proposals approved by some cut-off date.
H
And if you miss it, then we've got to wait for the next release. Maybe we should try and put together some sort of lightweight release cycle process.
B
I think we briefly discussed it during the backlog grooming, I think, and you mentioned we wanted to have the whole month of September, pretty much, to start understanding what the roadmap is and then kind of propose a roadmap. I don't know if all the proposals can go in and get approved; hopefully they can in just one month. But given the history, we took probably six-plus weeks sometimes to review things, so yeah, we can.
B
How about we come up with a calendar, and then maybe we'll go probably into the first two weeks of August, sorry, October, and we'll see if that works for everybody.
H
A
All right, cool. So before we end, I didn't do this at the beginning, but if anyone here is new and would like to introduce themselves, just say hi and tell us a bit about, you know, why you joined this meeting, what brings you to Cluster API. I'll leave a few minutes for that, so go ahead and just unmute yourself.
L
J
Hey there, yeah. So, oh sorry, can you see me better now? Yeah, so I'm Scott Rigby. I am just interested generally in what Cluster API can do.
J
I've been, you know, maintaining clusters, at least in a past role, and relied pretty heavily on kops as well as other methods, and just what Cluster API promises, it just seems like the right way to go. Also, in other areas of my life, I'm a Helm maintainer, and I'm just generally interested in this space. So I thought I would join. Nice to meet you all.
A
All right, I'm going to take that as a no. Cool, so I think that's it for today. Thanks everyone for joining, see you all on Slack and GitHub, and have a great rest of your Wednesday.