From YouTube: 20200520 Cluster API Office Hours
A
Hey, hello, and welcome to the Cluster API meeting. Today is Wednesday. We are a project of SIG Cluster Lifecycle. There is some meeting etiquette: if you want to speak up, please use the raise-hand feature you can find on the participant list, or if you have any other comments you can put them in chat. I'll post a link to the meeting notes; add yourself to the attendee list.
A
All right, it seems nobody is unmuting, so we can keep going with PSAs. The first PSA that I put in here is the 0.3.6 release. This comes with a lot of bug fixes and improvements. There is one breaking change that I wanted to outline, and that's the MachineHealthCheck spec selector field, which was allowed to be empty before, although that didn't actually work as expected, because our selector util methods internally don't allow you to use an empty selector to match on everything. This is mostly to make sure users don't shoot themselves in the foot: if you have a MachineHealthCheck with an empty selector, the API will now reject that MachineHealthCheck.
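As a rough illustration of the change being described (a sketch of the validation idea, not the exact code that shipped in the release), an empty selector is now rejected:

```go
package mhc

import (
	"errors"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// validateSelector is a minimal sketch of the idea behind the v0.3.6 breaking
// change: a MachineHealthCheck whose spec.selector is empty is rejected by the
// API instead of being silently accepted. The helper name is hypothetical.
func validateSelector(selector metav1.LabelSelector) error {
	if len(selector.MatchLabels) == 0 && len(selector.MatchExpressions) == 0 {
		return errors.New("spec.selector must not be empty")
	}
	return nil
}
```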
There are also some new features: Jason contributed utilities for cluster resource pause handling. This is mostly for providers that want to adhere to the contract of having the cluster be paused, and also have the reconcilers for your infrastructure provider be paused as well, so take a look at 2877. We also have a bunch of bug fixes, documentation, and a lot of focus on the end-to-end tests.
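For context on the pause contract mentioned above, here is a minimal sketch of the check an infrastructure reconciler is expected to make, assuming the cluster.x-k8s.io/paused annotation and the Cluster's spec.paused field. Cluster API ships its own utilities for this, so treat this only as an illustration of the contract:

```go
package reconcilers

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

// pausedAnnotation is the annotation Cluster API uses to pause reconciliation
// of an individual object.
const pausedAnnotation = "cluster.x-k8s.io/paused"

// isPaused is a hand-rolled sketch of the pause check an infrastructure
// reconciler should perform before doing any work.
func isPaused(cluster *clusterv1.Cluster, obj metav1.Object) bool {
	// Skip reconciliation when the owning Cluster is paused...
	if cluster != nil && cluster.Spec.Paused {
		return true
	}
	// ...or when the object itself carries the paused annotation.
	_, found := obj.GetAnnotations()[pausedAnnotation]
	return found
}
```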
A
Any concerns? All right, I think we can move on. We don't have any demos or POCs. We have a space for general questions, or any kind of question if you're implementing a new provider; this is a good time to ask, and we can try to help. So, are there any questions you would like to ask us?
A
Going once, twice, three times. All right, Naadir, why don't you kick off with the bootstrap reporting and failure detection.
C
Hi, yep, I'll keep it short. So there are often issues, say, when a machine comes up and it doesn't join a cluster, or something goes wrong with it, and how do you find out what's happened with it? So at the moment I'm looking for use cases. If you go into the doc, I've tried to draw out the landscape of possibilities around what kind of methods we can use, but we really need the use cases to be able to find out which one to choose.
C
Right now we only really have the happy case: kubeadm succeeds, it joins the cluster, the node appears, and we can mark the machine as ready. If anything else happens, if kubeadm doesn't succeed, then, depending on where it failed, it might be in a half-ready state where the node might join but containers aren't running, and it might still have timed out and failed otherwise, or we have nothing; it's just sitting there provisioning, and that's it.
C
We do have health checks, so the health check can delete the machine after the timeout, but it depends whether or not people are interested in root cause analysis. It's a really good question.
E
Yeah, just to add: the only thing that we can do, that we do in our case, is shell onto the host. So, in the case of an EC2 instance, we have to SSH onto it and cat logs and start doing that. I think the really interesting thing here is perhaps being able to declare something terminal: if there is something happening in bootstrap that's just going to happen every time, it would be great to know.
E
But with the machine health check, we've had one-off failures too, where bootstrap failed, deleting the machine got rid of it, and then the machine deployment controller, or I guess the machine set controller, replaced it, and the new one joined just fine. I think identifying the terminal conditions is really the interesting part.
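One place a terminal signal already exists is the failure fields on the Machine status. A minimal sketch of separating a terminal failure from a machine that is merely still provisioning, assuming the v1alpha3 field names, and illustrating the idea being discussed rather than any agreed design:

```go
package health

import clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"

// isTerminalFailure sketches one way to tell a permanently failed Machine
// apart from one that is simply still provisioning.
func isTerminalFailure(m *clusterv1.Machine) (bool, string) {
	// FailureReason/FailureMessage are meant for errors that will not resolve
	// on their own, e.g. an unrecoverable bootstrap problem.
	if m.Status.FailureReason != nil || m.Status.FailureMessage != nil {
		msg := ""
		if m.Status.FailureMessage != nil {
			msg = *m.Status.FailureMessage
		}
		return true, msg
	}
	return false, ""
}
```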
A
All right. And the cluster resource set?
F
Thanks, Vince. I put this in the RFC section because we were doing some integration work with this proposal. For those who aren't familiar with it, we are trying to come up with a server-side approach for declaring what additional things you'd like to install on your clusters when they're created; a typical example would be CNI or storage classes.
F
What we found is that we could potentially drive this per cluster instead of saying "I'm going to create a thing that uses a label selector to then match against new clusters." So I'm wondering if people have needs for the functionality I just described, and whether it makes sense to try and do it with label selectors, or if people were thinking that at a per-cluster level it would be nice to specify what things to install, or some combination of the two.
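To make the label-selector option concrete, here is a small sketch of how a controller could decide which Clusters a resource set applies to, using standard apimachinery label-selector matching. The ClusterResourceSet itself was still a proposal at this point, so nothing below is the final API; the function and its shape are illustrative only:

```go
package clusterresourceset

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

// matchingClusters sketches the selector-based option: given the label
// selector from a hypothetical ClusterResourceSet, return the Clusters whose
// labels match it.
func matchingClusters(selector metav1.LabelSelector, clusters []clusterv1.Cluster) ([]clusterv1.Cluster, error) {
	sel, err := metav1.LabelSelectorAsSelector(&selector)
	if err != nil {
		return nil, err
	}
	var matched []clusterv1.Cluster
	for _, c := range clusters {
		// A cluster is selected when its labels satisfy the selector,
		// e.g. environment=dev in the per-environment pattern mentioned later.
		if sel.Matches(labels.Set(c.Labels)) {
			matched = append(matched, c)
		}
	}
	return matched, nil
}
```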
G
A question on the selector-and-label approach: does that mean there is a single cluster resource set that might apply to, you know, some set of clusters, but then later maybe that cluster resource set gets removed, and of course all the changes have been applied already, but you no longer know what the source was?
F
In
the
proposal
we
describe
a
way
to
track
the
status
of
what's
supplied
to
what
so
I
I
think,
there's
there's
ways
to
do
that.
I'm
hoping
we
can
maybe
focus
just
on
the
like
the
initial
ux
around.
Is
it?
F
Do
I
want
to
specify
this
per
cluster,
or
do
I
want
to
have
this
applied
to
multiple
clusters
and
like
what
we
discovered
was
that
the
design
that
we
came
up
with
makes
sense
from
a
an
api
standpoint,
but
it's
not
necessarily
something
that
translates
directly
to
a
cli
or
use
like
a
web
ui
that
is
driving
cluster
creation.
E
Yeah, so this pattern is actually something that we use pretty heavily internally. In the New Relic use case we pivot all of our clusters, so we have to use a different CRD for inventorying them all, but we actually use label selectors to create applications in Argo CD (not totally relevant right now, but the pattern is still the same), where we match on environment, or an environment label, things like that, to determine how to roll things out. I think as a pattern it works pretty well.
E
The thing that I don't know is in scope for this is: is this just for the first time? How do I upgrade my CNI if I'm using a cluster resource set? Is it a one-and-done? Those kinds of questions become interesting.
F
Yeah, let me answer that real quick and then we'll go over to the other hands. The proposal in its current form is a one-time apply, but it describes that full synchronization could be a future enhancement. Specifically around "how do I upgrade CNI?": ideally, you would use this functionality either to get your CNI applied plus something that can manage it, or to just apply something that can manage a CNI and get it installed and managed that way. So, as I said before about this being a bridge, it's really not meant to replace lifecycle management of things. Ideally it's just enough to get the basic things installed, and then they can be managed by other operators.
D
I'm a little torn. Traditionally I like labels for feature additions because it forces you to go through an iterative process. The one part I don't like about this particular use case is that a lot of the things we're talking about are non-fungible properties of a cluster, and labels are fungible properties of a cluster.
D
So
that's
kind
of
the
thing.
That's
a
little
different
right,
so
sure
you
should
be
able
to
update
the
contents,
but
label
selector
implies
like
I
want
to
switch
a
cluster
from
dev
to
prod
and
that
could
be
a
bunch
of
different
changes
or
incantations
and
that's
a
little
awkward,
the
nice
thing
about
a
resource
so
long.
I
think
the
resource
constraint
per
cluster
is
nice,
so
long
as
they
can
point
to
a
common
location
very
similar
to
how
you
would
do
it
with
labels.
D
So
if
I
had
a
a
common
secret
for
dev
or
if
I
come
in
secret
for
prod,
that
basically
is
my
yaml
manifest
that
I
want
to
apply.
I
think
that
solves
the
problem.
I
don't
know,
that's
that's
my
hot
take
opinion,
so
I'm
a
little
torn
on
labels.
I
think
the
resource,
I'm
leaning
towards
resources.
H
Yeah, I kind of agree with Tim. I think this reminds me of the Service and Endpoints analogy, where basically you would need to maintain a mapping between the resources and the cluster resource sets to actually be able to report status accurately for all of the clusters that you are selecting with your selector.
H
This
is
for
the
selector
and
like
if
we're
acting,
also
on
per
cluster
basis.
My
question
would
be
how
we
would
operate
this
at
scale
when
we
have
multiple
clusters
to
manage.
F
Yeah
all
good
points
and
just
circling
back
to
the
the
cli
aspect
like
what
we
were
looking
at
is
basically
something
like
cluster
cuddle.
Config,
cluster
or
cluster
total
create
cluster,
and
would
that
create
one
cluster
resource
set
per
cluster?
F
Or
would
we
modify
the
cluster
spec
as
part
of
this
work
to
add
in
a
list
of
resources
that
you
could
get
applied
and
so
or
something
else,
and
when
we
were
looking
at
well?
Let's
say
we
create
one
cluster
resource
set
per
cluster.
That's
not
particularly
efficient!
F
Anyway, I appreciate the feedback and comments. If you have time, please take a look at the linked proposal. I don't see any reason not to use the label selectors, but we may do some tweaking to try and make it slightly easier to do a per-cluster setup.
C
Yeah, so this came out of an issue someone created on CAPA. One thing is, with the way AWS works, we create a VPC by default for clusters in CAPA, and then, if you install any of the various load balancer options for services in CAPA, they're going to create resources in that VPC.
C
And
that
means
when
you
go
and
delete
the
cluster,
you
can't,
because
all
all
resources
that
are
consuming
the
epc
have
to
be
deleted
beforehand.
So
one
of
the
approaches
are,
is
we
could
add
these
for
each?
We
could
add
logic
in
kafa
that
understands
the
services
which
are
being
created
by
each
of
these
different
controllers.
C
But
that
seems
not
great
because
we're
having
to
copy
code,
which
is
essentially
not
in
our
control
or
could
we
do
another
approach
if
other
infrastructure
providers
are
similarly
constrained,
where
we
delete
all
the
services
which
are
a
type
of
load
balancer
when
you
on
the
workload
cluster,
wait
for
them
to
go
away
and
then
finally
delete
the
delete,
the
cluster
itself.
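A minimal sketch of that second approach, assuming a controller-runtime client pointed at the workload cluster; this is illustrative only and is not code that exists in CAPA:

```go
package cleanup

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// deleteLoadBalancerServices sketches the pre-deletion step being floated:
// remove every Service of type LoadBalancer from the workload cluster so the
// cloud controller tears down the backing load balancers before the VPC goes.
func deleteLoadBalancerServices(ctx context.Context, workload client.Client) error {
	var services corev1.ServiceList
	if err := workload.List(ctx, &services); err != nil {
		return err
	}
	for i := range services.Items {
		svc := &services.Items[i]
		if svc.Spec.Type != corev1.ServiceTypeLoadBalancer {
			continue
		}
		// The caller would then wait for these Services (and their cloud
		// resources) to disappear before deleting the cluster infrastructure.
		if err := workload.Delete(ctx, svc); err != nil && !apierrors.IsNotFound(err) {
			return err
		}
	}
	return nil
}
```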
A
One
us
also
thinking
like
I
mean
a
user-
could
create
anything
like
even
security
tool,
but
that
would
block
deletion
right
in
a
vpc.
C
Yeah,
I
mean
there's
not
much.
We
can
do
around
external
resources
to
the
cluster,
but
maybe
like
that
seems
this
seems
to
be
the
common
one
and
I'm
not
sure
if
other
infrastructure
providers
are
in
the
same
position,
I'm
guessing
not
actually
the
way
you
think
about
how
azure
works,
for
example.
But
if
there
is
a
common
enough
use
case,
should
we
do
it
in
core
cluster
api.
A
Makes sense. I had another question, but Michael, you have your hand raised.
B
I think for me this goes back to: what if the cluster is unhealthy in some way, and that's why you're deleting it? Then you're always going to need that extra service that's running, that's going to go and clean up these things, and if you're always going to need that anyway, then I think it makes sense to just invest in that piece.
H
Yeah, I just wanted to say that this reminds me of the days when we didn't have finalizers for Services of type LoadBalancer, and basically users were building custom controllers that added finalizers and ensured that everything was removed on the infrastructure side. So we might be able to replicate this for cluster API and Services of type LoadBalancer.
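The pattern being recalled amounts to the following sketch: a controller puts its own finalizer on the object so deletion blocks until the external resource is confirmed gone. The finalizer name below is made up for illustration:

```go
package finalizers

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// cleanupFinalizer is a hypothetical finalizer name, for illustration only.
const cleanupFinalizer = "example.x-k8s.io/loadbalancer-cleanup"

// ensureFinalizer adds the finalizer so deletion of the object cannot complete
// until the controller has confirmed the external resource is gone.
func ensureFinalizer(obj metav1.Object) {
	for _, f := range obj.GetFinalizers() {
		if f == cleanupFinalizer {
			return
		}
	}
	obj.SetFinalizers(append(obj.GetFinalizers(), cleanupFinalizer))
}

// removeFinalizer is called only after cleanup succeeds, letting deletion finish.
func removeFinalizer(obj metav1.Object) {
	var kept []string
	for _, f := range obj.GetFinalizers() {
		if f != cleanupFinalizer {
			kept = append(kept, f)
		}
	}
	obj.SetFinalizers(kept)
}
```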
C
Yeah. Well, yes, in the sense that that's how we support classic ELBs, because we already set up ELBs for the API server, and therefore, when we tear down the ones which are tagged appropriately, we do actually take care of a Service of type LoadBalancer when it's ELB v1. But there's loads more; more have been created since then.
C
So NLB, ALB: CAPA itself technically doesn't care about those resources, but people do install that particular controller and they do create them, and those are Kubernetes-hosted projects, so yeah, we could write that code into CAPA to do a similar approach for those APIs.
D
My question was: is it possible to do an auto-tagging mechanism, to basically tag anything inside of this VPC that's been created by this cluster? I mean, like a uniform tagger and cleaner, more of a uniform tagger, so that cleanup occurs.
C
No,
you
have
to
do
it
for
each
api.
You
have
to
make
those
requests
and
make
you
have
to
do
that,
discovery
and
deletion.
It
is
fairly
uniform
across
those
apis,
but
it
is
another
api
that
we're
not
necessarily
we
don't
necessarily
care
about
in
terms
of
like
our
core
competency
that
we're
then
having
to
call
into.
C
I
Would you then say, if you couldn't delete the VPC because there were dependent resources within it keeping it alive, would you then just forgo deleting it? What would you end up with?
C
Yeah, that's what happens right now. You end up with an error saying the VPC can't be deleted because of dependent resources.
C
We do retry right now, so it will just keep retrying that.
I
In Azure we often see folks going in and adding resources to some of the infrastructure that we've already built, so unmanaged items that, you know, we haven't tagged, but also building dependency webs where it fails on deletion. Having a good way of understanding "hey, this is how we're going to behave" would help: we just don't delete things that don't have tags, right? Or at least we say we don't; we try to hold to that. But it would be a good thing to say: hey, if we can't delete this, we'll try a few times and maybe just dead-letter it at some point. I don't know if others feel the same way.
J
Yeah, so I think that we should actually be handling this in a Kubernetes-native way. So, rather than looking at the infrastructure side, see how you can move the dependencies into the Kubernetes cluster. All of the major clouds have an operator that will let you create dependencies in AWS or Azure or GCE or whatever it is, and then we just apply a cluster API finalizer, so to speak, on those objects.
J
Then we can delete anything with those tags and let whatever operates those objects handle the back-end deletion, and it becomes the same solution, whether it's a native load balancer that the cloud provider will handle or a DynamoDB table that the AWS service operator will handle. You have a consistent approach.
K
Jason and Andrew, I definitely like the idea of what Moshe is trending towards. The only thing that I worry about there is that this is sitting in the workload cluster rather than in the management cluster.
J
I
know
the
list
all
is
a
little
bit
tricky
with
with
crds
but
iterate
over
every
single
object
in
that
cluster,
and
then
whatever
has
has
it
has
a
tag,
do
a
delete
on
it.
Wait
for
that
delete
to
succeed
and
then
move
on.
J
So
that
you
just
make
a
a
condition
of
using
cluster
apis
that
whatever
resources
you're
going
to
create
that
you
need
to
tag
them,
whether
that's
that's
something
that's
done
at
each
individual
operator
or
it's
a
rule
that
you
create
using
an
admission
hook.
That
says,
whenever
I
create
these,
these
these
objects
annotate
them.
J
E
Yeah, this is a tough one from my perspective, because, like, is there an expectation that the VPC that is created for cluster API be solely used for cluster API things? So if I start to build this big ecosystem inside the VPC, is it an expectation that when I want to get rid of my cluster, I want to get rid of all those other things as well?
E
I
think,
in
the
case
of
load
balancer
like
there's
a
little
bit
of
simplicity
because
like
for
aws
load,
balancer
is
a
thing
that
is
understood
so
like
if
you
have
a
service
that
had
like
an
owner
reference
pointing
to
something
that
would
be
in
like
the
gc
chain
for
your
kubernetes
cluster,
like
that,
would
result
in
the
lb
being
deleted,
but
starting
to
bring
in
like
a
data
to
be
a
service
operator.
E
It's
like,
I
don't
know
that
that
is
within
the
scope
of
cluster
api,
like
to
my
view,
to
start
cleaning
up
all
the
other
bits
of
my
aws
infrastructure
like
if,
if
I'm
trying
to
build
an
environment
that
is
larger
than
the
kubernetes
cluster,
like
that's
an
architectural
decision
that
I've
made-
and
I
don't
know
that
my
kubernetes
cluster
is
the
core
of
that
environment,
and
I
don't
know
if
I
would
want
that
cluster
api
controller
to
be
in
charge
of
all
of
those
resources.
E
At
the
end
of
the
day
like
like
right
now,
if
I
use
aso
aws
service
operator
and
I
have
a
pivoted
cluster
and
I
make
a
dynamodb
table
and
I
set
an
owner
reference
on
that
dynamodb
table
to
my
cluster
object,
I
will
have
the
functionality
that
we're
talking
about
just
native
to
kubernetes.
So
I
have.
I
have
some
hesitation
about
putting
code
to
do
this
in
the
cluster
api.
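The owner-reference arrangement being described can be sketched roughly as follows, assuming the dependency is itself represented by a Kubernetes object in the same cluster as the Cluster resource (for example a CR created by a cloud service operator); kinds and names are illustrative:

```go
package ownership

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

// setClusterOwner sketches the garbage-collection approach described above:
// give a dependent object (e.g. a DynamoDB table CR created by a service
// operator) an owner reference to the Cluster, so deleting the Cluster
// cascades to it through normal Kubernetes GC.
func setClusterOwner(obj metav1.Object, cluster *clusterv1.Cluster) {
	obj.SetOwnerReferences(append(obj.GetOwnerReferences(), metav1.OwnerReference{
		APIVersion: clusterv1.GroupVersion.String(),
		Kind:       "Cluster",
		Name:       cluster.Name,
		UID:        cluster.UID,
	}))
}
```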
D
Yes, yeah. So I do think that the CPI should have a generic feature, which it currently does not, if I'm remembering correctly, to tag resources. So for the expectation flow, at least from a CPI perspective (I'm trying to isolate the broader question): if you've provisioned a cluster through cluster API, and cluster API deployed the CPI, I think it's a reasonable expectation for the CPI to have a parameter.
D
You
specify
that's
says,
auto
tag,
anything
that
the
cpi
creates
with
this
tag
and
that
way
the
tagging
mechanism
flows
through
cluster
api
into
the
cpi.
I
think
that
expectation
is
is
fair,
so
that
way,
the
cleanup
for
that
particular
user
story
should
be
clean.
The
broader
user
story
of
generic
user
intervention-
I
think,
is
a
policy
question
more
so
than
a
technology
question,
and
I
do
agree
with
some
of
the
statements
that
were
made
earlier,
I
think
was
by.
D
I
don't
remember
who
said
it
but
like
if
our
user
creates
something
with
inside
of
a
vpc,
we
do
our
best
to
clean
up.
But
if
we
can't,
then
we
just
hold
or
be
punt
right
at
a
certain
point
and
we
get
into
a
maybe
a
a
state.
Where
is
can't
delete
or
something
like
that
or
can't
clean
up
so
that
it's
known
that
user
intervention
is
required.
C
Yeah,
so
just
to
be
clear,
cpi
does
tag
the
resources
with
the
cluster
name,
so
we
are
able
to
do
that.
It's
just.
We
have
no
code
for
network
load
balancer.
We
have
no
code
for
application
load
balancer.
We
just
don't
provision
those.
Today
we
could
add
them
as
like.
We
can
special
case
them.
Okay,
the
thing
is:
if
amazon
ads
network
loadouts
are
version
3
tomorrow
and
they
release
a
controller
for
that
and
someone
installs
that
as
a
possible
cpi
integration,
we
would
then
have
to
add
that
as
well
so.
D
The workload cluster, the CPI deletion: I think the question here is, if you delete a cluster and it has all these NLBs and ELBs created somehow through the CPI, and cluster API doesn't know how to clean that stuff up, there should be a way to tell the CPI. Deletion of the CPI itself, or of the object related to the CPI, should call the CCM, and the CCM should do its own cleanup.
B
Michael
yeah,
I
mean
it
seems
like
a
little
bit
far
off
topic,
but
also
we're
talking
about
vpcs,
there's
also
the
use
case
where
somebody
brings
their
own
vpc.
I
don't
know
if
that's
exactly
supported
today,
but
I
know
it's
something
that
we're
trying
to
tackle
and
so
trying
to
have
like
a
one
size
fits
all
solution.
A
Cases
justine
did
you
have
something
I
would.
I
would
like
to
table
this
discussion
for
now.
Given
we
have
like
other
topics,
I
would
like
to
give
space
to
others
as
well.
Can
we
continue
the
conversation
on
the
issue
and
but
yes,
you
need
to
have
something
to
add
like
let's.
H
No worries, we can take this offline.
A
Okay, perfect. Nir, do you want to discuss external remediation and the 3056 PR?
L
Yeah, thank you, Vince. I wanted to share a few things about the external remediation proposal. We didn't receive a lot of feedback; we had some comments, and I think we addressed most of them.
L
Let
let
us
know
how
do
you
you
want
to
make
a
progress
on
this
and
what
would
be
the
appropriate
next
step?
This
is
for
the
remediation,
the
external
remediation
and
as
for
the
pr,
I
think
that
joel
mentioned
this
as
well.
I
think
that
it's
not
the
simple
annotation
we
thought
in
the
previous
pr,
so
it
become
more.
A
Sounds
good,
I
think
we
can
sync
up
with
ben
and
others
that
are
working
on
3056
to
try
to
just
come
to
a
conclusion
like
what
we
want
to
do
longer
term
but
yeah.
I
think
if
others
have
thoughts
as
well.
Please
take
a
look
at
this.
This
document
and
spr
and
yeah
any
other
questions
for
near.
B
Yeah
just
kind
of
bumping
this
topic-
it's
been
up
there
for
a
couple
weeks
now
and
feedback
has
kind
of
tailed
off,
so
I
was
seeing
if
it's
appropriate
time
to
just
open
a
pr
or
what
do
we
think
the
next
step
should
be.
A
Yeah,
I
think,
if
you
think
it's
ready
to
go
to
a
pr
like
we
should
just
probably
move
it.
Personally,
I
think
I've
done
like
a
couple.
I
looked
at
it
at
a
couple
of
times.
I
would
take
a
look
today
and
like
yeah.
We
can
probably
move
it
to
a
pr
and
just
move.
A
There
and
try
to
finalize
it
a
little
more.
I
think
there
was
one
question
that
I
had
was
to
just
scope
it
down
to
just
pre-deletion
for
now,
like
just
to
keep
the
scope
tied
up
to
the
bare
minimum
that
we
need,
rather
than
try
to
yeah
like
do
everything
if
that
makes
sense.
B
Yeah, that makes sense.
A
Sounds good. Maybe we can re-title it a little bit, but yeah, I'll take a look later and give more feedback. Plus one to moving it to a PR.
A
I
don't
see
any
hand
raised
warren
for
templating
cluster
cattle.
M
Yeah, hi, thank you. This is my first time joining the meeting, and I joined it a bit out of the blue. I joined Packet two or three weeks ago, more or less, and we are refreshing our first attempt at a cluster API provider, which goes back to v1alpha1. So my first task was to move it to v1alpha3, and we are almost there.
M
There
is
a
blocker
in
our
side
to
get
the
multimaster
working
because
we
can
pre-allocate
ips
in
a
reliable
way
for
some
reason.
So
we
have
to
fix
that.
But
yeah
I
mean
the
the
current
like
yeah.
The
repository
is
there
and
we
successfully
are
able
to
spin
up
a
single
like
master,
provide
clusters
in
packet
and
we
are
working
on
documentation
right
now
and
we
are
also
starting
the
discussion
around
if
and
how
and
what
it
means
for
us
to
join
and
move
the
project
to
the
kubernetes
sig
organization.
M
So
I
think
this
is
a
follow-up
question
that
I'm
happy
to
leave
here.
I
don't
know
I
didn't
find
any
process
enough
written
forms
that
can
help
us
discuss
even
what
it
means
to
move.
The
repository
at
some
point.
D
I don't know; having the logistics already pre-built into the repository makes it easier for adoption, because we do have to vet that as part of the adoption process, and usually it's about making sure that the OWNERS files are updated and referenced appropriately. And, as Jason mentioned, as soon as you're in the org, all rights and privileges that come with the org can be adopted for you, so that means all of the test, build, and release apparatus that currently exists for k/k and kubernetes-sigs will be available.
M
Okay,
yeah.
Thank
you.
I
think
we
yeah
I'm,
I
I
have
to
discuss
it
internally,
but
it's
a
decision
that
we
will
like
take
very
short
like
shortly,
because
the
the
main
work
is
is
done.
M
There
are
like
low
hanging
fruits
that
has
to
be
fixed,
but
they
will
also
be
simplified
or
you
know
they
will
disappear
if
we
decide
to
and
if
we
are
allowed
to
to
move
the
repository,
because
a
lot
of
them
are
related
to
release
release
life
cycle
even
more
because
we
we
are
like
on
our
for
a
lot
of
our
customers,
so
the
multi
architecture
stuff,
it's
something
that
we
have
in
mind
in
the
core
of
of
the
project
itself.
So
I
I
I
saw
our
other
projects.
M
Classifier
api
providers
handle
that
and
we
are
kind
of
thinking
about
going
along
that
line.
So
in
some
way
we
we
are
proceeding
like
looking
at
the
other
providers.
So
we
are.
M
I
I
think
there
is
nothing
too
different
compared
with
what
what
we
have
it's,
not
a
clone,
because
we
started
it
like
back
in
the
day,
but
I
think
it's
close
but
yeah
I
mean.
Thank
you
for
for
dance,
but
I
think
I
will
I
I
know
what
I
have
to
discuss
it
internally,
but
I
need
to
discuss
it
internally.
A
I
just
was
going
to
add,
like
it's
great,
to
see
the
community
growing
so
much
like
it's
great
to
see
involvement
from
other
infrastructure
providers,
and
you
know
other
companies
as
well,
so
you
shout
out
to
get
this
getting
this
done
in
so
quickly
and
also
to
fabricate.
I
think
like
helped
a
lot
along
the
way
as
well.
I
saw
it
in
chat.
M
Yeah,
definitely
you
made
my
you
know
like
having
a
reachable
community
of
maintainers.
It
definitely
helped
me
a
lot
and
we
also
keep
track
of
like
the
learning
process
and
I
think,
as
soon
as
I
have
time
it
will
be.
I
will
have
a
bunch
of
like
documentation
tonight
to
write
and
share,
so
we
do
that
as
well.
M
A
Thank you. Is there any other question before we go to issue triage? I think there are many issues to look at today.
F
I
want
to
comment
on
the
last
question
in
chat
about
if
a
provider
project
isn't
a
sig
on
repo,
how
hard
is
it
to
just
plug
in
a
custom
provider
from
a
separate
repo?
I
think
there's
a
couple
answers.
One
is
that
we
do
have
in
our
documentation
a
list
of
bootstrap
and
infrastructure
providers,
and
I
guess
eventually,
we'll
have.
We
should
get
control
plan
providers
on
there
as
well.
F
I
know,
additionally,
that
there
has
been
some
discussion
recently
about
documentation
and
how
we
plug
in
other
providers
into
say
the
quick
start,
and
that
I
believe
the
proposal
was
that
that
documentation
would
be
maintained
in
each
provider's
repository
and
we
would
do
some
sort
of
magic
to
pull
it
in.
That's,
I
think
a
proposal
at
this
point
for
some
of
the
other
providers,
but
that's
an
additional
way
that
we
can
do
that.
A
All
right:
let's,
let's
go
to
the
copy.
A
I'm
gonna
find
it
the
next
for
now
and
here
or
just
because
like
it
needs
more
discussion
yeah,
we
have
add
status.,
observe
generation
to
all
objects
andrew.
I
think
you
filed
this
yeah
an
hour
ago.
I
briefly
saw
it
like.
I
don't
have
strong
objections
to
this.
It's
probably
a
good
thing
to
do
and
it's
a
backward
compatible
change.
E
Yeah
we
just
have
we're
implementing
a
controller
that
does
our
upgrade
for
us
like
machine
deployments
and
kcp,
and
we
found
that,
like
there,
there
wasn't
an
observed
generation
on
the
kcp,
which
makes
it
easy
to
detect
if,
like
the
kcp,
has
been
reconciled.
So
like
it's
just
just
a
nice
nice
thing
I
think
maybe
this
could
even
be
like
good
first
issue.
I
don't
know,
help
wanted
good
first
issue,
it's
basically
like
at
a
field
and
just
in
every
controller
ever
do
it
thing
yeah.
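The check that field enables is the usual generation comparison. A minimal sketch, assuming a status field named observedGeneration as proposed:

```go
package status

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// isReconciled sketches the check the proposed field enables: the controller
// copies metadata.generation into status.observedGeneration at the end of a
// successful reconcile, so callers can tell whether the latest spec was seen.
func isReconciled(meta metav1.ObjectMeta, observedGeneration int64) bool {
	// metadata.generation is bumped by the API server on every spec change; if
	// status still reports an older generation, reconciliation is pending.
	return observedGeneration >= meta.Generation
}
```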
A
Plus one. What else do we have? "Improve access to the..."
A
So
I
think,
like
so
around
discs,
I
think
yeah
being
sociolic,
and
there
is
already
a
cap
that
has
merged
on
the
cappy
k
volumes
disk
setup.
There
was
like
another
discussion
for
like
ndp
or
like
already
this
kind
of
like
in
general,
like
the
cuban
bootstrap
provider.
It's
like,
I
would
say,
like
limited
to
like
what
it
can
do
today
like,
but
we
do
want
to
expose
more
things.
K
I
was
just
gonna
say
we
need
to
probably
be
cautious
about
what
we
expose
if
we
do
intend
to
honor
the
format
field
and
allow
for
output
of
additional
kind
of
bootstrap
config
outside
of
cloudinet,
because
I
know
we've
discussed
potentially
supporting
like
ignition
in
the
past
and
when
you
start
looking
at
the
fields
or
like
the
config,
that's
supported
across
different
bootstrap
tools.
There's
not
necessarily
parity
there
with
cloudant.
As
far
as
what
we
could
expose
and
support.
A
Yeah
with
regards
to
this
issue,
like
I
would
probably
just
I
just
say
like
we
need
specifics
like
what
we
want
to
expose
and
like
a
little
bit
more
deep
down
steel.
N
I
think,
there's
two
different
questions
here,
there's
one
like
the
specific
ones
that
are
missing
that
are
needed
right
now
by
someone
for
a
specific
use
case
and
I
think,
there's
a
wider
discussion
of
how
do
we
want
to
evolve
this
in
the
future,
because
right
now,
there's
a
lot
of
like
cubadium
config
shouldn't
necessarily
need
to
care
about
the
infrastructure
and
the
fact
that
we're
using
cloud
init
specifically-
and
it
feels
wrong
to
put
some
of
these
things
in
there.
N
So
I
think
that
was
a
discussion
that
also
started
from
the
proposal
for
disk
setup
and
michael
pointed
out.
I
think
that
it'd
be
nice
to
have
a
generic
boy
to
just
pass
in
custom
like
bootstrap
yaml.
N
N
C
A
That's something we can discuss. And, if you want, point out the comment about the format.