From YouTube: 20190731 scl capi office hours
A
I've got the first two or three here. I was asked to put together a justification for travel for the face-to-face meeting that we are trying to do in September, and I have a link to the document there. One question I had was whether we should send this to the SIG Cluster Lifecycle mailing list. Michael's comment is yes, so I'm happy to do that, and Pablo is asking about any decision on the date or location.
A
So we are still working on the location, as I mentioned. As soon as we have it nailed down, we will send out a follow-up, and I think at that point, when we send out that follow-up, what I would ask is that everyone who is either definitely planning on coming, or pretty certain that they will, please respond.
A
So what ended up happening, and where this discussion came from, is that we had someone who was using CAPV, the vSphere provider, and the virtual machine that was associated with the Cluster API Machine had been deleted or somehow went missing. The user for this particular issue expected that CAPV would remediate this lost virtual machine and replace it, so that the Cluster API Machine would still be functional and this node in the cluster would reappear as a functional node, even though it was a different virtual machine.
A
And so the question at hand is: should infrastructure providers, or must infrastructure providers, depending on what language we choose, perform this remediation? Or should we consider that a Cluster API Machine is a one-and-done operation, where once you create a Machine and the infrastructure provider provisions or requisitions whatever infrastructure it needs, that's it, and if the VM or server gets into some sort of terminal bad state or goes away, the infrastructure provider can mark the Machine as failed.
D
Yes, I just want to mention, and I'm not sure how this interacts exactly: we do have the node controller, right? So, at least on some cloud providers, if you're running the controller manager with the cloud provider configured and the node, the EC2 instance in particular, is deleted, the Node will be deleted also. Yes, and it feels like, whatever we decide to do, we should be cognizant of that and probably replace it, I guess, or do something super non-racing with that. So.
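For context, a minimal sketch of what "controller manager with the cloud provider configured" can look like when bootstrapping with kubeadm; with an in-tree cloud provider enabled, the controller manager's node lifecycle logic deletes Node objects whose backing instances no longer exist. The AWS value mirrors the EC2 example above and is purely illustrative.

# Sketch only: kubeadm ClusterConfiguration excerpt enabling an in-tree
# cloud provider for kube-controller-manager. Values are illustrative.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    cloud-provider: aws   # assumption: AWS in-tree provider, per the EC2 example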
A
Another wrinkle, in addition to that one, is that with the vSphere provider, for example, and I don't know that this is necessarily what's in the latest release, but at least in an earlier release, when the virtual machine was replaced automatically by CAPV, the name of the virtual machine was the same. I don't know about the IP address, but the information in the virtual machine in terms of its provider ID was different, because it was a different virtual machine, and that caused problems with persistent volumes.
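As a point of reference, the provider ID being talked about is the one surfaced on the Node object and correlated with the backing VM by other components (for example, volume attachment logic). A sketch; the value below is made up and only shows the shape of the field.

# Sketch: a replacement VM gets a new providerID even if the Node name is
# unchanged, which is what confuses consumers of this field.
apiVersion: v1
kind: Node
metadata:
  name: worker-0                          # same name as before the VM was replaced
spec:
  providerID: vsphere://4201aaaa-made-up  # placeholder value, new per VM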
A
Just another wrinkle: it may be provider specific, and there may be issues with the Kubernetes node controller. So I'm sort of leaning towards suggesting that we consider it a one-and-done deal and that, if there are any problems, then it's up to the user or some component to replace it. I know Daniel had some comments in GitHub that I think were interesting as well. So, Kay, I saw your hand go up first.
D
I just wanted to say quickly that we should delete the Node. There is a known thing where we don't use the node UID, we use the node name, and we work around this, so you should also explicitly delete the Node if there's any chance you're going to create a node with the same name.
E
You know, from the Machine object; so, for example, if I'm writing a controller and I expect that the Machine object represents some machine, as long as that Machine object is there.
E
A different network identity, but I'm still seeing sort of the same Machine object, and maybe I don't even know that that's changed, or maybe I do know that it's changed but I didn't have an opportunity to respond to those changes in a particular sequence that my controller cares about. Then I think that's a problem.
B
Yes, briefly, because of all of this, as a comment: if we cannot guarantee a consistent behavior, I think this is opening a box of problems. I think we discussed this at the face-to-face meeting in Barcelona, that we were treating a Machine as, you know, immutable, in the sense that once you create it...
B
The only thing you can do is delete it and create a new one, and we are relying on another component to have the intelligence to make that decision, but not modeling that inside it, because there are plenty of things that can go wrong, and as before, the problem is that we cannot ensure that every provider can do that. So we would be creating a problem for providers: if we said that this is mandatory behavior, then it's not possible on some platforms, and what do we do? If it's not mandatory, then this is unpredictable.
C
We can then correlate that to a Machine object and then take specific remediation steps if it's backed by a Machine, such as: if it's in a MachineSet, just delete it, we'll get a new one, and the Machine object will go through its workflow to drain the node, delete the node, and finally delete itself. So in my opinion we're getting a lot for free if we don't implement the recreate-the-instance behavior.
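For anyone following along, a minimal sketch of the MachineSet-backed remediation being described, using illustrative names and the v1alpha2-style API group: deleting one of the set's Machines lets the set create a replacement, while the deleted Machine drains and deletes its Node on the way out.

# Sketch: a MachineSet keeps `replicas` Machines in existence, so deleting an
# unhealthy Machine (e.g. `kubectl delete machine <name>`) gets it replaced.
# Names, labels, and namespace are illustrative; bootstrap/infrastructure
# references are omitted for brevity.
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineSet
metadata:
  name: workers
spec:
  replicas: 3
  selector:
    matchLabels:
      example.io/machine-group: workers   # placeholder label key
  template:
    metadata:
      labels:
        example.io/machine-group: workers
    spec: {}                              # bootstrap/infra refs omitted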
F
I
just
want
to
add
to
Justin's
comment
that
yes,
like
there
is
a
note
controller
that
will
go
and
delete
a
note
if
it's
corresponding
VM
is
deleted.
But
with
that
assumption,
there's
some
technical
debt
that
we
incurred.
We're
like
deletion
and
shutdown,
semantics
incriminates
for
kubernetes
nodes
themselves
become
problematic,
as
different
providers
like
implement
that
shutdown
behavior
differently.
F
For one example, one provider might actually reassign a provider ID on an instance that gets stopped and then started again, and that causes a plethora of issues with things like PVs and other stuff. So I think we can learn from that mistake and maybe lean towards a policy that can express all the various ways that every provider implements shutdown, deletion, and all that stuff, and if we're actually correlating machines to pods and not to Kubernetes nodes...
F
Like
that's
just
my
assumption
that
cluster
8
yeah
the
machine
represents
pod
in
a
in
a
regular
cluster
for
applications,
then
we
should
also
support
an
equivalent
to
restart
pulse
a
restart
policy
for
pods.
In
the
same
way,
we
do
for
machines
because
I
don't
think
we
can
prescribe
one
method,
that's
going
to
actually
apply
well
for
every
single
provider
out
there.
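To make the analogy concrete, here is a purely hypothetical sketch. The Pod field is real; the Machine field shown does not exist in Cluster API and is only an illustration of the kind of policy being floated.

# The Pod side of the analogy (real field):
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  restartPolicy: Always          # Always | OnFailure | Never
  containers:
  - name: app
    image: example.invalid/app   # placeholder image
---
# Hypothetical Machine equivalent (this field does NOT exist; illustration only):
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: worker-0
spec:
  remediationPolicy: Recreate    # hypothetical: Recreate | Never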
A
So I would tend to agree with most of what you say, Andrew, except I like Michael's idea of having the restart policy, so to speak, in a separate layer as part of a separate component, so that it's not necessarily built into Cluster API itself or any of the specific infrastructure providers.
A
If
you
have
a
machine
that
doesn't
belong
to
a
machine
set,
just
if
you
do
and
the
VM
disappears,
then
that
machine
is
kind
of
wedge.
The
node
is
no
longer
working
and
there's
nothing
there.
That's
going
to
deal
with
it,
but
right
I
would
say:
Michael,
you've
got
your
hand
up
and
then
I
think
we
can
probably
redirect
back
to
the
issue
for
further
discussion.
That
makes
sense
Michael.
So.
C
Just to clarify, what we're working on is the machine health checker, and we're basing that on the node state. Right now it's a little bit rudimentary, and eventually we'd like to integrate with the node problem detector, but we're also building the capability to reboot the machine; the exact mechanism is TBD. Basically, for us, it's going to be coupled to Cluster API very closely, and hopefully that's something we can share upstream if everything aligns.
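The health checker described here was still taking shape at the time, so purely as a rough illustration of the idea (watch node conditions, remediate the owning Machine), a node-state-based check might be declared along these lines. The kind, API group, and field names below are assumptions, not the actual design.

# Hypothetical sketch of a node-state-based health check; not a real CRD.
apiVersion: healthchecking.example.io/v1alpha1
kind: MachineHealthCheck
metadata:
  name: workers-check
spec:
  selector:
    matchLabels:
      role: worker           # which Machines this check applies to (placeholder)
  unhealthyConditions:
  - type: Ready              # Node condition to watch
    status: "False"
    timeout: 5m              # how long before remediation kicks in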
A
Thanks. So my goal is to come to some sort of consensus as it applies to v1alpha2 and write it down. That could be that we start with: either the machine is good or it's not; if it's not good, we mark it terminally failed, and then we decide we're going to evolve that, potentially in alpha 3 and beyond, or something else. But let's see if we can come to an agreement in the GitHub issue and document it. Does that sound good? Sounds good, awesome.
A
So
I
had
an
issue
or
a
an
item
here
on
manifest
generation
providers.
I'm-
probably
going
to
skip
doing
it
today,
because
Andrew
cuts
is
on
vacation.
I
will
just
I'll
give
a
brief
overview,
but
I
think
andrew
is
better
suited
to
discuss
so
there's
some
work
that
he
did
for
cat
V
to
have
a
go
program
that
will
generate
all
of
the
manifests
that
you
need
all
the
ammo
that
you
need
for
staples,
edits,
namespaces
secrets
and
so
on
and
so
forth.
Crts.
A
I would like to see if we could try and get some consistency and agreement that this approach, or some other approach, is what we want to do, but I would like Andrew to be present to present it, so I'll hold this off for next week and we'll come back to it. Okay, either Jason or Justin on AWS account status.
I
Yes, so I know last week Vince had put together a proposal and counter-proposal for how pieces of clusterctl function (yes, too many Andrews, confirmed) in v1alpha2. So there has been some back-and-forth involved with that. However, with the time frame for v1alpha2, I was curious.
I
Is this something that we expect not only to reach a consensus on, but to deliver on within the v1alpha2 timeframe? The reason that I bring this up is that our team depends very heavily on the pivot functionality currently in v1alpha1; it is core to how we use Cluster API, and we would be happy to help deliver on pivot functionality for v1alpha2. I just wanted to get a better understanding: do people use it? Do we just need to jump in and open some big PRs to make it happen, or do we need to develop some internal tooling to manage this for us for v1alpha2 and beyond? And I know that the clusteradm (or whatever the feature ends up being named) proposal touches on this a little bit, so that's my question.
H
Whether
or
not
we
need
to
beef
up
cluster
cuddle
or
whether
or
not
there's
enough
people
who
want
to
execute
in
Clostridium
as
well
as
solve
some
of
the
core
pivoting
problems
that
exist
and
what
are
the
boundary
lines
so
I
think
we
have
to
solve
this
for
everyone
else
to
do,
or
at
least
have
an
answer
that
isn't
like
it's
totally
broken
I.
Think
it's
just
a
question
of
a
versus.
We
have
to
weigh
the
options.
We
have
to
figure
out
who's
going
to
work
on
it.
So.
G
I think Tim touched on what I was going to say as well, which is that I think we have to have some story for it, especially for the use case of self-managed clusters. We don't want to abandon that use case for v1alpha2, but we do need to weigh the amount of work that it would take either way: how long is it going to take to deliver clusteradm versus the functionality in clusterctl, and where should we spend that time and energy?
A
I think, when Vince and I talked about this the other day, what we were looking at as the gap that needed to be addressed was that the bootstrap and infrastructure references would need to be pivoted. Is there anything else missing from that, that we aren't currently pivoting or that wouldn't be covered by the current code?
E
I just wanted to say that a pivot procedure seems like, at least for now, a good way of keeping us honest about being able to migrate the control plane from one cluster to another, something that we might want to do, or something that end users might want to do anyway. We don't have tests for that, but keeping pivot around is, kind of indirectly, a proxy for that.
H
At least while the thought's in my brain: I think we should probably try to start using labeling with unique identifiers, to potentially ease the common burden that's associated with managing the pivot if the provider has extra components that are not readily visible to CAPI.
A
All right, so I added an item here just to talk really briefly, or really to ask, about provider statuses for converting to v1alpha2. I can speak to CAPA: it's a work in progress; we have AWSMachine, and I am working on AWSCluster right now. I think we are fairly close to having something that hypothetically would be functional, depending on the state of the kubeadm bootstrap provider, so I'm happy with the progress that CAPA is making. CAPV is next up on the list here.
F
So
Andrew
cuts
open
an
issue
on
this
and
just
to
get
a
sense
of
anyone
would
Jack
and
I
don't
think,
there's
any
issues,
at
least
in
the
Cathy
community
on
this,
so
we're
gonna
probably
start
on
this
one
for
the
next
release.
I
think
we're
mostly
blocked
on
the
ascent
OS
cloud
init
issue
for
this
one,
but
otherwise
right
we
should
so
you
can
use
Ubuntu
there
right
yeah.
We
could
like
we're,
not
blocked,
but
it's
gonna
be
a
thing
that
comes
up
for
a
sent
to
us
users.
K
Let's see, there's an open PR, so for supporting v1alpha2 there's an open PR for capdctl, which is a more opinionated cluster control tool, but v1alpha1 is in full support on the release-0.1 branch. So if you're looking for a working Cluster API provider, that is the branch to go to. Otherwise there's just one PR open for v1alpha2, with follow-on PRs from me to add after that merges. Okay.
D
Not yet. I do intend to; I keep putting it on the list and it keeps getting bumped down, but yes, no updates as of yet, just a statement of intent. I could use some more public shaming. All right.
E
Exactly, oh well, there needs to be a change to the set gen so it can be used outside of that directory. I'm working on that, but I wanted to know if anybody else has an immediate use case, and of course that could be used by other providers as well, not just CABPK, for other provider types.
D
The implementation in kubectl: earlier, a developer had previously done some work to do most of that, and I sort of finished that off. I have a PR which we circulated and people seem to like in general; I need to tidy up based on some very valuable and accurate feedback, and then I hope we can get that merged into kubectl, whereupon my belief is we would literally copy the code. It should be Kubernetes-version independent until such time as the code from that Kubernetes version landed in the vendor or Go module directory, whereupon we could effectively just delete it, if that makes sense. So the short answer is: it looks like we can use the drain code from kubectl and have one piece of drain code.
C
Michael
yeah,
just
to
chime
in
the
patch
set,
is
alive
and
well
I'm,
just
waiting
on
Justin
and
get
that
going
for.
There's
C
mentioned
the
status
on
that
I've
already
got
the
prototype
out
for
using
that,
based
on
his
pull
request
and
I
did
that
locally
in
openshift
on
you
know
what
you
would
consider
V
1
alpha
1,
and
so
it's
going
good.
So
once
once
that
merges
I'll
cut
an
update
for
both
V
1
alpha
2
and
V
1
alpha
1,
but
I've
already
tested
it
and
it
works
good.
D
Point
on
that
I
believe
I
can
take
point
on
that.
I
think
the
the
current
lots
of
you
ever
reviewed
it
I,
don't
get
an
issue.
I
think
I
need
to
incorporate
the
changes
that
I
will
be
ashamed
to
do
and
but
I
I
will
try
to
find
someone
that
can
it
actually
has
the
Payette
suite?
We
probably
should
remove
it.
Currently,
it's
under
6
e
Li,
because
it's
under
coop
cuddle
I
mean
for
you
should
like
to
move
it
somewhere,
but
first
step
is
to
get
them
to
like
agree.
A
Okay,
let's
go
on
to
triaging
shoes.
So,
looking
at
this,
we
have
one
two
three
and
this
last
one
I
think
I
just
put
in
so
looks
like
three
that
are
not
I'm
actually,
but
that's
everyone
all
right,
so
the
first
one
is
stuff
that
remove
Rocky
log
calls
and
machine
and
cluster
controllers
Chuck.
Do
you
want
to
give
an
overview
of
this
one.
K
Yes, so this is trying to tackle the global logging calls throughout the controllers. I'm happy to scope it down to just getting rid of the klog calls in the machine and cluster controllers; I imagine they're scattered throughout. I don't really care about the other points, just this one, so that I can add my own logger in Go if I want to. Okay.
A
Okay, we have an issue from someone who says that currently, in the case of users without permissions to create namespaces, when they try to create clusters in an existing namespace, the ensure-namespace function fails due to missing permissions. There's a pull request that she put together as well, which Jason and I have taken a look at. I think, given that... let's see, why did she do this?
G
So I don't necessarily object to this, but I do wonder about the ability of clusterctl to properly function, given that provider components, generally with the kustomize files, attempt to create a namespace for where those components land. So for a user not able to create a namespace, they would have to not just pre-create whatever namespaces for the Cluster API objects they're creating; they would also have to pre-create those provider system namespaces that the provider components generate as well.
G
Yep, and it would also have to be documented that they would have to do that with an existing bootstrap cluster, because any minikube or kind cluster that's spun up wouldn't have those pre-created namespaces, but it also shouldn't have the restrictions on creating namespaces either. So it's tricky.
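For readers following the workaround being discussed, pre-creating the namespaces would just mean applying something like the following ahead of time, so that clusterctl never needs create-namespace permissions. The names are placeholders.

# Sketch: pre-create both the namespace for Cluster API objects and the
# provider "system" namespace. Names below are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: my-cluster                 # where the Cluster/Machine objects will live
---
apiVersion: v1
kind: Namespace
metadata:
  name: example-provider-system   # where the provider components land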
A
Yeah, it might not, and we'll just have to see. I'm going to put this in the milestone, given that we have a PR; it just needs some comments addressed. Okay, I think that was it for Cluster API. I was just going to pull up the kubeadm bootstrap provider, if y'all are interested in going through the unassigned issues; anybody object? All right. The first one I see here is: verify-all is exiting zero when one or more verification scripts is failing. And this is fixed? It's fixed.
A
Okay, then we also have, and this is the cloud-init issue that we were discussing ten or so minutes ago: the version of cloud-init that ships with RHEL and CentOS does not have the functionality to use instance metadata resolution in the Jinja templates, and until that is resolved; I've opened a Bugzilla.
A
I opened up a BZ for this, and hopefully we can get this updated, but until it's updated, at least with CentOS and RHEL, you won't be able to use the stock versions of cloud-init with the bootstrap scripts that we're generating, so we're largely dependent on the cloud-init maintainer at Red Hat to try and get this fixed for us. I don't think I'm a milestone maintainer on this one, so I don't have it; does it make sense for me to keep going? Chuck, you've got milestone permissions, so yeah.
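For context, this is roughly what the missing feature looks like: cloud-init (18.4 and later) can render instance metadata into user-data with Jinja when the very first line declares the template. The metadata key shown is datasource-dependent and the command is only illustrative.

## template: jinja
#cloud-config
# Sketch of instance-data resolution in a Jinja-templated user-data file.
# Requires cloud-init >= 18.4; older RHEL/CentOS packages ignore the template
# header. The metadata variable below varies by datasource.
runcmd:
  - echo "node name is {{ ds.meta_data.local_hostname }}"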
A
Yeah, so the current v1alpha1 implementations of CAPA, and I think also CAPV, are using a slightly forked version of cloud-init that Jason put together. Actually, I don't even know that it's a fork; it's just a point-in-time snapshot, right, Jason? Or did you... we added the kubeadm module to it. Yeah.
A
It's
a
point
in
time,
snapshot
of
cloud
in
it
in
between
versions,
8,
point,
eighteen
point:
three
and
eighteen
point:
four
that
adds
a
cube,
ATM
module
that
we're
using,
and
it
also
takes
advantage
of
a
feature
that
was
added
into
in
the
cloud
net
repo
in
between
18,
3
and
18
for
and
if
we
don't
continue
to
use
this
fork
of
cloud
in
it.
When
we
are
building
images,
then
we
are
going
to
be
kind
of
stuck.
A
So this one for cloud-init is not strictly needed, given what Tim was saying about using the stamper to potentially override it. So I would say it would be nice to have this in the milestone; yeah, it'd be nice to have for 0.1, but if it slips it's okay, so the priority can be important-soon. Okay.
A
Okay, so this one was "something, naming TBD", but I was thinking it might be nice to have a cluster-wide configuration data type that could hold cluster-wide fields, so that you didn't have to copy and paste them into every single kubeadm bootstrap type. So I think your milestone and priority are appropriate here, Chuck. Thank you.
E
Basically, the motivation might be to deploy some add-on that needs to run in order to enable the cluster, the CAPV control plane, to reach the workload cluster's API. That's my own use case; there are different ways to do it, and there might be other use cases too. Just something that lets the user deploy some kind of add-on before actually getting a kubeconfig and being able to reach that workload cluster.
A
So I think this has made it into the data types: we have additional user data files, whether that ended up in here or in the per-provider config, I have to go back and check, but that would be one way you could do it. If you want to have static pod manifests, you could define those as part of either the bootstrapping or the infra config; again, I can double-check, and you could potentially get those in there. Okay, yeah. Sorry, is this something you want to see in v1alpha2?
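To illustrate the "additional user data files" route mentioned above, a sketch of dropping a static pod manifest onto the node via the kubeadm bootstrap config. The resource name, file content, and image are placeholders; check the CABPK types for the exact fields.

# Sketch: write a static pod manifest to disk before kubeadm runs, using the
# bootstrap config's additional-files support. Illustrative values only.
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: controlplane-0-config
spec:
  files:
  - path: /etc/kubernetes/manifests/example-addon.yaml
    owner: root:root
    permissions: "0640"
    content: |
      apiVersion: v1
      kind: Pod
      metadata:
        name: example-addon
        namespace: kube-system
      spec:
        containers:
        - name: addon
          image: example.invalid/addon   # placeholder image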
E
To be clear, the control plane is in fact operational without the pod networking, and you can bring up the workload cluster; the control plane will be operational. In fact, I believe nodes will be able to join as well, but they will all be NotReady, right? They won't be able to schedule workloads, and...
E
You
can
talk
to
that
workload
cluster
using
using
the
coop
config
and
and
deploy
your
your
your
CNI
plugin
as
a
daemon
set
you
you
don't
mean
that
that
doesn't
require
you
to
do.
You
know
to
use
this
path,
or
you
just
use
this
mechanism
for
an
ad
on
the
there
is
sometimes
you
know
you
do
need
to
be
able
to
deploy
something
before
you're
actually
able
to
talk
to
the
workload
clusters
control
plan,
it's
a
kind
of
a
specific
use
cases.
E
Yeah, I guess I can assign myself, or maybe somebody can assign me. Okay.
K
Chuck here; yeah, so this kind of came back to the goreleaser discussion we had yesterday. For the binaries, I'd like to use goreleaser for this project, and the image promoter process would also be good to implement; that's what this addresses. I will flesh out the ticket more, because now that I have a good sense of what we're doing here I can actually write up useful instructions. Okay.
A
The Machine needs a reference to the bootstrap config; once that reference exists, the bootstrap controller or bootstrap provider can go do its thing and generate the bootstrap data. So we may not need this, because the bootstrap object will start out without any object references, and the machine controller will give it an object reference back to the Machine. As soon as that object reference is there, the provider can go do its thing. So yeah, I'm not sure that we absolutely need this.
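To picture the wiring just described, a rough v1alpha2-style sketch with illustrative names: the Machine points at a bootstrap config via a configRef, and the machine controller sets a reference back to the Machine on that config, which is the bootstrap provider's cue to generate the bootstrap data.

# Sketch only: the Machine carries bootstrap.configRef; the machine controller
# then sets an owner reference on the referenced config pointing back at the
# Machine. Names and the AWS infrastructure kind are illustrative.
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: worker-0
spec:
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: worker-0-config
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSMachine              # provider-specific; AWS chosen as an example
    name: worker-0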