A: Hello, everyone. Today is Wednesday, September 15th, and this is the Cluster API office hours. Cluster API is a subproject of SIG Cluster Lifecycle. As always, please make sure that you're following the CNCF Code of Conduct: be respectful to everyone, and if you'd like to speak, you can use the raise-hand feature in Zoom, which is now placed under Reactions.
A: And if you can, please add your name to the attendance list, along with any topics that you might want to discuss, or any questions, or anything like that, under the agenda and discussion topics. All right, before we get started: do we have anyone who's new to this meeting, hasn't been here before, and just wants to introduce themselves? Say hi and tell us a bit about why you're here.
B: Hi, yes, my name is Sebastian Greissen; I'm an engineer at Twilio. I'm on the agenda today, just going to do an early demo of an in-house Cluster API provider we're working on for bare metal, called Cluster API Provider Live. I'm just looking for some early feedback and opinions.
A: Oh, Thilo, is it Thilo? Oh, we can't hear you; you're...
C: Muted, all right. Is it better? All right. Hi, I'm Thilo, I'm with Microsoft; we recently joined. I'm here for the Flatcar Container Linux team, and we have a number of changes, open PRs, that we'd like to get into Cluster API before the code freeze. I'm here to get a better idea of the timeline, of the chances of getting these in, and of the requirements for doing so.
A: If not, let's get to it. All right, Vince, you want to start us off with the PSAs?
D: Sure. So, as I said, we all worked on an email, the RFC, sent to the mailing list last week. Tomorrow is the one-week mark since it was sent, and we've asked folks to +1 it if they agree with the plan. This is essentially our maturity release. So, just to summarize what we're trying to say here, in terms of the proposal for 1.0:
D: It's that we'll put a code freeze in effect on October 1st, and we can start working before then on the new release branches and the v1beta1 types. The goal is to be as close as possible to the code and the API that we have today, with a minimal amount of changes. The only changes identified as release blocking should probably be maintenance ones. One thing we have identified, for example, is the contract.
D: We might need to change that from an API and API version. Right now the contract says, for example, "I'm compatible with v1beta1", and we should probably base that on the series version instead: so 1.0, 1.1, et cetera.
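As context for the contract discussion above: today a provider declares which Cluster API contract it implements via a label on its CRDs, and the proposal would key that off the release series instead of the API version. A minimal sketch of the current convention as we understand it (the CRD name and versions here are illustrative, not taken from the meeting):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Illustrative provider CRD name.
  name: examplemachines.infrastructure.cluster.x-k8s.io
  labels:
    # Current convention: the label key names the core Cluster API contract
    # version the provider is compatible with; the value lists the provider's
    # own API versions implementing it. The proposal discussed here would base
    # this on the release series (1.0, 1.1, ...) rather than the API version.
    cluster.x-k8s.io/v1alpha4: v1alpha4
```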
D: I'll open an issue later today to discuss this, and there are some amendments to the roadmap that we have to make in the book, and then we'll be able to continue working on any other proposal. We'll add a breaking-changes section to the CAEP template as well, so that each CAEP, if it comes with breaking changes, needs to outline what kind of breaking changes they are; and if there are API breaking changes, we'll need to discuss when the new beta API is going to be worked on.
A: Questions? Mike?
A: Yeah, sorry, do you want to type your question or comment in the chat, maybe? Okay, great. Anyone else have any other comments or questions in the meantime? Oh, I guess Naadir put one in the chat: "We were going to bring CAPA out to 0.8 in October/November. Is it worth us bumping our API version to v1beta1 at the same time?"
D: Perhaps. We need to identify the couple of changes that will need to happen for providers; we want to make this as smooth as possible for providers, because there are going to be really no API changes between alpha4 and beta1. We also want to take into account that clusterctl, for example, should be able to upgrade clusters from alpha3 and alpha4 to beta1 directly.
D: So I don't know yet whether it's worth it to wait or not. It also depends: for example, if we start working on the beta1 and 1.0 release today, just as an example, and we're ready on October 1st for the code freeze, so that we don't keep backporting, it might be worth it to wait. But if it slips to the end of the month, we might want to chat more. So I guess it depends.
D: That's correct. By this proposal, alpha4 is only going to be supported for six months, starting from the alpha release, and 1.0 will be supported for one year plus. Whether we're going to backport those changes or not will be decided on a case-by-case basis; we'll work with the whole community to understand where we're at, but that work is definitely going to go into the 1.0 branch, so the main branch, first.
A: A question from the chat, from Mike: "I'm curious about the code and API freeze and how that would affect the scale-from-zero changes that a provider might make. For example, would this API be part of 1.0, even though it is optional?"
A: I think, in general, this release is not meant to be an improvement on v1alpha4. It's meant to be us acknowledging that we're already operating at beta, and have been for a while, and we want to get that signal to our users. We also want to improve our release process so we're able to do quicker, less painful releases for providers, but also not completely stop innovation and development; there are still a lot of very important features and changes that we want to work on.
A: So this isn't us saying this is the final release of Cluster API, everything is in there, and nothing else is ever going to change. I don't think people should worry too much about getting everything in there in the next two weeks. It's more like: it should already be a beta, so let's just call it beta, and then move on with our lives and keep working on it.
A: Cool. Any other questions or major concerns?
C: Yeah, as I said in the introduction, we have a number of pending changes that have been around for a while. There have been ample discussions, and I'm pretty new to the whole special interest group, so I'm not entirely sure about the amount and the depth of code freeze that the beta release would entail.
C: That, since it was mentioned that providers shouldn't be too concerned about changes as long as the API is stable. And the second part of the question is: there is an API change which we proposed regarding configuration of the operating-system part of instances. That's a PR on the kubeadm bootstrap provider in which we introduce a kind of abstraction level between the configuration and the operating system, so that you could use cloud-init or, as we've proposed, alternatively Ignition, to configure instances. Would that be affected?
D: Yeah, so for Ignition specifically, I think that PR just needs end-to-end tests.
D: I don't know, to be honest, whether we should put that behind a feature gate, given that feature gates are excluded from the code freeze; we might want to do that instead. But just to clarify, on the feature freeze: what we're saying is that until we release 1.0, we should not merge any new groundbreaking code changes. For example, someone might want to do a refactoring of the MachineDeployment and MachineSet controllers, which should be refactored at some point.
D: This is probably not the best time to do it; we would hold a PR like that until later, and, as I said, we can review it at a somewhat less stressful time. It's a temporary stop-the-world while we make this bump of our API types, and then, once we get there, as you mentioned before, we'll continue working on all the features and additions that we want to improve. For Ignition specifically, on that PR I would suggest putting everything behind a feature gate right now, pretty much blocking that code behind the gate; then we can consider merging it today in alpha4, or we can wait a little bit. I'll defer to whoever reviews it a bit more closely; I'll do another pass, but yeah.
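The feature-gate route suggested here is usually just a flag on the provider's controller manager. A sketch of what that looks like, assuming a gate for the Ignition work (the gate name below is an assumption for illustration, not a committed API):

```yaml
# Fragment of the kubeadm bootstrap provider manager Deployment spec;
# only the args list is shown. "KubeadmBootstrapFormatIgnition" is a
# hypothetical gate name used to illustrate the mechanism.
containers:
- name: manager
  args:
  - --feature-gates=KubeadmBootstrapFormatIgnition=true
```

Because gated code paths are off by default, they can be merged without being subject to the code freeze, which is the trade-off being discussed.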
C: Go ahead. Okay, it was mentioned on the PR, if I recall correctly; we've been working on completing the end-to-end tests, so that was our current priority. I'm very thankful for the feedback; that makes a lot of sense to me.
C: Awesome. Did you have any other questions? Just for general understanding: we have a number of smaller changes in a number of providers, most significantly AWS and Azure, that then make use of that particular feature.
A: So I believe this code freeze we're talking about only applies to Cluster API, the core codebase. If providers decide to do their own code freeze, that's separate, and you'd have to bring that up with them.
A: I don't know if CAPA would want to do one, considering whether they want to go from 0.8 directly to beta1. In Azure, that's not something we've discussed so far; if we were to do it, we'd probably do it after v1beta1 is released, so that we can do it while we're integrating. But yeah.
C: Yeah, we started reaching out and working with the individual providers, so I was trying to get the big picture, and that helps a lot. Thank you.

A: Yeah, no problem. The other thing that we should probably talk about, and I don't think this was mentioned, is when we're targeting the end of the code freeze.
A: I think it might be easier to have a timeline in mind for when things can resume. In general, it would be a lot easier to have a short code freeze for everyone, and if we're not blocking on the changes that we're trying to get in, that would allow us to do that and not drag out the code freeze.
D: I guess that's kind of what we're trying to do here. I see a lot of +1s on the proposal, and it seems like there isn't much pushback to doing this, quite the opposite, so we'll probably be able to start working on this beta even today. We just started making the release branch for 0.4, and we can start working on the types and all of that, and hopefully we can keep this freeze as short as possible, and make sure that we can also maybe do an announcement outside of this group and at KubeCon, which is the second week of October.
A: All right, any other questions?
A: Awesome, thanks for the update, Vince. All right, let's go to discussion topics. Oh, actually, sorry, I'm skipping stuff: I don't think we have any release-blocking issues, so we're good there. I guess for proposals, what we just discussed changes things a little bit, in the sense that unless we're planning on merging any proposals in the next week or so, we should probably put a hold on those. Yeah, Mike, go ahead.
E: I'm gonna try this again; we'll see what happens. Can you all hear me? ("Yes, we can hear you.") Okay, cool. Yeah, we talked with Alex about the spot instance proposal, and I think, as marked, it just needs some reviews; I think everything's cleared up there. And on the opt-in autoscaling, I think Stefan and I have worked through all the naming issues there, so I would actually love it if we could put that back into some sort of lazy consensus, if it's possible, to get that merged.
A: All right, you have the first topic.
J: Yes, ClusterClass patches again. I got some feedback, which I hopefully answered, and I just want to ask how we continue with the proposal, whether something like lazy consensus also makes sense here. I'm not sure if it's impacted by the beta discussion.
D: I think for this proposal, yeah, the way forward should definitely be to push the proposal forward and ask for more feedback.
D: It might be worth sending an email to the list, in case folks haven't seen it by coming to these meetings, to ask for more feedback, given that this is adding a lot of new code. We could also just say that we wait until beta is released; because, again, we're going to keep that code freeze as small as possible, this would probably be a good way to do it as well.
J: Okay, so this means the next step is to send a mail to the mailing list for more feedback.
K: You know, I only commented that, in my opinion, there is nothing controversial in that amendment, so an option could be to just merge the PR but wait for the implementation. But if you feel that we should look for more feedback, I'm fine with that; delaying the implementation a little bit is not a problem, nor is delaying merging the amendment.
A: Okay. If not, Sebastian, did you need to share your screen for a demo? ("Yes, please.") All right, let me give you access.
B: Okay, hello, everybody. My name is Sebastian Guyson; I'm an engineer at Twilio SendGrid. I'm here to talk about a new provider that we're working on in-house, called Cluster API Provider Live. It's in very, very early development; really, the purpose of this is just to get it out in front of the greater community and garner early feedback.
B: So what does this do? We run a lot of bare metal in-house, and this is going to be for teams running Cluster API on infrastructure with longer provisioning times, like bare metal.
B: What this sets out to do is make a separation between host bootstrapping and node bootstrapping: host bootstrapping being the provisioning of the physical hardware, writing of the OS, and so on, as distinct from node bootstrapping, which would be bootstrapping with kubeadm, and so on. This is really in service of making those node bootstrapping tasks faster for teams working with metal. So let's jump right into it.
B: If you're using Cluster API with bare metal, you're using a reference architecture like this; this one is a reference architecture for Cluster API Provider Metal3. Essentially, there are machine templates under the control of a controller, and when a provisioning event happens, for example an MD or a KubeadmControlPlane is scaled up, that results in the creation of Machines.
B: In this case, those are satisfied by Metal3Machines, which will essentially pick a bare metal host and start the provisioning process, and that provisioning process is what I want to expand on in this next slide.
B: When we're provisioning bare metal, there are really two main phases. We're provisioning the host, so we have to PXE boot it; we have to boot through any BMC subsystems, any lifecycle controllers, and so on. A lot of times we're going to be PXE booting the node, provisioning the OS via something like Kickstart or preseed, and then finally bootstrapping with kubeadm on the hardware.
B: This can be a very long-running process, often requiring two reboots: one to PXE boot and one to boot into the provisioned OS. It can also be very opaque from an operator's standpoint. Basically, you're scaling up an MD, and then there are a lot of things that happen once provisioning starts at that lifecycle event, once the Machine gets created; so it can be a lot to untangle if something goes wrong.
B: A lot of times we're in a situation where your machine just won't show up for 15 or 20 minutes, and because there are so many things happening within that provisioning cycle, it can be kind of difficult to attempt to triage.
B: So what we're setting out to do is separate the host provisioning from the node provisioning. Effectively, we want to be able to front-load a lot of the host provisioning and handle those longer provisioning tasks up front.
B: We handle them on hardware, in batch: we install a small agent onto the machine, which then gets added to the management cluster as an inventory of, essentially, hot standby spares. Then, when a consuming cluster has a Machine that needs to be fulfilled by a provider, the agent can just execute the kubeadm provisioning tasks by reading out of, for example, the KubeadmConfig template. That takes the node provisioning time, meaning the time from scaling up your MD to the time your node actually shows up in the cluster, down from 15 or 20 minutes, in our case, to less than one minute.
B: So really the goal is to decrease that node provisioning time once the lifecycle event is triggered. Here are the core components that this controller suite is comprised of. There's a host controller, which could work with multiple backends; right now, our supported backend is Metal3.
B: Essentially, what it does is provision a certificate in-cluster for each agent, then provision, in this case, a bare metal host, install the OS, perform all of that configuration, and then add a hook to install a little Live agent. The agent listens on an authenticated API for the controller. We then have a Live agent controller.
B: That's basically communicating with these agents, doing things like health checks, reconciling agent state, and so on. Then we have a Live machine controller; this comes into play when we have a consuming cluster.
B: For example, we scale up an MD; that results in the creation of a Machine from a MachineSet, and that Machine will have an infrastructure reference to a LiveMachine and a KubeadmConfig, which essentially contains instructions for how to bootstrap Kubernetes. For an init, this would contain instructions for how to perform, essentially, kubeadm init.
B: In most other cases, it's going to be, essentially, a kubeadm join. So when our Live machine controller reconciles these machines, what it does is pick an available agent, read the KubeadmConfig from the corresponding Machine, and send an authenticated request over to that agent, which will then just execute the KubeadmConfig components. If that's successful, it links that agent to the LiveMachine, and that effectively completes the reconciliation.
B: This provides a nice interface for us, because the Live machine controller, as it sends these kubeadm commands over to the agent, can inject hooks that are run during state transitions. That gives us some additional interfaces for configuring lifecycle tasks.
B: Obviously, a KubeadmConfig already gives us things like pre-kubeadm commands and post-kubeadm commands, but with this we can have additional hooks injected as well. We can do, for example, pre- and post-deprovisioning, which today is kind of difficult to hook into in a lot of cases.
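The existing hook surface mentioned here is the KubeadmConfig's command lists. A minimal sketch of a KubeadmConfigTemplate using them (the name, label, and commands are placeholders for illustration):

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
kind: KubeadmConfigTemplate
metadata:
  name: worker-bootstrap          # placeholder name
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "pool=workers"        # placeholder label
      preKubeadmCommands:
      - echo "runs before kubeadm join"        # placeholder hook
      postKubeadmCommands:
      - echo "runs after kubeadm join"         # placeholder hook
```

The pre/post-deprovisioning hooks described in the demo would be additional to these, injected by the Live machine controller rather than expressed in the KubeadmConfig itself.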
B: Again, all these requests are sent to the agent by the controller, and the agent verifies the client authentication; it only executes commands that it can verify are authenticated with the certificate from the cluster controller that provisioned it. So, putting all that back together:
B: What we have in practice looks like this. This looks very similar to our existing Cluster API Provider Metal3 example from earlier; we've just swapped all the machine infrastructure providers over to LiveMachines, away from Metal3Machines. So, essentially, we scale up our machine deployment and that lifecycle event gets triggered.
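To make the swap concrete, a MachineDeployment wired to the provider described here might look like the following sketch. The `LiveMachineTemplate` kind, its API group/version, and all names are assumptions inferred from the demo, not a published API:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha4
kind: MachineDeployment
metadata:
  name: ceph-workers               # placeholder name
spec:
  clusterName: workload-1          # placeholder cluster name
  replicas: 2
  selector:
    matchLabels: {}
  template:
    spec:
      clusterName: workload-1
      version: v1.21.2             # placeholder Kubernetes version
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
          kind: KubeadmConfigTemplate
          name: ceph-workers
      infrastructureRef:
        # Hypothetical group/kind, standing in for Metal3MachineTemplate
        # in the reference architecture described above.
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
        kind: LiveMachineTemplate
        name: ceph-workers
```

Scaling `replicas` up is the lifecycle event that the Live machine controller reacts to by claiming standby agents.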
B: So let me launch my demo here; I have to stop sharing and re-share my terminal.
B: Let me just make the text a little bit bigger. Okay, so over here on the left side we have a single consumer cluster; this is just a run-of-the-mill workload cluster with a single control plane. In the right pane we're in our management cluster, where we've provisioned a number of Live agents. Most of these are in a ready, idle state, as you can see.
B: One of these Live agents has been consumed and provisioned as the first control plane node of our cluster over here.
B: So let's go ahead and scale up an MD and add a couple of Ceph nodes to this cluster, and we'll see what happens. Once we scale that cluster, we see here that our agents are transitioning; our Live machine controller has started.
B: It is reconciling the newly created LiveMachines. It has selected two agents matching the selector criteria from the parent MachineDeployment and is now running that pre-provisioning hook. We can see here that these are now joining the cluster: it's finished the pre-provisioning hook, these have now joined and entered a ready state, and when we look back over here at the agents, we can see that they have transitioned to a provisioned state.
B: This can be a little bit difficult to appreciate if you haven't worked with metal, because ordinarily this would be a 15-to-20-minute process from the time of scaling up your MD to actually seeing nodes pop into the cluster. So this is a fairly dramatic improvement, for example for cases where a node dies unexpectedly and you need to replace it.
B: Well, if it takes 20 minutes to replace that node, you may have encumbered workloads for the duration of that time, and other customer impact may result. And we see the same process happen in reverse: if we scale this replica count back down,
B: these agents will enter a deprovisioning state; once the deprovisioning completes, they'll go back to pending and then essentially re-register. This could be a place where you can inject hooks, for example to do system cleanup: wipe disks, set the system back to a base state. It just provides a nice additional interface for configurability there.
B: Yeah, that's really most of what I wanted to show. Again, we're still in very early development. I know there's been some discussion on ways of decoupling that host OS provisioning from the node provisioning,
B: so I thought this might be an appropriate topic to bring up to that end. Really, since this is in the early phases of development, we want to make sure that we're not doing anything, or heading in a direction, that might bring us into conflict with future initiatives from the SIG going forward.
B: The intention would be to continue development on this and, ideally, do whatever we need to do to get it adopted as an official provider, whatever that process may look like. Awesome, so yeah, thank you very much, everybody.
A: All right. You have a question?
F: Excellent demo; I'm assuming the lights on some of those hardware pieces have been blinking really fast to get this working. No, I have a general concern here, which is: my assumption is that, potentially, the separation between machine bootstrapping and kubeadm (or kubelet) bootstrapping, if you will, is going to have to be supported in core Cluster API.
F: Yeah, so the concern is whether we should, you know, allocate the repository for this provider before core supports it, or whether we should hold off.
B: Well, right, yeah. I'm not quite sure how to begin answering that, but we're absolutely open to getting this functionality in; I mean, we really just want the functionality, and we want to...
D: Yeah, so there are two avenues here that probably need to be approached at the same time. For CABPK as a bootstrap provider, there are definitely a few things going on at the same time. There's 5124, which Fabrizio pointed out in chat; that's the issue that I think you folks from Twilio opened, right? Which is: hey, we want the format to actually just be plain kubeadm, and we don't want anything else.
D: This could be a pretty easy additive change in CABPK today; we just need to address some requirements around it. For example, if we do have plain kubeadm, we need to make sure that the validation webhook blocks the fields that won't do anything, to inform the users.
D: So that's one side of the question: make sure that the current provider can just upgrade to kubeadm types. Then there is separating the code a little bit in CABPK and cleaning it up over time, to make sure it's a bit more separated, I guess, from OS bootstrapping with cloud-init to the kubeadm parts of it. And then there is the proposal that we were discussing last time, which is: if we do separate OS and Kubernetes bootstrapping,
D: could we have the machine types, the machine templates, and so on expose these types directly in Cluster API, and so have kubeadm types and Ignition or cloud-init be more batteries-included, rather than a completely different bootstrap provider? So there are a bunch of different things we'd probably need to discuss, but I think a good first step would be to add support for the plain format in CABPK.
A: Yeah, and I think we would love help with starting to think about the proposal and the requirements for separating things out; since you've been looking at that area very closely, you might have some good input there.
D: No, this will be backward compatible; I don't see any breaking changes, because we still want to allow any company or person using Cluster API to replace the whole bootstrapping process if they want to. We don't want to be too prescriptive, but we do want to improve the user experience at the same time. So there's a trade-off there, and we'll make sure to keep it backward compatible.
F: Let's see. So Naadir is saying that these providers can still exist as a separate repository until we have these changes in core. So, like I said, I don't have a strong opinion; if the others want to see that, we can proceed down that route.
L: I do, yeah. No, so I think CAPL is still a provider, in the sense that it works with existing provisioned machines; the way it interacts with the machine and gets information onto it is its own provider, in much the same way as the Azure or AWS providers. So it's still an infra provider of some sort. What I mean is: if we can get the raw kubeadm format, then that's a much better consumption model for CAPL.
A: Hey, Sebastian, I'm just curious: right now, how are you getting the demo working? Are you doing some sort of parsing of the KubeadmConfig, or are you writing your own provider?
B: Yes, so our agent is basically able to parse the KubeadmConfig. What we want to do in the future is move to a different format; we're essentially supporting that format now, but that's probably not the best user experience, and this would be a good place for something like a raw format. Or, if we wanted to support something like Ignition going forward, it could be updated to support those formats as well.
K: First of all, thanks for the demo; we should have more demos in this meeting, because they are always fun and interesting. Second, given that, as Sebastian clarified, this is at an early stage, I think that the major concern for this team is to make this possible: to remove all the blockers and put in place all the extensibility points that are required. Then we will discuss code organization when it is time.
B: Yeah, absolutely; there's still a lot of work for us to do on our end. We need to make sure we have end-to-end tests and all the other conformance items, which we'll absolutely be working on in the coming weeks as well. So, looking forward to continuing on that adoption process.
A: Awesome. One question I had, for the Live part of it: what decides how many agents are on standby? Does that depend on the hardware you have available, or is there some sort of format?
B: Yeah, so right now we're basing it off of labels. We're using, again, bare metal hosts on a Metal3 implementation, so we have a reconciler that will effectively take any BMHs that have a particular label, basically reserved for agents, and agents will be provisioned from those.
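A sketch of the label-based reservation described above: a Metal3 BareMetalHost tagged for the agent pool. Only BareMetalHost itself is a real Metal3 type; the label key/value and all names are assumptions for illustration:

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: rack1-node07                         # placeholder host name
  labels:
    # Hypothetical reservation label; the reconciler described in the demo
    # would watch for hosts carrying it and provision standby agents on them.
    live.infrastructure.example/pool: agents
spec:
  online: true
  bootMACAddress: "00:00:00:00:00:00"        # placeholder
```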
A: Awesome, all right. Well, thanks for the demo, it was really great, and yeah, looking forward to the guitar demo from Fabrizio next time. All right, let's keep going, because we have two more topics and a bit less than 15 minutes left. So yeah, Fabrizio, you have the next topic; go ahead.
K: Yeah, this is an update from the kubeadm meeting that just finished. In kubeadm, too, we are starting a discussion about graduating our types, from v1beta to v1 in this case, and the reasons are almost the same as for Cluster API: the APIs are there, we are already providing higher-level guarantees, and the thing is used in production. So, yeah.
K: We started this discussion in the kubeadm meeting, but what is really important is to get feedback from this community and from everyone who is using Cluster API, and the next step that we are planning is to send an email to the main list.
K: There will then be grooming, focused only on API-change issues, and the result of this grooming will be to identify things that we consider blocking for graduation or not. So the ask to this team is: if you have some expectations, some requests to forward to the kubeadm team, this is the right time.
F: What are we going to do if some of our APIs, such as the kubeadm API or the Cluster API APIs, graduate to v1? Are we going to establish a recommendation that the API is set in stone and no incremental changes should happen to it, with a new API version, such as v2, released instead? This is almost a SIG-wide discussion, and I would like for us to have it in the next Cluster Lifecycle meeting, which is next Tuesday.
A: I guess one question I have is: there was some discussion in the past, I remember, about making some of the add-ons that are installed as part of kubeadm a bit more flexible, possibly leveraging operators through the cluster add-ons project or something like that, to be able to change things like the CoreDNS deployment.
K: Yeah, the original idea was to wait for the add-ons project, so we have to sync up with them again, or we have to decide whether to provide, let me say, another solution. But I think that, as we are discussing for Cluster API, the point is the same: this is a label, and graduating is not stopping the evolution of the project; it is just a declaration of the maturity that we have. But definitely we should ask ourselves whether this is something that, for us, blocks graduation or not.
F
My personal preference right now is to completely stop managing add-ons the way we are managing them today, which is hard-coding them. But the operator add-on concept is not really mature yet, unfortunately, and also Justin Santa Barbara found some RBAC-related problems with it. So yeah, I don't think we can consider some of these things strictly blockers; at the same time, obviously, we need the discussion either way.
A
I see. I think, yeah, Fabrizio, Vince: maybe it'd be good to discuss add-ons and the add-ons project in one of the next SIG meetings, because that's something that Cluster API is also kind of, you know, blocked on in terms of CSI driver stuff and things like that. So maybe that's something we need to focus on as a SIG.
A
Makes sense, cool. All right, last topic: Alex, the kubemark provider.
M
I was wondering as to the reasons — so I see a related PR that sort of provides support for hollow nodes for the Docker provider, but I was wondering about a use case where we want to test new features on a bare-metal cluster and just simulate a large fleet of workers for it. Or, let's say in our case, for the KubeVirt provider: we want to create a control plane running on KubeVirt, but we still want to simulate, let's say, hundreds of nodes with kubemark.
E
Oh, I didn't necessarily have a response to how you would do it without the kubemark provider, and I would defer to Fabrizio for kind of the bigger decision on moving it. I just think, on that archived repo: I had gone there to put up another PR to adjust the README so that it pointed to the new location, and it got archived before I got a chance to do that.
E
So
I
think
we
should
probably
try
to
un-archive
that
and
put
the
and
put
that
last
pr
update
in
there
just
to
point
people
to
where
the
you
know
where
the
new
stuff
is
going
to
be.
E
We need to, yeah, we need to merge Fabrizio's PR to get it, you know, kind of officially part of our repo. But the idea here was that originally, you know, Ben Moss had created that separate kubemark provider repo, and then we had decided kind of as a community that we thought the kubemark provider was valuable enough that it should be part of our testing infrastructure, because it's going to have a lot of benefits for extended testing.
E
Unfortunately,
we're
just
kind
of
I
think
we
kind
of
ground
to
a
halt
in
that
pr,
but
it
would
be
great
to
get
it
merged
at
some
point
in
the
near
future,
so
we
could
have
it
as
an
official
provider.
I
think
we
just
need
we're
just
in
the
last
stages
of
kind
of
getting
it
working
and
getting
it
in
there.
K
I think that, at least from my point of view, the situation is the following: we created a repo for a new provider, but then Ben basically changed jobs, so the activity got kind of stuck. I tried to revive this effort, but basically I don't have enough bandwidth to maintain it as a separate provider and take care of all the scaffolding that a provider has. And so I proposed to merge it into CAPD, which is the testing provider that we have, so we can have the hollow-nodes functionality without, let me say, the overhead of maintaining a separate provider.
K
It
was
a
a
kind
of
way
forward
to
to
keep
this
dcd
going,
but
yeah
the
pr
is
still
there.
E
You know, repo, and then more recently it got archived. So yeah, I guess I'm curious to hear, you know, from Fabrizio — yeah, if you don't know why it got archived, then I guess, really, it would have been nice to keep it open until we were able to have the new PR in place. So maybe we could un-archive it just to kind of update what's going on there or something. But I do have some bandwidth to work on this, and I'd like to help get that PR in.
F
I recall that we discussed the archival during the SIG Cluster Lifecycle meeting; it's one of our inactive projects, and we do that periodically — you know, if we have one maintainer that no longer works on it and nobody's submitting code to it, we just archive it. If you want to un-archive it, I think we have an open DM with me, Vince, and I guess Justin and someone from the GitHub team. Vince, would you like to tell them that we can un-archive this provider, or what?
D
I
E
You know, CAPI stuff, and then, you know — obviously there was no movement happening there, but, you know, Ben kind of, like, passed that off to me, and I was willing to keep doing it there, you know. And I think we should have it at least until the point we have the PR merged in, you know, Cluster API. But I'm happy to follow whatever the best guidance here is.
K
No — we agreed on archiving CAPK, okay, the separate repo for the kubemark provider. But when we were discussing archiving the kubemark provider at the last SIG meeting, we basically repeated what we are saying here: there is still no clear discussion and no clear direction. Even if there are no activities, so please keep it, keep it around for now.
F
Vince, I guess you can try to talk to mrbobbytables about this. I mean, you — the core CAPI maintainers — should decide what to do with this provider, I guess.
D
Well, let's move this into the channel — we're two minutes past time.
A
All right, thanks everyone online, bye.