A
Hello everyone, today is Wednesday, December 1st, and this is the Cluster API office hours. If you haven't joined us before, this is a reminder that we have a meeting etiquette.
B
Sometime in the middle of January, and we will start to cut the first RC release, probably around the middle of December. And a PSA: the most important thing here is that we are introducing ClusterClass; a lot of work has been done there. Stefan will give an update later.
B
What we really need now is feedback from the providers, from the community, so we are looking out for all of that. And if someone from the providers needs help getting started with ClusterClass, please reach out to me, to Stefan, or to Killian or Yuvaraj, to the team, so we can help and try to bring the whole community to using this feature.
B
And I'll go on also to the next point, which is release planning. So in the last two or three weeks we had a couple of fixes, both in main and then backported onto the different branches, and so probably sometime at the end of next week we will cut a series of patch releases with all these fixes.
A
Awesome, thanks Fabrizio. All right, any other PSAs, or any questions about the release planning?
A
All right, if not, let's keep going. I'm not gonna go over each proposal, but does anyone have updates on any of these, or on any proposal that's in progress that's not in here but should be, or anything that should not be in here anymore?
C
This is David Watson. I just have a quick comment on the load balancer provider.
C
I'm not aware of much work occurring on that this half; it's something that I'm considering working on next half as part of my company's roadmap. So I'll just mention that I'm still interested in doing this, but I don't think that a lot has happened yet.
A
All right, let's keep going then. So let's get into the discussion topics. Paul and Cheyenne, you have the first one.
D
Yeah, thanks. So we just wanted to sort of say hi, because this is the first time that we've joined, just to let you guys know that we at Oracle are working on a provider for our cloud infrastructure. It's just to sort of reach out and say that we're joining, being part of the community.
D
You know, we'll probably have some silly questions, but we're mostly looking for some advice as we go forward, really. I'm a product manager looking after our implementation of this, and we've also got Todd, Joe and Cheyenne from engineering on the call as well.
E
We have an initial implementation, which uses the image builder project as well as the Cluster API. So we would like to understand the next steps: how to create a repo in the kubernetes-sigs project, how to, you know, push stuff there, whether we can raise a PR for the image builder project, and all those things.
A
I'll take the image builder part and then, Fabrizio, I'll let you maybe take the repo question. For image builder, yeah, feel free to open up the PR to add a new provider for image building. It has proper documentation and everything, and you can find some example PRs of previous providers that have been added; it should be pretty straightforward, for sure. Fabrizio, do you want to answer the question about opening a provider repo, or creating one?
B
Once you are basically ready to do so, then it is just a matter of opening an issue in the Kubernetes org. Typically approval from the SIG chairs is required; in this meeting we have Vince, but it would be nice if you show up at the SIG meeting so you basically introduce yourself to the rest of the SIG as well. Once the issue is there and Vince and the other leads plus-one it, the repo gets created and you are ready to go. So it's not difficult, just after you sort out the problem about donating code.
A
Well,
the
other
thing
I'll
add
is
there
are
a
lot
of
other
providers
hanging
out
in
the
cluster
api
channel
on
slack.
So
if
you
have
any
like,
you
know,
questions
or
anything
you
can
always
post
in
flag
and
get
some
good
yeah.
A
Great, all right. Killian, go for it: the machine health checks proposal.
G
Yeah, so this proposal has been kicking around for a while. It's been quite dormant, but I just made some updates to it over the last couple of days. So this is about defining a MachineHealthCheck inside a ClusterClass and then having the ClusterClass reconciler create that health check. So this changes the ClusterClass API, and it's really well documented, along with a bit of the behavior and validation of it. So yeah, I think all of the conversations are resolved.
G
So if you had previously taken a look at this, I kind of spun a couple of conversations out into different issues and tried to make this a kind of minimal implementation, so we can iterate on it later on. So anybody who can, please review this; it's a relatively small change. And particularly anybody who has any expertise on machine health checks, and any experience using them today, please make sure this aligns with what people would expect in a ClusterClass implementation.
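For reference, a rough sketch of the shape this proposal points at: a MachineHealthCheck defined inline in the ClusterClass, which the ClusterClass reconciler would then create for clusters built from that class. Field names and placement here are illustrative assumptions based on the discussion, not the final API:

    apiVersion: cluster.x-k8s.io/v1beta1
    kind: ClusterClass
    metadata:
      name: example-class                # hypothetical name
    spec:
      controlPlane:
        # Illustrative: a health check the reconciler would create for the
        # control plane of every cluster built from this class.
        machineHealthCheck:
          maxUnhealthy: 33%
          unhealthyConditions:
          - type: Ready
            status: Unknown
            timeout: 300s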
A
All right, if not, let's keep going, and I think we have our first demo. So I will stop sharing my screen, and Anusha, you can take it away. You should be able to share your screen.
F
Can you see it still? Yep, okay, great. So hey everyone, I'm Anusha, I'm an engineer at VMware. We have been working on a new provider called Bring Your Own Host, BYOH in short. So in this meeting I want to introduce to you all what the Bring Your Own Host provider is, why we started with it, and probably show you a small demo of how this works and where we are at currently. Now, this might seem a little familiar, because just a few weeks ago we had a demo from cap live by Twilio.
F
So we have started a discussion with them on Slack, and we'll see how we can collaborate with them, whether there is any overlap between the two providers, and how we can move forward. But right now I want to just talk about BYOH.
F
So why BYOH? It started with a use case where we had to support Kubernetes on infrastructure like Hyper-V. There was no Cluster API provider for Hyper-V present, and so we thought Bring Your Own Host could be a good use case here. We also plan on possibly extending this to bare metal, because we have seen customer use cases where the customer wants to manage the host, the physical host and the operating system, and they want us to manage everything Kubernetes around it.
F
So we think this could be a good potential use case for bare metal as well. Coming to the current Cluster API providers like vSphere or AWS:
F
This is how a typical cluster provider works. I'm not sure if all the providers work in a similar way, but I'm sure most of them do. Whenever you want to create a workload cluster, say with one control plane node and one worker node, the Cluster API controllers, together with the provider controllers, provision the hardware. Suppose we are creating a vSphere cluster: it creates vSphere VMs based on the different OVAs that are already available, and these OVAs have Kubernetes binaries baked into them.
F
So all of these OVAs or AMIs that are built using image builder have Kubernetes pre-installed in them, and then we use the bootstrap providers like kubeadm to bootstrap the Kubernetes node. So this is what happens in a typical provider.
F
So what is the difference with BYOH? The hardware and the OS are user-managed, in the sense that the customer or the user is responsible for, you know, installing the OS and any security updates to it; the customer or the user has to manage it. And what the BYOH provider and the CAPI controllers do here is install the relevant Kubernetes binaries, and then we also use the kubeadm bootstrap provider, so we use kubeadm to bootstrap the Kubernetes node.
F
Suppose, say, you want to scale up or scale down clusters: since the hardware is not under our control, we don't have the ability to create or discard hardware on the fly. If you want to scale down or delete a cluster, we will not delete the hardware per se, but we will try to de-provision the Kubernetes node; whatever Kubernetes binaries we have installed, we try to do a clean uninstall. So it kind of breaks down this entire stack into host provisioning and node provisioning.
F
This horizontal line indicates that the management cluster is in a data center, and the workload clusters could potentially be in a data center or at edge locations like retail store outlets. BYOH in its current form has the limitation that we cannot provision a management cluster out of BYO hosts. That's why we have the prerequisite that the management cluster has to be on one of vSphere, AWS or Azure, and on this management cluster we will install the BYOH provider as well. And on the BYOH side, that is, at the edge location, on each of the user hosts:
F
All the user has to do is run this host agent, and the host agent will register the host into the capacity pool. So next time, whenever there is a cluster creation request, suppose, say, to create a BYOH workload cluster with one control plane node and one worker node, Cluster API creates all the machine deployments and machine templates, and we have defined similar CRDs like ByoMachineTemplates and ByoMachines. So these ByoMachines look for: are there any available hosts that they can attach to?
F
Because it is not something that can create a VM or create an EC2 machine; it has to look for the available hosts that are registered. If there are available hosts, it gets attached to one. Once this attachment is done, the host agent can move forward with Kubernetes provisioning: it installs kubeadm, kubelet, kubectl, even a container runtime like containerd, and then uses kubeadm to bootstrap it into a Kubernetes node.
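To make the flow concrete, here is a minimal sketch using the CRD names the project defines (ByoHost, ByoMachine, ByoMachineTemplate); the API version, names and fields are illustrative assumptions, not the project's exact manifests:

    # On each user-managed host, the host agent registers the host as a
    # ByoHost in the management cluster's capacity pool (hypothetical flags):
    #   byoh-hostagent --kubeconfig management-cluster.conf
    #
    # The workload cluster's machine deployments reference a
    # ByoMachineTemplate; each resulting ByoMachine then attaches to a free
    # registered ByoHost instead of creating new infrastructure.
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: ByoMachineTemplate
    metadata:
      name: byoh-md-0                    # hypothetical name
    spec:
      template:
        spec: {}                         # host selection: any free ByoHost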
F
Now suppose, say, you want to scale up further, say to three worker nodes, but you had registered only two hosts: obviously the third ByoMachine is just going to be hanging in there; it is going to get stuck in a provisioning or a pending state. So as long as you have enough hosts at your edge location or the customer site, you can create as many workload clusters as you wish to.
F
So this is the basic high-level architecture. I do have a short demo; I'm going to walk you through it.
F
Okay, so, like I said, we do need a management cluster on one of the existing providers; for the demo I'm using a kind cluster.
F
You see it now, right? Okay, yeah. So we are creating a kind management cluster here, and then we are installing the BYOH infrastructure provider.
F
Yeah, so for this demo, to simulate the hosts, we are using Docker containers. When I talked about the host agent installing all the Kubernetes binaries: we have a component called the installer that does all the installation and uninstallation for us. At the moment we have some difficulty in making this installer work with Docker containers, so for this demo I'm using a container that already has all the Kubernetes binaries on it. It is this Dockerfile.
F
So if you see here, we started the host agent. This registers itself as a ByoHost. The network info is basically: we use kube-vip for load balancing, for our control plane IP. So when I say "add network info", it is basically detecting what the default network interface is, so that kube-vip can get attached to it.
F
Now we'll go back and apply the cluster template. Okay, before that, if we do a get on the ByoHosts, we see that all four hosts have registered.
F
At the moment we don't have the capability to select any particular host for provisioning control plane or worker nodes, so out of whatever hosts have been registered, we will just pick one of them to be the control plane node, and the remaining ones are going to be the worker nodes.
F
So if we go back, we can see that they've all gone into bootstrapping the Kubernetes node, and then finally successfully bootstrapped. If we examine the logs of, say, the control plane host and a worker host, there's literally no difference, because we are not differentiating any host as control plane or worker in particular.
A
Does anyone have any questions? All right, we do have two other demos left, so if you want to wrap up your slides, go for it, but let's try to keep it quick.
F
So over here: what's next for BYOH? Like we saw, we're using a kubeconfig for talking to the management cluster, so we want to move to more secure API and agent authentication mechanisms.
F
We want to be able to provision the management cluster out of BYOH and not rely on other infrastructure providers for that, and we also want to extend BYOH to edge sites. What this means is that edge locations are usually a constrained-resources use case, so we want to see if we can reduce the host agent binary size, or how we can make BYOH work with other Kubernetes distributions that are more suitable for edge use cases.
F
So yeah, if you liked this demo, or if you like this provider, and even if you don't like this provider, do get in touch with us. We just went open source last month, so we are under vmware-tanzu as of now, and we don't have an upstream Slack channel yet, so you can reach out to us, to me and Dhamjit, in either the cluster-api channel or the bare metal channel.
H
Okay, perfect. So I'll start with a short status update and then I'll just show some YAML; it shouldn't take too long. A status update on the ClusterClass patches implementation: essentially we got most of our PRs merged, so when you pick up Cluster API from main and enable cluster topology, it should work. We have a little bit of work left regarding validation, but that only matters if you update ClusterClasses.
H
That is, if you change various schemas and things like that. A big shout-out to Killian for implementing all that validation and defaulting stuff; it was a lot of work, and even more unit testing, so great work there. Next hint: I'm not sure if anyone noticed, but we already have four end-to-end tests running on the core repository, and they're also usable by providers. So we have the usual quick start end-to-end test, and we have Kubernetes upgrade tests.
H
We have a ClusterClass changes test, which essentially tests that changes to the ClusterClass are rolled out and that ClusterClass rebases work, and we even have a self-hosted test, which tests whether ClusterClass is able to run in a self-hosted management cluster. Yep, that should be it. One additional hint: AWS also already has a quick start ClusterClass test. So please start adopting, if possible, and come back with your feedback and questions if there are some problems.
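For anyone trying this out from main: ClusterClass and managed topologies sit behind the CLUSTER_TOPOLOGY feature gate (that flag name is from Cluster API itself; the provider choice below is just an example):

    # Enable the managed-topologies feature gate before initializing the
    # management cluster, then init as usual:
    #   export CLUSTER_TOPOLOGY=true
    #   clusterctl init --infrastructure aws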
H
A short outlook: the next features coming up are essentially extending the work on patches to support optional patches, so that patches are not always applied but depend on some variables, and extending the variable schemas. Currently we only support basic types (booleans, integers, strings and such), and that work will make it possible to also use arrays and objects. Yep. So, now to some YAML.
H
What I essentially showed is the Cluster API provider AWS quickstart end-to-end test, just to give a short glimpse of what the ClusterClass looks like and how you can use this stuff. So, first up, on the left side is the ClusterClass.
H
In this case we have a variable for the image repository, which here is gcr.io, and we added two additional variables to be able to control which etcd image and... oh sorry, that was actually the wrong one. Yep, that is the AWS demo, so back to AWS.
H
So in the AWS case I just replicated what the regular quickstart does. We have a variable for the region, one for the SSH key name, and ones for the control plane machine type and the worker machine type, so just a very minimal set of variables, and we have patches accordingly. Just one example: to actually use the region variable, I have a patch; it patches the AWSClusterTemplate, and it patches the region field according to the variable.
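As a minimal sketch, the variable-plus-patch pair just described looks roughly like this in a ClusterClass (names and paths are illustrative, following the AWS example):

    apiVersion: cluster.x-k8s.io/v1beta1
    kind: ClusterClass
    metadata:
      name: aws-quickstart-example       # hypothetical name
    spec:
      variables:
      - name: region
        required: true
        schema:
          openAPIV3Schema:
            type: string
      patches:
      - name: region
        definitions:
        - selector:
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
            kind: AWSClusterTemplate
            matchResources:
              infrastructureCluster: true
          jsonPatches:
          # Write the user-provided region variable into the template's
          # region field during reconciliation.
          - op: replace
            path: /spec/template/spec/region
            valueFrom:
              variable: region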
H
According
to
the
variable,
when
we
look
at
the
corresponding
cluster
template,
we
see
the
list
of
variables
and
yeah
just
key
value
in
a
cluster
template.
That's
still
just
the
end
substit
thing,
because
when
cuddle
creates
a
cluster
with
cluster
class
it
it
still
does
the
same
local
and
substitution
to
set
those
fields.
But
of
course
later,
if
you
just
create
a
custom
manually,
you
just
fill
in
the
value.
H
So in this case we have the control plane template, and you can see (it's somewhere up there in the log) that we set the SSH key name on the cluster and we set the instance type. That's essentially how it is resolved: during reconciliation we take those variables, we take those patches, and when we create the objects of the managed topology we just apply the patches on top and fill out all those fields.
H
Now, when we want to change a variable, we can just edit the Cluster, for example. What was the name... worker machine type; go to the worker machine type, change it here to t4.large, and what we should hopefully see is that we get a new machine with t4.large. And when we take a look at the machine deployment and the linked AWSMachineTemplate, you would see that it now has t4.large. Yeah, so that's essentially how the AWS quickstart works.
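For reference, a sketch of the Cluster side of this: the topology stanza carries the version and the variable values, and editing a variable as just demonstrated is what triggers the rollout (names and values are illustrative):

    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      name: aws-quickstart               # hypothetical name
    spec:
      topology:
        class: aws-quickstart-example    # the ClusterClass sketched above
        version: v1.22.0
        controlPlane:
          replicas: 1
        workers:
          machineDeployments:
          - class: default-worker
            name: md-0
            replicas: 1
        variables:
        - name: region
          value: us-east-1
        # Editing this value rolls a new machine template out to the
        # machine deployment, as shown in the demo.
        - name: workerMachineType
          value: t4.large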
H
One
additional
example
what
we
can
do
or
what
we
are
validating
with
cap
d
if
it
actually
works
a
little
bit
more
tricks
here.
So
we
have
in
this
case
two
variables
which
can
be
used
to
customize,
which
fcd
and
which
coordinates
version
are
used.
Those
are
usually
just
properties
on
the
kcp
template
and
you
can
still
just
go
into
your
kcv
template
and
set
those
variables
if
you
want.
H
But
if
you
want
to
make
them,
let's
say
more
easily
user
facing
you
can
introduce
variables
here
and
on
a
cluster
resource
you
can
either
just
set
them
empty.
That
means
that
the
default
value
is
used,
the
cube
adm
default
value
or
you
can
pick
whatever
hc
version.
You
want,
of
course,
and
that
is
applied
one
additional
thing.
I
think
that
was
maybe
use
case
in
the
vsphere
case.
I'm
not
sure
if
that's
still
a
problem,
but
let's
say
you
have
a
specific
kubernetes
version.
H
You
want
to
use
in
your
cluster
and
you
have
to
find
or
find
some
way
where
you
can
calculate
properties
based
on
that
kubernetes
version.
H
We are generating a patch, and what we do is set a custom image based on the current version we have, so in that case kindest/node:v1.22.x, whatever it is, is used. It's a simple mechanism to do a little bit of calculation depending on the version the cluster has, and when we change the version, one machine deployment after another is upgraded.
H
Your machine deployment will always have the correct version; it's just calculated on the fly, and you don't have to worry about which version it is. It's always the same as the machine deployment. Yep.
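A sketch of that kind of computed patch, using the builtin variables that ClusterClass patches expose (the builtin mechanism is part of the patch engine; the selector and path here follow the CAPD example and are illustrative):

    patches:
    - name: customImage
      definitions:
      - selector:
          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
          kind: DockerMachineTemplate
          matchResources:
            machineDeploymentClass:
              names:
              - default-worker
        jsonPatches:
        # Derive the kindest/node image tag from the version of the machine
        # deployment currently being reconciled, so they always match.
        - op: add
          path: /spec/template/spec/customImage
          valueFrom:
            template: kindest/node:{{ .builtin.machineDeployment.version }}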
A
Thank you, Stefan. Yeah, I guess in the Azure provider we're a bit behind; we had some issues with the Azure cluster templates, but we'll give you feedback as soon as we can test it. All right, I think we have another demo next. Yuvaraj, do you want to share your screen then?
E
Can everybody see it? Yeah, okay. I'll try to keep the demo short. So, with the new managed topologies in place and ClusterClass in place:
E
The way a user would upgrade their cluster is just by changing the version in the topology, but that version is not immediately reflected on all the underlying components, like the control plane object and all the machine deployment objects. Instead, it's rolled out in phases; we upgrade the components one at a time. So first the control plane is upgraded, and only after it's stable do we start upgrading one machine deployment at a time.
E
So at this stage, since the underlying object specs and the topology are out of sync (basically the version is not yet propagated to all of them), the status will be false, and while the status is false it also shows you helpful messages on why the status is false, what it is waiting on, what is being held back, and stuff like that. So we'll take a look at that. I'm going to use the Docker provider for this demo.
E
I have a cluster with just one KCP node and two machine deployments, each with one replica. The initial cluster version is going to be 1.21.2, and you can see that TopologyReconciled is true. It means that the underlying objects' desired spec is in sync with the topology. It doesn't necessarily talk about the status; it just talks about the spec: are the objects created with the desired spec that reflects the topology? Now, let's edit the cluster.
E
Let me change the version to 1.22.0, and you can see that the version change is not immediately propagated to the underlying objects, because the topology reconciler waits for the components to be stable before it propagates the update. We can see that the status is false, and we can also see a helpful message saying: control plane upgrade to 1.22.0 is on hold because the control plane is still in the process of finishing the initial provisioning, and so on. Similarly, once the control plane is provisioned:
E
If we still keep the update on hold because, let's say, the machine deployments are rolling out, the message would again reflect that: the upgrade to such-and-such a version is on hold because machine deployments are rolling out, and so on. And we also have appropriate reasons set against the condition. There are three kinds of reasons we can have when the condition is false: one is a topology reconciliation error, another is control plane upgrade pending, and the last one is machine deployments upgrade pending.
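A sketch of how that condition surfaces on the Cluster object while an upgrade is held back (the condition type is as discussed; the reason string and message here are paraphrased from the demo, not exact):

    status:
      conditions:
      - type: TopologyReconciled
        status: "False"
        reason: MachineDeploymentsUpgradePending   # one of the three reasons
        message: upgrade to v1.22.0 on hold, machine deployments are rolling out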
E
It's a pretty quick demo. We can see that now the control plane is stable, but we're still waiting for the machine deployments to finish rolling out before we pick up the update; the message has been updated to show that we're still holding back the 1.22.0 update because we're waiting for the machine deployments to finish rolling out. So that's the new TopologyReconciled condition that we have on our Cluster objects.
E
There is a PR open for that, and I have already linked it in our meeting notes, so if people want to take a look at it and review it, that would be great. But that's the end of the demo. Are there any questions I can answer?
A
Thanks, Yuvaraj. I actually have a question: are there any cases where you let someone force the upgrade, even if the cluster is not stable? Let's say that some version of Kubernetes is causing an error on the machine deployments and you want to upgrade to the version with the fix.
E
In that case, I would say delete and recreate would just force the upgrade, but right now we don't have a way to force the upgrade if we want to. That is a good point, though; we should probably work on providing a way to do it, because right now you can't force it; it just has to roll out, with the waits.
E
And one thing that I should maybe mention, it's unrelated to the topology conditions, but we don't allow downgrades, so the versions can only go one way. If there is ever a problem in rolling out the next version, we can't roll back; we can only jump a version ahead.
A
All right, if not, I will take over screen sharing again.
I
Yes, I wanted to ask about the Ignition PR we've been working on for a while now. I wanted to post an update and see what the next proposed steps are. So yeah, we have all the conversations resolved, which, yeah, it was a lot; there are like 500 items on the PR.
I
I think the CI is passing; Johannes had some issues with some end-to-end tests, but I think this is now resolved. I also heard the announcement about the RC release of 1.1, and I was wondering if you think this could make it into this release, maybe. So yeah, it's pretty much a call for attention, and maybe to get some final reviews for it.
A
Do we have anyone here who's been reviewing the PR? I don't have the latest information.
I
So
I
think,
since
the
last
feedback
we've
added
missing,
so
there
was
the
antenna
tested,
but
that
was
a
long
time
ago
and
I
think
the
final,
the
last
additions
are
the
documentation
and
I
I'm
not
sure
if
this
has
been
like
thoroughly
reviewed.
Yes
still,.
H
Yeah, I did a full review of the whole thing, though I didn't have time yet to take another look; I'll try to do that, hopefully in the next week or so. Apart from that, as far as I could see, the major issues that were raised should be solved, so I agree with that in general, and I think all the points that I had in my review were not that major.
H
Just
another
round
of
nitpicking
or
something
which
I
would
expect
but
yeah,
I'm
not
sure
if,
if
others
will
raise
some
new
major
points
there,
that
I
think
that
could
be
what
what
might
block
it.
A
Yeah,
I
think,
to
answer
your
question.
It
sounds
like
it's
just
waiting
for
reviewers
which,
like
it
is
a
huge
pr
with
lots
of
activity
and
comments.
So
I
think
it's
hard
to
follow
takes
a
bit
of
time
to
review.
So
I
think
people
just
need
to
carve
out
some
time
and
do
another
round
of
review,
but
yeah,
let's
aim
to
have
it
merged
by
mid
of
december.
We
have
two
weeks.
A
If you could just tag the people who have been, you know, putting comments, so that they can have a chance to check that their comments were resolved, that would be great.
I
All right, yeah, we'll do it, thanks!
A
Well, Stefan, did you still want to say something, or was this it? Okay, cool. Chang, you have the next one, and the last one actually.
J
Yeah, so just some updates from the KubeVirt provider side: we kicked off the first meeting on the KubeVirt provider with Red Hat and Microsoft yesterday morning, and we plan to have our recurring meetings every Monday to bootstrap the project. Currently we are more focused on getting the CI working, because to test this KubeVirt provider we need nested virtualization, and we need the test infra to build on bare metal. We made an agreement with the KubeVirt community that part of our CI jobs will run on their infrastructure, because the technical stack is very similar, so we will first collaborate with them on the testing for this project. And once Fabrizio helps us set up the calendar event, we will share it in the Slack channel for everyone.
J
So we are trying to set up... yesterday I used my personal Zoom, and we are trying to set up a community one, and also to post this on the Kubernetes community event calendars, so that everyone can get access to it.
A
Okay, yep, sounds great. All right, if there are no other questions... I don't see any hands raised or any last-minute topics. I think... oh, Fabrizio, go for it.
B
A
Yeah, thanks for all the great demos, everyone. We'll see you next week, and talk on Slack in the meantime. Bye, everyone.