From YouTube: SIG Cluster Lifecycle - Cluster API 21-10-06
A: Hello everyone, and welcome to the Cluster API office hours meeting. Today is October 6, 2021. This is a big day for this community: we are releasing the production-ready release today. The tag has already been cut, and the bot has generated the release notes, which we have. Oh, okay, so I have it here. So we have the release notes in here. We haven't published the release yet, just because the images were taking a little bit of time to get promoted, so I think we could just do it here. Everything looks good. The CNCF announcement will go on the blog later today; we just got the link from folks. If anything needs to be changed, I'll reach out to CNCF. All right, so should we do it?
A: All right, yay, we did it! We arrived here. This has been three or four years in the making, and we're finally here, declared production ready. Now I guess we'll wait for the providers to go in as well. I saw the PRs to update the quick start this morning, so this should be good; it's already pulling clusterctl 1.0, and we need to make sure that the providers get updated so that the metadata and clusterctl won't complain. Any questions on the release? Comments, concerns? Speeches? Does anybody want to give a speech?
B: Okay, I guess I'll go first, just to be the icebreaker. Thanks, everyone. I'm super excited; this is really awesome, and I think it wouldn't have been possible without everyone working together, being super respectful to everyone, and being, you know, a model of an inclusive community. So I'm very proud of us for that. So yeah, thanks, and I'm excited for what's coming next. I think this is just a signal to everyone to say it's…
A: All right, just some final words from me: this is an amazing community. I've seen it grow so much, and it's really an emotional day, to be honest. It's kind of like climbing a mountain to get here, and I'm excited to see where we go from here next, and what the road to GA looks like in the future. In a couple of years we'll probably be able to talk about GA APIs and things like that. So it's exciting. Awesome. And with that, let's just move to the discussion topics. What do we have?
C: Yeah, I just pushed another update to address the last outstanding question we had about the annotations. I've changed the annotations to be clusterautoscaler.kubernetes., because apparently, you know, the cluster autoscaler is a core component, I guess. So I updated those, and I updated the status block, and I think those were the last two open questions we had. I also added some language, you can see it right there below the annotations, that says these will be defined in the cluster autoscaler, not in Cluster API. And then up towards the top, in the implementation details (you have to scroll up a little further; there's a section for implementation details), I added a second paragraph just mentioning that the implementation for the annotations will be in the cluster autoscaler, and that's so that we couple to that API as opposed to coupling to Cluster API. So hopefully that addresses everything we had outstanding.
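For context, the style of annotation being discussed is a capacity hint that the cluster autoscaler (not Cluster API) defines and reads. A rough sketch of how such an annotation might be applied; the resource name and annotation keys here are illustrative and not taken from the proposal itself:

    # Minimal sketch; names and keys are illustrative only.
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineDeployment
    metadata:
      name: md-0
      annotations:
        # Capacity hints the autoscaler could use to scale a group up from zero;
        # the canonical keys are defined in the cluster autoscaler's Cluster API provider.
        capacity.cluster-autoscaler.kubernetes.io/cpu: "2"
        capacity.cluster-autoscaler.kubernetes.io/memory: "8G"
    # spec omitted for brevity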
A: On my side, yeah. If any other people want to lgtm, or if you were a reviewer and want to review it again and leave comments, please do. I think let's give it until Friday and then we can merge it. Yeah, that's great; we can just re-ask for reviews here. Okay, should we go through these yet, or should we just move on? I know that there is a topic for MachinePool Machines coming up very soon. Yeah, so Matt, do you want to go ahead?
E: Yeah, sure, thanks. So I just want to give an update on the MachinePool Machines proposal we just showed there, and basically apologize that I haven't touched it in a few weeks, but I do have some updates coming for it. Hopefully I'll have all the to-dos cleared up, probably by next week's meeting. Rather than go over the doc again, just to get it on people's radar again, I thought maybe I'd show a real quick demo that might explain what we're trying to do.
E: So I've got it all running already, so I don't waste people's time, but here's a cluster. clusterctl describe is not aware of machine pools yet; that's something else we should maybe do, but that's what this is. So you can see, and maybe this already looks different from your normal machine pool cluster, that there's a Machine hanging out under it. Right now, when you provision a machine pool, it's its own construct and it'll spawn instances behind it, but you don't actually have any Machines that correspond to them.
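For readers less familiar with the resource being demoed, a MachinePool is declared much like a MachineDeployment but is backed by a cloud-native scaling group rather than individual Machines. A minimal, illustrative sketch; the API versions, kinds, and names below are assumptions for illustration, not taken from the demo:

    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachinePool
    metadata:
      name: worker-pool
    spec:
      clusterName: my-cluster
      replicas: 3
      template:
        spec:
          clusterName: my-cluster
          version: v1.22.2
          bootstrap:
            configRef:
              apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
              kind: KubeadmConfig
              name: worker-pool-bootstrap
          infrastructureRef:
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
            kind: AzureMachinePool   # backed by a VMSS on Azure, an ASG on AWS, etc.
            name: worker-pool

The proposal being demoed adds Machine objects underneath such a MachinePool so that individual instances show up and can be acted on, as described above.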
E: So it's nice to keep track of the specific machines, rather than having to go hunt through, you know, your portal or whatever to find out, oh, what instances are actually running on its behalf. I mean, you can always see them as nodes, but if you're trying to look at the infrastructure directly, it was more difficult. You can also, you know, kill an individual machine, whereas before all you could really do is scale down and have it decide which one it wants to destroy for you.
E: So these are the nodes in the actual workload cluster, or... they're not. Oh, it must have dropped out already. Well, it's a live demo. So just take my word for it that the cordon and drain code up in CAPI for the node was successfully invoked. We actually have some parallel code we wrote in CAPZ, for Azure machine pool machines, to invoke cordon and drain. At this point that's something we can throw out, and hopefully other implementations won't have to write that themselves.
E: It also lets us... I'm not going to demo this; well, I could demo a little, let's see. So Cecil also started a patch for the cluster autoscaler that is aware of machine pools, and it didn't take very much more to get that all working. Anyway, that also works; it's basically agnostic.
E: It also scales up fine. So, real quick, that's just to show you what the idea is, and that some of it is working in a proof of concept, which is helping me flesh out the rest of the doc. That's where we're at. Any questions?
C: Mike, yeah. Yeah, Matt, your screen kind of locked up for me before I could see the autoscaler stuff, but I'm really excited to hear that it pretty much just works like MachineDeployments. My question here is, I guess: are you going to open up a patch that just adds MachinePool as a separate type in the autoscaler, once we've brought this into Cluster API, essentially? Yeah.
E: That's the... well, I think, yeah, that's the idea. It'll be a fairly small patch, and I think it's really non-destructive, actually. Well, the only real breaking change here, and it won't be a breaking change as such, since it's an optional field, is that we need to add something to the MachinePool to bring infra refs along. If those are populated, then we use that to keep track of the provider machines and all that. So that would be the only sort of type-related hitch for deploying the cluster autoscaler patch.
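To illustrate the shape of that optional field (purely hypothetical names here, since the proposal had not been finalized at the time of this meeting), the idea is roughly that the infrastructure machine pool would surface per-instance references that Cluster API could use to track the provider machines:

    # Hypothetical sketch only; field names are illustrative, not from the merged proposal.
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AzureMachinePool
    metadata:
      name: worker-pool
    status:
      # If populated, the MachinePool controller could track these provider machines
      # and surface a Machine for each one.
      infrastructureMachineRefs:
        - name: worker-pool-instance-0
        - name: worker-pool-instance-1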
C: Cool, I'm happy to hear that, yeah. I guess, from the autoscaler side, with respect to the infrastructure references, I think we might be adding logic for that for the scale-from-zero stuff anyway. So maybe, I guess, depending on which patch goes in first, you could just piggyback on that.
E: The goal here is to get them to act more similarly, and so you can... yeah. And MachineDeployment is arguably also something that CAPI controls completely, so we can have more control over its behavior. So I guess the overall argument is: there are probably efficiencies and strategies and other goodness wrapped up in these cloud-provider-specific constructs, like the auto scaling group on AWS and the VMSS on Azure, et cetera.
E: I guess the assumption is still that there's value in using those native constructs, rather than just completely going around them and doing it all with the machine deployment. Maybe they're a little more efficient, more robust; I'm kind of speculating, because I don't really know specifically what we get out of it. But...
E: The assumption is that there are efficiencies and/or, you know, better provisioning times. There are obviously some features, like being able to do over-provisioning intentionally with a VMSS, that we don't necessarily have code for in the CAPI machine deployment.
F: But yeah, that makes total sense to me. So the fact that we are introducing Machines underneath it doesn't mean that we are not going to be able to make the most out of the cloud-native capabilities that are going to be there.
E: Yeah, I mean, it should enable us to use these cloud-native scaling-group-type things better, because now we have a little more control: we can say, I want to remove that instance, rather than just an instance. That's kind of the main thing. Okay, yeah, sure. Dane or David? Dane, you go first.
D: Oh, thank you, yeah. I can speak a little bit to use cases, because at New Relic we're heavy users of machine pools. They are especially useful in the case of things like managed node groups, where the control simply cannot be delegated to a machine deployment, at least as far as I'm aware. If you're using a managed cluster like AKS, I don't believe you can just join a machine deployment to that, or at least that's what some folks have told me.
D: That is where I really see machine pools shining: where there is a need to delegate to the cloud provider, or where there's a benefit to doing it. Sometimes it's one, sometimes it's the other, but yeah.
D: I think that's the easiest... we've used it in AWS as well, in order to take advantage of the cluster autoscaler AWS provider under CAPI. So the entire cluster is actually being managed by CAPI, but we're using the cluster autoscaler to manage the auto scaling groups that were being provisioned by the AWS machine pools, and that's largely due to just the maturity level of the two providers at the moment; we needed things like scale from zero, which weren't yet available. But no, this is awesome, having Machines for this.
G: David, thank you. Thank you, Matt, that was a great demo. One of the things, so, from a cloud provider standpoint, there's a lot of information that they have at the allocator level, like where they're actually choosing to place nodes. So for us, for Azure, having a sense of what your failure domains are and describing those as declarative state, as opposed to asking for another allocation onto a failure domain each time, that's a really good use for the VMSS or ASG.
G: We can also allocate more effectively at this point, in choosing how to balance across different racks and stuff. So those all come into play; a lot of it overlaps, so it's a little nuanced, but yeah, I think there are definitely some cases that would be very difficult, if not impossible, to do with machine deployments.
A: Just one comment from myself: looking at the MachinePool spec and this implementation, it does seem like the MachinePool and MachineDeployment are converging. Have we thought about potentially bringing the API together under the MachineDeployment instead of having a different one? Given that, if I look at the MachineDeployment spec and the MachinePool spec, the MachinePool spec is a subset, plus a few additional status fields, of the MachineDeployment.
G: Go ahead, dude. I would just throw out there, if we were to put them together, which... yeah, we just discussed this quite a bit.
G: It would be really interesting to see what we would do with the things that we are unable to do in machine pools. The reason why it's a subset is because we delegate some of that ownership to the provider, and the assumption was that not everybody can do all of the things that a MachineDeployment can do with their, you know, abstraction.
G: So what do we do in the case where, yeah, we are backing the machine pool with an ASG or whatever, and it can't do the thing that we're prescribing on the MachineDeployment?
D: Yeah, I think after I raised my hand, David hit on most of what I was thinking, but kind of along the same lines: the logic is very different. The implementation is still very different, even though the spec looks similar, because there is no MachineSet; there's no concept of creating a new MachineSet on a rollout. Usually a rollout is delegated to the cloud provider, in the form of, like, instance refresh, or, I'm not sure what the equivalent is for an upgrade, maybe start update or start upgrade in VMSS, but yeah.
D: I think the controller logic would be quite a bit different. Not that it couldn't be done with some kind of branching in the reconciler, but it may not be the cleanest implementation if we do.
A: Yeah, you know, I just wanted to throw it out there, if you wanted to consider it, especially as we talk about runtime extensions: this would be a good use case to delegate functionality to something else, but still, I guess, within the context of the machine deployment controller. It wouldn't be a MachineSet, but it would be more like, hey, I need to create a new set of machines.
A: It would be a strong refactor of everything, but, you know, as we want to bring these features up more, it's something to consider. But, you know, we could definitely keep the MachinePools, for sure.
A: I don't see anything else, so, Dimitri, did you have the next topic, for image-builder?
H: Hi everyone, yeah. I just wanted to propose, so, I opened up a feature request in image-builder, and it's basically to expose this imports value in the containerd configuration, so that we can drop in some runtime overrides. We use it for metrics and for the containerd mirror features, and without this, like, you know...
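As a sketch of the use case being described (assuming the base image's containerd config gains an imports entry pointing at a drop-in directory; the file paths and the KubeadmConfigTemplate below are illustrative, not taken from image-builder), a runtime override such as a registry mirror could then be dropped in from the bootstrap config:

    # Illustrative only: assumes /etc/containerd/config.toml imports /etc/containerd/conf.d/*.toml.
    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
    kind: KubeadmConfigTemplate
    metadata:
      name: workers-with-mirror
    spec:
      template:
        spec:
          files:
            - path: /etc/containerd/conf.d/mirror.toml
              owner: root:root
              permissions: "0644"
              content: |
                # Example containerd CRI registry mirror override.
                [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
                  endpoint = ["https://mirror.example.com"]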
A: Yeah, I think it would probably be better if you bring it up at their office hours as well. Okay, will do. Deal.
I: Yeah, it looks pretty cool. I think it solves a few things that we have in our Tanzu Community Edition, where we do some hacky things with containerd, so this would solve it. So I think definitely bring it up in the image-builder office hours; I think that'll be interesting.
B: Yeah, just a note about office hours: we've had few topics recently, so we've adopted a policy where we check the day before whether there are any topics, and cancel if there are none. I think the next one is tomorrow, so if you want to add a topic there today, that'd be great, so we know that we should hold it.
C: Yeah, I had brought this up a while ago, but since we had kind of the Kubemark, I don't know, tennis match or whatever, back and forth, it's come up again, since some of the Apple folks are starting to take a look at Kubemark and testing it out and driving it around.
C: I think that registry doesn't exist anymore, that namespace in the registry, so I was just curious: is there a defined process for how we could get automated builds of the controller images pushed to a well-known registry, or is there another way to go about this? I'm just kind of looking for the community wisdom here.
I: Yeah, I think... well, we haven't completed the work in CAPV, but it's similar. So there's a process to get a Google project set up, and then you can use Cloud Build to do the automated building, and then we use the Kubernetes image promoter to push images from a staging GCP project to the main Kubernetes image repository. So there is a whole path for it, but there are quite a few PRs involved.
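For reference, the promotion step works off a manifest reviewed in the kubernetes/k8s.io repository; roughly (the image name, digest, and tag here are placeholders, and the exact file layout may differ), an entry looks like:

    # Illustrative image-promoter entry; digest and tag are placeholders.
    - name: cluster-api-controller
      dmap:
        "sha256:0000000000000000000000000000000000000000000000000000000000000000": ["v1.0.0"]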
A: Okay, thanks, folks. Stefan, you have the next topic.
J: Yep, I just want to bring up the ClusterClass patch proposal amendment again. We're now at a point where we have a few lgtms, and Vince did a /hold until the meeting. Yeah, I guess the question is: what are the next steps?
J: I think I resolved the conversations as far as I could. I'm not sure if there's anything else.
A: Last time I looked, I saw a bunch of lgtms already. I did approve and hold it. I'd say let's wait until the end of the week for a timeout, given that we talked about this two weeks ago and I have not seen new comments on it.
A: Fine. Anything else on patches before we move on? Does anybody have any questions, comments, concerns on this as well?
A: Yeah, good question. Oh, that's a pretty big question. I guess we're overdue for backlog grooming, so we'll have to go through all the 200 issues at some point. In the past we have scheduled these on Fridays, for two and a half hours, multiple times over the course of a month, and we went through the entire backlog to reprioritize things. Actually, Cecil, do you want to speak about the policy for minor releases that we have?
B: Sure. I don't think we've documented that yet, so at this point it's just a proposal, I guess, but I can open a PR for it. So what we were thinking is, going forward, to simplify releases for bug fixes and also make sure that we're releasing more often, in smaller releases that are easier to adopt, we should start having a model where we have a release branch for the current minor release, and only bug fixes and improvements like test fixes can be backported to the current release branch, so we can release frequent patches. And then we do a new minor release: so, like, right now we're at 1.0.0, so 1.0.1 would be a patch fix, just bug fixes, and then we can do 1.1.0 whenever we're ready to release new features, and that could be more often than what we've been doing in the past with API versions. I don't think we've agreed on a cadence yet, so that's something we need to discuss, but something like every month or every six weeks, maybe, have a minor release with new features, and then in the meantime we can still backport and release patches. So that's what we're thinking right now.
A: Yeah, that's definitely... yeah. I just created the 1.1 milestone. I think we could probably close 0.3 after we drop support for it; I guess I would have to check, it's six months from its latest release.
A: I guess 0.4 is the same; it's going to be six months from today. And then we should probably document these two, like how long we want to support the 0.x ranges going forward.
A: Kubernetes has some rules around that as well, and also how long we want to support the 1.x branches. For example, right now we have 1.0; when 1.1 and 1.2 are out, do we deprecate 1.0, and for how long do we support it after that? Because if we release every month, that's a lot of minors to take care of and to release at the same time. But yeah, we should just document it, I guess. Cecil, and then...
B: Oh, should we do a planning meeting soon, now that 1.0 is released, separate from the office hours, to discuss all these things and, you know, organize our milestones?
A: Yeah, maybe we can do it at the same time as the first grooming session, like planning plus grooming. And then, Dane, yeah?
D: Yeah, if we're going to have a discussion at a separate meeting, we probably don't need to discuss it too much at length here, but monthly minor versions, just my gut reaction, that sounded very frequent. You know, I could see maybe quarterly or something like that, and I feel like it may be wise...
D: Maybe it's still too early in the project, but it may be wise to go to, you know, a 12-month cycle, if not now, at least at some point, on those minor versions, similar to Kubernetes main, just because it's such a core infrastructure component; upgrading it can be challenging, impactful, things like that.
A: That's definitely a good topic to discuss. The other thing is definitely the pressure on providers: do you upgrade, you know, as frequently, and how do we automate that as much as possible?
A: That would be a good thing to discuss too, in terms of what's next in planning and the roadmap. I just want to touch on v1beta2: we're not planning on a v1beta2 for a while. That was the whole goal of doing v1beta1, to keep the APIs a little bit more stable, at least unless something is completely broken or needed.
A: It's all up for discussion, of course; maybe during the planning meeting we can discuss that. Or, I guess, yeah, we should also favor a little bit more async discussion on these policies, maybe on a PR, rather than just discussing them in an ad hoc meeting.
F: Yeah, thanks. Yeah, I wasn't expecting us to come out with a proper roadmap right now; it was more about, yeah, how are we going to discuss those things? Are these grooming meetings still happening, or do we want to have roadmap meetings to, you know, plan these things?
A: Yeah, I think this is a good forum to bring it up. So maybe let's do the first grooming meeting next Friday. It's KubeCon week, so maybe we can skip the office hours for next week and try for Friday. If folks don't think Friday is a good idea, we can do the next one. We'll probably need a bunch of sessions, at least three or four Fridays of two hours each.
A: Usually that's how long it takes, because we have a lot of open issues and we need to groom them, and there are a lot of them still from 0.4.
J: Next Friday, you mean next week on Friday? Because, just FYI, the whole of VMware has that day off, so I'm not sure how many folks from VMware would be there then.
A: Okay, wait, that day is off for VMware?
A: Great. Yeah, well, I didn't know that, but we'll figure it out. Let's actually just send out a Doodle and then we can find a time so that more folks can join.
A: Yeah, and then we go from there. The best way to put something on the roadmap is to open an issue; even if it needs a proposal, we just put the kind/proposal label on it, so that we know there is a CAEP needed out of that proposal.
C: Yeah, just about the operator work: Alex is on holiday right now, but he's been doing a lot of the work on our end for that stuff. So I know he's still very interested in pursuing it, but he won't be back until after next week, probably. So just a heads up.
A: Okay, yeah, we can just discuss with Alex.
A: All right, thanks all, and congrats on the release again. It was a great effort from everybody, so congrats, everyone.