A
Okay, we are recording. Welcome everyone to the weekly Cluster API office hours community meeting. It is Wednesday, April 6th. We abide by the CNCF Code of Conduct, so please be kind to one another, and raise hands if you'd like to speak. Without further ado, I'm going to go through the agenda here. Actually, the first thing I'm going to do is split the agenda.
A
Okay, fair enough, it's opened! Oh yes, welcome.
A
Okay, good deal. All right, so it looks like we've got a little picture-in-picture here. This is just the active list of open proposals.
E
Thank you. For some reason your little preview there is kind of stale, it's not actually up to date, but I was just going to point out: we've all made some more updates to the machine pool machines proposal. I think we've caught up with almost all the feedback. We're hoping we're at a point pretty soon where we can enter lazy consensus, but please have a look.
F
No, that pronunciation is correct. I added an entry to the open proposals for the runtime hooks for add-on management, and if folks are okay with it, we can take a look at it right now.
F
So this is the Cluster API runtime hooks for add-on management proposal. It builds on top of the runtime SDK proposal that was recently merged.
F
Killian did go through and provide a preview of this proposal in last week's office hours, but the proposal is now shared and available to everyone in the community. We'll go through it and try to answer any questions there are right now, but of course feel free to take a look at it afterwards, and please leave your comments on it.
F
The motivation behind the whole proposal is to enable add-on management systems to hook into cluster lifecycle events, to have better control over what they want to do with the add-ons. Examples are operations you would want to perform around installing add-ons while you're provisioning your cluster, operations around upgrading your add-ons when the cluster upgrade flow is triggered, and operations on your add-ons when the cluster delete flow is triggered. Right now, these things are not quite possible by writing your own external controller, because you cannot hook into the exact moments in the cluster lifecycle that you really care about. This proposal introduces the concept of runtime hooks which, as I mentioned, builds on top of the runtime SDK, and which allows you to hook into those lifecycle events for a cluster and then perform operations on top of them.
F
Even though the proposal right now is documented as using these runtime hooks for add-on management, the runtime hooks are actually modeled against common events, so they can be used for cases beyond add-on management. The proposal does not limit them to add-on management; right now it is simply framed around add-on management as an example to show how runtime hooks could be used.
F
Now, as far as the goals of this proposal go: we want to identify a set of runtime hooks that could be used, define the schemas associated with those runtime hooks (that is, the request and response schemas), and also identify the moment in the cluster lifecycle when each corresponding runtime hook will be called or executed. As far as non-goals go, this proposal is only going to list a subset of the runtime hooks that would eventually come up in Cluster API.
F
This proposal only talks about the runtime hooks and provides example use cases for add-on management, but it does not cover detailed solutions for add-on management problems, like migrating CPI from in-tree to out-of-tree and other things like that.
F
Okay, jumping into the body of the proposal itself: the runtime hooks are proposed to be supported only for clusters that are based on a cluster class, basically clusters with managed topologies. The reason behind that is that clusters with managed topologies have this overarching view of the cluster and its lifecycle, and also have some amount of control over the underlying objects. Therefore, the ability to provide runtime hooks becomes obvious when you are using clusters with managed topologies, whereas in classic clusters, the ones that do not use managed topologies, no specific reconciler or controller has an overarching view of the entire process.
F
It becomes a little tricky, so for the purposes of this proposal we're going to limit it to only clusters with managed topologies, that is, clusters based on cluster classes. The proposal does document why we did this, so please take a look if you have any further questions on that. Going further down, this proposal introduces six runtime hooks, and we have example use cases for each of them. For example, let's take a metrics database as an example of an add-on that you want to operate on using the runtime hooks. Let's say you want to check if there is enough disk space for you to be able to persist the metrics; then you would want to perform certain actions before your cluster is created and, let's say, after your control plane is initialized.
F
You want to check if the metrics database is also available, and let's say after you trigger your cluster upgrade, you also want to be able to bump the version of the metrics server, and so on. So there are a few use cases listed here, and each of them talks about certain actions that you can do with an add-on in association with a runtime hook.
F
So let's take a closer look at the six runtime hooks that we are introducing in this proposal. This diagram shows a timeline for a cluster. It was presented last week as well, but we'll go through it again.
F
So let's say a user creates a cluster. Since this is a managed cluster, before any of the underlying objects, like the infrastructure cluster, the control plane, or the machine deployments, are created, we trigger something called a before cluster create hook. Once this resolves to success, the cluster provisioning will start. Let's say at some point after that, the first API server of the control plane is available; then we trigger the after control plane initialized hook, and so on at some point in the future.
F
We'll take a closer look at what the request and response of these hooks are going to look like in a second. After a while, let's say the control plane finishes upgrading; then we trigger the after control plane upgrade hook. After that succeeds, let's say all the workers finish upgrading; then we trigger the after cluster upgrade hook. And at some point, let's say the user deletes the cluster. Again, before any of the underlying objects of the cluster are deleted, the before cluster delete hook is called, and the extension author can take actions depending on the hook and the request passed along with it.
F
So, taking a closer look at the runtime hooks themselves: for each runtime hook, this proposal specifies what the request and the response for the hook are going to look like. For example, the before cluster create hook has a before cluster create request, and in that we send the cluster object. This is the minimal information that we are sending right now.
F
We send just the cluster object, because all the extension authors would probably need this as the minimum information to make decisions on top of it.
F
So the object that we are sending right now is just the cluster object, and the hook could send back a response similar to this, which is a before cluster create response. You can see that it could potentially send back something called a retry after seconds. This signifies to the runtime hook that the extension wants to block the operation and then retry after a while, and this response basically says that the hook will be called again sometime within the next 10 seconds.
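The request and response just described might look roughly like the following sketch. The kind and field names here are illustrative, based on this discussion rather than the final spec:

```yaml
# Hypothetical shape of a BeforeClusterCreate exchange.
# Request: the minimal payload is just the Cluster object.
kind: BeforeClusterCreateRequest
cluster:
  apiVersion: cluster.x-k8s.io/v1beta1
  kind: Cluster
  metadata:
    name: my-cluster
    namespace: default
---
# Response: a non-zero retryAfterSeconds blocks cluster creation and asks
# for the hook to be called again within the next 10 seconds.
kind: BeforeClusterCreateResponse
retryAfterSeconds: 10
```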
F
The runtime SDK proposal introduced this concept of blocking and non-blocking hooks, so the before cluster create hook is an example of a blocking hook, where the extension author can send back a response which then blocks the cluster create operation for a certain time. The hook is then re-executed, and the extension author can re-evaluate and decide whether it can succeed at that point or whether it needs more time. Let's take a look at the after control plane initialized hook.
F
That hook could use the event that the control plane was initialized. Now let's take a look at the before cluster upgrade hook. The next three hooks talk specifically about the upgrade flow associated with the cluster's lifecycle. In this case we again have a before cluster upgrade request in which we send the cluster object, but we also send the from Kubernetes version and the to Kubernetes version associated with it.
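As a sketch, again with illustrative names, the upgrade variant adds the two versions to the request:

```yaml
# Hypothetical BeforeClusterUpgrade exchange: the Cluster object plus the
# versions involved in this upgrade operation.
kind: BeforeClusterUpgradeRequest
cluster:
  metadata:
    name: my-cluster
fromKubernetesVersion: v1.23.5
toKubernetesVersion: v1.24.0
---
# A blocking response pauses only the upgrade flow; the rest of the
# reconciliation continues normally.
kind: BeforeClusterUpgradeResponse
retryAfterSeconds: 30
```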
F
These values are the versions associated with the corresponding upgrade operation for that cluster, and, as you can see, the response does support retry after seconds, indicating that it is a blocking response. The thing to note in this case, because this is an upgrade hook: if a hook does return a retry after seconds, it would only block the upgrade operation, but the rest of the reconciliation will go through normally.
F
So this only blocks the upgrade flow. Similarly, we have an after control plane upgrade hook, which also gets the cluster object, and since this is an after hook, we only have one Kubernetes version, which is the upgraded version of the control plane at this point. This response can also send back a retry after seconds, and in that case it would block the upgrade of the workers; at that point, the control plane will have finished its upgrade.
F
This hook is called after the control plane is finished upgrading, but it will then block the upgrade of the worker nodes. Then we have an after cluster upgrade hook, which does not have a retry after seconds, so it is non-blocking; it also takes a cluster object and the Kubernetes version in its request. The last hook that we have is the before cluster delete hook. This, as the name suggests, is triggered when the cluster is being deleted.
F
There is a draft OpenAPI spec linked in the doc that you can take a look at. It covers all six hooks that we described in the proposal and also gives you the schemas associated with the request and response objects. So please do take a look at the draft OpenAPI spec as well.
F
Moving on, we have some notes about how extensions that consume these runtime hooks should behave, basically providing a developer guide for extension authors to think about while writing extensions that consume these runtime hooks. Most of the guidelines that were defined in the runtime SDK do apply here, but we are specifically calling out certain things.
F
These are certain points from the runtime SDK: things to note when writing extensions that could be blocking or non-blocking; specifics about error management and how these runtime hooks, when combined with the failure policy, can affect how the reconciliation of the cluster is going to happen; and general guidance about avoiding dependencies between extensions themselves, since you would not want one extension to depend on another extension being executed, and so on. There are more details about these developer guides, so please take a look at that. Going further down, we list the security model and the risks and mitigations.
F
The security model again shares most of its principles with the runtime SDK proposal; the same goes for risks and mitigations. That is basically the end of the proposal. I'll pause here to answer any questions, if there are any.
A
So I don't see any hands, so I'll ask a question. Thanks for the detailed presentation. Jonathan Tong and I are actually going to be working with Fabrizio on his add-ons orchestration proposal, which it sounds like would benefit from these runtime hooks. So this seems really promising.
D
So the use cases that we have today are about the metrics database, so it is already kind of a real use case, if that's what you have in mind.
D
No, no, no. Basically what we are saying is that if you ever want to manage your metrics system, these hooks can help you, but the proposal is not entering into the detail of how you manage your metrics system, if that makes sense.
A
Thank you. So I don't see any hands raised, so in the interest of getting all the agenda items included in this discussion, I think I will move to the agenda part and share my screen again.
A
You can do it in Zoom. It looks like people may be adding items... people! I'm just joking. All right, first on the list is Matt with machine pool machines. Matt, do you want to take a few minutes?
E
I think we're good on that, yeah. I think this is just Fabrizio writing down what I mentioned.
A
That we updated machine pool machines. Okay, great, we're good. So, to folks who are interested in that effort: I see a PTAL note there, so PTAL!
C
That's perfect. Okay, so just some context first: it's about cluster class or cluster variables. The situation today is the following: in a cluster class, you can define a variable which is an object, and you can opt to just not define any schema at all. So you just say type object, and then you can fill in random objects, whatever you want, just like the example here, a little bit at the bottom of the screen: just some random object, something like that.
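A minimal sketch of the behavior being described; the variable name and payload here are made up:

```yaml
# ClusterClass variable declared as a bare object, with no schema for its
# contents.
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
spec:
  variables:
  - name: proxy
    schema:
      openAPIV3Schema:
        type: object   # no properties declared
---
# A Cluster using this class can currently set any payload for the variable,
# and the unvalidated fields are kept.
kind: Cluster
spec:
  topology:
    variables:
    - name: proxy
      value:
        anything: goes
```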
C
That just works, but it's not ideal, because if a patch later on, or external patching, is relying on a certain format, you won't be able to work with that kind of data. The problem is that we essentially missed pruning or validating those fields. In Kubernetes there's something called pruning: if you do that with a CRD, those fields per default are just getting implicitly pruned. Now we're in a situation where we have type object and a default behavior.
C
The default behavior is that we're just keeping those fields, storing them in etcd; they might not adhere to any schema at all, they're just there. The issue is about what we want to do with the current state and where we want to get to. I'll just go over it, and then, if someone has spontaneous opinions, that's fine; otherwise we can just discuss on the issue. Can you scroll up a little bit?
C
What I would suggest is that we switch to an explicit opt-in. So essentially, if you want to have a free-form object, then you would have to set something like preserve unknown fields, and if you don't do that, then we assume that any additional field is a mistake and we validate it. So if you set preserve unknown fields, then we accept additional fields; if you don't set it, then you get a validation error.
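The suggested opt-in might look something like this sketch. The exact field name is TBD, as noted; in CRDs the equivalent marker is `x-kubernetes-preserve-unknown-fields`:

```yaml
spec:
  variables:
  - name: proxy
    schema:
      openAPIV3Schema:
        type: object
        # Explicit opt-in to a free-form object; without it, additional
        # fields would produce a validation error.
        x-kubernetes-preserve-unknown-fields: true
```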
C
So that's, in my opinion, the target behavior that I would suggest, and the follow-up discussion is how we get there, of course, because if we just change it right now, it's kind of a breaking change: folks who already rely on that would suddenly get a different behavior with 1.2, or whenever we implement it. Can you scroll down to the end?
C
So I think overall the migration options are: the first version, that's the breaking one. We just introduce that preserve unknown fields (the name is TBD, of course, but it would be the same as in CRDs), and at the same time we add validation, so that folks are getting validation errors. That would be a breaking change, but not an implicit one; you would notice the difference. But it's definitely a breaking change, and that API type is already in v1beta1.
C
The alternative is we introduce preserve unknown fields now, with the next release, so that folks can already start defining their variables correctly, and then we introduce the validation later on. Later on could be 1.3, or it could be v1beta2, or whatever our next API version is. Yep, that's all I wanted to tell.
A
Next we have Furcat. Are you here? I'm happy to share my screen.
I
Yeah, yeah, sure. Hi everyone, just some background on this issue.
I
We have a scenario where we want to scale out workers and perform an upgrade. With the current behavior of the machine deployment, we end up triggering the rollout upgrade, which results in reprovisioning of the current worker nodes, and that's a bit of an expensive operation in some environments, especially on bare metal.
I
So what we were thinking would be nice to have is a possibility so that whenever we scale out and perform an upgrade, the machine deployment would not touch the existing worker nodes at all, while all the newly created worker nodes would be using the desired Kubernetes version. I got some responses from Vince and, I don't recall, I think Alberto (sorry if I mistakenly pronounce your name), that we could maybe have a separate controller to handle this use case. But what we are thinking is maybe we could have a basic in-place upgrade controller, so that we don't touch the existing machines anymore, at least regarding the Kubernetes version, but ensure that the new machines are according to the new specs.
I
So I just wanted to bring this up for your attention, and wanted to know other folks' opinions or suggestions on this, and whether we could agree on a possible way to cover this use case.
J
Yeah, I think that, use-case-wise, there are definitely some use cases here, especially at the edge. Although I don't think that, Cluster API-wise, we should have this in the core, but rather probably enable just enough capabilities so that folks can plug in their own management in terms of upgrades, because if we start to put too many things into the core, things might get hard to manage.
H
Yeah, thanks. I pretty much agree with that. I think this is definitely a valid use case; for these environments, most of the time you don't want to reprovision machines, right, you want to have a way to apply in-place upgrades. I agree as well that this is something that we probably don't want to support end-to-end in core CAPI, but hopefully we can discuss and explore options to enable the use case.
H
Also, you can achieve part of it today by using only machine sets, without machine deployments on top of them. But there is no clear contract on how you would be able to plug in an external controller that will handle the in-place upgrades in a rollout fashion.
I
Sure, thanks for the inputs. Yeah, let's discuss and follow up on the issue, and hopefully we can get to the point where we can start implementing something. Thank you. Okay, great, thanks again.
A
All right, moving along. Thank you for taking notes again, Fabrizio. Vince wants to ask about the KubeCon contributor summit. Vince?
G
Yeah, I was wondering if folks here are going to KubeCon. We don't have to say it here, we could talk about it on Slack, but it would be great to know, or just to meet. I think there are a bunch of informal sessions at the contributor summit, so we could do another one. I'm personally going; I got approved yesterday.
G
Great. I think Fabrizio is going, from VMware, but yeah, feel free to reach out if you want to meet somewhere.
A
Cool, great call-out. All right, so I see Jacob, or Jakob, is willing to risk it and go at the very end, so we'll take that at face value and move to the provider updates; then we'll circle back if we have time. The first on this list is, I'm sorry if the acronym-friendly readout of this is butchered, CAPOCI. This looks like an announcement that the Slack channel is live and there's a monthly office hours.
K
Nope, you nailed everything. The only other thing: I think we're going to have a release later this week or early next week, so a release bump. I'm sorry.
A
Great, okay. So here's an announcement for CAPZ: Cecile has been managing a PR to get Matt Boersma maintainer status (well-deserved, Matt), and we are actively experimenting with using Helm to install the out-of-tree cloud provider. That's a lot of fun; I've been doing that work. So, do you want to add anything?
L
Oh yeah, just that most of our Azure services are now refactored to be asynchronous, so that's been a journey over a few months, with crowdsourcing and special thanks to Jonathan Tong and Cheyenne, who have been helping with that effort.
A
Hey, and welcome, Matt, to your deserved place as a maintainer. In the short term, if any folks in other provider communities are interested in the Helm approach to install the out-of-tree cloud provider, or CNI, or things of that nature, please hit me up on Slack; I've been in that world for a while now. All right, CAPG: Winnie is announcing that CAPG is starting an office hour on the first Thursday of each month, and the first meeting is tomorrow at 10. I think PDT stands for Pacific Daylight Time.
M
Yeah, no, you got everything right. So CAPG has been a pretty quiet provider, but we are actually starting office hours tomorrow, so come and join us.
A
All right, so that is provider updates. Last chance for any provider to add an announcement or unmute and say hi. Otherwise, we'll go back up and talk about the IPAM proposal.
A
It is actually slightly terrible; it's really low. Okay, then! No, that's a little bit better, I think. Okay.
N
Okay, good, great. Yeah, the slider was a little bit down. So, since we have time left: there was a lot of discussion on the IPAM proposal in the last few weeks.
N
I feel like the scope is getting larger now, or folks want to increase the scope of the proposal. The initial idea of the proposal was just to enable integration with IPAM systems, because with the current state of Cluster API that's not really possible, since machines are created automatically and you would need to request IP addresses per machine.
N
And now people are talking about IP address pools and integrating them with cluster class and so on, which actually wasn't the initial idea of the proposal, which is only about enabling that single integration step: requesting IP addresses for machines, because that's something you can't do right now. Everything else, assigning pools to a cluster for example, can be done with external automation, because you can just pass it in as a variable when creating the cluster. So my idea would be, with what the proposal currently covers: if you want to allocate subnets for your cluster, for example, you do that with some external automation for now and create a matching pool.
N
Then you reference the pool from the remaining cluster definition files and allocate IP addresses that way. If you want to include more things, for example allocating pools from a larger pool of addresses, maybe we can do a separate iteration on the IPAM proposal and add things later, similar to how cluster class handles things. Because right now I think it's getting bigger and bigger, but covering the basics might be enough.
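The flow described above could be sketched roughly like this; all kinds and fields here are hypothetical, since the proposal is still under discussion:

```yaml
# A pool created by external automation, matching an externally allocated
# subnet for the cluster.
kind: IPAddressPool
metadata:
  name: cluster-a-pool
spec:
  subnet: 10.0.0.0/24
---
# Referenced from the cluster definition; the IPAM integration then requests
# one address per machine from this pool.
kind: IPAddressClaim
spec:
  poolRef:
    name: cluster-a-pool
```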
D
Yeah, I think that this is a fair ask, to keep the proposal scoped. I saw a thread in Slack about getting a deep dive meeting on this proposal. I suggest that we take this opportunity to go through what people are asking, decide at least what we have to do, keep track of these asks as future goals if we consider them reasonable, and make sure that we are not doing anything to prevent them from being implemented.
N
The idea was just that maybe, if we can agree on something quickly, we do it here, so we don't have to do a separate meeting. But yeah, we can also do a deep dive and think more about future use cases, to avoid limiting things; then we'll just organize a separate meeting.
A
Yeah, I'm now becoming acquainted with the complexity here. Okay, so let's keep talking right here.
G
It looks like Vince may have dropped... no, I'm still here, I just have another minute and then I have to hop. But yeah, I made this comment a while ago when reading the proposal: it seems like these types don't even hold any information or have any useful meaning within Cluster API. Usually when we did that in the past, it was going to be something referenced. But my point here was more: should we flip the model, so that these pool types can work together and integrate with other IPAMs to get allocations, and also have an in-tree implementation that could be used without any other implementation as well?
G
So that was kind of the provider model I had in mind; it was more kind of opt-in and optional. But yeah, we can talk about it at a one-off meeting as well.
A
Well, certainly much credit goes to whomever is managing this ongoing effort. It looks like a lot of work has been put in.
A
Okay, thanks, Jacob. I think we have no more agenda items. Any last remarks from folks? We can get 13 minutes back.
A
All
right
great,
when
the
recording
download
drops,
I
might
bug
some
folks
to
clarify
how
to
upgrade
to
the
right
or
upload
to
the
right,
youtube,
destinations,
etc,
etc.