Description
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
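For context, "declarative" here means clusters are described as Kubernetes objects. Below is a minimal sketch of the kind of paired manifests discussed throughout this meeting; names and values are illustrative, not taken from the meeting itself:

```yaml
# Sketch: a generic CAPI Cluster delegating infrastructure details to a
# provider-specific (CAPZ) CRD.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster            # illustrative name
  namespace: default
spec:
  infrastructureRef:          # points at the provider-specific object
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AzureCluster
    name: my-cluster
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureCluster
metadata:
  name: my-cluster
  namespace: default
spec:
  location: eastus            # Azure-specific configuration lives here
  resourceGroup: my-cluster-rg
```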
A
We are a project of SIG Cluster Lifecycle, and as such we comply with their guidelines, mostly in terms of conduct, which boils down to: everybody, be nice to each other. Let's try to use the raise-hand feature in Zoom so we don't accidentally talk over each other, and make sure everybody who's interested in a given topic has had a chance to weigh in.
B
Okay, I can go real quick. My name is Chris Petersman. I'm a Microsoft person; I work with a couple of customers that are using the product, so I know some of the folks on the call, but I'm mostly here just to listen in, so appreciate the time.
C
Hey folks, this is Celerio. I'm also from Microsoft; I was here last time as well. I'm also helping a customer that is using the product, and I'm trying to get them a couple of PRs and close a few issues that are very important to my customer. I plan to stick around for a few weeks, maybe a few months, at least as long as I'm active with the customer and the product.
A
All right, well, let's get moving then. Let me... oh, I haven't been sharing my screen. Let me do that.
A
Oh, that's very helpful.
A
I think we can skip milestone review for right now, because it kind of folds into the CAPZ release status. I just put that on here, but honestly I haven't been that plugged in, and Jack's been kind of herding it more than I have, so do you want to...?
F
Yeah, let's start with... let me add this as an agenda item, just so I have a record. So I'm going to walk us through some Testgrid status. So Matt, let's see, where do we want you to go? How about...
F
So before we release, we want to make sure that all of our covered release branches and main are green. I've just given a cursory overview, and things look good, but I can walk through it, because there is some red. But I'd like to take a quick step back and talk about some changes to Testgrid. We recently sort of reclassified things.
F
There was actually... I'm not going to go into what used to be; I'll just describe what we have now, because I expect we'll have this for the long term.
F
If we look at CAPZ, it should be capz-periodic-e2e-main, so you've got to go down a little bit, because it's alphabetized. There, exactly right. So this is going to be our new sort of suite of test scenarios that run various... what we'll call self-managed configurations, and there is a red failure for the ClusterClass test there.
F
Both of those failures have to do with known issues with cluster cleanup. So we have something we're tracking, where it seems that when we run a bunch of these scenarios in a single test run, which we are, we have some issues cleaning up tests. We were formerly seeing this signal with AKS, and now we're seeing it with ClusterClass. Not exactly sure what's going on, so that needs to be triaged, but the actual scenarios, I looked through them, are looking good.
A
I wish I could; the Zoom widgets are preventing me from getting to the button, and shortcuts are not working. All right, I'll just do it again: there's a link.
F
The AKS main test, so capz-periodic-e2e-aks-main. So this is also testing main, but it's a discrete sort of classification, so that we can test AKS scenarios. Right now we only have one test scenario for AKS. You can see there's a bunch of tests that were running one hour apart there in the past, as Matt's scrolling.
F
The last two runs are runs that follow a configuration change to the test definition, so they now run twice daily, I think that's right, and those are green. So the AKS scenario looks good at main. That's great, so 1.6 looks ready to go. And then we also have, if you click on the tab, Matt, for v1beta1 and v1beta1-1... so we've got comparable test coverage for the N minus 1 and N minus 2.
F
We will cut a new release of 1.4, one final patch release of 1.4, and then ramp up to 1.6, after which point those v1beta1 and minus-one tests will test the release branches for 1.6 and 1.5 going forward. I know that's a mouthful. Does anyone have any questions about any of this? I'll just pause.
F
Cool, I think I can see your raise-hand features, and I didn't see anybody, so yeah. The long story short is that test signal is looking good. Hit us up on Slack if Testgrid is confusing to anyone, or if it's not clear why we would call this good. I know that there's an obvious red color there; it has to do with a known issue with cleanup, so happy to talk more about that on Slack without going into too much detail here.
F
So what that means is that 1.6 has, at least in this engineer's view, sufficient positive test signal to move forward, so we'll plan to cut a 1.6 release later on this morning. Matt, should we go to the milestone page and just make sure that the milestone is, at this point, empty of open items?
F
Both of them, awesome, beautiful. All right, so next week we will go through the exercise of creating a 1.7 milestone. Let's wait; let's not jinx ourselves by doing that before we cut 1.6.
F
So if anyone wants to raise their hand and confirm that what they're expecting to be in 1.6 is there, feel free to do that, or scan over this. Hopefully you've been watching the GitHub magic that Matt's been demonstrating; you can obviously do this for yourself. But now is the time to raise concern if you think that one of the changes or fixes you expect in 1.6 isn't going to land.
C
Go ahead... I'm sorry, I was raising the hand because I forgot to click it. No, I just wanted to ask a naive question. Now that there is this release of 1.6 coming up: does this project have, like, a regular release cadence? Because I'm new to the project; should I expect a release after a certain number of weeks, or do you folks just release when there is a significant amount of work merged that you want to deliver in the next release?
F
Great question. Are you looking for the... yeah.
F
But the summary is that we cut a new minor release every month by convention, and we've just been demonstrating some of the criteria that we use to determine a kind of go/no-go. I mean, the most important criterion, frankly, is the test signal.
F
So if there were some extraordinary reason not to release, then we would prioritize that over keeping to a strict monthly release. But so far we've been doing that for a few releases now, and it's been working out fine. And then we cut patch releases as needed, sort of according to a weekly consensus.
F
So every week we just sort of poll the room to see if there are changes or fixes that have been cherry-picked back to the currently supported release branches that are needed, and then, if we get some desire for that, we'll cut a patch release.
C
Okay, then I'll be a little bit more specific with my question. I don't have any concern for this release, 1.6, but if I would like something to be included in 1.7, then I should expect the PRs to be merged in about a month's time. And would this be the right meeting to get the PRs labeled with the milestone?
F
Yeah, right. Since we're running on a monthly cycle... it's not strict, but I would say we probably follow what's kind of common: say 75% of the release cycle is development, and the last 25% is integration and testing and validation.
C
Oh okay, okay, yeah. I already have PRs in flight, so I'll just comment on those that I would like the milestone tag, and then someone with the right access will be able to pick that up and add the label, right?
F
Yeah, I would vote to punt on that for next week, just so we can focus on 1.6. But if you're able to come next week, you should have a real-time opportunity to voice your advocacy for prioritization and make sure it's in the milestone and all that kind of thing.
A
So, as far as release status: we have clean enough signal from Testgrid, and I don't think we're waiting for anything else to merge, so we're just going to go ahead today. That's the plan.
A
We're
gonna
try
and
crowdsource
it
or
invite
people
to
play
along
if,
if
time
permits.
F
If folks would like, I'd be happy to drop in a Zoom link and go through it myself... go through it in, you know, a communal way. Does anybody want to watch somebody release CAPZ from his basement in Portland, Oregon?
A
All right, if there's no further questions about the release plans, let's move on. Jack, you have the next thing, about Azure managed clusters.
F
Yeah, we talked about this last week, so if you can click through that... So just a reminder that this is a thing we're actively working on. I've been spending more of my focus in the last week in CAPI, trying to clarify how we might drive to a long-term consensus, so I wanted to bring that to this group. In terms of the... so, we've had some prior conversations around how we might break the current v1beta1 API, and how that might affect users.
F
We've talked about how the key reason to do that would be to achieve consistency across providers. So, in recent weeks... really, this is a sort of KubeCon outcome, which is great; this is the best part about going to KubeCon, talking with a bunch of folks. And it seems like other providers are not going to necessarily be able to all agree in the near term on a particular...
F
So, taking a little bit of a step back: a proposal landed in Cluster API a couple of months ago that defined a couple of acceptable approaches to doing managed Kubernetes in CAPI from a provider perspective, based on the fact that CAPI itself does not define a managed cluster specification. So really, at this point in time, it's up to providers to figure out how to use the existing set of Cluster API primitives, and the behavior expectations of those primitives, and then compose those into a managed cluster solution.
F
So there have been a couple of different approaches, and that Cluster API proposal was meant to sort of codify what those approaches look like and, after some consensus gaining, define some opinions, like: this approach is acceptable; this approach has downsides, don't do this; this other approach is also acceptable for different reasons. So there was some understanding, say a month ago, previous to KubeCon, that CAPA and CAPZ in particular would land on a common near-term proposal, which would include breaking changes.
F
It appears that that's less likely, which means near-term consistency is not likely for managed cluster across the main, you know, existing providers: CAPA, CAPZ, CAPO, CAPG. So really, for our community, what this means is that the more likely sort of real-world scenario that's going to bring consistency is a long-term scenario. I can't predict exactly what that means, but say anywhere between 6 and 18 months.
F
So the idea would be that we will work with the CAPI community to actually, you know, once and for all define what managed cluster looks like in Cluster API, and then that would require all the providers in the ecosystem, at some point, let's say v1, to update their managed cluster implementations to fulfill this new Cluster API specification, which is going to be breaking across all providers. And again, the justification for that API breakage is the eventual goal of consistency across providers.
F
So, short summary: getting providers to agree on a sort of consistent, Frankenstein approach is hard, and we are leaning towards changing tack a little bit and baking this into Cluster API itself, which will sort of enforce provider consistency. Just like for self-managed, where the Cluster API spec is sort of paramount, that is driving the entire story, and so the provider implementations follow from that. And so, from a self-managed cluster perspective...
F
...CAPZ and CAPA and CAPG are going to have a lot of consistency, because you're going to see that the usage of, say, the AWSManagedControlPlane and AWSCluster, and the AzureManagedControlPlane and AzureCluster, is going to sort of architecturally make sense across providers, because the source of architectural authority is in CAPI, which is the sort of more foundational abstraction layer. I'll pause right now if anybody has any questions about that.
F
Okay, cool. So really, this is the time when I would love to poll the community. Not... this isn't the exclusive time; I'll be continuing to do this on Slack and other media. But since we're all here, now would be a time to voice concerns about the sort of high-level outcome of breaking the managed cluster API in, say, late 2023 versus late 2022.
F
The reason that we want to do that is really practical. It's, like I said: the likelihood of consistency in the near term is low, and the benefit of the type of consistency we get by bringing this into CAPI is much, much higher. So: higher value, but also higher cost. Any concerns or thoughts about any of this?
F
I will paste a link to an issue, and I encourage folks to read it. This is not baked by any means, so it's just a conversation at this point. But I think I can confidently predict that the key bits of friction are the managed cluster... sorry, sorry, this is recorded, so now I've said that in the recording... the Cluster and the ControlPlane CRDs, as defined by Cluster API. So Cluster API defines a control plane CRD that has certain behavioral expectations with respect to, like, the API server endpoint.
F
It also defines a Cluster CRD, which has certain behavioral expectations for carrying cluster configuration that isn't specifically related to the control plane, like network CIDRs, things like that. So that makes sense as a way of decomposing the set of, sort of, Kubernetes cluster configuration surface area for clusters where everything is under the enforcement of Cluster API. In the managed cluster scenario... did that make sense, Dane, what I just said about those two CRDs in the managed cluster scenario?
F
One of the outcomes of having that sort of architectural non-affinity is that each provider, Azure, AWS, GKE, Oracle, has sort of themselves decomposed the single monolithic managed Kubernetes API representation. So EKS defines a cluster, AKS defines a cluster, and then, in order to make that fit with the Cluster API set of architectural primitives...
F
...you kind of say: all right, we're going to put this set of cluster configurations in this CRD, and then these other ones are going to go in this other CRD. Different providers have done that in different ways, which sort of speaks to the consistency, or lack of consistency, that we see across providers.
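To make that decomposition concrete, here is a hedged sketch of roughly how the CAPZ managed-cluster composition looks today; names and values are illustrative, and exact fields may differ by version:

```yaml
# Sketch: the current CAPZ managed (AKS) decomposition. The generic
# Cluster references both a control plane CRD and an infrastructure CRD,
# and AKS configuration is split across the Azure-specific objects.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-aks-cluster
spec:
  controlPlaneRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AzureManagedControlPlane
    name: my-aks-cluster
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AzureManagedCluster
    name: my-aks-cluster
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: my-aks-cluster
spec:
  version: v1.25.6          # Kubernetes version requested from AKS
  location: eastus
```

Other providers split the equivalent EKS/GKE settings across their CRDs differently, which is the inconsistency being described.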
F
If you want to use multi-cloud managed Kubernetes in Cluster API right now, you know, you don't get the consistency sugar that you get with self-managed, which has been one of the value propositions of Cluster API from the very beginning: let's define what a cluster looks like once, so that you can run multi-cloud and, instead of context switching between various cloud-specific stuff for every single environment that you're supporting, you can adhere to a common pattern.
F
So, the key thing, to answer your original question: that particular decomposition of Cluster and ControlPlane, we expect that to change. We expect that when we bring this into CAPI... if you read that issue, there's definitely debate, so it's not clear exactly how it's going to happen. But that's the key point: advocacy towards a single common CRD source of truth in Cluster API that defines managed cluster, which doesn't exist right now.
F
So, concretely, in any case where we're going to do breaking changes, what it really means is that your cluster templates for v1beta1, say, aren't going to work with v1. And that's very hypothetical; I'm not predicting that this would land in a v1 specifically. But does that kind of give you some hints?
H
It does, yeah. I'm just coming from a position where we have quite a few, so I'm just trying to kind of think through what our migration story would look like from one to the next.
F
Yeah, I think it would be a part of this effort... yeah, I would advocate that a sort of formal, official migration solution would be a requirement for any provider implementing this who has an existing managed cluster. So, you know, if you want to implement this new CAPI thing that we've defined, and you've got an existing user community...
F
...then, you know, this is how you provide migration, like automated migration, for your users. But that will be a part of that effort, because it's going to be... you know... as we've said in previous meetings...
F
...to a certain extent, the clusters are going to be fine, because the underlying calls to the provider APIs should be consistent across this; it's just that the front door is going to be totally different. So we want to provide a way of doing some data transformations of the existing CRDs, so that they then fit the new v1beta1, and maybe provide some tools for helping folks with the various tooling that they have to produce those templates, because those are going to look different now.
F
So the one thing I wanted to... go ahead, please.
C
Sorry, did I understand... can you hear me? Yeah. I just wanted to ask...
C
Did I understand correctly that the long-term goal is to have a CRD for managed clusters, and the goal is to use that CRD to describe AKS, GKE, EKS, like any possible managed cluster? Because I see a big challenge with that: in this CRD, you should be able to represent in the API also things that are very specific to an implementation, and not necessarily, like, Kubernetes things. To make an example...
C
I know that in GKE and AKS you can specify, when you create a cluster, the maintenance window for when the cloud platform can perform an upgrade. So that's, like, super specific, and it's very different in each implementation. So I just wanted to add my idea here: that maybe it makes more sense to create a CRD for each cloud provider that offers a managed Kubernetes implementation, rather than trying to merge everything in a single CRD that could easily, like, explode with a lot of options.
F
Yep, sure, let me clarify a little bit of that. So, all of that is totally sensible; in fact, this is just how Cluster API works. The idea is that the common thing that I'm referring to, that would be defined in Cluster API, is more abstract, and the providers bring all that specificity that is provider-specific. This is the case with self-managed Cluster API right now as well: CAPZ has a bunch of Azure-specific stuff, CAPG has a bunch of Google-specific stuff, ditto for CAPA and AWS.
F
You know, there's sort of a basic architectural separation of concerns going on here: Cluster API defines things at an abstract level, and then CAPZ and CAPG and CAPA define things at a more concrete level. But the point is that that journey from abstract to concrete doesn't exist...
F
...it was not designed with managed cluster in mind, and so the way that CAPA and CAPZ and CAPG have implemented managed cluster on top of the CAPI sort of abstract primitives just... it's hard to make consistent across providers, and doesn't really make a lot of semantic sense. But your concerns are totally valid; that will not be the outcome.
F
Okay, cool. I should have led with this information: all that backstory I described, about KubeCon, and near-term turning into long-term, and going into CAPI instead of following the sort of Frankenstein-proposal recommendations... this is actually not going to block, or at least I'm advocating that this doesn't block, the graduation of CAPZ's AzureManagedCluster from experimental.
F
It
simply
means
that
if
we
were
to
go
this
route
and
if
the
capsi
community
agreed
the
breakage,
we
would
not
so
there's
been
an
assumption
that
we
we
prefer
to
introduce
breakage
while
Azure
managed
cluster
is
in,
is
classified
as
an
experimental
API
I'm
advocating
for
at
least
you
know
with
the
Cappy
group,
I
I
think
the
direction
now
is
that
the
breakage
is
going
to
come
from,
say
V1
beta
1
to
V1.
So
we
would
graduate
we
would.
F
You
know,
feel
free
to
graduate
cap
C
from
experimental
to
a
stable
API
in
V1
beta
1,
independent
from
the
consistency
outcome.
So
that's
a
really
I'll
try
to
clarify
that
in
this
document,
I'm
I'm
waiting
for
a
little
bit
more
progress
from
the
cluster
API
folks
to
you
know,
assess
the
viability
of
that
long-term
proposal,
but
this
won't.
This
doesn't
mean
that
we
won't
graduate
Azure
managed
cluster
from
experimental
until
two
years
from
now
or
something
so
hopefully
that
didn't
scare
folks
does
not
clear
go
ahead.
Mike.
G
I would actually say, if, you know, we're really talking about it being 18 months or so before we have a consensus in CAPI and an implementation in CAPI, we may want to go a v1 sort of approach, just kind of following the same thing as in upstream Kubernetes: that things only stay in beta for so long. Once they're considered, you know... if it's not experimental, it's only in beta for so long before it's graduated to that kind of v1 state.
G
We may want to consider that kind of approach, and look at a v2 sort of thing for the massive refactor, just because it is such a significant change. From a v1beta1 to a v1, if we were to wait for that... it really is almost a v2beta1 for that kind of shift.
F
Do
you
Mike,
do
you
have
any
concerns
about
that
breakage
18
months
from
now
or
a
few
years
from
now,
compared
to
a
near-term,
breakage
Do,
you
have
a.
G
Just because that's... or at least, that's been the experience that I've seen. I mean, look at what happened with Ingress. Yeah, I would hesitate to end up having the same thing happen to us, with that assumption, with the Azure managed clusters: that people just assume that the v1beta1 is v1, because it's been in that state for so long, such that we almost have to go from a v1beta1 to a v2, just because of that assumption that people made within the community.
F
Cool, I think this is a really good conversation, so I'm happy that you have opinions about this.
G
Yeah, I mean, if we were talking six months, I'd probably have a different opinion. But if we really are talking, you know, 12 to 18 months, and we're hoping to get out of experimental within the next six months, you know, I think it's a little bit of a different story, especially since we're probably going to be in... we would be in...
C
Yes, so I agree with having the big breaking change go to v2, and I just wanted to bring in, like, the voice of the customers here: there are folks that are building products on these APIs.
C
So
if
you
in
the
next
six
months,
get
out
of
experimental
and
then
we
quickly
go
through
the
better
phase
and
we
go
to
big
one,
then
the
B1
also
has
a
different
deprecation
cycle,
and
so
customers
can
commit
better
to
build
products
on
top
of
KFC,
and
then
we
can
do
the
breaking
change
with
V2.
F
I think so... but I think that, again, this is something we really want to bring into CAPI, because we're going to have to follow it; we're not going to cut a v2 on a v1 CAPI API, for example. So please bring those concerns into Cluster API and, of course, I'll pass those along as well.
F
That speaks to the fact that machine pools are experimental, and how much affinity we want to keep between sort of orthogonal experimental features, because AzureManagedCluster is going to need machine pools, because AKS requires VMSS. VMSS is an Azure-specific thing, but machine pools are the Cluster API abstraction that represents VMSS for Azure. So I've left it as an open question. I anticipate, between now and next meeting, I'm going to voice my opinion that we don't worry about that, and that we call that a separate... Basically, I would like to advocate that we have the freedom to graduate AzureManagedCluster out of experimental separately from machine pools. Go ahead, Dave.
F
Yeah, and this partly informs the pivot in the last... you know, the KubeCon pivot, so to speak, because it was deemed very, very difficult for AWS, for CAPA, to actually follow the recommendations that were in the proposal that folks from CAPA actually authored. So the best intentions were there, but in practice it's really hard. So, yeah.
F
Yeah,
so
that
precedent
helps
us
but
again,
I
think
it's
really
up
to
us
to
decide
how
we
want
to
do
that,
it's
good
it
would
require.
F
You
know
that
the
cluster
API
environment
is
feature
flagged
to
include
machine
pools,
so
that's
a
little
bit
more
ux
friction
for
for
users,
but
from
my
point
of
view,
what
experimental
means
in
terms
of
azure
managed
cluster
adoption
is
that
we
are
going
to
break
the
API
at
any
point
and
what
graduation
from
experimental
means
is
that
we
will
not
break
the
API.
So
it's
not
necessarily
a
production
worthiness
guarantee
either
either
non-production
worthy
or
production
Worthy.
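For reference, a hedged sketch of how that feature flagging typically works: CAPI gates the machine pool experiment behind the EXP_MACHINE_POOL flag, which clusterctl can read from its variables file (the path and exact mechanics may vary by version):

```yaml
# ~/.cluster-api/clusterctl.yaml (sketch): variables read by clusterctl.
# EXP_MACHINE_POOL gates the experimental MachinePool feature; without
# it, the MachinePool types and controllers are not enabled.
EXP_MACHINE_POOL: "true"
```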
F
It's
not
really
speaking
to
any
particular
like
practical
realities
like
you're
going
to
have
to
have
a
feature:
Flagger,
not
a
feature
flag.
It's
really
just
a
way
of
saying
you
can
build
a
platform
on
top
of
this
because
we're
not
going
to
break
this,
like
all
changes,
will
be
additive
and
backwards
compatible.
So
that's
what
we
don't
have
right
now,
which
we
want
to
deliver,
and
someone
said.
D
Hello, hello. I was wondering: would there be any concerns if machine pools are still in experimental and there's a breaking API change there? How is that possible... how will that affect the API guarantees once CAPZ is out of experimental?
F
Yes, that is actually a risk. So what that would mean is that, because for Azure managed... I don't think this is likely, and Matt can speak a little bit more to how CAPZ has worked with this, but that is definitely a risk, and the way that we deal with that risk is that we sort of guarantee that we would have a migration solution if that were the case.
H
I
mean
opinions
or
Diamond.
Doesn't
we
all
have
them,
but
I
feel
like
machine
pools
has
been
it's
been
around
a
while.
It's
even
been
in
Fairly
heavy
use
by
folks
like
ourselves
and
I
feel
like.
We
should
be
very
close
to
a
position
where
we
can
declare
that
API
stable,
at
least
from
major
breaking
changes.
H
I
have
been
out
of
some
of
those
threads
for
a
bit,
so
I
wouldn't
be
up
to
date
on
any
recent
proposals
of
major
changes
to
those
schemas.
But
it
feels
to
me
like
that,
could
probably
graduate
ahead
of
managed
cluster
to
help
ease
that
pain.
F
Yeah, unfortunately that hasn't happened, and I actually think that's not true; there are probably still another several months. Matt, I'd love for you to weigh in on your... yeah.
A
Well, I mean, the only thing I'm aware of is the proposal that we merged several months ago but haven't actually made any implementation progress on, or enough progress, which is MachinePool Machines. That would be, you know, having a Machine represent each instance in a machine pool, whereas right now a machine pool is kind of a black box, and all you can see is essentially the instance count.
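A rough sketch of the black-box shape being described: today a MachinePool exposes essentially a replica count, with no per-instance Machine objects (names illustrative; the required template fields are omitted here):

```yaml
# Sketch: today's MachinePool surface. The MachinePool Machines proposal
# would additionally surface a Machine object per underlying instance.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: my-pool
spec:
  clusterName: my-cluster
  replicas: 3               # roughly all you can observe today
```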
A
The
Proposal
got
merged,
it's
still
viable,
it's
mostly
just
the
implementation
is
lagging,
and
that
would
be
a
very
large
breaking
change.
So
I
think
at
this
point,
we're
considering
that
has
to
land
in
Cappy
and
a
couple
providers
before
machine
pulls
is
really
stable.
A
F
A
We
shouldn't
it
might
be.
We
shouldn't
get
too
far
into
it
anyway,
yeah
if.
H
That's pretty much what I was going to say. What I'm interested in is stabilizing the spec of the AzureMachinePool and AzureManagedMachinePool objects, and then the equivalents over in AWS, which seem to have stayed relatively stable throughout that proposal, at least the last time I checked.
H
My
my
knowledge
is
a
bit
dated
by
a
few
months
so
and
and
if
there
were
to
be
any
it'd,
be
nice
to
front
load
those
because
then
the
it
would
actually
be
easier
to
add
those,
because
the
those
resources,
those
Azure
machine,
managed
machines
or
whatever
they
call
the
machine,
pull
machines,
and
things
like
that
would
just
appear
when
you
upgrade
and
you'd
have
more
functionality,
but
nothing
should
reprovision.
That's.
A
No, you're right. I guess we could think... yeah, it's not necessarily blocking in those terms, because it's introducing a couple of optional new fields on the provider machine that we hunt for; if they're there, then there's new behavior, then there's a new code path, essentially. If they're not there, it should be backward compatible. So perhaps it's not, strictly speaking, blocking it, I think, in a lot of people's minds, and obviously kind of in my mind.
F
Well, I definitely think we should bring this up in CAPI in the near term, but I support the idea of landing this fully before we graduate machine pools.
F
I just want to clarify that, as far as I can tell, it's what Dane said: what's going to happen, when you upgrade your CAPI version to a version that includes MachinePool Machines, is that you will now have more granular resources that will suddenly materialize in, you know, your CAPI surface area, but your MachinePool and your AzureMachinePool front-door specifications won't change; those are going to be identical.
F
Encouraging
people
doing
it
on
this
PR,
that's
right,
just
feel
free
to
please
please
when,
on
the
active
conversation
and
I'll
give
another
round
this
coming
week,
I'll
I'll
do
some
mutations
to
the
language,
so
just
FYI.
B
Guys, I'm not even sure if this is an appropriate question, but you can give me some guidance. The network policy setting in CAPZ: is that a required field, so that you either set it to azure or calico, or can it be omitted, and therefore the Azure network policy manager, the NPM, is excluded from the...
F
So the current type spec is that the values azure and calico are the only allowed values, and it's an optional field. So there has been... this is something for folks to think about: there has been some discussion in the past about whether or not CAPZ should be opinionated about this, or whether it should defer to AKS for its opinion.
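As a rough illustration of the field in question; the allowed values are the ones named in the discussion, and the surrounding fields are illustrative:

```yaml
# Sketch: networkPolicy on the AzureManagedControlPlane spec. It is
# optional; per the discussion, only "azure" or "calico" are allowed,
# and omitting it defers the choice to AKS.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: my-aks-cluster
spec:
  version: v1.25.6
  networkPolicy: calico     # or "azure"; omit to pass through to AKS
```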
F
So, for example, if the user doesn't specify, do we, as CAPZ, apply our own opinionated default, so that we have a notion of, like, a sort of CAPZ-blessed configuration? Or do we simply pass that lack of user opinion on to AKS and allow AKS to do what it wants? Generally, I would say, I don't want to say in 100% of cases, but in most cases, we are doing the latter. So we're simply, transparently, passing that lack of configuration opinion on to the AKS API. So you can imagine there are some potential downsides to doing that, because the AKS API could change, in which case, from time T1 to time T2, you've got a CAPZ spec that behaves differently at T1 compared to T2, because between T1 and T2 the downstream API has changed.
B
Yeah, I'll basically follow up with you on the Slack channel and drill into this a little bit more. The problem is, that's the way it's running now for this one particular customer... they... that field, or maybe there's something buggy in there somewhere, but I'll take that offline, yeah.
C
Yeah, I want to follow up on the example of the network policy. I would say I would be strongly against doing an opinionated choice, so selecting something different from the AKS default, especially because, looking at the example of the network policy, that's an immutable field, meaning that if you want to change it, you have to completely recreate the cluster. So say that in the future, for whatever reason... now there are these two options, and this is just an example, like azure and calico...
C
Also, there is another point that I wanted to add. I'm looking into the CRDs to create the managed clusters, and the implementation is not complete, so there are many options that are not implemented. I was looking, for example, at creating the outbound policy for the network: you cannot specify user-defined routing. So what does it mean when you have an API that is implemented partially? All the stuff that is not implemented will necessarily get the AKS default.
C
So, on the stuff that you did implement, you cannot take your opinionated default, because otherwise it's a mixture. You can get to opinionated defaults only once you implement the full API spec, in my opinion. Otherwise you never know, like, what's going to happen: this parameter is not implemented, so it's the AKS default, but this one I didn't specify, so it's the CAPZ default. That is going to be very difficult for customers and for supportability. So, this is just my idea; I hope this was relevant to the discussion. Thank you.
F
Yeah, I think that's generally true; there will be exceptions to that, but yes, I think everybody agrees with that. No?
I
Can you hear me? There you go, okay. Hey, sorry, Chris, earlier I didn't see your hand up; I was trying to find my own hand. I just wanted to say that our meeting has fallen off the kubernetes.io shared calendar.
A
There's a little PR in kubernetes/community; it's got a little structure to it in YAML, and then you run a script over it and it splats out some other markdown files, and you check that into the PR. As far as I can tell, that's the whole mechanism, but it didn't seem to work. So there is a... there is a Slack channel that's specific for this, but I've not been able to get anybody to answer me on other questions. So I'll go ask there, but I'm not optimistic.
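For anyone picking this up: the mechanism described is editing the YAML registry in kubernetes/community and re-running its generator script. A hypothetical sketch of the kind of meeting entry involved; the field names here are an assumption, not verified against the current schema:

```yaml
# Hypothetical fragment of the kubernetes/community sig/meeting YAML;
# field names are assumptions based on the mechanism described above.
meetings:
  - description: Cluster API Provider Azure (CAPZ) office hours
    day: Thursday
    time: "10:00"
    tz: PT (Pacific Time)
    frequency: weekly
```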