A
Hello everyone, today is Thursday, October 20th, and this is the Cluster API Provider Azure office hours. If you're joining us for the first time, there's an agenda link in the Slack channel.
A
If you don't have access, you need to join the SIG Cluster Lifecycle mailing list. Please be aware that we are a subproject of SIG Cluster Lifecycle and we abide by the CNCF code of conduct, which is essentially: be nice and respectful to everyone. If you'd like to speak, please raise your hand with the raise-hand feature on Zoom. All right.
A
If
you
have
any
agenda
items
or
questions
or
anything
that
you'd
like
to
add,
please
feel
free
to
add
them
to
the
agenda
and
your
open
discussion
topics
and
if
you'd,
like
you,
can
add
your
name
to
the
attendee
list
that
helps
us
track
who's
coming
to
these
and
then
also
like
for
reaching
out
to
people
afterwards.
If
we
have
follow-ups,
this
is
optional
and
yeah.
A
So before we get started, we always give a bit of time at the beginning in case anyone is new to this group and wants to say hi, introduce themselves, and tell us a bit about why they're here. So I'll pause here: if anyone wants to say hello and introduce themselves, please go for it.
B
Hello everybody, this is Savario Prado. I work at Microsoft and I will be working on this API for the next weeks at least, so I thought it was a good idea to jump into the meeting, introduce myself, and see what's going on.
A
All right, I don't see any hands raised, so let's keep going. Okay, so we have a pretty AKS managed cluster focused agenda today. That being said, if anyone has anything else they'd like to discuss, or any questions for the project, please add them. But let's get started with the first two topics that Jack added; I think a few people are here for those too. So Jack, take it away.
C
Cool. Should I share my screen, or do you want to? I was going to present the PR.
C
All right, great. So in the past several months the CAPI community at large, the Cluster API community, has been discussing how to guide new implementations of managed Kubernetes. Managed Kubernetes in the CAPI ecosystem means EKS, AKS, GKE. Winnie did a bunch of great work landing this proposal, with the help of other folks, to codify a set of recommendations for these new implementations of managed Kubernetes. And going back a little bit further:
C
The justification for this becoming a key topic was a couple of observations. One was that the Azure provider and the AWS provider, CAPZ and CAPA, had gone in slightly different directions when implementing AKS and EKS respectively. And very specifically for the EKS implementation, the CAPA implementation, the choices that were made, which made sense at the time for CAPA, ended up not being compatible with ClusterClass when it arrived; I think they arrived roughly in parallel, or maybe ClusterClass arrived later. So this was a way of having a retrospective about how the API can be better designed, so that things like that can't happen to these implementations in the future. I'll just hand-wave around why this is interesting for managed Kubernetes versus self-managed Kubernetes; I'm happy to talk offline or answer specific questions if folks have them. But there are a lot of differences, which is why this is a more interesting API challenge for the managed Kubernetes solutions than for the raw self-managed infrastructure solutions.
C
Okay.
So
that
being
the
background,
that
PR
landed,
which
States
a
a
set
of
recommendations
that
deviate
a
little
bit
from
the
current
cap,
Z
managed
kubernetes
API
specification,
and
so
this
PR
attempts
to
kind
of
clarify
what
those
those
are
and
so
I
would
love
for
folks
who
have
a
stake
in
Azure
managed
cluster
to
read
over
this
and
critically
evaluate
what
I'm
stating
here
and
we
can
make
this
better
and
the
Really.
The
goal
of
this
PR
is
to
sort
of
create
a
historical
record
for
why
or
why
not.
C
So
the
the
primary
recommendations
for
the
capital
proposal
are
that
the
cluster
so
they're
for
folks
who
aren't
familiar
I
appreciate
your
patience,
but
there
I
think
there
are
at
least
some
folks
who
are
in
in
the
weeds
enough
to
understand
what
this
is.
So
the
primary
outcome
of
the
proposal
is
to
define
a
set,
a
unique
set
of
responsibilities
between
the
two
main
custom
resource
definition,
resources
that
at
least
Kappa
and
capacity
originally
shipped
with.
C
So this is now codified, and it's also part of the CAPI spec. Long story short: you want a cluster CRD that represents your managed Kubernetes solution, and you want a control plane CRD. This is more or less copy-pasted from that CAPI proposal, and it offers, hopefully, a brief expression of what that distinction is: control-plane-specific configuration would go in the control plane CRD, and cluster-specific configuration would go into the cluster CRD. CAPZ already has something sort of like this with AzureManagedCluster. The main violation of the spirit of that recommendation is that we are putting all of the configuration in the control plane CRD. So if we were to spike on a set of improvements, assuming we deem that this proposal warrants that for CAPZ and the Azure managed cluster effort, then what we would end up doing in reality is moving a bunch of the properties between those types. I don't think I have anything copy-pasted here, but you can imagine that in the Azure case, AzureManagedControlPlane has all the configuration, while AzureManagedCluster is more of a referential key that fulfills the CAPI spec; it doesn't really carry any configuration.
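For illustration only, here is a minimal sketch of the split being described. These are not the real CAPZ API types; the structs and fields below are simplified and partly hypothetical, and the "proposed" variant is just one possible reading of the CAPI recommendation.

```go
package sketch

// APIEndpoint is a simplified host/port pair used only in this sketch.
type APIEndpoint struct {
	Host string
	Port int32
}

// Today (simplified): the control plane CRD carries effectively all of the
// AKS configuration, while the cluster CRD is close to an empty shim that
// exists mainly to satisfy the CAPI infrastructure-cluster contract.
type AzureManagedControlPlaneSpecToday struct {
	Version            string // Kubernetes version
	ResourceGroupName  string // where the AKS cluster lives
	VirtualNetworkCIDR string // networking configuration lives here today
	SSHPublicKey       string
	// ... many more AKS settings
}

type AzureManagedClusterSpecToday struct {
	// Filled in by the controller, not the user, purely to fulfill the
	// cluster-infrastructure contract.
	ControlPlaneEndpoint APIEndpoint
}

// One possible shape under the proposal (hypothetical): cluster-scoped
// infrastructure settings move to the cluster CRD and the control plane
// CRD keeps only control-plane-specific settings.
type ProposedManagedClusterSpec struct {
	ResourceGroupName  string
	VirtualNetworkCIDR string
}

type ProposedManagedControlPlaneSpec struct {
	Version      string
	SSHPublicKey string
}
```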
C
So a lot of this stuff up here would go down there, which is a breaking change and would have all kinds of consequences. But it would have the benefit, which is what I'm trying to express here, of being slightly more semantically meaningful from a human's perspective, because the concepts of cluster and control plane have some semantic significance, and as a Cluster API citizen I think it's arguable that we would be more compatible. So there are some advantages there.
C
The disadvantages really amount to two key things. One is that it's going to break existing setups, and that would have to be accounted for. Two, it's going to be a lot of work, and the breakage is actually going to multiply that amount of work. So yeah, I don't necessarily need to go through this in detail, but from a high level,
C
It
would
be
maybe
a
good
time
to
answer
questions
and
have
a
little
bit
of
a
discussion
around
any.
If
anyone
has
any
sort
of
opinions
that
they
can
carry
into
this
meeting
about
the
benefits
of
moving
cap
Z
towards
this
sort
of
platonic
ideal,
as
defined
by
that
Cafe
proposal,
I
would
love
to
I.
Would
I
would
really
love
to
hear
that
because,
as
I
said,
it's
going
to
be
a
lot
of
work
and
we're
gonna
necessarily
break
the
API
spec.
C
So
the
reason
we
should
do,
that
is
because
the
community
really
wants
us
to
do
that.
So
I'd
love
to
hear
that
from
the
community
and
if
anyone
has
any
just
general
comments,
now
would
be
a
good
time
to
try
to
see
if
I
can
follow
the
raised
hands.
If.
D
Yeah, hello. That's really interesting, because I guess when we wrote that managed Kubernetes proposal, we explicitly put in a recommendation that was geared towards CAPA's and CAPZ's current implementations, so as not to force any changes on providers, purely for the reasons you listed: there are lots of people potentially using them, and it's quite painful. I guess the one question I have for CAPZ is: AKS is still experimental as far as the API definitions go, is that right?
D
Of
course,
that's
a
bit
easier
so
in
in
cap
in
Kappa
we
we
moved
out
of
experimental
at
the
same
time
that
we
decided
to
go
down
to
one
resource
kind
for
the
control
plane
and
the
the
cluster.
So
it's
it's
going
to
be
a.
C
...lot, yeah. Sorry, if I'm suggesting that this is painful for CAPZ, you're saying it's an order of magnitude more people affected for CAPA?
D
Potentially. So we haven't decided how we're going to do this at the moment. I think from a CAPA point of view, we're going to go back, short term, to the original implementation, which basically follows what you do. We originally had an AWSManagedCluster that satisfied the contract and basically just passed through the values from the control plane, because that would get us over the initial hurdle of supporting ClusterClass.
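As a rough illustration of that pass-through pattern (not the actual CAPA code; the types and function below are hypothetical and heavily simplified), the managed cluster infrastructure object can satisfy the CAPI contract simply by mirroring values the managed control plane already owns:

```go
package sketch

// Hypothetical, simplified types for illustration only.
type APIEndpoint struct {
	Host string
	Port int32
}

type ManagedControlPlane struct {
	Status struct{ ControlPlaneEndpoint APIEndpoint }
}

type ManagedCluster struct {
	Spec   struct{ ControlPlaneEndpoint APIEndpoint }
	Status struct{ Ready bool }
}

// reconcileManagedCluster sketches the pass-through: the cluster
// infrastructure object owns no configuration of its own; it mirrors the
// endpoint that the managed service (EKS/AKS) reported on the control
// plane and marks itself ready, which is all the CAPI
// cluster-infrastructure contract requires.
func reconcileManagedCluster(cluster *ManagedCluster, controlPlane *ManagedControlPlane) {
	cluster.Spec.ControlPlaneEndpoint = controlPlane.Status.ControlPlaneEndpoint
	cluster.Status.Ready = true
}
```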
D
So we've actually discussed a couple of different options to get us to, I guess, the recommendation from the proposal, one of them being that we actually introduce completely new API types. For CAPA, because the types are graduated, essentially we're wondering whether it's just easier, instead of calling it AWSManagedControlPlane, to go with something like an EKSControlPlane and so on to start with. But I guess from chatting with users in CAPA, one of the main areas of confusion they have is: I'm putting all this networking stuff, for example, on the control plane, but really that is infrastructure, and if I'm using an unmanaged Kubernetes cluster, that sits on the cluster. But you know, for a...
D
Yeah, so I think, from a CAPA point of view, we're going to take our time, and short term we're going to move back to the model that you have, because we actually copied the AKS implementation when we first started out. So we'll move there first and then work out how we're actually going to get to the recommended way.
D
So again, if AKS and EKS stay with this current model and GKE is on the recommended model, then we don't have that consistency. So yeah, it's probably just good to know as well.
A
So the way I understand it, and my understanding might be limited since this is mostly from reading the proposals, it seems like there isn't really any benefit that users of CAPZ would gain by moving to this model except consistency, because right now we have something that's working and compatible with ClusterClass. It also matches the real state of things in AKS: there isn't a separate control plane object and cluster object when you actually create an AKS cluster, so it does make sense logically in that sense. And if we were to make these changes, it would be breaking for everyone who's already using it.
A
But if we don't make these changes and then decide later, say next year, that we want to make them, it's going to be even more breaking, because there will be more people using it and potentially we won't be experimental anymore; hopefully by then we'd be non-experimental. So I guess my only concern is: whether we decide to do this now or not to do it now, I'd want to make sure there isn't going to be anything in the future that forces us to do it.
A
If
Cappy
is
saying
this
is
just
a
recommendation,
you
don't
have
to
follow
it
now.
That's
fine
as
long
as
it
stays
that
way,
but
if
we
ever
get
to
a
point
where,
like
both
cap,
G
and
Kappa
are
doing
something
different
from
KB
and
kabzi
is
like
the
special
snowflake
I
wouldn't
want
something
to
go
into
Cappy
that
is
like
well.
A
Capsie
is
not
following
the
recommendation
anyway,
so
they
should
be,
and
you
know
that
kind
of
puts
us
in
a
bad
situation
where
something
Cappy
ads
is
not
supported
for
cap
Z,
because
we've
made
this
choice
and
then
we're
kind
of
stuck
where
it's
much
harder
to
move
at
that
point.
A
So I think that's my main concern. I don't know; I'm very curious to know what Richard thinks about this. But I guess, Mike, you have your hand up.
E
Yeah, I just wanted to comment on the one thing you mentioned about consistency. From my point of view, we're multi-cloud, multi-provider: AWS, Azure, VMware, and possibly more coming in the future, so that consistency is definitely something that's really important to us. And I agree with what you were saying about having to change later; if it was forced, it would definitely be more painful. But as for existing users, we're early enough in our journey.
D
Yeah, I completely agree with you. I think, from the proposal's point of view, there were two main recommendations. The first recommendation is: don't do what CAPA did and use the same resource kind for the control plane and the infra; you have to have two different kinds. That was, I guess, one of the main recommendations. And the second one, I guess, was more idealistic: that the current implementations don't necessarily follow the separation of concerns between the different provider types.
D
It's
a
lot
of
effort
for
no
real
gain
or
or
quite
it's
hard
to
quantify.
You
know
that
consistency
gain
that
would
you
know
when
it's
currently
works,
so
yeah
I
I
see
the
I,
see
that
point
and
that's
why
there
was
that
the
additional
recommendation
in
there
that
you
know
if
it
currently
works
and
it
satisfies
the
contracts
and
the
two
recommendations.
And
you
know
there
is
no
need
to
change
but
yeah
I
guess
you
might
I
guess.
D
Your
final
point
is
getting
painful
changing
now,
but
as
soon
as
you
graduate
the
API
and
if
you
did
decide
to
change
yeah,
it
would
be
a
lot
more
painful,
but
I,
don't
think,
there's
ever
going
to
be
any
huge
differences
between
the
different
provider
types,
so
the
control
plane
and
the
cluster
in
which
you
satisfy
you
satisfy
those
contracts.
Now
so
I
can't
see
it
there
being
dramatic
changes
in
in
those
because
that
would
apply
to
every
single
provider
when
they
managed
and
unmanaged
so
yeah
I
I.
A
What we all want is consistency, right? We want managed Kubernetes implementations that have good design across all the providers and that are consistent. We're in a situation right now where CAPA has to make a change because of ClusterClass, and it sounds like you're going towards what CAPZ is doing in the first iteration, because at first that's going to be an easier change. And in the meantime, we're also doing CAPG from scratch, which doesn't exist yet.
A
Yet
so
there's
no
I
guess
existing
user
have
we
considered
instead
of
making
cab,
Kappa
and
cabs
the
move
to
go
to
what
cap
Gina
has
to
instead
make
cat
G
just
do
what
kebs
is
doing
from
the
start
and
I
see
you
smiling
I
know,
that's
something
that's
been
considered,
but
I'm
just
curious.
Like
can
we
consider
it
again
like
why
not
because
it
just
seems
like
that
would
be
the
easiest
solution
where
we
would
all
have
where
you
can
cluster
class.
A
It
would
be
consistent
and
from
a
design
perspective
like
it
is
a
bit
idealistic
to
move
like
some
Fields
into
manage
cluster
because,
like
strictly
speaking,
those
are
more
clusters
and
control
control
plane,
but
from
a
managed,
kubernetes
service.
Point
of
view.
There's
not
really
any
differentiation
right.
So
I
guess
like
what's
what's
the
Thought
on
that.
D
Completely agree with you; that would be the easiest, quickest way to get consistency across all of the providers for managed Kubernetes, definitely. I guess when we were discussing it, it just felt too easy. It felt like a bit of a cop-out to say: oh yeah, this is the way it should be.
D
You
know,
adhere
into
the
contracts
and
the
original
idea
with
the
you
know:
separation
concerns
between
the
different
provider
types
and
then
you
know,
if
kept
eager,
we
could
just
go
with
the
the
way
that
the
the
other
two
could
do
so
yeah
are,
we,
you
know,
hasn't
started,
so
we
could
do
that.
D
To
be
honest,
if
we
wanted
to
get
that
consistency
short
term
and
then
you
know
work
out
how
we
get
to
the
whether
we
even
do
get
to
that
idealized
view,
because
you
don't
always
have
to
go
there,
do
you
actually,
it
just
felt
a
bit
bit
too
easy
to
go
that
route,
not
because
it's
you
know,
work
wise,
but
you
know
we're
saying
this
is
the
ideal
where
you
know
where
we
want
to
get
to,
but
we're
not
going
to
go
there
just
because
it's
a
bit
too
hard,
but
I
think
it's
about
a
point,
though
I
do
definitely
sorry.
C
Well,
I,
it
sounds
like
what
you're
saying
is
that
the
proposal
was
written
with
the
expectation
that
Kappa
would
refactor
according
to
a
particular
sort
of
set
of
recommendations,
but
in
the
in
real
on
the
ground.
When
that
refactor
work
began,
it
was
I
mean
it
sounds
like
it's
not
finished,
but
as
of
right
now,
the
sense
from
the
engineering
folks
who
are
actually
going
to
do
this
work
is
that
in
fact,
the
recommended
approach
which
is
not
going
to
be
followed
because
of
you
know,
engineering,
time
constraints
or
whatever
is
that
fair.
D
I
think
it's
bad
to
say
that
yeah
definitely
in
the
short
term,
is
too
problematic,
because
we
we
moved
the
API
out
of
experimental.
So
we
have
to
adhere
to
the
you
know
the
deprecation
policies
of
of
the
API.
So
you
know
that's
just
going
to
cause
US
problems,
but
a
whole
bunch
of
constraints
on
that
so
yeah,
so
yeah.
D
We
will
go
to
this
this
same
model
as
as
kabzi
short
time,
because
that's
we
can
do
that
now
get
cluster
class
working
now
and
then
we
did
think
we
thought
two
options
that
we
would
evolve
the
API
over
time
in
line
with
to
communities
API
deprecation
policies
over
a
period
of
a
year,
two
years
whatever.
D
That
would
be,
but
that's
a
lot
of
work
and
it's
a
lot
of
work
to
you
know
just
remember
to
move
things
at
the
right
time
in
the
releases
and
you
know,
and
then
we
also
like
I,
said,
consider
just
creating
completely
new
API
types
that
adhere
to
that
recommendation
and
just
put
all
new
features
into
the
new
API
types
and
just
keep
the
current
ones.
D
You
know
pinned
in
time
and
deprecate
them
over
nine
months
or
whatever
it
is,
but
yeah
I
think
realistically
we're
going
to
move
to
your
model
first
and
then
just
work.
The
you
know
work
out
how
we're
even
going
to
get
there.
Even
if
it's
like
you'd,
like
you,
said
Cecile,
is
it
really
worth
the
effort.
C
Well,
I
would
I
would
speak
to
that.
I
think
that
I
would
I
would
second
the
notion
that
consistency
is
the
most
important.
C
The
most
sort
of
valuable
thing
we
can
give
to
the
community,
and
time
is
a
variable
in
that,
and
so,
if
there's
a
notion
that
Kappa
will
be
introducing
in
a
period
of
18
months,
three
different
ways
to
do:
managed,
eks
but
I
think
we're
not
actually
being
consistent
and
but
the
next,
if
we
want
to
be,
if
we
want
to
optimize
our
consistency,
the
next
refactor
that
Kappa
does
I
think
should
be
the
model.
C
So
if
that
Max,
if
that
fact,
you
know
what
I
mean
at
some
point,
we're
not,
we
think
we're
being
consistent,
we're
not
actually
being
consistent,
we're
just
referring
to
like
vaporware
set
of
specifications
and
that's
not
actually
consistency.
C
The
other
point
I
wanted
to
make
is
that
I
I
think
that
the
there
is
actually
that
I
write
this
in
my
doc
and
feel
free
to
disagree.
Anybody,
but
there
is
a
downside
to
the
the
decomposition
between
something
called
a
cluster
and
something
called
a
control
plane
with
managed
kubernetes,
because
at
least
in
AKs
there
there
is
no
intrinsic.
C
D
C
Going
to
be
true
for
every
managed
cluster
or
every
kubernetes
offering
in
the
ecosystem,
so
there
there
could
be
competitive
solutions
that
do
offer
their
own
opinions
about.
You
know
this
is
our
architectural.
C
These
are
our
set
of
architectural
Primitives,
but
I
think
for
AKs
and
for
eks.
You
know,
that's
not
a
thing.
There
really
is
one
sort
of
notion
of
a
managed
control,
plane
and,
and
then
the
a
discrete
architectural
boundary
is
the
node
pools
or
the
worker
nodes,
but
they're
in
that
managed
control
plane
offering
there's.
No.
This
is
what
your
cluster
is,
and
this
is
what
your
control
plane
is
as
distinct
from
one
another.
That's
not
a
thing!
That's
a
Cappy
thing!
Only
Cecil
you're
raising
your
hand.
A
Yeah, I was going to say pretty much what you said, but just to answer what Richard said earlier: I personally also definitely don't want us to choose something just because it's easier. I think we should make the right design choice even if it's the harder one; I think that's more important. But in this case I think we also have to be pragmatic, and I agree that consistency here is the biggest win. That's what we want, right?
A
That's
why
the
proposal
was
made
that
and
getting
cluster
class
to
work.
So
if,
in
practice
we're
going
to
have
different
implementations
of
providers,
I
don't
think
we're
winning
I
think
we're
actually
losing
so
I
think
the
proposal
should
be
something
that
realistically
can
be
implemented
with
minimal
Pain
by
the
providers.
If
we're
going
to
be
consistent,
otherwise,
like
I,
don't
think
the
like
separation
of
concerns
is
really
in
itself
as
important
as
having
a
consistent
implementation.
That's
just
my
take.
A
Is
it
Jake?
What's
your
first
name,
sorry
John.
G
Yeah, it's John. Okay, hey, just my input as somebody that's trying to do an implementation across multiple cloud providers: one of the things that has actually come up in our discussions about whether to use CAPI or something else is, in fact, consistency.
G
That
is
actually
a
key
for
us
as
a
as
a
user
and
an
operator.
So
that's
that's
super
important.
The
other
thing
is
in
terms
of
the
some
of
the
comments
about
there's
really
no
difference
between
a
control,
plane
and
a
cluster
that
may
be
true
in
terms
of
the
actual
implementation
or
or
something
internal
or
something
like
that.
Whatever.
G
But
from
a
from
an
admin's
perspective
or
a
user's
perspective,
there
is
a
there
is
definitely
a
barrier
there.
There
is
definitely
a
delineation
between
the
two,
whether
it's
physical
or
logical
or
whatever
it
is.
There
is
a
difference
there
so
and
then.
Finally,
the
other
thing
is,
you
know
you,
a
couple
of
folks
have
mentioned
things
about
deprecation
policies
and
things
like
that
I'm,
not
sure
of
the
details,
of
what
what's
being
thought
out
there,
but
all
of
these
managed
providers
are
still
experimental
as
far
as
I'm
aware.
D
Yeah, sorry, I was going to pick up one of the points that was just made, which is that when I think of the cluster, I think of it as the cluster infrastructure, so things like the networking and the VPC and so on. But I guess it raises another point that we never fully got to the bottom of in the proposal, or got agreement on, which is that, especially when it comes to managed Kubernetes, CAPI makes the assumption in the cluster infrastructure that the control plane load balancer or endpoint is provisioned via, and reported via, the cluster, whereas in EKS and AKS that doesn't hold true.
D
It comes from the actual service, and so we end up with this weird reporting of the control plane endpoint back to the cluster just to satisfy the contract. That's never been ideal. We discussed this over and over when we were writing the proposal, but I don't think there was a huge amount of appetite to change it. And I guess I had a question for people around the consistency...
C
Well, I don't have an answer; I have a response, which is to say that I think the way to solve that would be to move this into CAPI.
C
So
part
of
the
challenge
we
have
is
solving
this
outside
of
Cappy
and
so
to
that
extent
the
the
record
the
recommendation
proposal
that
landed
is
very
helpful
to
future
implementers,
because
there
is
very
little
sort
of
core
guidance
from
the
the
cluster
API
API
itself.
As
as
you
point
out,
there's
you
point
out
one
instance
there's
many
instances
where
it
doesn't
really
fit.
It
wasn't
designed
for
managed
it
wasn't
designed
for
sort
of
split
brain
Authority.
I
manage
some
aspects.
The
managed
provider
manages
other
aspects,
so
I
think.
A
I think, Mike, you were next.
F
That's fine, yeah. I think I can take this one, and I have to drop soon as well, so I just wanted to add a few things. We have actually been using CAPZ in production, probably the only ones who are using it in production, and I just wanted to say that we are okay with breaking changes in this sense.
F
It's
fine
because
we
are
a
bit
outlier
in
the
sense
of
how
we
have
been
using
this
capsi
and
as
long
as
it
matters
like
how
we
create
the
guest
cluster
that
that's
the
guarantee
contract,
maybe
from
The
Gap
sitting.
We
would
like
to
agree
like
the
Azure
SDK,
how
it
is
being
created.
It
should
follow
the
same
calls
as
long
as
we
are
creating
the
same
cluster
and
yeah.
So
as
long
as
it's
like
the
same
cluster
that
has
been
created
like
we
can
create
the
same
object,
we
can
turn
off.
F
So we don't want development to stop just because someone is using it in production, because that is the whole point of using operators, and that's why we are using it. Now, coming to the use we're making of it, I think that's interesting, because we do use a CAPI object for it: we have the Cluster object, which is our main point of entry for all the automation that we are building on top of CAPZ. Everything starts from the Cluster.
F
We
are
not
directly
going
to
Azure
manage
control,
pin
we
find
the
child
objects
for
that.
We
do
have
GK
cluster.
We
don't
have
JK
implementation,
but
we
can
create
cluster
object
with
this
pack
ignore,
like
pause,
that
Gap
is
not
acting
upon
that
we
just
want
to
fill
in
some
Cube
config
to
build
automations
on
top
of
that
right.
So
as
long
as
like
for
us,
it
is
important.
It's
the
consistency
from
the
Cappy
point
of
view
right,
so
we
will
go
to
the
top
level
object
and
everything
start
from
there.
F
So the Cluster object is the same, and all the child objects can be discovered from it. We don't even really care at this point whether it's a control plane or a cluster, as long as there's a consistent way to reach them from the automation. And that has been working pretty well, I would say, because it's a way better story than anything else we had.
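A rough sketch of that usage pattern, assuming the standard CAPI conventions (a top-level Cluster object with infrastructure and control plane references, and a kubeconfig Secret named "<cluster-name>-kubeconfig" with the data under the "value" key); the function itself is hypothetical automation, not part of CAPZ:

```go
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getWorkloadKubeconfig treats the CAPI Cluster as the single entry point:
// read the Cluster, note its infrastructure and control plane references,
// and fetch the kubeconfig Secret that CAPI publishes alongside it.
func getWorkloadKubeconfig(ctx context.Context, c client.Client, namespace, name string) ([]byte, error) {
	cluster := &clusterv1.Cluster{}
	if err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, cluster); err != nil {
		return nil, err
	}

	// The provider-specific child objects hang off these references; the
	// automation never needs to hard-code AzureManagedControlPlane etc.
	fmt.Printf("infrastructureRef=%v controlPlaneRef=%v\n",
		cluster.Spec.InfrastructureRef, cluster.Spec.ControlPlaneRef)

	secret := &corev1.Secret{}
	if err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name + "-kubeconfig"}, secret); err != nil {
		return nil, err
	}
	return secret.Data["value"], nil
}
```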
F
The other alternative for us would be to build our own operators, and that would end up as something that looks like the Cluster object in CAPI anyway; one way or another it ends up in the same place. So we are okay with that and, just a second, yeah, we are okay with the breaking change, as long as we can say that the Azure SDK call the last version used to create the managed cluster is the same call the new version is using, right?
F
So
as
long
as
the
cluster
that
are
being
created
is
the
same,
for
example,
just
to
give
an
example:
if
I
create
a
cluster,
terraform
and
I,
don't
add
these
like
in
this
example,
the
SSH
public
key
I
cannot
import
it
anymore
to
cap
C
class
right.
So
this
is
not
possible,
but
if
I
do
create
from
like
capsi-
and
we
have
done
this
Disaster
Recovery
like
experiments-
we
can
do
that
right.
So
we
can.
A
Thanks, Mike.
E
I was just going to say, on the consistency question that Richard asked: we're going to be focusing, with CAPI, on the managed control plane perspective primarily, so for us it's consistency within the different managed control planes. My two cents would be that if I were using non-managed control planes, I'd be interested in consistency of those classes: so consistency within managed control planes and consistency within non-managed control planes, not consistency across the boundary between the two.
G
Yeah, I was actually going to say the same thing that Mike just mentioned. From our perspective, we're actually doing both as well, managed and unmanaged, and from our side it's consistency across the managed ones and consistency across the unmanaged ones. Frankly, from my perspective, I don't see how you could have any consistency between managed and unmanaged in a sensible way, because they're very, very different, right?
G
You
have
very
very
different
things
that
you
have
you
haven't
so
so
I
think
that
sounds
like
an
awesome,
an
awesome
goal
and
something
to
strive
for,
but
I'm
I,
just
I
think
that
would
be
really
really
hard.
I
think
you'd
have
a
really
hard
time
doing
so.
A
Awesome
I
see
there's
a
side
chat
going
on
in
chat
about
what
breaking
change
means.
Jack.
Do
you
wanna
yeah.
C
I
was
just
clarifying
that
breaking
changes.
It's
a
pretty
big
deal.
It
would
mean
that
the
Clusters
you
have
currently
running
when
suddenly
you
revved
cap
Z,
with
the
newer
version,
the
newer
breaking
crd
API,
those
that
caps,
you
would
not
work
with
those
existing
clusters,
so
there
would
have
to
be
something
else.
F
I think that part I understand, but I don't see the need for recreating clusters. I agree there are existing CRDs and there are new CRDs; as long as no one is acting on them, we just turn off the operator and no one is going to watch what we're doing. We can delete the old objects and create the new objects, bring up the new version with the new CRDs, and bring up the operator.
C
Well,
are
we
talking
about
an
AKs
cluster
that
whose
life
cycle
extends
Beyond,
so
a
life
cycle
create?
The
cluster
is
created
at
time,
t
with
capsi
V1
and
then
that
cluster
still
exists.
Cap
CV2
comes
along,
which
has
breaking
changes
in
the
crd
that
abstracts
away
the
AKs
cluster.
Are
we
expecting
that
existing
AKs
clusters
survive
and
then
be
managed
by
that
newer
version
of
the
crd
that
can't
happen?
F
Okay, I will take that one. This can happen, actually, because it's not different from what's happening right now: I have a CAPZ AKS definition at v1 without the status filled in, I go and create the cluster, and it will just reconcile, because the object still exists and the same fields are being populated in the spec, so it reconciles just fine. This has been happening.
F
We don't need the management cluster to be up and running the whole time for Disaster Recovery to work; the cluster can come and go, we just bring up a new operator and so on, and that has been working. Now let's imagine we have the v2. What I'm talking about is that the CRDs, v1 and v2, can absolutely change, but the underlying AKS definition of the clusters that we are creating...
F
If
somehow
the
migration
can
manage
to
create
the
same
definition
with
the
V2
objects,
it
should
just
simply
like
work
in
that
sense
right.
So
those
crds
are
gone
and
see
our
objects
are
gone
and
you
see
your
objects
are
coming
and
that
should
be
possible
because
we
are
not
introducing
a
new
way
to
create
AKs
cluster
from
the
AKs
like
SDK
right.
Those
calls
are
exactly
should
be
same
in
that
sense.
Unless
we
are
introducing
some
changes
there
right
and
I
will
stop
now.
A
Yeah, I was just going to say, as John is saying in the chat, if there's interest we could also have some migration script or some helper, if this is shared across multiple people. But essentially the main point here is that there would be no downtime, zero downtime, because your clusters would still be running; you would just temporarily not manage their lifecycle, and then you would essentially adopt them via the new CRDs.
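Purely as an illustrative sketch of that pause-and-adopt idea (this is not an officially supported migration path; the helper below only shows the CAPI spec.paused mechanism such a script might lean on): pause the Cluster so providers stop reconciling, swap the provider objects underneath, then unpause so the new CRDs adopt the still-running AKS cluster.

```go
package sketch

import (
	"context"

	"k8s.io/apimachinery/pkg/types"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// setClusterPaused flips spec.paused on a CAPI Cluster. While paused, the
// providers stop reconciling, so the old provider objects can be deleted
// and recreated under the new CRD versions without touching the running
// AKS cluster; unpausing lets the new objects adopt it.
func setClusterPaused(ctx context.Context, c client.Client, namespace, name string, paused bool) error {
	cluster := &clusterv1.Cluster{}
	if err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, cluster); err != nil {
		return err
	}
	cluster.Spec.Paused = paused
	return c.Update(ctx, cluster)
}
```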
A
So there is a path forward, and it's very good to see that Zane sees that path as an option. That's good to know.
B
Yeah, and I apologize from the beginning if I say anything that is completely wrong, because I started getting myself informed on this project just yesterday. I have a long experience with Terraform and with GKE and AKS, and if I can add to this conversation with some fresh eyes, as somebody that comes to this project and sees everything with new eyes: I found it very difficult to understand why we have two CRDs, one for the control plane and another for the cluster, because from a product point of view, and I think this was already said, GKE and AKS are one thing.
B
I understand that merging this into a single one will make everybody scream, because it's a huge impact, but seriously, think about it. I understand this comes from a historical reason, or at least this is my understanding after 24 hours of looking at this stuff: the initial project, if I understand correctly, was deploying a control plane based on kubeadm, and then this was extended to AKS. I mean, this is my understanding; if I'm wrong, I'm really sorry.
B
It
would
make,
in
the
long
term,
everything
very
much
simpler
to
have
a
single
crd,
because
what
I
see
here
is
that
you
want
to
move
some
elements
from
once
your
D
to
another,
and
you
are
struggling
to
make
everybody
consistent
when
having
a
single
crd
will
have
implicit
consistency
right
because
you,
you
won't,
have
the
problem
of
each
implementation,
the
gke
one,
the
AKs
one
and
who
put
what
value
in
the
manage
control
plane
and
who,
in
the
managed
cluster.
A
single
crd
would
be
like.
D
Yeah, I agree, but I don't think we do a very good job of explaining what the role of the cluster infrastructure provider is. People new to it generally think it's about creation of the actual cluster, not the infrastructure that is required for the cluster.
D
So yeah, I completely agree with that. Actually, when we were discussing the proposal, one of the options we talked about was, within the CAPI Cluster, making the infrastructure ref optional.
D
So if you've got a managed cluster, don't bother putting the infrastructure ref in there, i.e. don't put the AWS or Azure managed cluster part in there, and just reference a control plane. But people didn't really like that idea at the time we discussed it, just because it required too many changes; you'd start to report the control plane endpoint back differently. But I guess what I'm also hearing is two things. One, that we need to investigate migrations: how people have done significant changes to APIs via migrations in the past. And then also, I think John and Mike make a good point about consistency (I think Michael said "class", but I don't want to confuse that with ClusterClass): consistency between all the managed providers, and consistency between the unmanaged providers.
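For what the "optional infrastructureRef" idea would have looked like, a hypothetical sketch (this is not supported behavior in CAPI today; it only illustrates the option that was discussed):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// managedClusterWithoutInfraRef shows a CAPI Cluster that references only a
// managed control plane and deliberately omits the infrastructure
// reference, which is what "make the infrastructureRef optional" would
// mean for managed Kubernetes.
func managedClusterWithoutInfraRef() *clusterv1.Cluster {
	return &clusterv1.Cluster{
		Spec: clusterv1.ClusterSpec{
			ControlPlaneRef: &corev1.ObjectReference{
				APIVersion: "infrastructure.cluster.x-k8s.io/v1beta1",
				Kind:       "AzureManagedControlPlane",
				Name:       "my-aks",
			},
			// InfrastructureRef is intentionally left nil under this option.
		},
	}
}
```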
D
I think that is also an option, and if we did that, it would enable us to get to consistency in managed Kubernetes quicker, because CAPA can just do the change back to what it was originally, and CAPG hasn't started, so it could follow the same pattern, and then bang, we've got the three big managed providers all the same. But yeah, I'll be really interested in looking into the migrations as well.
A
Yeah, one thing I think is a bit confusing right now is just the fact that everything in CAPZ is in the managed control plane and nothing is in the cluster, even though from a product perspective there is no control plane and you're creating a managed cluster. So it would almost make a bit more sense if it was reversed, where the managed control plane was kind of doing nothing and was just there to satisfy the CAPI contract,
A
But
the
managed
cluster
is
where
it
was
at
right,
where
that's
what
you're
creating
and
it's
the
fact
that
we're
doing
it
in
reverse,
almost
I,
don't
know
I
feel
like.
If
we
change
the
name,
it
would
be
even
a
bit
easier
to
like
accept,
but
just
yeah
I
was
also
going
to
ask.
What
should
we
do
for
like
what?
Where
should
we
take
this
like?
What
would
be
the
next
steps?
I
think
it'd
be
valuable
to
get
like
the
managed
kubernetes
providers
of
Cathy
into
a
room.
C
Yeah,
my
sense
is
that
the
next
it
sounds
like
Kappa
is
actively
making
a
change,
and
so
I
think
that's
the
most
interesting
near-term
thing
and
I
I,
like
I,
would
want
to
strongly
Advocate
that
that
change
be
reflective
of
the
model
of
consistency
that
all
providers
follow.
C
So if CAPA introduces changes that we can also make in CAPZ, to form at least a precedent of consistency, and if we can then advocate for CAPG to do the same thing, then I think we're in a best-case scenario for the near term. Long term, based on everything I'm hearing, this is a CAPI problem. I think the learning I'm taking from this, at least up until now, is that the friction between the proposal that landed in CAPI to solve this problem and the inability of the engineers to actually implement according to that proposal exists because the proposal was not aimed at the right abstraction layer. If we aim that proposal at CAPI itself, and if we implement managed clusters in CAPI as a first-class CRD, that is the way to solve this problem for the long term.
E
E
C
I
would
if
I
were
to
vote.
I
would
say
next
week
is
going
to
be
tremendously
distracting
and
we
should
sort
of
Empower
that
distraction,
because
it's
a
way
for
folks
to
run
into
each
other
and
do
ad
hoc
reconnaissance
and
all
that
kind
of
thing.
C
If that happens next week, I think that's fantastic, but I think we probably shouldn't set up a definite meeting with a definite agenda for Monday, with everybody landing in Detroit from all over the world.
D
Yeah, just a couple of things. If CAPA moves back to its original model, which is exactly the same as the CAPZ model is now, then you wouldn't have to make any changes, which I think is good, and then I can do CAPG in the same way, so it wouldn't be a problem.
D
I
I,
when
we
originally
wrote
The
Proposal,
we
had
strong
guidance
from
Upstream
Cappy
that
you
know
we
couldn't
make
changes
to
Upstream
Cappy
and
at
the
time
it
was
only
really
Kappa
saying
you
know
it.
Doesn't
it
doesn't
really
fit
managed
communities
doesn't
fit
the
current
cap
Cappy
model,
so
I
think
if
we
want
to
push
you
know
to
make
it
Upstream,
then
we
have,
to
you
know,
come
together
as
the
three
main
providers
or
and
say.
D
Ideally
what
we'd
want
and
agree
with
ourselves
before
we
take
it
up
between
the
cafe,
say
you
know
as
a
as
a
group
of
managed
providers.
We
think
these
changes
are
needed
because,
prior
to
that,
when
this
proposal
was
written,
there
was
there
was
Zero
appetite
to
change
up
through
Cappy
and
then
the
guidance
was
it
has
to
fit
the
current
model
so
that
that's
30
cow
output
to
to
getting
people
together,
as
it
probably
should
be
driven
by
the
actual
providers
that
are
going
to
implement
this.
A
Yeah, I was just going to say, I think our main constraint right now in terms of time for CAPZ is that we're trying to graduate the AKS feature out of experimental, and this discussion has been a blocker: not doing the change, but deciding whether we're going to do the changes or not. We want to decide before we make any moves towards making it non-experimental, because we don't want to say it's no longer experimental and then, two months later, turn around and make breaking changes.
A
So
that's
just
the
only
thing
is
like
we're
kind
of
waiting
for
that
to
happen
before
we
can
make
it
non-experimental,
but
I
I
think
it
would
be
a
good
game
plan
if
we
all
started
with
like
consistency
in
what
we
know
works
with
cluster
class
and
what
categy
has
since
that
would
work
for
kappa2
and
then
take
it
from
there
and
see
how
we
can
make
it
more
like
make
Cappy
fit
and
get
rid
of
the
useless
crd
Maybe.
A
Cool
well,
thank
you
so
much
Richard
for
coming
to
our
office
Towers.
This
is
really
really
useful.
I
am
I'm
glad
you
came
and
we
were
able
to
have
this
discussion.
A
All
right,
I
think
we're
almost
that
time.
So,
let's
punt
the
graduation
discussion
for
next
time
Jack,
since
we
probably
need
to
do
some
thinking
between
now
and
then.
A
And
then
just
a
reminder
that
next
week,
office
hours
are
going
to
be
canceled
due
to
kubecon.
So
we'll
see
you
all
in
two
weeks
and
if
anyone's
around
at
kubecon
I'll
be
there
so
hit
me
up,
we
can
meet
in
person.
C
And if you want to be included, please reach out over Slack; I think most people will have Slack on the ground. We can include lots of folks who are in Detroit in those conversations, with laptops and Zoom and all that kind of thing. So.