A
Okay, good morning to folks on the west coast of the United States, and good evening to folks in Europe and elsewhere further east. This is the inaugural Feature Group discussion (we're calling it a Feature Group, I think) on managed Kubernetes in Cluster API. So I'll say the standard disclaimer stuff; I think that's appropriate.
A
Welcome everyone! For folks who don't know me, my name is Jack Francis. I work at Microsoft, doing Cluster API and Kubernetes things with a focus on Azure. We're certainly here, I think, to represent the generic, cross-provider, non-provider-specific story for managed Kubernetes. It's great to see so many people here. I'll go ahead and share the agenda doc.
A
Since this is our inaugural meeting, and someone pointed out on Slack that I haven't done the correct thing and invited folks via the Google group (for cluster lifecycle, I think), maybe we can just focus today's meeting on introductions. I put a couple of agenda items in to make sure to call out the scope as it's currently defined in the proposal doc.
A
That way we can make sure that folks have lots of time to criticize it. I don't foresee scope being super strict or super concrete, but I think it helps to have some definition of what we're aiming towards, especially for folks who are driving by and want to know whether they should be spending their valuable time with us for 30 minutes every week or every two weeks.
A
So maybe, since this is the first discussion, do we want to do a round-table? Everybody raise your hand and briefly say who you are, and maybe where you are with managed Kubernetes and Cluster API. I would say this is optional; if you don't feel like you're up for that, that's totally great.
C
Hi, good morning to everyone, and good evening to those who are in my time zone or in Europe. I'm part of the VMware group based in Bangalore, the TKGS group, so we provide Kubernetes clusters over vSphere. Currently I'm working on Cluster Autoscaler along with ClusterClass, and we have some issues around that.
D
Hey, I'm Joe Crassett, with Oracle, working on Cluster API and trying to make managed Kubernetes work for our Kubernetes offering. We've kind of followed the previous managed Kubernetes proposal; I'm not sure if we were the first, but we're one of the ones following it. I know there are other big providers that have kind of gone their own path.
E
Hey everyone, I'm John Hewn. I work with Jack at Microsoft on Cluster API and CAPZ-type stuff. I started working on Cluster API and CAPZ about three months ago, so I'm still fairly green, but I'm getting the hang of things, and I'm looking forward to getting more done on managed clusters.
F
All right, I'm Richard. I work at SUSE, and I am a maintainer of the AWS provider. I did the original, or a lot of the original, work for EKS in CAPA, and, I guess relevant to this, I am one of the co-authors of the original managed Kubernetes proposal along with Winnie.
B
Yeah, I'm Alex. I work at SUSE as well, specifically on Rancher, and we're looking into CAPI and into smoother integration of CAPI and managed Kubernetes.
H
Hi, I'm Erwin. I'm working for [company name unclear]. We've also been working on our integration with Cluster API, we're very excited about it, and we're also at the stage of looking at how to create this more... yeah, so I'm happy to at least listen in and help out where possible.
A
Great, thanks everybody. I think that's all the hands I see, at least. I see that it's just me for agenda items, so I'm going to go straight to the scope thing. But if anybody, based on this conversation, or something you've been queuing up, would like to talk about anything, feel free to pop that onto the stack below mine and we'll get to it after. Let me do a quick link stress test to make sure I can get to the PR. Is the PR link here? Maybe it's not.
A
So this is the first instance of this kind of thing in CAPI, so we're probably going to get it a little wrong, but I thought it was useful to just document what we're doing as a way of aiding discovery for other folks who would like to be here but don't know that we exist. And again, someone on Slack suggested that the CAPI Google group is also an appropriate place to blast this info.
A
But I wanted to discuss... so it's a short doc, which is hopefully useful to folks. I wanted to discuss the way we define our scope. I actually thought there was an update; maybe I haven't pushed it, I need to refresh. All right, we'll work with this super concise one for now.
A
So what I've got so far is that we are scoped to standardizing managed Kubernetes solutions across the Cluster API provider ecosystem. There's not a lot of detail there, but the reason that was my initial go is to emphasize the key problem that we want to be solving: the lack of consistency across providers for managed Kubernetes. So, as Winnie and Richard mentioned earlier... let me just do this.
A
The old-fashioned way. It seems like most folks are aware of this proposal, but I'll pull it up just for fun. They spent a lot of time developing it. Winnie, feel free to chime in here, but my read is that it's half an observation on the current state of managed Kubernetes across providers, a sort of retrospective of learnings, followed by a set of at least somewhat concrete recommendations for new provider implementations going forward. So, where is this...
A
Here, maybe it's on the second page; it's been a while. Here we go, great. So this is a wonderful document that captures a lot of the historical gotchas around how we have done this. The key thing I take away when I read through this document is that there are a lot of different ways we have organically grown the managed Kubernetes space in Cluster API from provider to provider.
A
I think one of the goals of this doc was to solve the problem I described earlier, as far as consistency goes, by recommending a path. Where can I find it... I'm familiar enough to know that it's option three in here, the infamous option three. Here we go. We don't have to worry about internalizing this live in this meeting, but one of the intentions of this document was to try to solve, as best we could, the current scenario, in which there is no such thing as a managed Kubernetes primitive in Cluster API. So it's up to us providers to figure out how to compose the existing set of resource primitives into a solution.
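To make that composition concrete, here is a minimal sketch of the only wiring the core Cluster resource gives a provider to work with. The "Acme" kinds and all names here are hypothetical, invented purely for illustration; the API group names follow the usual Cluster API conventions:

```yaml
# Hypothetical provider "Acme" composing the existing Cluster API primitives.
# The core Cluster resource exposes exactly two provider extension points:
# an infrastructure cluster reference and a control plane reference.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-managed-cluster
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AcmeManagedCluster          # hypothetical infrastructure kind
    name: my-managed-cluster
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: AcmeManagedControlPlane     # hypothetical managed control plane kind
    name: my-managed-cluster-control-plane
```

The options discussed in the proposal differ mainly in what those two references point at, and in which of the referenced resources takes on which half of the contract.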
A
We
know
that
this
is
probably
not
going
to
be
feasible
in
the
at
least
in
the
short
term,
maybe
even
in
the
long
term.
So
I
think
we
know
right
now
that
the
Amazon
AWS
provider
for
eks
and
the
Azure
provider
for
AKs
are
using
a
different
option
and
it's
it's
going
to
be
non-trivial
to
sort
of
transition.
The
the
way
that
the
eks
and
AKs
Solutions
are
to
fit
this
option.
A
Three
recommendation
going
forward,
it
sounds
like
Oracle,
has
benefited
from
this
Doc
and
has
a
a
working
maintainable
solution
with
option
three,
which
is
fantastic.
So
again
the
stock
has
been
super
valuable,
Richard
and
Winnie.
A
One of the reasons I'm talking about this is to build a case for considering a follow-up task to this doc, which is building managed Kubernetes back into CAPI natively. But I want to talk it through to make sure that's a good idea. So would either Richard or Winnie be able to chat about the state of EKS managed Kubernetes? Richard, go ahead, you're raising your hand. Thank you.
F
Yeah, I can talk to that. I guess one caveat to highlight right up front is that when we wrote this proposal, we were given a strong direction that CAPI could not change, and that set the tone of the whole proposal. And the thing with the Azure and AWS implementations is that they preceded it.
F
So we had to go with what we thought, and actually the original implementations of both were the same, which is option two in this document: having this pass-through infrastructure cluster just to satisfy the contract with CAPI. In CAPA it was then decided to move, because we realized...
F
...that actually having this separate kind that is just a pass-through was a bit pointless, and that we could satisfy the infrastructure cluster contract and the control plane contract with the same single instance, which we did. So then we moved on to something else, and everything worked fine. And then ClusterClass came along. ClusterClass assumes that the infrastructure cluster and the control plane referenced from the CAPI Cluster are two different resource kinds, which basically means that CAPA cannot be used with ClusterClass for EKS clusters.
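A sketch of the single-resource shape being described, with names and version strings approximate rather than taken from any released CAPA manifest: both references on the Cluster point at the same AWSManagedControlPlane, and it is exactly this "same kind twice" pattern that ClusterClass rules out:

```yaml
# Single-resource pattern: one kind satisfies both the infrastructure
# cluster contract and the control plane contract. ClusterClass requires
# the two references to be distinct resource kinds, so this shape is
# incompatible with ClusterClass.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: eks-cluster
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: AWSManagedControlPlane
    name: eks-control-plane
  infrastructureRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: AWSManagedControlPlane      # same kind referenced twice
    name: eks-control-plane
```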
F
So since then, we have moved back to our original implementation, which matches the Azure implementation: having this pass-through infrastructure cluster just to satisfy the contract. And really, when we say satisfy the contract, it's two things: it's setting ready to true in the status, and also the control plane endpoint that we can see here, which is probably one of the biggest bones of contention in this whole implementation. The control plane endpoint is assumed to be set by the infrastructure cluster, because traditionally CAPI was designed for non-managed Kubernetes. So there was this assumption built into its architecture that there would be a separate load balancer created that would handle communication to the control plane nodes. That's not applicable with managed Kubernetes, where the managed Kubernetes service itself does this.
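In other words, a pass-through infrastructure cluster amounts to little more than echoing two fields. A rough sketch, using a hypothetical kind and illustrative values; with a managed service, the endpoint is copied from whatever the service reports rather than from a provider-created load balancer:

```yaml
# Pass-through infrastructure cluster: its controller provisions nothing
# and exists only to fulfill the two parts of the infrastructure contract.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AcmeManagedCluster              # hypothetical pass-through kind
metadata:
  name: my-managed-cluster
spec:
  controlPlaneEndpoint:               # copied from the managed service's endpoint
    host: my-cluster.managed.example.com
    port: 443
status:
  ready: true                         # tells CAPI the "infrastructure" is ready
```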
F
So we end up in this weird scenario where we have to put it on the infrastructure cluster. Great, so that infrastructure cluster has to watch the control plane, but the control plane won't start reconciling until the infrastructure cluster is ready. So there is this dance between these two reconcilers to satisfy everything and get a cluster provisioned.
A
Yeah, great. That makes sense to me, but I have a lot of background, so I'll try to summarize for other folks who don't have that background. It sounds to me like options one and two, as defined here, are really just descriptive rather than prescriptive options.
A
These
are
observations
of
what
folks
have
been
doing,
and
so
what
I
heard
you
describe
Richard
is
that
initially
Kappa
launched
with
something
like
option
one
and
then
ran
into
issues
with
cluster
class
compatibility.
The
thing
the
thing
we
didn't
speak
to
is
the
is
the
decision
not
to
go
to
option
two
sorry
to
not
to
go
to
option
three
after
committing
to
Cluster
class
compatibility
and
segment
option,
two
and
I
think
we've
had
conversations.
F
It was far easier to go to option two for us, because we don't have to plan for, you know, morphing our existing API over a period of time and worrying about migrations and deprecations of APIs and fields and stuff like that. So it's a very quick win for us to get to ClusterClass.
A
Right, okay. And I can speak really quickly to what Azure, or I should say CAPZ, is considering. CAPZ, as Richard described, is using something like option two, and we want to be consistent, so we have evaluated just going to option three, and it's totally workable. I think the main themes of this document are not controversial from a CAPZ maintainer perspective; it's simply a practical consideration. It would be a ton of work. But the main thing isn't the ton of work.
A
As
far
as
the
trade-offs
going
forward,
I
think
the
trade-off
starts
to
make
sense
for
cap
Z
if
all
providers
can
agree
to
sort
of
break
themselves
and
go
towards
the
same
pattern,
because
that
leads
that
at
that
point,
we've
achieved
consistency,
but
it
doesn't
that
doesn't
sound,
feasible
and
so
I
think
From
capsey's
perspective
that
it
sounds
like
from
kappa's
perspective
perspective,
although
I
don't
want
to
speak
for
Kappa
the
the
I'm
trying
to
find
this.
A
The key sentence: there's a sentence here which says that this is really operative for new implementations of managed Kubernetes, and that existing implementations are sort of grandfathered in, so we're not breaking any rules. I mean, we're a self-governing community, so we get to make up the rules as we go along; we can always agree upon a new consensus and modify the rules. But I think this document does explicitly state that CAPA and CAPZ are not breaking its spirit by not following the option three recommendation, which is for new providers. Richard, go ahead.
F
Yeah, there's an explicit non-goal, which was not to force existing implementations to go with the recommendation. And I think we also state in the recommendation that option three is, you know, the best and what we should aim for, but if you have to go with option two, or you're already there, then that's fine, because it satisfies ClusterClass. Incidentally, CAPG, sorry, the GCP provider, is currently heading towards option three, because that is the recommendation at this time. So we are adhering to the recommendation in here, which is option three.
A
Great. To be clear, CAPG's managed Kubernetes implementation is new, so...
A
Got it, cool. Great. So I think the TL;DR is that this is something like a post-mortem, a report from the front lines, that incorporates lots of learnings from Winnie and Richard from their history of doing managed Kubernetes. And for new folks, something like option three is probably going to get you into the least amount of trouble, because, again, there's no managed Kubernetes primitive in Cluster API.
A
So
what
Oracle
has
done-
and
it
sounds
like
what
capgee
is
doing
in
accordance
with
this
for
for
new
managed
kubernetes
implementations
is,
is
great
and
of
course,
for
folks
doing
that
work
in
option
three.
We
can
make
this
proposal
a
living
document
so
feel
free
to
incorporate
your
future
learnings
into
this.
To
maybe
refine
this
if
some
of
the
language
can
be
improved
over
time.
A
So,
having
said
all
that,
is
it
controversial
to
conclude,
based
on
this
is
just
a
sort
of
a
limited
set
of
all
the
learnings
over
the
past
couple
years.
Is
it?
Is
it
sensible
to
conclude
that
we
are
not
going
to
achieve
provider
consistency
without
a
cluster
API
primitive,
which
Richard
said
as
of
say
a
year
ago
or
whatever
that
timeline
was?
Was
a
non-startering
cluster
API
but
I?
A
Think
really
my
goal
to
spoil
the
the
the
the
Twist
and
the
plot
here
is
to
re-advocate
to
get
folks
together
to
advocate
in
cluster
API
for
a
reconsideration
of
that
position.
So
we
can
move,
manage
kubernetes
into
cluster
API
as
a
primitive
and
then
plan
for
a
V2
type.
Implementation
of
a
standardized
managed
kubernetes
across
providers
so
does
does
that
make
sense
as
a
as
a
statement
of
scope,
it's
not
as
I
was
reading
this.
A
...you know, a primitive resource, and we'd update the guidance for folks as that API evolves. Go ahead... actually, I'm going to call on Joe first. Go ahead, Joe.
D
Yeah, so from what it sounded like in the Cluster API office hours at KubeCon, it's not just us internally calling for managed Kubernetes to be, you know, not necessarily using the word primitive, but a part of the tooling. It sounded like an absolute ton of customers, a ton of users, want that as well. So I think moving it into that tool makes a lot of sense from my standpoint.
F
Yeah, I'd rephrase it slightly to say that we evaluate making changes to upstream CAPI to accommodate managed Kubernetes, not necessarily scoping it to having a separate kind or representation, because it might not need that. It might be that we just change, you know, where the control plane endpoint comes from, or various other things, which might be easier and more consistent with unmanaged. So it'd be good to, I guess, give it a slightly wider scope, because I know there is concern if we push for a separate managed representation.
A
Yeah, right. I definitely think that what we're actually going to end up discussing as a proposal into CAPI is a sort of moving target at this point. For me, in terms of what kind of outcome I want to see: I want to see this type of option one, option two, option three choice being just not even possible. I want to see something in CAPI that provides a happy path for all managed Kubernetes users, where they can't help but deliver an implementation such that provider A and provider B and provider C all follow the CAPI managed Kubernetes path, and at the end of that path they're all at the same place. So users in 2024, or however long this might take, especially users who are interested in multi-cloud managed Kubernetes, which I think is the canonical use case...
A
They have a really happy day where they're using Cluster API to do managed Kubernetes across Oracle, across Azure, across Google, across Amazon, and they're fulfilling a front-end interface where every one of those resembles every other one. So yeah, the word primitive is a little metaphorical; I'm not sure you can even call a CRD a primitive. But something to that effect, where we're just taking away this whole "here's a big menu, pick one" situation.
A
We
recommend
this
one,
but
there's
a
whole
bunch
of
other
options.
I
would
love
to
just
have
that
be
not
even
a
factor,
so
we're
almost
at
time.
I
do
like
these
scope
to
a
half
hour.
So
should
we
should
we
aim
to
do
weekly
and
then
there's
going
to
be
a
big
break
and
in
the
holidays
at
least
for
the
us-dominated
world
that
does
Christmas
and
all
that
kind
of
thing.
So,
let's,
let's
maybe
try
the
week
to
meet
next
week
and
then
at
some
point.
A
If
we
have
some
momentum,
we
can
probably
change
this
to
bi-weekly
or
maybe
even
monthly.
If
there's
an
effective
working,
async
sort
of
working,
Vibe,
okay,
cool
and
I
will
I'll
send
out
the
invitation
in
the
the
Google
group
and
then
I'll
work
with
all
you.
Folks
to
maybe
establish
some
communication
medium
where
we
don't
try
to
fit
everything
into
a
half
hour
discussion
in
real
time
and
make
progress
on
the
stuff
that
folks
have
talked
about.
A
So
this
is
going
to
be
a
long
journey,
really
excited
to
see
all
these
people
here
you
know,
Joe,
you
just
built
a
managed
kubernetes
solution,
so
you're,
probably
not
super
excited
about
re-architecting
that
and
then
asking
your
users
to
change,
but
I
I.
Think
the
good
news
is
it's
going
to
take
us
a
long
time
to
get
there.
So
it's
not
going
to
be
tomorrow.
A
All
right,
I
think
we
should
hop
off
before
you've
Raj
pops
on
and
like
who's
in
this
meeting.
So
let
me
stop
sharing.
Thank
you
very
much
Winnie
for
taking
notes,
stop
recording
all
I'll
work
with
Fabrizio
to
post
this
to
the
Cappy
YouTube
channel
and
also.