From YouTube: 20210617 SIG Arch Community Meeting
A: All right, welcome everybody. This is the Kubernetes architecture community meeting for June 17th, 2021, and thank you to Kirsten for getting our agenda put together today. We have interesting things to talk about, so why don't we get started. First on the agenda is Jason Hall, to give a discussion of kcp. And I guess I need to give you... you're going to present something, right?
B: Unchecked power!

C: There you go. Great, now I just need to figure out what to do with it. There we go. Can everybody see that?
C: Okay, great. So I realized only too late that the agenda says "demo of kcp," and I, in typical fashion, destroyed my cluster right before I joined, so I won't be able to do a demo. But for a video of a demo, Clayton, who is also here, sort of went through something you could do with this at KubeCon. I'll step back in case you haven't seen that demo: kcp is a minimal Kubernetes API server. It's basically a hacked-up regular Kubernetes API server where every resource that we don't need is completely hacked out. Things like pods, nodes, deployments, services, and so on are just ripped out. We have namespaces, we have service accounts and RBAC, secrets, config maps, and then everything else is a custom resource definition. Literally every other object: you can give it back pods, you can give it back nodes, but every resource that we don't strictly need, and every controller for all those resources, is completely ripped out.
C: This is a hacked-up version of the stock Kubernetes API server that we're using to experiment with some things and play around with. And then, if this experimentation is successful, we hope (expect, plan, hope) to contribute these things, this sort of architecture, back upstream to Kubernetes, so that you could run a Kubernetes API server in this fashion or embed it as a library inside your own application.
C: The first question, aside from arguing over which resources are needed versus not needed, is usually: what would I do with that? Why would I want that? I really like pods, I really like nodes, I really like all the stuff that Kubernetes offers me. So in order to convince people that this is a worthwhile thing, the demo that we hacked up was basically running this Kubernetes API server that doesn't know about anything as a sort of global control plane to talk to multiple clusters.
C: So one of the ways that we are experimenting with all of this is: if we wrestled the Kubernetes API server into a small enough shape, could we run it as a service, to run a multi-cluster scenario that just looks like a regular API server? So what that looks like is... I'll actually go to the architecture of this.
C: There are other things we could use this for, and I'll go into some of those as well, but the main one that we're focusing on right now is multi-cluster. So this is kcp. kcp is backed by etcd. Some people have asked if we could use other backing stores, and the answer is: sure, if you want to. We're not really focused on that, but in the same way that you can use kine to back it with SQLite, other people have experimented with other things.
C: What we do to enable multi-cluster is attach a controller to that. So it's a regular API server: you can do list and watch, and using the standard client-go libraries you can talk to this thing just like it's a regular API server. We define a Cluster CRD, and that Cluster CRD contains a kubeconfig to talk to a real cluster. It's not terribly secure; don't do this in real life.
C: Basically, everything is going to come with an asterisk, but: don't do this in real life. We're very much flying by the seat of our pants right now. But we run this cluster controller against kcp that watches for Cluster CRDs, and when a new cluster is registered into kcp, the cluster controller connects to that cluster and installs a syncer on that cluster. So this is the kcp. Can you see my mouse as well? I hope so. This cluster has all the regular controllers that run against it that actually do stuff with things: when it gets a pod, it will assign it to a node; when it gets a, you know, load balancer, it will do network stuff; and when it gets a volume, it'll do volume-type stuff. So a user can give... oh, the other thing it does, before it sets up the syncer: it has to figure out what resources this cluster knows about, because kcp is a big old dummy and doesn't know about anything.
C: It queries the API server and then tells kcp all the resources that that API server knows about. I'm going to gloss over a bit of complexity, but maybe we can come back to it later: if it talks to two clusters and those clusters disagree about the shape of a resource (say this cluster is running an old version of Kubernetes whose deployment doesn't have this field, and this one is running a newer version that does have that field), it needs to negotiate between those two and figure out...
C: What can I actually do? What can kcp actually do with the two incompatible types? We can come back to that; there's a lot of interesting, fun complexity there. But the cluster controller installs a syncer, and the syncer basically just copies objects assigned to this cluster down to that cluster, where they actually get stuff done.
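The negotiation glossed over above can be illustrated with a toy: take the field names that each physical cluster's copy of a resource supports, and keep only those that every cluster agrees on. `negotiateFields` is a made-up helper name; real schema negotiation also has to handle versions, defaults, and validation rules, which this deliberately ignores.

```go
package main

import (
	"fmt"
	"sort"
)

// negotiateFields computes the set of fields kcp could safely rely on
// across clusters: the intersection of each cluster's supported fields.
// It assumes each input list contains no duplicate field names.
func negotiateFields(clusterFields ...[]string) []string {
	if len(clusterFields) == 0 {
		return nil
	}
	count := map[string]int{}
	for _, fields := range clusterFields {
		for _, f := range fields {
			count[f]++
		}
	}
	var common []string
	for f, n := range count {
		if n == len(clusterFields) { // present in every cluster
			common = append(common, f)
		}
	}
	sort.Strings(common)
	return common
}

func main() {
	oldCluster := []string{"replicas", "selector", "template"}
	newCluster := []string{"replicas", "selector", "template", "minReadySeconds"}
	// The newer field drops out: only what both clusters support survives.
	fmt.Println(negotiateFields(oldCluster, newCluster))
}
```

This intersection view is one possible answer to "what can kcp actually do with two incompatible types"; another would be to track per-cluster schemas and convert, which is part of the fun complexity mentioned.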
C: It watches for deployment objects and says: okay, well, you're a deployment of 15 replicas; I know about three clusters; I'm going to create a deployment for each cluster, assigned to each cluster and sent down to each cluster, with some subset of that resource, sorry, of those replicas: your replicas split across these n clusters. And so it does that: it creates three extra deployments, each assigned to a different cluster. Each of those is synced to those API servers and runs and does the normal deployment stuff. There's a lot of complexity missing from this, like I said, that it should do, and we're currently working on having it handle...
C: ...first, objects beyond just deployments, and then eventually anything: it should be able to take any type, any CRD, and have some idea of what to do with it to get it assigned to clusters. And it should take into account things like scheduling and back pressure and all kinds of exciting stuff, but for right now it basically does this.
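The replica-splitting behavior just described ("a deployment of 15 replicas, three clusters") can be sketched in a few lines of Go. This is only an illustration of the idea, not kcp's actual code, and `splitReplicas` is a hypothetical helper name.

```go
package main

import "fmt"

// splitReplicas divides a deployment's total replica count into one
// per-cluster count each, as evenly as possible, with any remainder
// spread over the first few clusters.
func splitReplicas(total, clusters int) []int {
	if clusters <= 0 {
		return nil
	}
	counts := make([]int, clusters)
	base := total / clusters
	extra := total % clusters
	for i := range counts {
		counts[i] = base
		if i < extra { // hand out the remainder one replica at a time
			counts[i]++
		}
	}
	return counts
}

func main() {
	fmt.Println(splitReplicas(15, 3)) // [5 5 5]
	fmt.Println(splitReplicas(16, 3)) // [6 5 5]
}
```

A real placement step would also weigh cluster capacity and health, which is the "scheduling and back pressure" work mentioned above.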
C: A user can give kcp a deployment of n replicas, and this system will assign it across many clusters and update the status as they become healthy or unhealthy. To the user, it just looks like a single, regular cluster, right? All this complexity back here, all this other stuff, the many clusters, are invisible to the user. As clusters come online, the deployments will be rebalanced across those clusters; should clusters go away, they will rebalance among the available clusters. And, yeah.
C: So that's the sort of multi-cluster architecture. I will stop here for questions, because I've been talking for a little while, and I see the chat has some things in it, but I can't read it while I'm presenting, so I'll stop here. There's also more that we could talk about as far as other things kcp might be able to solve, and then, I don't know, I guess open the floor to questions in general.
D: Hi. So I guess my main question here was: where do we want to take this, right? And how would we end up using this experiment to bootstrap a revitalization of what we are doing in API Machinery, for example? Like, we've been talking about, you know, CRDs as, you know, the bootstrap mechanism for CRDs.
D: You know, there was a long thread that just ended, like, two days ago, with maybe, hopefully, some good outcomes. But then, in general: how do we use this experiment? What are we learning there that we can use to either reorganize the code, like throw things out, extract things out? Those kinds of experiments: what are we learning there that we can use to make our existing code base better?
C: Yeah, yeah. In particular, I think one answer to that is: while researching a lot of this stuff, I found a lot of issues along the lines of "it would be nice if CRDs supported this thing that built-in types support," or "it would be nice if this behavior was consistent between CRDs and built-in types." And then I completely ignored that as we cut all the built-in types out and made them CRDs, and therefore weakened them; like, pods in kcp can't do things that pods can do as built-in types. A lot of those issues actually point out that we would love to make CRDs have uniform behavior with built-in types, but it's a lot of work and we need to motivate that, right? We need to have a reason to take on that work. Right.
C: Yeah, yeah. So far, as far as I know, we haven't found any new issues that aren't already sort of known and tracked somewhere. A lot of them are known issues that are just low priority, because nobody has built anything that needs them to work, and now kcp might be that thing that needs, you know, field selectors to work similarly across CRDs and regular built-in types.
G: I was going to add, Jason, another way of thinking about it, too: there's a lot of people in the ecosystem, in the community, under SIG Multicluster, the multi-cluster working group, people working with API machinery, like k3s, virtual cluster, Cluster API, Nested. A lot of people, I think, are stretching the bounds of what Kube is, and one of the things that I kind of noticed was: everybody's kind of trying to solve a set of fairly similar problems, but they're not all the same problem, right? You know.
G: Someone's three problems are similar, and then that overlaps with someone else's one problem, and then they have two different problems. Some of this, in the short run, is less of a "here are some immediate things we could go and prove," beyond what Jason mentioned, but it kind of gets us thinking about six months or a year out. This is a prototype, very explicitly: let's think about transparent multi-cluster, so that a cluster is like a node; big idea.
G: Syncing is something everybody does, but it's hard, right? Like, we tried it with KubeFed v1; we had some challenges, and API management was a big problem. KubeFed v2 went a different direction. Karmada was the third direction. But now maybe we could say: oh, well, you know, if people want API tenancy, then API tenancy and CRD negotiation, which Jason kind of mentioned, could actually help with the things that tripped up KubeFed v1. So I think, in the short run, the list Jason had is a good one. I was kind of thinking...
G: Could we get enough interesting ideas together that people would say, like, "oh, well"? You know, the discussion about CRDs is like: well, what if everything was just a CRD? Who would go work on that today? The overwhelmed, tired, broken-down Dan Smith? Or, you know, could we create some excitement around "well, you know, I need this, and therefore I'm willing to jump on and say, well, let's go help the API server team," in a way that's not disruptive to the API server team? That was kind of the...
G: I think... I don't want Dan to panic or anything, and I don't want SIG Node to panic, or SIG Cluster Lifecycle. Like, how could we get some ideas out there that people respond to with "oh, I feel excited about this" versus "I feel beat down, and this is just one more problem on the list"? Yeah, David was panicking, and David's panic is valid, right? Like, somebody, at the end of the day, has got to keep supporting Kube.
F: I wouldn't say I'm panicking, so don't worry. I am... I've got to say, I'm a little confused about what kcp is trying to do. I can't tell if you're trying to make a generalized API server framework type thing, or if you're trying to solve a multi-cluster problem, or if it's something else. Like, generally, my feeling is that it's best to innovate along one dimension at a time, because it's pretty hard to get multiple things right at the same time.
G: I'm gonna push back, then, and say we are trying to do three hard things at the same time, because all the individual efforts along one line have ended up in kind of dead ends. Or not dead ends, but: people have gone and looked at namespace tenancy to death. Namespace tenancy has been done to death. What's something that would help namespace tenancy? Out of the blue: cluster tenancy, and we've done a few explorations of that. But that comes with costs and trade-offs, so we know that that's got limitations.
G: I'd probably say we're going in one direction from a number of existing projects and saying maybe those come together, maybe they don't. So, like, CRD tenancy works really well with multi-cluster, because multi-cluster has to be able to encompass the idea of different types coexisting. And then transparent multi-cluster: the multi-cluster stuff gives us a reason to care, because nobody's going to rewrite all their Kube apps to deal with multi-cluster.
G: How could we help everybody who's writing controllers and apps on Kube to go and level up? So I think it's not trying to be too many directions, but it is a couple of them at the same time, and maybe that won't work and we'll come back and be like, "you're right, Dan, we failed." I was getting frustrated because I do feel we haven't been able to bring a lot of the threads together, and Jason and I were kind of talking about this with various groups who were like...
G: "Oh, this would help us; tell us how it goes." That's why this is kind of framed as a prototype: let's do some ideation in the open and talk to various groups and see their use cases, versus, like, "I'm gonna open a KEP," you know, "Jason's gonna open a KEP tomorrow that says I think we should do these 700 things," versus coming back and being like, "well, we know that five of them failed, but two of them are really good." So...
F: Yeah, I mean, if you've got to do multiple hard things at a time, you've got to do it. But I still... like, just as one of the examples: do we feel like we know where we're going with regards to what a good multi-cluster aggregation, distribution, et cetera, system is, right? Like...
F: Maybe that's, like, a solved problem, in the sense that you know what you're attempting to do, and so the only thing that you're actually innovating on is how you hack up the API server to make that possible. In that case, then maybe it's fine to do multiple hard things at the same time. But I get the sense, more, that it's kind of an open world, and we need to experiment both on the "how do you aggregate stuff" side and on the...
F: ..."how do you get the API server to do the thing that you want it to" side, right? If you've got to solve both of those problems at the same time... I'm not saying it's impossible or anything, if only because I hate it when people tell me that things I want to do are impossible, but it does sound pretty hard.
G: And I think we've seen a little bit of that, right? There's a lot of interest. A couple folks I talked to were like, "oh..."
G: The idea is that it's kind of open-ended, so maybe there's two parts of it. I think there's a technical exploration: force yourself to think about something long enough and new stuff starts occurring to you. Another part is a little bit of morale building: could we build some ideas that are interesting, that maybe inspire, and then we come back and actually say, "well, oh, you know what we could do? We could go take virtual cluster..."
G: "...and amp it up to 11 by doing these two things." One of the things I was seeing, and Jason, I don't know if you've seen this as well, is that there's a lot of people in the community who aren't familiar with all the other things other people have done. A little bit of this, in my head, was: could Jason, by going through this scenario, act to work with a bunch of groups and be like, "hey, what did you learn? What did you learn? What did you learn?"
A: So, let's... I think David's had his hand raised for a while. I also want to make a comment, but let's have David go first. Thanks.
H: So I was wondering if this could be done in such a way that first we build out an API server that fulfills Kube-like contracts for list and watch, in a way that you would then be able to reuse it to build the particular use case you have around federation, right? So I can imagine utility in being able to say: you know what, here's a Kube-like API server that serves CRDs, and you can have policy around it, and now you can connect to this.
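The "Kube-like contracts for list and watch" mentioned here can be sketched as a small interface plus a toy in-memory store. The shapes below are simplified assumptions for illustration, not the real client-go or apimachinery interfaces: real watches resume from a resource version, carry typed objects, and handle reconnection, none of which this toy does.

```go
package main

import "fmt"

// Object and Event are simplified stand-ins for Kubernetes objects and
// watch events.
type Object struct {
	Name            string
	ResourceVersion string
}

type Event struct {
	Type   string // "ADDED", "MODIFIED", or "DELETED"
	Object Object
}

// ListWatcher is the toy contract: list current state plus a resume
// point, and stream subsequent changes.
type ListWatcher interface {
	List() (items []Object, resourceVersion string)
	Watch(sinceResourceVersion string) <-chan Event
}

// memStore is a toy in-memory implementation of ListWatcher.
type memStore struct {
	items []Object
	rv    int
	subs  []chan Event
}

func (m *memStore) List() ([]Object, string) {
	return append([]Object(nil), m.items...), fmt.Sprint(m.rv)
}

func (m *memStore) Watch(since string) <-chan Event {
	ch := make(chan Event, 16) // buffered so Add never blocks in this toy
	m.subs = append(m.subs, ch)
	return ch
}

// Add stores a new object and notifies all watchers.
func (m *memStore) Add(name string) {
	m.rv++
	obj := Object{Name: name, ResourceVersion: fmt.Sprint(m.rv)}
	m.items = append(m.items, obj)
	for _, ch := range m.subs {
		ch <- Event{Type: "ADDED", Object: obj}
	}
}

func main() {
	var lw ListWatcher = &memStore{}
	s := lw.(*memStore)
	s.Add("config-a")
	items, rv := lw.List()
	fmt.Println(len(items), rv)
	w := lw.Watch(rv)
	s.Add("config-b")
	fmt.Println((<-w).Object.Name)
}
```

Anything that satisfies a contract like this, whether backed by etcd, SQLite, or memory, can sit behind generic controllers, which is what makes the reusable minimal API server idea attractive.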
G: Yes, I think, if I understand... So maybe, like... A lot of people were saying, "hey, I wanted to hack up the API server in this dimension," or "I wanted to do this." Is that what you're thinking, David? Like a stripped, k8s.io API server plus-plus, Kubernetes-the-package minus-minus? Or are you thinking in a different direction?
H: I'm thinking something along those lines, but I would like to sort of see that idea built out in a less hacky way, to endorse it as, like, a "hey, we think this is a good thing," and then try to build your syncing layer, where you're reconciling schemas across it and figuring out how to create a union custom resource with multiple versions in it.
G: And in the project, what's kind of come up is that there are really three different things at the same time. One is what you're describing; I definitely think that was probably the easiest of the things to pull out first. I was hoping that a lot of people would come and complain about what they wanted to change.
G: I think there's maybe a discovery phase that still needs to happen: who would be incentivized to go through all of the different people who've made requests, and all the issues in Kube, and all the folks who've done talks at KubeCon over the years, like Daniel or Jason DeTiberus, and then go through those requirements and come up with a list? That's probably something that I would absolutely expect to be split out of this into its own thread, for sure, not coupled to these pieces. It's not intended to be coupled to these pieces.
G: These pieces require some of that hacking. The next phase is more of a principled exploration of the design space, I think.
H: Okay. Because I do think that we can gain traction on the idea of "here's my stripped-down server and what I want to serve with it," but I think the ideas around schema unification, how that can even exist with independent failure domains, and how it interacts with federation, bring up a lot of history with people and are not as well agreed.
H: I don't know if Tim Hockin is here, but... yeah, absolutely. All right. That's it. Well...
C: You are not the first person to come up with that feedback. The multi-cluster possibility is one, I think, very exciting, flashy, scary thing you can do with the minimal API server. Definitely, there are already people building a lot of stuff on CRDs that aren't scheduling pods to nodes or doing any of the other baggage.
C: I'll use your word, "baggage," that the rest of Kubernetes gives them. And, you know, if a minimal API server that didn't have all of that baggage existed, and was easily embeddable and easily runnable, I think an ecosystem of things would flourish around it. I mean, things are already flourishing with the baggage, so if you remove the baggage, it seems like it would be a good area of exploration.
C: Absolutely. Like, multi-cluster is... it's very hard to frame, right, because Clayton's saying we're solving three problems, or we want to solve three problems. Multi-cluster might not be the one that's exciting to you, to this audience, and in fact adds...
G: What I was noticing with the minimal API server is that each of the use cases is niche; like, everybody's going in way different directions. I was kind of hoping that there's a little bit of a pillar, where there's three layers, and when you point the pillar out, people are like: "oh, I can see you have a couple layers. Can I just get that bottom one?"
G: They come out of the woodwork a little bit more to say, "I like this, but can I get rid of all this stuff?" And then you've got them hooked, and you're like: "yes, you can; let's go and do a working group that's a little bit separate from this big, crazy multi-cluster thing," and then they feel better. So, like, I'm doing, you know, Machiavellian mind tricks with the community; I apologize in advance. A great example, and this gets to David's point: when we did this example...
G: I was like: I want namespaces, but namespaces are in core, and pods are in core. You can't cut things out of core in Kube and have a generic client talk to them. So some of those things kind of came up, which was like, "oh, there are some limitations." So I'd say we're really in the prototype phase; it's a little bit tall, and really, the moment people are interested in kind of going broad, we'd cut that part out and go broad on it with a different group.
G: This is a very savvy, forward-looking group; you all are the ones that I want to trick into the lower-level stuff. But then everybody else is kind of like, "yeah, this sounds like kind of a science experiment in the Kube ecosystem," and then it's like, "well, but what if you could get multi-cluster?" and they're like, "tell me more." So it's a little bit of a rabble-rousing effort, for "let's get excited about the idea of Kube."
D: Right. I've got to drop, but I just wanted to add one scenario: like, we were talking about the disconnected node on the edge, right? So if we have this there as a proxy that can talk back to the main API server, you know, that is definitely one more example of things that we could do.
G: Yeah. The use cases people have come up with for that require a couple of changes to our client ecosystem that are actually very similar to the changes that you need for a not-Kube API server talking to multiple clusters. So some of this would be: can we get a couple of people familiar with enough of those ideas that they see those connections, and then go, "hey, let's go recruit someone who's excited about this"? How do we find parallels? So anyway, thanks.
C: I saw that in the chat. I had not seen it before his KubeCon talk, and I don't think it's a bad idea at all. I think it's...
C: So yeah, it is very similar. I think he was coming at it from the point of "Cluster API needs some bootstrapping thing": like, you don't want to have to have a cluster to create and manage clusters. So yeah, that's a good one.
A: But that's showing the demand. I mean, there's a demand for that minimal API server within the Kubernetes developer community, and, according to what Clayton's saying, he doesn't think the outside world sees that yet, but they will when they want to do some of these things. Okay, cool. Thank you all. Okay, Hippie, you're up, man.
I: The good news is we're likely going to get about 20 points for 1.22.
I: We've had four promotion PRs this week, for a total of 10 points merged: 10 different endpoints within the apps area. And we have three more tests merged that we're going to still wait on, to get a soak for a while and make sure they're clean, to get in at least seven more.
I: These alone would get us up to 88 percent by the end of 1.22, and our hope is that we'll get 100 percent conformance coverage for people using the apps API to deploy their applications by 1.23.
I: There's the increase in coverage for the apps area that we're focused on right now, which I think looks great; just a few more lines to color in there. Those light-colored ones will become darker when those seven endpoints are promoted to conformance, and then those seven remaining ones should be filled in, like I said, by the end of 1.23. We had some folks join our call and got to see some of our pairing, our sharing.io stuff, and they were quite impressed with our workflow things.
A: Okay, excellent. Thank you. So, one question on coverage: how are we doing on catching up? We had a list of endpoints (obviously, apps was a big one) that have been in the API for a long time but have not been covered by conformance, and you're burning that down. How is that coming overall, as opposed to just apps specifically? Do you have a...
I: Can you pull up apisnoop.cncf.io? Okay, so glad you asked; this is where you can find that information. We keep this up to date, and we go update this whenever there's a... That's a fun one, cncf.org; it's apparently someone else's fun place to play. And this is us. For some reason it's showing 1.15.
I: You can see what it was back in the day, but let's look at what it looks like now by switching releases to 1.22, which is our current workflow, and it looks much better. And this is where it's lacking: if you click on green, it's probably the easiest to show that stable still has holes: storage, RBAC. And if you want to go down a little bit further... I can't quite... there's a little check box.
I: Next, we'll focus on that, but apps just seemed like the most obvious one; core is kind of spread across several different things, yeah. And if you want to see that list specifically, you can scroll back up to the top. You can mouse over, which is a little hard to explore, but if you go back up to the top there are some links that are interesting, the main one being "progress." So, on the progress area...
H: Yes. So I figured I would mention that the 1.22 PRR review went pretty well. The biggest problem that we found with it was that there were a lot of last-minute PRR requests: a very high percentage came in and filled out the questionnaire in the last three days before KEP freeze. That represents a sort of loading problem for review.
H: So now that we've collected the percentage, we're planning to go to the release team and see if they can add a check earlier in the process for whether the PRR questionnaire has responses. That'll make it easier to review, and people will have thought of it in advance.
H: But overall, the interactions that we had were positive, right? The comments that we gave were well received; people would look at them and say, "okay, yes, I can make it easier for a cluster admin or an end user to understand whether this feature is working correctly," and the SIG leads were taking it seriously and helping to develop good answers.
H: The yearly survey that we have been sending out (actually sent out once last year, and sent out again this year) to try to gauge the impact of our efforts has been out, but we haven't looked at the results yet.
A: Yeah, I'll pop in there that we really don't have a great response rate. We only have about 27 responses right now, which is a third of what we had last time, which already wasn't all that great. So if there's anybody on this call who operates, ideally, fleets of Kubernetes clusters and has not yet filled out the survey, please click on that. Every little response helps, and we'll have to do some more.
H: To get a few more people to answer: there is actually a group, I just found them last week, inside of contributor experience that owns some Twitter accounts, and if you haven't contacted them already, they can actually help.
A: I will talk to Bob. He did put it out on the CNCF user group, which was, I think, helpful, but I don't know if it's been out on Twitter, so yeah, I don't...
H: I had to say today, when they were asking, like, "can you review this?": I don't know what the Twitter rules are, like how this thing is size-limited, right? I don't know how many characters I'm allowed right now. That was awkward, but they were very friendly, so we'll be looking for that response. Having the data every year will help us; I think next year will be the one where we will finally be able to say: did we help, or did we not?
A: Okay, awesome. Thank you, David. Okay, that's our agenda. That was quick; we could have talked for 21 minutes on kcp. John, would you mind... Kirsten did come to the call, and I always like to have a little bit of visibility on some of the tooling. As someone who hasn't seen any of our stuff before, I'd like to hear from her a little, maybe on what it was like to see that work, and maybe a little bit about what she saw about pairing.
E: Oh, sure. So I can just briefly say: I attended the conformance call just to kind of get a feel for what was going on. It was, like, super well organized, and they were going through all of the, you know, work that they had in progress. But then the lovely gentlemen in the group actually stayed around and started showing me some of the software that they're using to pair and do their work, which was, like, super-duper cool. So basically they have this...
E: I forget what the website is, but they have this tool where you can share a terminal, but then, within that terminal, you're actually in a Kubernetes cluster. So, like, you'd be working with Hippie and you're in the actual cluster, sharing a terminal, but then you can also run your tests and do all of these other things from within this shared app. And, I don't know, I actually thought it was just really cool.
E: It was very polished as well, so it just felt like a nice tool, and it just seemed, I don't know, to me it seemed really impressive. I was like, "wow, a lot of people are going to get a lot of usage out of this," because you're literally sharing all of the things that you need to be productive, not just, like, one screen here or one screen there.
E: It was just kind of an all-in-one sort of tool, and it seemed really well thought out. I just thought that that was actually super impressive for me; like, I was extremely impressed by that, and by how much time is probably not wasted, because everything is all there. And they totally just showed me everything and answered all of my dumb questions.
E: So it was a really great experience, just being on the call and kind of seeing the work that they're doing for conformance, but also these just cool tools that they're developing to facilitate that work. Because tooling is also a thing that can help other people get a little bit more involved: you know, when everything's there and you can pair and share all of this info, it just seems a little bit less intimidating. So I was super impressed; I thought the tool was pretty slick, honestly.
A: All right, thank you all. Anything else before we give back 15 minutes or so?