From YouTube: 2021-06-22 Community Meeting
A: All right, welcome everyone to the June 22nd community meeting. We have, as I said, a packed agenda. A couple of things really quick: I wanted to go over the SIG Architecture group. The Kubernetes SIG Architecture group had Clayton and me come talk to them about kcp.
A: I think it went pretty well. My own fault, I think, that I talked a little bit too much about multi-cluster, when it turns out they mostly cared about the minimal API server.
A: The slimming-down-the-API-server part. And there were, as you might expect, a bunch of very valid, very reasonable questions about how multi-cluster would work, but mostly we could have ignored all of those and just talked about the minimal API server the whole time. I'm still waiting for them to put up a recording of that, but when it happens I will post it to the Slack and update this with it.
A: They saw kcp and were interested in possibly sharing some of the syncer logic, or the syncer. Whether or not they end up using actual code, at least we're doing similar things with syncers, and we could share experience and knowledge. They purposefully don't have a multi-cluster story built in, and they were very interested in kcp's multi-cluster opportunities.
A: I would say it's still early days with Octant. I'm sure they are making assumptions about a real Kubernetes cluster that we will break, and I'm sure we are making assumptions about the type of client requests we will get that they will break for us. So I think it will hopefully be productive in terms of figuring out how we are making bad assumptions about each other.
B: I wanted to add to that: Jessica Forrester, who works on the OpenShift UI, was actually starting to look at some of this, and she was going to try to reach out to some of the folks there to see if there are any areas for discussion, presenting more of a user-focused perspective, probably mostly on the multi-cluster side and the use-case side. So that was the thread she was mostly interested in.
B: She was able to get the OpenShift UI, which is fairly complex, working against kcp, and she hit some of the common problems that over the years have driven a lot of kube API server machinery; honestly, some of the limitations that we'd hope the minimal API server, and improvements to the API server for multi-cluster and logical-cluster use cases, would address.
C: She was going to follow up. I don't think she has had a chance since then, but I do think there's a useful sub-workstream thread on what multi-cluster concepts make sense for users who are doing this, which aligns mostly with transparent multi-cluster, but not completely.
A: Yeah, that's very exciting. I didn't know the UI team was looking into that. I would guess that for both that UI and Octant, and probably kubectl, plenty of things, a large class of their problems with kcp will boil down to not handling "what is a pod" correctly. They probably handle "I don't have access to list pods", because that's something you might hit; what they wouldn't handle is the API server not even knowing what you're talking about, not even knowing what a pod is. That, I assume, is fairly easy to handle. Beyond that, there are things like the known limitations between CRDs and built-in types: something that uses a field selector for pods wouldn't be able to if pods were a CRD.
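The field selector gap mentioned here can be sketched concretely. This is a toy illustration, not actual kube-apiserver code; the supported-field sets and the function name are invented for the example. The underlying fact is real: built-in types like Pod register extra selectable fields, while CRD-backed resources historically only support the metadata ones.

```python
# Toy illustration of why a field selector that works for built-in Pods
# fails once the same resource is served as a CRD: built-in types register
# extra selectable fields, CRDs historically only get the metadata ones.

# Field selectors the generic CRD path supports.
CRD_SELECTABLE = {"metadata.name", "metadata.namespace"}

# Built-in Pods additionally register fields like these.
POD_SELECTABLE = CRD_SELECTABLE | {"spec.nodeName", "status.phase"}

def check_field_selector(resource_kind, selector_field):
    """Return True if this sketch's server would accept the field selector."""
    supported = POD_SELECTABLE if resource_kind == "Pod" else CRD_SELECTABLE
    return selector_field in supported

# A client that assumes Pod semantics breaks when Pod becomes a CRD:
assert check_field_selector("Pod", "spec.nodeName") is True
assert check_field_selector("PodAsCRD", "spec.nodeName") is False
```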
C: I think that opens the door, too. A lot of the things you mentioned: our e2e tests and conformance tests in kube today are extremely heavily biased towards whatever tests someone wrote once, which we then cleaned up a little bit for conformance, or someone came in, and it relies a lot on a kind of all-hands, zone-defense approach, and it's not necessarily very thorough. We're definitely going to.
C
As
we
start
hitting
things
that
are,
you
know,
is
this
a
point
of
consistency
for
all
cube
or
not?
I
think
we
should
be
very
deliberate,
around
conformance
and
its
implications
around
api
servers
and
look,
you
know
think
about
things
that
we
would
suggest.
As
you
know,
you
shouldn't
assume
that
the
core
api
group
is
available
at
slash
api.
Unfortunately,
it's
available
api
for
backwards
compatibility.
C
Did
I
ever
drop
that?
How
do
we
get
those
topics
into
lists
that
people
care
about,
and
how
do
we
have
motivations
for
people
to
actually
care
and
go
fix
that
stuff.
A
Yeah
yeah
yeah,
I
hadn't
thought
about
confirming
how
this
work
relates
to
conformance.
I
had
always
sort
of
assumed
that
that
conformance
would
continue
to
exist
for
upstream
but
yeah.
If
we
start
to
upstream
changes
where
core
types
aren't
there
or
core
types
behave
differently,
then
that
will
have
conformance
down
or
even.
C
Is
it
a
bug
that
pods
and
crds
don't
support
the
same
patch
mechanisms,
which
is
something
david
as
we're
going
through
cube
control
patch
assumes
that
strategic,
merge
patch
will
be
available
and
the
the
resolution
effectively
for
api
machinery
was
like.
Oh,
this
is
hard.
I
guess
we
just
give
up
and
we
don't
do
it,
which
maybe
strategic
merge
patch
should
not
be
part
of
conformance,
because
you
can't
represent
all
types
for
it.
What
are
the
implications
for
end
users?
I
don't
think
we
should.
We
should
be
triggering
these
and
recording
them.
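The patch difference being discussed can be shown with a minimal sketch, using plain dicts rather than the real apimachinery code. The function names and container lists are hypothetical; the semantics illustrated are the real ones: for built-in types the server merges lists by a merge key declared in Go struct tags, while for CRDs no such metadata exists, so a JSON merge patch replaces lists wholesale.

```python
# Simplified illustration (not the real apimachinery implementation) of why
# strategic merge patch doesn't generalize to CRDs: built-in types declare a
# patchMergeKey for lists; without it, a JSON merge patch replaces the list.

def json_merge_patch_containers(existing, patch):
    """JSON merge patch semantics: lists are replaced wholesale."""
    return patch

def strategic_merge_containers(existing, patch, merge_key="name"):
    """Strategic-merge-style semantics: merge list items by merge_key."""
    merged = {c[merge_key]: dict(c) for c in existing}
    for item in patch:
        merged.setdefault(item[merge_key], {}).update(item)
    return list(merged.values())

existing = [{"name": "app", "image": "app:v1"},
            {"name": "sidecar", "image": "sc:v1"}]
patch = [{"name": "app", "image": "app:v2"}]

# Strategic merge keeps the sidecar; JSON merge patch drops it.
assert len(strategic_merge_containers(existing, patch)) == 2
assert len(json_merge_patch_containers(existing, patch)) == 1
```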
F: And to go further on the strategic merge patch front: this is maybe the least worst problem, because it seems it's one of the parts of the changes and hacks we did that could quite easily be pushed back to the existing...
F: ...Kubernetes, right? Because even for CRDs we are seeing this; we already have the schema available, you know.
C: ...for the minimal API server to be successful; like, if we can find use cases, which is an assumption, and if people want to use the minimal API server, what does "minimal API server" mean? And what are the end-user workloads that we are empowering to remain consistent? Which is the original goal of conformance: conformance is very little about "does kube work correctly"; it's about whether, from an application author's perspective, you can rely on these things, and whether that acts as a reinforcing mechanism in the ecosystem to say "let's try to converge rather than diverge".
C: I think the minimal API server would have some implications there, so I can add that to the minimal API server implications, which is, you know, a few notes; and maybe, David, I can get you to describe some of the things you had, and we can add those to the minimal API server notes as a sub-thread on conformance, yeah.
F: I still have a pending task that I will probably tackle after finishing the API negotiation work, which is to take the various changes we did down in the Kubernetes feature branch and really document them: those which are complete hacks that we should do completely differently, and we know it, and those that could be reintegrated quite easily into the existing, I mean the current, state of Kubernetes.
C: Actually, that's not really true. I would probably say they're hacky in the sense that they're basically just the basic shape of an idea. The touch points for kube are much, much smaller.
C: The touch points to actually start a minimal API server are horrific, because kube is implemented as a very specific API server, and the generic libraries are only part of the story. So that should go into the API server, and I think (my latest PR added a few details, but I think you're right) we should say that anything we have, like logical clusters, would have to be justified on its own. We should be looking for use cases that justify the plug points they need that are not just logical clusters, and there are some: like kind's storage-layer...
C
Plugability
is
absolutely
the
same
thing
as
what
logical
clusters
need
yes
and
then
there's
a
higher
level
of
plug
ability
around
like
api
handlers
and
wrappers
middle
http,
middleware
that
maybe
isn't
justified
for
kind,
but
might
be
justified
for
something
like
a
a
rate.
Limited
multi-tenant,
cube
api
server
or
improvements
that
people
want
to
make
to
priority
in
fairness
or
whatever.
F: Yeah, I'd say the worst hacks are maybe those related to CRD tenancy for now, because it's very tied to the fact that all the CRD machinery is based on up-front controllers, and structural OpenAPI schemas, and so on; it's quite a pity that you have to index everything per logical cluster, etc.
F: So that's the part that we should obviously not implement this way in the future, because we would not want to use up-front controllers to manage the OpenAPI schemas, but rather completely change the approach. So, I mean, that's the place where the hacks are the worst; these are the hacks that we would not keep, obviously.
A: Yeah. David, while you are talking, do you also want to go over updates about the API negotiation stuff? There were a couple of absolutely fantastic PRs that are large and complex, and I haven't had time to completely review them, but they are very, very good if you're interested in how this works. I don't know, David, if you want to do the video, or talk about what the video shows.
F: Well, I don't want to take too much time. The video is, I think, 10 minutes, so I can also make it quicker if you want, or, depending on the number of people that showed up, maybe switch to questions; whatever you prefer. But yeah, we can play the video if you think so.
A: Well, yeah. Just to review, because I don't know if everyone knows what the design for this is (and we need to write this design down, to make sure it doesn't just live in our heads): basically, in order to produce a CRD type, you go through an APIResourceImport type, yeah, that David's PR defined.
A: Yeah; maybe I can summarize, or you could summarize, but...
F: Yeah, I mean, maybe just looking at the demo would explain it at the same time, I don't know if you think so.
C: Switch to theater mode, so it's bigger.
F: And so, yeah, here in the demo we are adding... in fact, we just used two clusters, one on Kubernetes 1.20 and the other one on 1.15; the east one is the newest one, on 1.20. And here we just created an API resource import.
F: That means that we just added the cluster, and instead of immediately creating the CRD as it was, without any check, as it existed before, we create a custom resource which is the APIResourceImport; and since there was no import of this API, the deployments API, until now, it will immediately create a NegotiatedAPIResource. So there are two objects: the APIResourceImport, of which you mainly have one import per location, per physical cluster...
F: ...typically, because you can import deployments from three physical clusters, and then all the APIResourceImports for deployments coming from all the physical clusters would finally result in only one NegotiatedAPIResource, which is a distinct CRD; and the NegotiatedAPIResource is mainly the result of the schema that is expected to be used in the logical cluster for deployments. And by default...
F: ...but this is configurable: when a NegotiatedAPIResource is created, like here, because you imported deployments at least once from a physical cluster, it created a NegotiatedAPIResource, but by default it's not published, because you might want to import deployments from other locations as well before publishing, so that the resulting schema is the lowest common denominator of all the deployments that you pulled from the various sources; and only then, when you have the LCD of the deployments, publish.
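The lowest-common-denominator negotiation described here might be sketched roughly as follows. This is a hypothetical simplification, not kcp's actual implementation: schemas are modeled as nested dicts of properties, and the function returns both the intersected schema and the list of dropped field paths, the kind of information the demo later surfaces in the condition message.

```python
# Rough sketch (hypothetical, not kcp's real code) of LCD negotiation:
# intersect the property trees of two simplified structural schemas and
# record which field paths were dropped along the way.

def negotiate_lcd(a, b, path=""):
    """Return (lcd_schema, dropped_paths) for two property trees."""
    lcd, dropped = {}, []
    for field, sub_a in a.items():
        here = f"{path}.{field}" if path else field
        if field not in b:
            dropped.append(here)          # present in a, missing in b
            continue
        if isinstance(sub_a, dict) and isinstance(b[field], dict):
            child, child_dropped = negotiate_lcd(sub_a, b[field], here)
            lcd[field] = child
            dropped.extend(child_dropped)
        else:
            lcd[field] = sub_a            # leaf present in both: keep it
    return lcd, dropped

# Deployment pod template, heavily abridged: 1.20 has fields 1.15 lacks.
schema_1_20 = {"spec": {"containers": "array", "ephemeralContainers": "array",
                        "overhead": "object", "hostname": "string"}}
schema_1_15 = {"spec": {"containers": "array", "hostname": "string"}}

lcd, dropped = negotiate_lcd(schema_1_20, schema_1_15)
assert "ephemeralContainers" not in lcd["spec"]
assert sorted(dropped) == ["spec.ephemeralContainers", "spec.overhead"]
```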
F: So that's the default way, but of course you can also choose to auto-publish as soon as you import at least once. But here we publish manually: as you saw, I patched the NegotiatedAPIResource, and then the CRD is created.
F: And if we look inside the CRD here that was imported from the physical cluster, we see that we find the ephemeral containers, which is just part of the Kubernetes 1.20 deployments schema. And we can see also that the APIResourceImport coming from the location us-east-1 is now marked compatible with the NegotiatedAPIResource (of course, there was only one import), but also available: that means there was one corresponding CRD applied, and it was published in the...
F: ...logical cluster's OpenAPI schema. And now the point is to add a second cluster, to join a second cluster into kcp, which is in fact Kubernetes 1.15. So it will, among other things, miss the ephemeral containers in the schema, and we can see that automatically there was a check of consistency, of compatibility.
F: Yeah, automatically a check of the compatibility between the schema of the newly imported deployments, which comes from Kubernetes 1.15, and the existing schema that is currently being used by the NegotiatedAPIResource, which was based on the Kubernetes 1.20 schema; and of course it was seen as incompatible, as we can see in the list.
F: And now, if you go into the Compatible condition of the second import, the one that comes from the second physical cluster, and we go to the next one, then we can see here, in the message of this condition, all the fields that were removed, in fact that are missing in the second schema, compared to the NegotiatedAPIResource of deployments: typically ephemeralContainers, overhead, setHostname, etc., and a number of those that were added in between; mainly everything that had been added in pod templates between Kubernetes 1.15 and 1.20.
C: I might ask this one: this seems like a good one to materialize as a real status field, which is "incompatible fields and the reason why". Having it in the message is interesting, I think, for tools; and then, just thinking about it (I was kind of running through this in my head as you were doing the demo): say you have a GitOps flow and you wanted a tool that said "go...
C: ...look at 15 different servers and calculate whether they're even going to deploy", so you could run this in a GitOps config loop alongside, like, your 30 servers. So, thinking about how we would decouple this, I was like: okay, I'd want to actually get a structured output of incompatible fields, and I'd probably want deeper details, and it kind of felt like a status field that would probably be richly structured. I think it's okay to summarize in the condition, but I was kind of wondering: when someone really screws up...
C: I mean, okay, most things are probably just going to be one or two fields; when someone really hoses it, what does that look like? Maybe really hosing it just isn't that common, because it's either close or it's no cigar, and I think, yeah, this is an example of a no-cigar. Actually, what is this; how many releases was this, five? This is like...
C: When I run this tool against three servers and get a list of output, how would I then say: okay, given the generalized form from a CLI tool across these three servers, go take the negotiated one and tell me how many incompatible kubeconfigs I have in a directory? This is valuable.
C: This is probably something that most teams who are running multi-cluster should be doing right now. So we've already found and improved on the state of the art here. Now, I don't know if anybody else is doing it; some of the linters are doing stuff like this, but this feels like something the linters should be using as a library, rather than vice versa. Yeah.
F: So, in fact, I created two PRs. The second one is not finished, I'm still fixing some stuff, but in the first one the comparison is mainly just a library. In fact, it's just a function that both checks the compatibility and calculates the LCD.
F: There's an argument to allow narrowing the existing schema, if you want that. And it's always a comparison between two, because in fact we do things iteratively; you never add 10 clusters at the same time. So it's always just a comparison...
F: ...of a subtype, in fact, in other words. So yes, for now the errors that I return are mainly Invalid errors, so there is the path in each error separately; we could do something more structured than what I just dumped into the condition message, if it was necessary.
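The structured-output point being made here could look roughly like this. The error and status shapes are invented for illustration, not the actual kcp API: the idea is just that per-field Invalid errors already carry a path, so the same data can feed a human-readable condition message or a structured status field that tools (linters, GitOps checks) consume directly.

```python
# Sketch with hypothetical shapes (not the real kcp API): the same per-field
# errors, each carrying a path, rendered two ways: as a condition message
# for humans, and as structured status entries for tooling.

def to_condition_message(errors):
    """Summarize field errors into a single human-readable string."""
    return "missing fields: " + ", ".join(e["path"] for e in errors)

def to_status_fields(errors):
    """Expose the same errors as structured status entries."""
    return [{"path": e["path"], "reason": e["reason"]} for e in errors]

errors = [
    {"path": "spec.template.spec.ephemeralContainers",
     "reason": "FieldValueNotFound"},
    {"path": "spec.template.spec.overhead",
     "reason": "FieldValueNotFound"},
]

assert "ephemeralContainers" in to_condition_message(errors)
assert to_status_fields(errors)[1]["path"].endswith("overhead")
```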
C: Thinking about this, I'm like: this is probably something that should be in kube presubmits, and we should probably... I know we've historically talked about this topic.
C: Do you want to check whether people are already using linters that do half of this? We can legitimately go look at all the linter teams out there and ask them what they are doing for this, because it's probably worth just saying: this is a concrete thread, this is the first concrete thing that we are doing; and...
C
If
someone's
got
a
really
good
version,
it's
just
hidden,
let's
go
find
it,
let's
merge
it
and
let's
get
it
in
linters
and
say
it's
just
a
pure
library
and
we
split
it
out
of
kcp
into
yeah
crd
tool,
and
then
you
know,
we've
got
our
first
like
hey.
We
popped
a
very
useful
thing
that
can
grow
on
its
own,
but
also
be
driven
by
the
new
use
cases.
We've
come
up
with.
F
Yeah
and
the
la
the
the
function,
mainly
you
know,
works
on
structural
shimmers.
So
that's
the
the
limit.
I
mean
the
constraint
or
limitation,
but
anyway,
if
you
don't
have
a
structural
shima,
there
are
already
many
things
that
you
just
cannot
do
in
kubernetes.
F
So
that
seems
quite
a
reasonable
limitation
to
me
by
the
way,
if
you
don't
have
any
structural
shama,
you
just
cannot
publish
open
api,
so
yeah.
That
seems
basically
pretty
quick.
So
if
we
continue
here,
of
course,
we
have
to
be
able
to.
You
know,
do
enforce
things
manually
because
you
might
still,
even
if
we,
you
publish
the
negotiated
api
resource
to
the
to
your
logical
clusters
by
and
authority.
F: Say you finally want your negotiated API resource to be the 1.15 one; then you just have to patch your APIResourceImport and change what is called the schema update strategy to "Update" or "UpdatePublished", because by default it only agrees to update the negotiated API resource if it has not been published as, you know, a real API through a CRD, because you don't want to take the risk of changing the API of something that already has objects in it.
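The publish-gating rule just described can be sketched in a few lines. The names here are invented for illustration, not the actual kcp types or strategy values; the logic is the behavior described: by default a negotiated resource that has already been published as a CRD is protected from updates, and an explicit strategy overrides that.

```python
# Hypothetical sketch of the schema update strategy gating described above
# (names invented; not the actual kcp API): by default, a negotiated API
# resource already published as a CRD is not updated, to avoid changing an
# API that may already have objects stored under it.

def may_update_negotiated(published, strategy="UpdateUnpublished"):
    """Decide whether a new import may rewrite the negotiated schema."""
    if strategy == "UpdatePublished":
        return True            # explicit override: update even if published
    return not published       # default: only update while unpublished

assert may_update_negotiated(published=False) is True
assert may_update_negotiated(published=True) is False
assert may_update_negotiated(published=True, strategy="UpdatePublished") is True
```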
F: But if your negotiated API resource has not been published, of course you can change the negotiated schema by default. And so we override here, so that finally the negotiated API resource is changed; and if we grep for ephemeralContainers inside the content of the schema of the negotiated API resource, you can see that we don't find them anymore. So yeah, mainly we just took the LCD between the deployments of kube 1.20 and 1.15, and changed that in the CRD as well.
F: Yeah, that's mainly what we had discussed several times, saying that API negotiation should be a sort of calculation of impact: you check if things are compatible as soon as you have an already-used negotiated API, and then, if you have some impact and incompatibility, you should be notified and be able to override it, which is the case here. Yeah, and then maybe we can stop the demo here.
F
I
think
it
gives
the
main
id
and
the
the
last
point,
but
I
can
just
explain
it
very
easily
now-
is
that
you
might
also
want
to
enforce
the
shima
before
adding
any.
You
know,
physical
cluster,
and
in
such
a
case
you
would
just
you
know,
add
crd
for
deployments,
for
example,
one
one
that
you
just
pulled
from
or
created
manually
or
something
like
that
and
then
in
such
a
case,
the
negotiated
api
resource
is
marked
as
enforced.
That
mean
that
in
you
will
never,
it
will
never
be
changed
by
any
import
you
all.
A: Yeah, and that's sort of the break-glass mode, right? That's not something you should generally try to do, because that turns it back into a globally applied change. But it's something you might want to do if you need to resolve a conflict between two clusters: you just need to step in and say, "this is the type now, stop fighting, this is the type".
F
Yeah,
because
the
case
we
took
is
is
very
simple,
you
know
yeah
I
mean
for
for
kubernetes
internal
types.
F
I
assume
that
you
know
you
have
a
quite
very
big
backward
compatibility
in
kubernetes,
so
you
would
never
have
cases
where
you
are
importing
the
same
api
and
you
cannot
find
you
know,
and
you
don't
have
a
you
know,
backward
compatibility
between
all
the
the
versions,
but
now
imagine
that
you
want
to
import
some
other
crd
or
some
other
apis
living
in
in
other
living
externally.
In
other
clusters,
you
might
have
some
cases
where
you
want
to
manually
define
what
will
be
the
common
api
for
the
various
imports.
A
Right
does
it
make
sense
yeah?
This
is
great.
I
I
love
absolutely
every
part
of
this.
I
will
keep
reviewing
the
prs,
but
everything
looks
basically
completely
on
track,
and
I
agree
about
the
point
about
how
how
to
report
the
differences
in
a
in
a
structural
easily
to
consume.
A: ...way. But that's, you know, a small aesthetic change on top of just being able to get that information in any form at all. So that's great, and I agree with Clayton: we should find some way to make this packageable, so that upstream can use it, so that everybody can use it, because I think that's a good demonstration of the value of what we're doing here, whether or not the rest of this works out.
F
Yeah,
maybe
just
a
last
word-
I
mean
towards
community.
I
think
that
we've
discussed
quite
much
about
you,
know
api
negotiation
and
stuff
like
that,
but
never
really
define
the
use
cases
in,
in
which
case
there
is
enforcement,
in
which
case
you
know
how
lcd
are
calculated
and
stuff
like
that,
and
I
think
that
I
mean
the
first
idea
of
this.
F
Prototype
was
mainly
to
at
least
have
a
tool
to
be
able
to
more
formalize
and
test
already
and
formalize
the
use
all
the
use
cases
where
apis
or
about
how
apis
would
flow
and
and
and
and
live
inside,
a
logical
cluster.
So
it
would
be
great
as
soon
as
it's
it's
it's
merged.
You
know
is
the
as
many
community
members
as
possible.
Would
would
you
know
play
with
apis
and
that
we
would
we
would
be
able
to
define
the
typical
flaws
or
typical
use
cases
yeah.
Regarding
api
compatibility.
A
Yeah
before
we
move
on
does
anyone
else
have
any
more
questions
about
api
negotiation
or
any
any
possible
concerns
with
this.
With
this
approach,
I
actually
really
like
the
staged.
Like
you,
don't
just
apply
a
crd
in
general.
You
can,
but
I
mean
you,
don't
generally
do
that
you
import
and
that
reports
the
status,
and
then
you
finalize
that
if
anybody
else
has
any
comments.
A
All
right
with
that,
the
next
thing
miguel,
you
are
here:
okay,
we
talked
this
week
on
slack,
I
think
about
librarian
kcp,
so
that
it
can
be
embedded
in
other
things
and
and
what
you.
A
That
for
the
and
you
sent
a
pr
which,
which
was
great,
but
I
think
we
probably
won't
merge
it
at
least
right
now
to
to
re-atomize
the
kcp
binary
into
its
constituent
parts.
We
actually
had
a
pr
a
while
ago.
I
think
david
did
it
to
be
able
to
bundle
them
all
into
one
binary
together.
E: I mean, the PR is not mine; I think it's from somebody else who's participating in the conversation, yeah.
C: That shows, like, a very specific... I think it's the pieces that people are mostly asking to reuse independently, or the lower-level pieces, so those are probably where I'd start. I don't know that I am that worried about CRD negotiation fitting by itself, because you have to have the problem of... it can work with minimal API servers. So I think the moment we have a good minimal API server example...
E: ...other things like multi-cluster, for example. And I really found interesting that part about the minimalistic API server based on the Kubernetes API server, especially from the Submariner project point of view, where we use that API server to exchange information between the clusters, and also inside the cluster. I have a very tiny presentation where I can show you that; do you think it's okay? It's probably five minutes.
C
Yeah
yeah,
so
maybe
this
was
in
the
one
note
I'd
say
so
jason.
I
think
this
is
the
readme
needs
to
get
refined,
which
is
yeah
in
the
phase
where
the
readme
needs
to
clearly
communicate
the
the
prototypes
kind
of
pulling
a
bunch
of
ideas
together.
Here's
the
ideas,
here's!
What
we
do,
we
kind
of
were
a
little
wishy-washy
on
this.
Originally
we
were
trying
to
be
like
yeah.
A
I
think
that's
actually
exactly
the
feedback
that
I
got
from
this
architecture
meeting
also
was
we
went
into
it?
Thinking
I
mean
I
did
we're
going
to
talk
about
multi-cluster
and
how
making
the
minimal
api
server
minimal
will.
C: ...though, because I would actually say I've gotten the complete opposite, Jason, which is: the only reason anybody's interested in this is if they can do real, credible multi-cluster. I think the people who do the tech are really interested in the minimal API server; the people who are actually using kube are like: yeah, that's all just interesting details.
C: "I want to go make multi-cluster actually reasonable." So it's kind of like we need to strike that tone, which is: the minimal API server and the components we pull out, or common threads, are either like a Venn diagram or three concentric circles, whatever it is; a minimal API server with flexible tenancy (like, harder tenancy, because one size fits all doesn't work), and then transparent multi-cluster, or multi-cluster where you don't have to care about it, or just showing the building blocks, would be great. And then, Miguel, your point about Submariner was actually interesting, because I would say I have the bias, coming into this, that how the application wants to use networking is the important part, and I've noticed a tendency that the technology...
C: Ideally, I would think about the transparent multi-cluster use case as the one that actually asks: how do you make this a hidden detail, but still be able to get the advantages you're talking about, a place to orchestrate? That would be the trick to me, and we don't really call that out in the readme. That was actually another thing that came up in the SIG Arch meeting; so, time to do a new pass on the readme, Jason.
A: Yeah, I will take another pass through it. I think the TL;DR (correct me before I start rewriting the readme) is that by minimalizing the API server we unlock superpowers. Those superpowers can be directed toward the goal of transparent multi-cluster, and other things, but the one we are excited to slay is transparent multi-cluster.
A: ...a server for real, for actual people; and in the process we will also make it something that can be otherwise used and embedded in other, non-transparent-multi-cluster things. That is not a non-goal; it's just not the primary goal that we are focused on so far. It should definitely be doable; it's just not something we are aiming for: we're trying to slay the multi-cluster dragon with our new superpower of minimalizing the API server, yeah.
C: And every time we carve a project off, it should have its own goals, and then we would sponsor it, help drive it, and find other folks around it. But the pinnacle of the tree is: if you have to think about multi-cluster, maybe you're doing it wrong. Or, the kcp project prototype's bias is whatever comes out of it, minimal API server or whatever; if kcp becomes a project, it might have a different bias, and we're okay changing the bias as we go, based on what we learn.
A
Yeah,
I
think
I
think
the
word
transparent,
also
trips,
people
up
too,
because
as
soon
as
we
say,
it's
transparent
people
say,
but
I
want
to
tell
I
want
to
tell
it
so
so.
C
So
we
have
self-selected
when
we
say
multi-cluster,
we
self-select
for
the
crazies,
and
I
mean
that
in
the
nicest
possible
sense
which
is
we
are,
we
are
going
above
and
beyond,
and
then
there's
a
flip
side
perspective
that
we
should
always
when,
when
we,
our
crazy
selves,
are
like
let's
go
solve
all.
This
should
be
like
what
do
users
actually
want,
and
so
I
still
kind
of
feel
like
users,
don't
actually
want
to
think
about
multi-cluster.
Most
of
the
time,
I
think
technologists
need
to
think
about
how
their
technologies
work
well.
In
a
multi-cluster
environment.
C: That was a misspeak on my part, so I'd say, thank you: kcp the prototype is targeting the 90% of people on kube today for whom kube meets their demands, but then their cluster blows up and they have no solution; and solving that in the broadest possible way that still hits everything you just described, Mike. It's a big goal, and I think the thing is, previous efforts ran into obstacles. My encouragement would be that we are aiming past the previous obstacles, so we have to be aware of where we hit roadblocks before, and we have to be able to say what is good enough for the prototype: that users feel like this is net better.
C: We're still not quite testing that. So, Miguel, do you want to walk through your... yeah, yeah.
E: Yeah, so I just wanted to explain very quickly what Submariner is; probably most people are aware, but it's a project to connect the pod and service networks of your clusters into a single network, and (I will not go into the detail) it's even capable of joining networks where there are overlapping CIDR spaces. And then it creates virtual IPs for services and so on. If you go to the next slide... okay, this is... and then to the next, yeah; you can skip that one.
E
The
service,
ips
and
also
the
services
discovery
is,
is
provided
in
terms
of
finding
the
the
services
that
have
been
exported
in
other
clusters
as
service
imports
and
also
via
dns
resolution.
E: The broker for Submariner is the place where the clusters exchange information in the form of CRs. We have CRDs: one for clusters, one for endpoints (endpoints are basically the gateways of Submariner, which are going to create tunnels to the other clusters), and then, also in that broker...
E
We
have
service
imports
for
for
the
services
that
have
been
exported
from
other
clusters,
so
other
clusters
can
find
about
those
service
exports
and-
and
we
also
publish
the
endpoint
slices
related
to
those
services
that
have
been
exported,
because
on
some
cases
you
need
to
connect
to
headless
services
or
yeah.
You
need
more
information
about
those
the
bots
baking
backing
those
services.
E
Okay,
you
can
go
to
the
next.
So
in
you
have
a
link
in
there
to
the
multi
cluster
service
api
and
that
defines
yeah.
You
don't
need
to
go
over
it.
It's
just
on
the
pdf.
E
If
somebody
wants
to
know
more
about
that,
but
on
a
high
level
that
api
defines
the
concept
of
cluster
sets
and
a
caster
set
is
a
group
of
clusters
with
I
mean
under
normally
as
administration
or
high
degree
of
trust,
and
there
is
one
assumption
and
is
that
the
services
and
sorry
that
all
the
namespaces
are
the
same
across
clusters
and
that
if
you
export.
E
one service in a namespace in a cluster, and you export a service with the same name in the same namespace in another cluster, they are supposed to be the same service. That's basically the mantra of this multi-cluster service API. And then we have two objects here.
E
This one is the ServiceExport, which is used to signal the controller implementing this that you want to export a specific service in a specific cluster. And when you create a ServiceExport in one cluster, in one namespace, the other clusters will be able to resolve this service as
E
service.namespace.svc.clusterset.local. That clusterset.local part can be something else, if you want to define something different. And then the ServiceImport is the API form of discovering the same services.
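The export-then-resolve flow described here can be sketched as follows. The service and namespace names are made up for illustration; the manifest shape follows the MCS API, where a ServiceExport's name must match the Service being exported.

```yaml
# Exporting service "web" in namespace "team-a" from one cluster:
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: web        # must match the Service's name
  namespace: team-a
```

Once the controller processes this, other clusters in the cluster set would resolve the service as `web.team-a.svc.clusterset.local` (or whatever zone replaces `clusterset.local` in a given deployment).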
C
Have you had a chance to read through the multi-cluster investigation doc? It doesn't have all of the details for some of the topics here, but I know some of this has been discussed. How familiar are you with the full depth of that discussion? (Not fully familiar yet, and getting familiar with it.) And there are some subtle things I think we discussed in one of the previous meetings.
C
It was useful as I was seeing the diagram. It's a great diagram, actually, and it helps kind of frame that: using ServiceExport and ServiceImport to accomplish the goal of an existing workload being mostly unaware that it's not running in what I would call a traditional Kube mode. Right? Like, kubectl apply of a pod and another pod, and they have services; does DNS kind of work like they expect? The multi-cluster doc kind of starts with this. I know there have been some separations, like...
C
how would we do that, so that the broker level is the source of truth and looks like Kube, and the pod could be programmed to not know? So it's actually good to see this, because some lying about DNS is required in that model, and this seems like the slightly lower-level requirement, or a way that didn't depend on that transparency assumption.
E
Yeah, so I agree with you that the part of having a different DNS name for the services makes it non-transparent. But this is something that was broadly discussed when we were defining the multi-cluster service API, and we thought, okay, that was one of our goals:
E
let's try to make this as transparent as it can be, so we can make the workloads unaware that they are not really running on multi-cluster. But there was a problem with that, and the problem is that...
E
maybe for that workload that you are creating on a specific cluster with that service name, when the other pods in the namespace are connecting to that service name, do you really want them to be connecting to it remotely? You may not want those pods to be connected to a remote cluster.
C
So this, I definitely think, is that trick of there being two mindsets right now in multi-cluster. There's the "how could I go accomplish multi-cluster" one, and then there's the way we've been framing the transparent multi-cluster use case, which is: we know how hard that is. But the problem with the hard part is that everybody's willing to make different trade-offs.
C
Could we use the same mechanisms we have to accomplish the lie? That might add additional requirements that we'd have to figure out at those lower levels, and whether they're achievable. But I think part of that is, yeah: can we come back and say we can use a different name, but then ask what someone would actually want, and what the requirement for it to remain transparent means for someone to think through?
E
Yeah, I mean, that's reasonable from the point of view of the Submariner project. It is something that you could configure, and say: okay, if somebody is querying this service in a namespace, first try to look it up at the multi-cluster level; if it's not there, then go to the local cluster. And then it will be transparent, because you don't need to go to the long DNS name when looking this up.
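One way to picture the lookup order sketched here (clusterset level first, then the local cluster) without changing applications would be at the DNS search-path level. This is a hypothetical illustration of the idea, not Submariner's actual behavior; the namespace and nameserver IP are made up.

```
# Hypothetical /etc/resolv.conf inside a pod in namespace "team-a".
# A short name like "web" would be tried against the clusterset zone
# first, then fall back to the regular local-cluster zones.
search team-a.svc.clusterset.local team-a.svc.cluster.local svc.cluster.local
nameserver 10.96.0.10
```

With an ordering like this, a pod that simply connects to `web` could land on a multi-cluster service when one is exported, and on the local service otherwise, which is the transparency property being discussed.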
C
And this, I think, is the heart of the transparent part. We were kind of defining transparent multi-cluster as three parts. There's the: what parts of the app, what does the 95% app look like, and then how do you cover the non-95% cases? Then there's the: what are the expectations when you take that 95% app and put it on clusters that you want to pretend are exactly the same? And then I think there's the:
C
well, probably most people want to prefer local dependencies, but when they want to prefer remote, can we overlap that with another requirement they might have? Like, today on Kube clusters you also want to prefer local over remote when it comes to the current node or the current zone. And so I think the thing that we're hoping for, and this is awesome because this is an example of the general "why" of transparent multi-cluster trying to be different:
C
it's, how can we work with all of the groups that are doing this and say: can we create a common motion? One that's not just "we can do multi-cluster networking", but "people depend on multi-cluster networking and they never know it's there", which is a subtly different problem that depends on the multi-cluster networking being there in the first place.
E
I could work with that. Can we bring this to the Submariner meeting, about adding support for a transparent mode for discovery of services? Because implementing that doesn't mean that we break the multi-cluster service API; it's a way of extending it.
C
And I think that kind of gets into it. What we'd love to work on is: could we come up with examples of what we expect transparency to mean, and then ask how we can accomplish it? And if it can't be accomplished, look at what the natural way to do it on a single cluster would be, and then ask: okay, could we, somewhere between pretending that we don't have clusters and actually being on one cluster...
C
If you say you depend on service B, that's not going to be the name of the service in a global sense, but it might be the name of the service in the local adaptation you do, whether it's DNS or whatever. And I think, you know, this is great, because most of these diagrams are almost exactly like...
C
we could put like three tweaks on each of these diagrams and label it transparent multi-cluster: here's what it would mean at this level. Having that discussion is really what I think we're trying to kick off here, because most people are still in the "I'll go do extra work to get multi-cluster" mindset, and we're trying to come at it from the other angle, which is: you do no work and you get multi-cluster.
C
So, for the classic people, as Mike said before, who don't care about multi-cluster but do care about resiliency, availability, API stability, reliability, movement, resiliency to cluster failures, multi-region support, multi-cloud, et cetera.
A
So we have one minute remaining. I don't know if you want to very quickly go through the remainder of these slides, or assignments.
C
Or actually, we could potentially have a transparent multi-cluster working group meeting where we go through some of these topics. This is awesome, and we could get that smaller group at a time that's more convenient for you, and maybe get a couple other folks who've been interested, and say: let's hash these out concretely. Yeah, okay.
A
Let's do that. I'll try to schedule something with you, Miguel. Clayton, I'll invite you; I'll invite other people. Feel free to reach out to me on Slack if you are hearing this and you want to go to that.
E
Sorry, I am gone for next week, but I'm still around after that, so that's okay.
A
Or at the next meeting. If you have questions, if you go through this and have questions for Miguel, I guess we'll talk to you on Slack or something. All right, thank you very much. Thank you everyone, have a wonderful week.