From YouTube: Community Meeting July 13, 2021
A: All right, welcome to the kcp community meeting, July 13, 2021. Since last week there have been a number of PRs that were sent out. One of them — I know we talked about this, I think it was two or three weeks ago — is that all of the type negotiation stuff that David has been working on is separately, potentially useful outside of kcp, just as a sanity check on whether a change you are proposing to a CRD type is good, or is one that's going to cause problems.
So with that in mind, I basically split out a little package-main binary that calls the compat code and tells you whether the change you're proposing to make is valid or not, with a CRD flag that will emit the resulting CRD if you want it. One of the use cases is that, before applying a change, you can say, "here is the cluster's current foo CRD, and here is my proposed new foo CRD," and it will tell you whether you're making a bad change. There's not much to it — it's basically just wrapping David's code in a package main.
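For illustration only, a minimal sketch of what such a package-main wrapper could look like — the checkCompatibility helper, package paths, and flag name below are placeholders, not kcp's actual code:

```go
// Hypothetical sketch of a package-main wrapper around CRD compatibility
// checking; only the overall shape (read old and new CRD YAML, compare,
// optionally emit the resulting CRD) reflects what is described above.
// checkCompatibility stands in for the real compat code in the kcp repo.
package main

import (
	"flag"
	"fmt"
	"os"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"sigs.k8s.io/yaml"
)

func checkCompatibility(current, proposed *apiextensionsv1.CustomResourceDefinition) error {
	// Placeholder: the real logic lives in kcp's compat package.
	return nil
}

func loadCRD(path string) (*apiextensionsv1.CustomResourceDefinition, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	crd := &apiextensionsv1.CustomResourceDefinition{}
	return crd, yaml.Unmarshal(data, crd)
}

func main() {
	emitCRD := flag.Bool("crd", false, "print the resulting CRD when the change is compatible")
	flag.Parse()
	if flag.NArg() != 2 {
		fmt.Fprintln(os.Stderr, "usage: compat [--crd] <current-crd.yaml> <proposed-crd.yaml>")
		os.Exit(2)
	}

	current, err := loadCRD(flag.Arg(0))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	proposed, err := loadCRD(flag.Arg(1))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if err := checkCompatibility(current, proposed); err != nil {
		fmt.Fprintf(os.Stderr, "incompatible change: %v\n", err)
		os.Exit(1)
	}
	if *emitCRD {
		out, _ := yaml.Marshal(proposed)
		fmt.Print(string(out))
	}
}
```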
Another one: right now we have a create-kind-clusters script that creates two clusters, east and west, and in order to demo rebalancing across clusters I wanted to be able to more easily create five clusters, or three clusters, or whatever. So there's a small refactor for that.
Going through some of the stuff from last week around linting: a lot of the lint checks came up with bugs — sorry, coverage. We added a lot of coverage checks, and schema compat is currently sitting at around 30% coverage. So if you are interested in increasing the coverage for this, it's a good — I won't call it mindless, because it's mostly "find out where we don't have coverage and come up with a test case that will cover it." Do that 10 or 20 times and you increase the coverage a bunch in the process. That's a good thing to do if you have spare cycles and are interested: it's mainly mechanical work to identify a gap, find a case that will cover it, and contribute it.
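For anyone picking this up, that work generally takes the shape of adding table-driven test cases. A generic sketch follows — the local checkCompatibility stand-in and the case data are purely illustrative, not the real kcp package:

```go
// Illustrative only: the usual shape of a table-driven Go test used to raise
// coverage of a compatibility-checking package. checkCompatibility here is a
// local stand-in so the example compiles; in practice you would call the real
// function and load real CRD schemas from testdata.
package compat_test

import (
	"errors"
	"testing"
)

func checkCompatibility(old, proposed string) error {
	if old != proposed {
		return errors.New("schemas differ")
	}
	return nil
}

func TestCompatibility(t *testing.T) {
	cases := []struct {
		name          string
		old, proposed string
		wantErr       bool
	}{
		{name: "identical schemas", old: "a", proposed: "a", wantErr: false},
		{name: "changed schema", old: "a", proposed: "b", wantErr: true},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			err := checkCompatibility(tc.old, tc.proposed)
			if (err != nil) != tc.wantErr {
				t.Fatalf("got err=%v, wantErr=%v", err, tc.wantErr)
			}
		})
	}
}
```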
A: Joaquim had a great — I haven't actually played with it yet — but a prototype of kcp ingress, for multi-cluster ingress. I don't know if you want to talk about that any more, or share any details.
B: Okay, well, basically I took the deployment splitter that you did and adapted it to work with ingresses. Right now I'm trying to make the full demo: deploy a deployment, a service, an ingress — everything — and just drive some traffic through it. There are some complications, as this is a local setup; for example, on macOS you get this virtual machine running Linux.
A: We can talk about them here, or if you want to link to issues, or to a place where you're collecting these notes, I think that would be useful. If there are issues coming up because of the specifics of ingress, that's helpful, and if there are issues coming up because of how kcp is working — or not working, or working unusually — that would be helpful too, so we can identify things to improve.
B: Stuff like, for example: who's making the decision to actually replicate an ingress, and to which physical clusters? Who will make that decision? Right now this is basically just listing all the clusters and deploying it to all of them.
B: That's the kind of question I have right now. And there is something I think we should put in a document or some issues to discuss, which is translation of different ingress types to one another, and standardization of the ingress controllers on one side — stuff like that. But yeah, there are some issues I found with the controller, which sometimes is like—
C: Yeah — sorry, just a word on that. There was a pending bug on the kubernetes feature branch that could lead to some events not being taken into account, because the cluster name is not correctly set on the object retrieved by the lister, the client lister. This has been fixed in the very last pull request that was merged—
C: —I think this morning, on the kubernetes feature branch, and there is a pending pull request I created to update the commit on the kcp side, so that could probably help. On the other hand, it seems that there are some other possible causes of such problems, as you mentioned, even after this fix.
C: Typically, when trying to run the KubeCon demo as an integration test: if I run it locally, everything works correctly. If I run it on GitHub as a GitHub Action, it seems that the split deployments are not synced back. For now I don't know what the cause is, but it might be related to conflicts that we hit when we update the status, because, as you know, if an object is—
C: —if there is some sort of concurrency, then you might end up with a conflict and you have to retry, possibly with backoff, and that's not something we handle very cleanly or very systematically. So there might still be some other corner cases where syncing or splitting is not always fully correct. Any use case you might have that leads to such a situation would be very interesting, so we can track down precisely those corner cases.
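A minimal sketch of the retry-with-backoff pattern being described, using client-go's stock helper; the choice of a Deployment status update and the client wiring are illustrative, not kcp's actual syncer code:

```go
// Handling optimistic-concurrency (409 Conflict) errors on status updates by
// re-reading the object and retrying with backoff, via client-go's helper.
package example

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func syncStatus(ctx context.Context, client kubernetes.Interface, ns, name string, status appsv1.DeploymentStatus) error {
	// RetryOnConflict re-runs the closure with backoff whenever the update
	// fails with a conflict, so concurrent writers don't silently drop updates.
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		current, err := client.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		current.Status = status
		_, err = client.AppsV1().Deployments(ns).UpdateStatus(ctx, current, metav1.UpdateOptions{})
		return err
	})
}
```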
B: GitHub Actions — I had a lot of issues with that, mostly when deploying kubernetes clusters and things like that into GitHub Actions. It's really resource-constrained sometimes.
C: Yes, yeah — that might be the difference between my laptop, I assume, and GitHub Actions: probably things do not follow the exact same flow, and finally there are some conflicts and we hit some sort of race condition that we did not think of when deploying locally. It might be something like that, but for now I don't have more information — I mean, more insight. So that's the known thing.
A: Yeah — to be clear, the things taking up resources in this scenario are likely mostly the actual kind clusters that we run in that end-to-end test. The kcp binary itself, the controllers that we're writing, the syncers, are pretty slim as far as resource requirements go, but running two kubernetes API servers alongside it blows up the resources.
C: Yeah. Maybe there is an area of investigation there as well, which would be the CPU requirements for kcp — in terms of memory I'm not sure. Even when I run it locally, it seems that kcp — at least the overall kcp that also contains the cluster manager — somehow consumes, say, 10 or 13 percent of the CPU while doing essentially nothing, and that's not nothing. So even when we have—
A: Yeah, we get the question fairly regularly: what kind of resource requirements are we targeting, or do we currently incur? Having a better answer — an experientially based answer — would be nice, as opposed to "not so much, not so bad." And I'm sure that in our frankensteining apart of the kubernetes code base we have left some things running that we don't need running, or in some cases probably cut things out that we did need. Understanding that better would definitely be useful — for identifying what that 13 percent of your CPU is actually doing, and then going and seeing it.
C: I think at some point we should probably create an issue to do a profiling session that would give us a real, precise view of where the CPU is consumed.
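As a hedged sketch of how such a profiling session could be wired up — whether and where kcp already exposes pprof endpoints isn't assumed here, and the port is arbitrary:

```go
// Expose Go's built-in profiling endpoints so CPU/heap profiles can be
// collected from a long-running binary. This is the generic Go pattern, not a
// description of kcp's current wiring.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux
)

func main() {
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
	select {} // stand-in for the real process's main loop
}
```

With that in place, `go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30` collects a 30-second CPU profile, and the heap endpoint covers the memory side.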
A: Yeah. I don't have a notepad open, but I will make a note of that and do it after this meeting, because at least with an issue, when people ask what kind of resource usage we have, we can say, "well, even if we don't know yet, this is where we're going to find out — and contribute your own findings," or whatever.
A: I don't think we're at the point in the project where we're trying to squeeze absolutely everything out of it, but if we can find low-hanging fruit and say, "oh, this cuts out a huge branch of stuff we don't need," then by all means we should go after it.
C: Yeah, because I think what could be interesting is not doing really big optimization, but maybe spotting quite obvious things — like API negotiation on clusters being done too often, or maybe some of the CRD management that has been added on the kube side as well. There might be some things that take too much resource.
A: Metrics would be an important part of this too. Profiling is great, but it's a snapshot in time; if you see a metric always going up until the process restarts, well, you have a leak, and profiling wouldn't necessarily have caught that.
A: Yeah. I also want to make sure we know not just where this is coming from, but whether kcp, as a slimmed-down kubernetes API server, is the cause — is that where things are happening? Is it happening at the etcd level? Is it happening—
A: Which sort of leads us to the next point: when we talk about kcp, we very often talk about it as one big thing, when in reality it's multiple things, and that's mostly been fine, but every once in a while—
A: —something comes up that causes problems, like when trying to do the kcp ingress stuff, because the cluster CRD, the syncer code, type negotiation, and the deployment splitter are all in the same repo as kcp, which all depends on this fork of kubernetes 1.18.
A: I don't think this is urgent, a must-happen-now thing — work can proceed before we do this split — but I think we should think about how we want to do it. The best outcome I have been able to come up with is that we split the cluster CRD and controller, the syncer, type negotiation, and all the splitters — which will eventually become a generic everything-splitter — into a separate repo, and have the kcp repo depend on that, instead of having them all in one module together.
A: That way, when people want to write the ingress controller, or the knative service controller, or whatever, they can do that against the multi-cluster repo without taking a transitive dependency on our kubernetes fork. Like I said, I don't think that needs to happen right now, but if people think that is not the path forward, we should talk about it. I think that's probably where we'll end up at some point in the future.
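To make the dependency point concrete, a hypothetical go.mod for such an out-of-tree controller — every module path and version below is a placeholder for illustration, not a real published module:

```
// Hypothetical go.mod for a controller built against a split-out
// "multicluster" module, with no replace directive pointing at the
// kubernetes fork.
module example.com/my-ingress-splitter

go 1.16

require (
	github.com/kcp-dev/multicluster v0.0.0-00010101000000-000000000000 // placeholder module
	k8s.io/client-go v0.21.2 // stock client-go, not the fork
)
```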
E: That would be correct. kcp is a light integration; I could see it being a small layer pulling a couple of functionalities together, and the other repos would be the deeper ones. You're right on the kube one — it probably really should depend on a minimal API server, and again, depending on k/k is the problem, because to get kube today is to depend on k/k, and you can't actually do anything useful, or efficiently enough, with a kube-like API server that way.
A: Well, I think, in the fullness of time, a lot of what is in kcp — everything that's in our fork — goes away and goes upstream, and some of the stuff that's in the kcp repo, in this version of the kcp repo, could go away and go upstream too, hopefully, eventually.
A: Yeah, exactly. So I think that's something we want — I don't want to reopen and re-litigate the topic, but I think we still want the ability for one binary to run kcp, the cluster controller, and any number of splitters and syncers, for demo purposes, for proving that you can embed this all in one place. And if we want that, then kcp has to depend on multi-cluster.
A: If we don't want that — if we don't think that's a hard requirement — then we could have all of them be separate, atomized repos, and the way to glue them together is to check out the ones you need and run them as separate things. That feels kind of gross; I don't want the demo to be "okay, go fetch 15 repos and run 15 separate binaries." On the other hand — sorry.
C: I mean, on the other hand, doesn't it relate to the library aspect of it? If you think about how the all-in-one kcp is done right now, it's mainly just including some code in a post-start hook. So if kcp were available as a library — kcp as a library, and the syncing as a library — where you can add—
C: —typically, things like post-start hooks — I mean, the various controllers and other active components that you want to hook into kcp, into the kcp library — then maybe the only repo you finally need is an end-to-end demo repo, because kcp would be much more the library glue that lets you build your own demo, or your own all-in-one solution.
C: I think we have to include that idea — kcp as a library — in this: the sort of library you use to build your own all-in-one command line. Yeah.
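A rough sketch of the pattern being described — embedding controllers via a post-start hook on the generic API server. AddPostStartHook is a real k8s.io/apiserver API, but how kcp actually wires its controllers may differ, and runMyController is a placeholder:

```go
// Hooking "active components" (controllers, syncers, ...) into an embedded
// API server through a post-start hook, so a library consumer can assemble
// its own all-in-one binary.
package example

import (
	genericapiserver "k8s.io/apiserver/pkg/server"
	"k8s.io/client-go/rest"
)

func runMyController(cfg *rest.Config, stop <-chan struct{}) {
	// Placeholder: build clients/informers from cfg and run until stop closes.
	<-stop
}

func addControllers(s *genericapiserver.GenericAPIServer) error {
	return s.AddPostStartHook("start-my-controllers", func(ctx genericapiserver.PostStartHookContext) error {
		// LoopbackClientConfig is a rest.Config for talking back to this very server.
		go runMyController(ctx.LoopbackClientConfig, ctx.StopCh)
		return nil
	})
}
```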
E: It's almost another way of saying what David said: the library thing is only successful if you can accumulate a library, and that means the kcp repo should be fairly small, pulling together a number of pieces that feel like they compose well. Maybe there's a difference—
E: You could run all the logical cluster stuff and people would use that as a library, but I tend to suspect that most of the code we're talking about would live in a multi-cluster repo — transparent multi-cluster. Enabling that is its own big-enough thing, and the transparent part is what makes it big, not the library bits of it: having some opinionation about services, and I think even another one, to make the use case really exciting.
E: —as a thing, it has a lot of value just in setting a direction that gives a reason for all these different pieces to exist, because if you just have a bunch of different libraries, you're always going to run into "well, why are we doing all these different libraries? They're not related; they don't push each other to excel."
A: Yeah, I think what I'm hearing is that we're not, in general, disagreeing that there should be this separate multi-cluster repo; the question is whether the kcp standalone / kcp library repo includes the end-to-end demo glue, or whether that's a separate repo. I think we all want the end-to-end demo to live on and keep existing forever.
E: I would say kcp the repo exists to hit that success mark, and then, as we go, it will kind of keep ourselves honest: are we biting off more? Okay, maybe we should split that out. Have we succeeded? Okay, we've succeeded; now let's reorganize this and turn it into a real project, not just a prototype.
A: Right. The specific impetus for thinking about all of this was that Joaquim wanted to add the kcp ingress stuff and hit a roadblock, and this is the time when I want to get all the roadblocks out of the way. I want people to be able to contribute the ingress splitter and the everything-splitter without any headaches. So, yeah.
C: Yeah, because the main problem was the incompatibility with client-go: since you inherit from kcp, you inherit from kubernetes 1.18, and so from client-go 1.18, and then you are not able to use a client-go that contains the newer ingress objects you need. That's the main underlying problem we would try to solve by separating the repos, or at least separating the Go modules.
A: Yeah, I really don't want to have separate Go modules in the same repo — that bites us every time.
C: I'm not advocating putting them into one repo or another — just saying that it's mainly a question of modules.
A: The code can be as gross as it needs to be to make it work, but if our organization is making it hard to contribute, like now, that is an issue and we should fix it. But I think you got around it — if that's not the case, then we should keep working on it, absolutely.
B: I basically used the available ingress — I think it's the alpha one, or something like that — so I had to deploy older kubernetes local clusters, kind clusters. So yeah, I can play around with that, but having that client-go replacement in the go.mod honestly just doesn't let me import other stuff, or make it cleaner.
C: Isn't the solution — I mean the short-term or mid-term solution — for this simply to update, to rebase the kubernetes feature branch onto the latest kubernetes master? Because that could help: then in client-go we would have all the APIs — v1 as well as alpha — and you can still include all of them, if I'm not mistaken.
E: There has been a ton of change. Okay, David, if that's something you want to pick up — or to see what it would look like — there's definitely a little bit of churn in there. You'd have to reconstruct some of the patches, because people have probably touched some of those core layers. Yeah.
A: Cool, cool. And that would also help us prepare these changes as KEPs, right? Yeah.
C: That would probably also be the opportunity to clean up the list of commits that we have in the branch a bit — at least to squash the ones that were fixed afterwards, and stuff like that — and then prepare for future contributions. I don't know.
A: Nice. Okay, great. I don't really have any other pressing topics — does anybody else have anything on their mind that they want to go over in this forum?
E: I am working through some of the policy stuff — looking at how you could use logical clusters to do a hierarchy of policy. Because, at the end of the day, self-service of logical clusters is a unique thing that logical clusters give you that namespaces don't, quite. So there's a—
E: You can compose APIs differently — looking at some new ways of mixing and matching the kube primitives. The idea that you can have self-service of a full cluster is pretty powerful, and you can ask what the use cases are. So I'm working through a doc on that. It's taking a little bit longer, mostly because I keep coming up with interesting things and I have to figure out how to frame them: this is a good idea—
E: —this is a bad idea. Expect a PR doc of some form that will lay out an example of a bunch of different pieces and how you could use them, and then we'll go back through and say, okay—
E: —two teams could use completely different APIs and actually be completely independent; how that might propagate down; how you could use some of the flexibility of logical clusters to change stuff on demand — like, one morning you wake up and your APIs have been upgraded to a future version. How would we roll that out? How would you do rate limiting so that multiple teams can collaborate and one team can't blow out another? Stuff like that. The doc is still expanding rather than starting to get pruned down.
A: There was a — I'm going to butcher it, but it was like Mark Twain: "sorry I wrote such a long letter; I didn't have time to make it shorter."
E: Sure, yeah, there are a lot of good ideas. Which ideas would be most useful for someone who's thinking about a kcp-like thing as transparent multi-cluster plus self-service? Because one of the big things about transparent multi-cluster is letting you expose clusters that people can't actually directly access, and that decoupling from your underlying clusters is a unique thing that nobody really offers as a standard kube-like tool today.
C: Right — a technical question: would this typically imply cluster parents? Because you have this in what you initially defined. It would—
C: Yeah, and by the way, just to confirm: in a document you mentioned that, even today, in every logical cluster we include the APIs — I mean the CRDs — that are in the admin logical cluster, the implicit parent of any logical cluster. That's not implemented today; it's still something that should be done, and it's not really clear what the best way to do it is. So, yeah.
E: The idea of a logical cluster that has a set of predetermined APIs, and that that set can be defined, is something I'd call out, because the set of APIs you have available determines what you can do — and what would we need to protect? What does it mean if an admin wants everybody to have a deployment resource, but you want to create something that doesn't have a deployment resource?
A: It sounds like you're describing RBAC for logical clusters — is there something more complex here than just enforcing existing RBAC policies?
E: Can you express them as RBAC policies? I think it's kind of a combination. RBAC is interesting, but then there's a bunch of things that RBAC doesn't approach at all — like whether the ability to create logical clusters allows you to consume logical clusters. What would we do for quota? Would we use resource quotas?
E: Resource quotas aren't designed to be per-person. If you wanted to do self-service and say every person at your company can use this much resource by default and then has to be granted a higher level, what would that look like? Would it be a combination of resource quota, RBAC, and a net-new type of extension or policy? What would that need to be?
E: If I want to add a logical cluster, what are all the types of policies that people are trying to solve for right now — policies they might be encoding in admission, or encoding elsewhere, that don't quite line up with the actual use case? So: per-person — does who you are determine how much you can do, or is who you are and what you can do completely up to the end user?
E: And conversely, namespaces are very useful for layering your own things on top of; that's enabled people to go build these. What would you need to have to let you say, "oh, but we'll turn this off," so that an admin with RBAC could be the one creating logical clusters and you don't need any self-service mechanism? I think both of those are reasonable, and then you're talking about the trade-offs.
E: I mean, OpenShift launched with self-service namespaces — we used virtual resources — and then a lot of people duplicated various different approaches. So I'd say the prior art is that the sum of all people using kubernetes have tried most variations, and they all have trade-offs. Looking at those trade-offs, which are the common ones? Any organization using GitOps is effectively moving those trade-offs to a higher level; anyone using RBAC to control which namespaces you have access to is typically defining it at a higher level.
E: And that basically requires two other things: I can limit how many things someone asks for, and I can limit the sum of the things inside what I asked for. The flip side of it is also that I can see all of the things I own — which you can impose on top of namespaces with labels, but you can't impose on top of namespaces with RBAC without building a layer to do—
E: —RBAC resolution. OpenShift has that: an informer cache that does RBAC resolution to say "show all of the namespaces you have access to," through a different resource. We discussed at the time imposing that on namespaces in kube, and then just decided, you know, let's skip it and we'll learn from what people do. So this is kind of the, all right—
E: —it's five, six years later; let's go back and learn what people did, take the latest from the multi-cluster working group, and then make some proposals. We can't do it to namespaces without changing the semantics of namespaces, but maybe there's a coordination point for everybody: logical clusters could be something we put in kube, or something layered on top of kube, and the combination of those could build these in. Could we build enough of a consensus for everybody who's trying to solve this?
A: Yeah, and self-service is distinct from create-on-demand. Right now you can create a new logical cluster just by requesting a separate context, or inventing a new name for your logical cluster, and it comes into existence when you ask for it. That's distinct from: I can ask the system — whether explicitly or implicitly — for a thing, and it gives it to me, a logical cluster. And again, yeah.
E: Originally, namespaces were actually on-demand: there was a controller that created them — I think it's actually still buried in the code somewhere; it's just not on by default in the admission chain. That's how kube worked prior to 1.0: you could create namespaces just by using them.
E: I don't know that many people have actually asked for it, but there is an element of that: there are use cases where controls like "the first hundred people to grab something get it and everybody else gets denied" don't quite work that well, and most people still want an explicit create. So that one's a little bit less common, but it does come up, and the only reason kcp does it now is just expediency. So laying out the two systems of controlling logical clusters, and where creation and operation go from there — that's something namespaces went through.
A: Yeah — and I don't think there's a reason we need logical clusters to be created on demand forever. We might decide we want them to be explicitly created, for consistency with the rest of the resources in the world — you need to ask for a namespace to be able to get one, as with kubernetes namespaces today.
E: We might actually say, you know what, logical clusters shouldn't support that — or kcp, or a more opinionated version of logical clusters, explicitly doesn't want to support it — which might mean you don't need the equivalent of the namespace deletion controller. And what are the lessons learned from aggregated APIs? A lot of people use aggregated APIs.
E: There are a lot of serious problems with aggregated APIs using different etcd stores. The first is that it just doubles or triples your operational complexity for restore and backup, and you're adding a new single point of failure — a new failure mode, which is that you can only delete a namespace if all aggregated APIs respond. What are some lessons learned from that? Do we actually need to solve it? That fits into the model around what a logical cluster means.
E: We'll probably have to pick something to make logical clusters useful — well, to pick a direction to go there.
E: You don't get duplication, so that transactional guarantee is either imposed above kube — with something like the namespace controller and attributes on the namespace — or it can be imposed underneath, with transactions and all that in etcd. We just have to make a decision one way or the other, or come up with a way where a system that solves those policy problems can make different decisions than kube and get away with it.
E: Your controller is actually responsible for that, through UIDs and all of that — not kube itself, and not the kcp control plane. There's a whole bunch of — it's funny, because we just went through probably two or three years of long debates from the first days of kube in about five minutes. So: do we really need to rehash all of that? That's partially what the doc is.
A: Yeah, I mean, that's why it's absolutely helpful to have somebody with the historical context helping with this, because otherwise we would spend two years having that same argument again, instead of just saying, "hey, I've read this book and I know how it ends," or "I've read this book and I wish it ended differently — we should do it the other way instead."
A: Yeah, let's just all feel differently about it — that's really what we're going for. Great. That is very interesting, and I look forward to reading it. Anything else on anybody's mind? I guess we have 12 more minutes.
D: I'm just really glad I joined this meeting, because it's probably the best meeting I've been on in a few weeks.
D: I've been playing around with multi-clusters with this tool, Scribe, that copies persistent volumes from cluster to cluster, and I've often wondered whether what I did was the best way. So I'm really looking forward to looking at the code to see how you interact with multi-clusters. I think I created a different client — you said each cluster has a kubeconfig?
D: Basically, it's just an object with a kubeconfig, and so you just create a different context for each cluster — is that the way you interact with multiple clusters? I'm interested to look into it, to see if what I did kind of matches what you did, to see if what I did was a good idea.
A: Yeah. So every cluster object that is registered with kcp has basically just a kubeconfig in it. But what we do with that is, when we get that kubeconfig, we install a syncer on the cluster — either in the cluster or outside of it, either way. We basically say: this kubeconfig—
A: —gives you the information you need to talk to that cluster, and here's the information to talk back to kcp. Then this syncer job lists objects of all types labeled for that cluster and copies them to the cluster's API server, and then copies the status back. So there isn't a direct synchronous request from kcp or the cluster controller for every object — there's really only one synchronous request, to say "go install the syncer there," and then the syncer does this asynchronous copy back and forth.
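Not kcp's actual syncer, but the shape of the label-filtered "list and copy" loop being described might look roughly like this — the label key and the use of Deployments as the example resource are assumptions for illustration:

```go
// One pass of the down-sync described above: list objects on the kcp side
// that are labeled for a given physical cluster and create them there. A
// mirror-image loop would read status from the physical cluster and call
// UpdateStatus against kcp, so status flows back asynchronously.
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func syncDown(ctx context.Context, kcpClient, clusterClient kubernetes.Interface, clusterName string) error {
	opts := metav1.ListOptions{LabelSelector: "kcp.dev/cluster=" + clusterName} // label key is illustrative

	deployments, err := kcpClient.AppsV1().Deployments(metav1.NamespaceAll).List(ctx, opts)
	if err != nil {
		return err
	}
	for i := range deployments.Items {
		d := deployments.Items[i].DeepCopy()
		d.ResourceVersion = "" // let the physical cluster assign its own
		if _, err := clusterClient.AppsV1().Deployments(d.Namespace).Create(ctx, d, metav1.CreateOptions{}); err != nil {
			continue // a real syncer handles AlreadyExists, updates, retries, ...
		}
	}
	return nil
}
```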
A: There are also health checks on the syncer, but that way, if the cluster goes disconnected for some amount of time — completely lost in the dark — the syncer, when it comes back up, will check and pull the rest down and sync all the remaining statuses back up. Does that answer your question? So it's not a direct synchronous request, except the one to install the syncer, which then does the async work.
D: No — Scribe. I just built a CLI for Scribe, and I kind of just took whatever kubeconfigs you gave it and then created a client based on those contexts.
A: Yeah, I think something like that is going to be very necessary for this to work. I mean, we've got deployments — run this workload over there — and Joaquim is working on networking.
A: The next thing is persistent data, and how that moves and follows you as clusters come and go. I think a difficulty will be: if that cluster goes away — is completely deleted — then we don't have—
A
We
need
to
have
already
copied
some
of
that
data,
or
hopefully
all
of
that
data.
Otherwise,
we've
lost
our
chance
to
like
go
get
it.
So
I
don't
know
exactly
I.
I
don't
think
I
have
thought
at
all
about
persistent
volumes,
but
I'm
excited
that
someone
is
and
yeah.
Let
me
know
if
you
have
any
trouble
and
will
will
help.
D: And then my other question was: is there an easy way to look at the commits from your kubernetes fork? I saw the comments there — like "hack", "feature", "workaround" — so I could look for those commits. I'm just curious to see what you had to do to kubernetes to make this work. And I was curious: why 1.18? Just because that's when you started working on this?
A: More or less — I mean, I think that actually predates me; me as a Red Hatter, not as a human. I think it would be useful, because people have asked that too: what is the diff between what we are doing and vanilla 1.18? It would be useful to have a handy link somewhere, like the diff between those two things on GitHub. It's not going to be so large a diff that the UI falls over — at least I don't think so.
D: And then my other question is: if that's the case, how are you going to make sure it's always going to be in sync? Is there, in the future, going to be a spec of kubernetes behavior that any kubernetes has to adhere to, so that, you know—
E: Probably by saying there's a set of conformance tests that you'd run. With this, I think that definition is not there yet, but the key goal is "95% of applications work unmodified," which is close enough to saying conformance should pass — though there's a lot of stuff that's not in conformance that would be absolutely critical for a client.
E: So that's more of a goal, a gap. Kubernetes conformance is really just one small thing that protects the trademark; it has a fairly noble but possibly slightly unrealistic goal of defining that someone can run the same app on different kube clusters. It's not really doing that — it's a set of tests checking that the APIs behave the way you'd expect. The spec we would probably write is that the API definition objects behave in a consistent way, but I also don't know that that would necessarily be—
E: —it would need to be consistent enough for the applications to work, but that might not be the goal itself. The goal is probably going to be something that looks more like a kcp-specific conformance definition. There's also an overlap with people who want to build minimal API servers — because how the APIs behave and which APIs exist are two different things.
E: There's the question of, say, an optional field in a v1 API: if you don't support it, does that mean you're not conformant? No, it just means you have to take that into account. What do clients have to do? For most kube clients, once it's a v1 API we just say the API has got to keep working. So I think we're probably a little too early to answer all the implications there.
E: But yes, a kube client working against a cluster today should work against this. And then, the more we evolve the spec for what it means to be a kube-compatible API — that might still belong to kube, it might be part of that conformance program, or it might live outside of kube, just because there are other projects with this problem that want to use minimal API servers but change the rules somewhat.
A: David, did you have something — I feel like I remember you saying you had something to say, but I don't know if that was a while ago.
C: Well, just on the question of what the changes in the kubernetes branch mainly are: it's mainly three points. The first is enabling custom resource definitions to define objects that initially were built-in objects in kube — say deployments, apps, stuff like that — because we want this to be a minimal API server.
C: That's the first thing. The second one is mainly adding tenancy — logical clusters — inside kubernetes, which mainly means that, in the current implementation, we infer the cluster in which the data lives from a sub-key in the etcd storage. The whole point of a number of the changes is to ensure that we correctly set and get this cluster name. The cluster name field already exists on kube objects, but it's just empty, so we use that. That's the second change.
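Purely as an illustration of that second change — the exact etcd key layout used by the feature branch is not assumed here — the idea is roughly:

```go
// Recover the logical cluster from a prefix segment of the etcd key and
// surface it on the object's otherwise-empty ClusterName field. The key
// layout below is an assumption for illustration only.
package example

import "strings"

// clusterNameFromKey extracts the cluster segment from a key assumed to be
// shaped like "/registry/<resource>/<clusterName>/<namespace>/<name>".
func clusterNameFromKey(key string) string {
	parts := strings.Split(strings.TrimPrefix(key, "/"), "/")
	if len(parts) < 3 {
		return ""
	}
	return parts[2]
}
```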
C: I'm not giving them in chronological order, sorry, but the third one is mainly making custom resources support the tenancy. When you add a custom resource, you have to build the overall OpenAPI schema and also the discovery information that you get from kube when you point to /apis, and we have to build all of that per logical cluster.
C: That means that if you point to a given logical cluster context, you get all the OpenAPI schemas and the discovery for all the CRDs that were added in that logical cluster, and if you point to the context of a distinct logical cluster, you get a distinct set of APIs and OpenAPI schemas. So those are mainly the three changes that comprise the various hacks you can find in the kcp kubernetes feature branch.
A: It would be useful to write those down, with distinct diffs — like diff ranges that say, "this is where we were doing this, and this is where we were doing that." Yeah.
C: Yeah. Unfortunately, there was a pending document whose goal was to be added to the repo, but that hasn't been done yet, so documenting those changes is still on the to-do list. Since we also plan to rebase those changes onto the latest kubernetes release, that would probably be a chance to first clean up the commit history a bit, and then to document it more easily in the kcp repository.
A: Yeah, great — sounds good, everyone. We are basically out of time, and we went over a lot today, so check the notes when they get posted later today, and we'll see you later. Have a good week, everybody.