From YouTube: Community Meeting, September 28, 2021
A
Hello, welcome to the kcp community meeting, September 28th. We have a few items on the agenda. The first one, by completely random ordering, is Steve, who has been doing some work on transparent multi-shard lists. I don't know, Steve, if you want to give it more of an intro than that. I can also describe at a high level what I think is happening, but let's hear it from the horse's mouth.
B
Yeah, sure. I guess what I'm trying to work on is making a sharded kcp deployment serve list and watch to clients in a transparent way. The part that's working, and that I can show off today, is that if a client asks for a chunked list from one kcp deployment and sharding is enabled, it'll fan out and serve data from a bunch of different kcps.
B
And then the aggregate resource version of all of the servers involved is returned, and that'll be how we support watch in the future.
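(A minimal sketch, not from the meeting: one way the "aggregate resource version" described above could be represented, with the per-shard map, the shard names, and the base64/JSON encoding all being assumptions rather than kcp's actual implementation.)

    // Sketch only: a proxy fans out a chunked LIST to several shards and
    // hands back a single opaque "aggregate" resource version that a client
    // can treat like any other resourceVersion.
    package main

    import (
        "encoding/base64"
        "encoding/json"
        "fmt"
    )

    // perShardState records where each shard's list left off.
    type perShardState struct {
        ResourceVersion string `json:"rv"`
        Continue        string `json:"continue,omitempty"`
    }

    // encodeAggregateRV packs per-shard resource versions into one opaque string.
    func encodeAggregateRV(shards map[string]perShardState) (string, error) {
        raw, err := json.Marshal(shards)
        if err != nil {
            return "", err
        }
        return base64.StdEncoding.EncodeToString(raw), nil
    }

    func main() {
        agg, _ := encodeAggregateRV(map[string]perShardState{
            "shard-6443": {ResourceVersion: "169"},
            "shard-6444": {ResourceVersion: "179"},
        })
        fmt.Println(agg) // opaque token the client echoes back when starting a watch
    }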
C
I would probably say this is actually even more general than kcp. What Steve is demonstrating is that you could build a consistent list/watch across two kube servers that expose the same version of an API object, and build a combined view through some intermediary, whether that's on the client side or, preferably, a server which hides the complexity.
C
That
would
allow
you.
If
you
have
a
consistent
list
of
the
membership
of
the
clusters,
you
can
then
have
a
consistent
list
that
a
controller
could
use
to
act
on
those
things
consistently
such
that
you
know
either
on
the
server
side
or
on
the
client
side.
You
could
scale
your
controller
to
more
than
one.
A
Gotcha. So I think I had misunderstood, and it was helpful that I asked that question then. So this is not: if you have three kcp shards running, it's not that you can talk to any of them and they all aggregate across all of them. It's that one of them is special and knows about the other two, let's say, and...
A
You ask it for things; it will give you what it knows and forward on, like, in the chunked response. That's...
B
Having a client be completely agnostic to who they're talking to, like which primary, just requires that you have authoritative names for all of your shards, which I hadn't tackled yet. But if authoritative names exist, then certainly the client could take their request and give it to any shard.
C
Yeah, the two actors are: there's something providing consistent list/watch over a resource inside a cluster, and then there is something that can calculate a list and a watch operation over those that is consistent, given the presence of an authoritative membership set. So it's a little bit like how etcd or other transactional systems work.
C
The coordinator could be very light. The specific guarantees that list/watch provide, Steve is exploiting them to demonstrate how you can get that consistent thing a controller needs, which all of the kube operator/controller pattern is based on, and leverage it across an arbitrarily wide number of servers in theory. That's a very unique thing that nobody has really tried to do before in the community, so it gives us an option for scaling in a number of dimensions.
C
It needs to be able to offer the same guarantees of being able to list and watch with a resource version that any resource would offer. So think about a controller: it is creating a local copy of the information in a server, and the server is delivering that in a way that guarantees ordering and liveness. If you just made list calls, the kube server guarantees that the next list call you make is further in the future than the previous list call, and watch satisfies that too. So in theory, what we're doing here is...
C
You can build an arbitrary scaling infrastructure, because you can be delayed. If I build a controller, I'm at a one or two second delay at most, most of the time, from the upstream. It could be a larger delay, but I don't care, because I have a consistent view that I know is from some point in the past and is itself internally consistent. This then extends that to: I could look at a whole bunch of things and build a list.
C
That's consistent, and one of the nice properties is that, as long as that chunk is small, you can scale it through replication trivially. Someone can say: you have a list of shards, this is authoritative, and as long as you keep those guarantees of forward progress and all that, you could have a whole bunch of read replicas out there, and then you just guarantee that someone can ask for a new consistent list and get it up to date.
A
Right, the work that is being done is novel and useful however that code ends up actually being deployed, whether the shards are each talking to each other, or something is fronting all the shards, etc. Clayton, you mentioned that this works for listing. This is intended to work for listing resources across workspaces, across shards, where the resource is the same type. Is that a restriction we're hoping to, like...? So, I think the expectation was...
B
I guess there are just a couple of cases. If you're a controller that has been installed into workspaces by someone doing an API binding of the resources that you operate on, you know the exact resource version, or sorry, the exact schema of the resource that you're worried about. In that case, you're able to ask a sharded kcp deployment: please give me all resources at this specific API version, because I know the exact schema, and so we'd be able to index which workspaces you end up in.
C
There are a couple of different ways that we could do that consistency, and maybe once we have some of the parts done, that's the next thing to go explore: what are the consistency trade-offs that would be useful? Like you brought up, Jason, if that shard index is down, you still want each shard to be able to serve a workspace or a logical cluster.
C
The idea would then be, okay, there are guarantees that within that you want to provide consistency, so that you know all the API bindings; you're kind of copying API bindings down. So you already are in a spot where you need to have a transactional record somewhere that you can list/watch on to build the list of all the things at the version, but you can also conceivably say, in the absence of something...
A
Right, I think my question was less about how the system will work when the index is unavailable, or how you write and read at separate times, and more like, I think we talked about it last time, the issue of: if your logical cluster says a deployment is these fields, and my logical cluster says a deployment is just one single field called foo?
B
And we were going to start encoding the specific schema, the hash of it or some derivative, into the actual version. We had talked about that because, again, there are two cases you end up with. If you're a controller and you know the exact schema, you can determine that schema and that API version from the API binding; there's a source of truth, like a hash or whatever identifier.
B
And so then you can make a very precise request. The other case that we thought of was the syncer, which would need to go through discovery and ask questions like: what are the logical clusters that I am syncing from?
C
There is a nice property: if you have a backwards- and forwards-compatible schema, then all controllers today, every kube controller, are operating implicitly by saying, give me a specific minimum version of the schema, and I will ignore things that are newer. So if you think about it, as we talked about last time, there's an identifier that uniquely identifies some common schema, and then it has a timeline into the future, with unique versions that are forward compatible.
C
In theory, we're trying to look at a bunch of shards, with a bunch of timelines, with a bunch of things at different spots, and our criterion is: give me all the deployments for which you have at least this minimum compatible schema. Those two things could potentially be pretty flexible, but we would obviously need a more complex story under the covers to offer that. It may just be that that's a very valuable property anyway.
C
So a controller, when it wants to go list something, probably should check whether it needs a minimum version to function. There's no way to express that today, but in the future you might very well have that problem. And some of these may be problems that we just ignore; we're like, no, it's totally fine if you can just get a duck-typed schema that's within the LCD (least common denominator) of the base schema we all support, like most controllers today.
A
Right, I can see how we could encode the hash of the schema in the version, like: I am a deployment, v1, hash of my OpenAPI schema. That would allow you to list things across all clusters that match exactly this schema. But I don't know how you would be able to encode traits of that object, or subsets of schema, or compatibility.
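(A minimal sketch of the "hash of my OpenAPI schema" idea floated here; the canonicalization via JSON marshaling and the truncated digest are assumptions, not an agreed design.)

    // Sketch only: derive a short, stable identifier for a schema so a client
    // could ask for "deployment v1 + this exact schema hash".
    package main

    import (
        "crypto/sha256"
        "encoding/json"
        "fmt"
    )

    // schemaHash returns a short digest of an OpenAPI-ish schema. json.Marshal
    // sorts map keys, which gives a canonical byte form for this simple case.
    func schemaHash(schema map[string]interface{}) string {
        raw, _ := json.Marshal(schema)
        sum := sha256.Sum256(raw)
        return fmt.Sprintf("%x", sum[:])[:12]
    }

    func main() {
        deploymentV1 := map[string]interface{}{
            "type": "object",
            "properties": map[string]interface{}{
                "replicas": map[string]interface{}{"type": "integer"},
            },
        }
        fmt.Printf("apps/v1.Deployment schema=%s\n", schemaHash(deploymentV1))
    }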
C
When we were doing back-of-the-envelope scale numbers, which are in the design doc, we were probably still talking about a thousand to ten thousand API types per server, and so a fully materialized graph per server is relatively cheap to calculate. That's deterministic; you can go from any schema. Now, if you change the schema in incompatible ways, that's already something we have to detect.
E
Yeah, because the question of compatibility, mainly the LCD algorithm, is just implemented as a subtyping relation. It's just a subtyping relationship, so that all the instances of one type supported by one schema are also supported by the other one. So it's mainly exactly the same relationship as subtyping regarding classes and instances, yeah.
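(A minimal sketch of the subtyping relation David describes, reduced to flat schemas of named, typed fields; real negotiated-API-resource comparison over OpenAPI schemas is much richer, and the types here are hypothetical.)

    // Sketch only: "every instance valid under `sub` is also valid under `super`".
    package main

    import "fmt"

    // fieldSet maps field name -> type name for a flat schema.
    type fieldSet map[string]string

    // isSubtype reports whether every field `super` requires exists in `sub`
    // with the same type.
    func isSubtype(sub, super fieldSet) bool {
        for name, typ := range super {
            if sub[name] != typ {
                return false
            }
        }
        return true
    }

    func main() {
        v1 := fieldSet{"replicas": "integer", "selector": "object"}
        v2 := fieldSet{"replicas": "integer", "selector": "object", "paused": "boolean"}
        fmt.Println(isSubtype(v2, v1)) // true: v2 instances satisfy v1
        fmt.Println(isSubtype(v1, v2)) // false: v1 lacks "paused"
    }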
C
Are there any useful simplifications we could apply? One of them may just be that we can simply brute-force it and reject APIs that are not in a hierarchy, and then say, you know, a controller that has to look across three different types, or be associated with three different types, that's not the end of the world either, because the controller probably doesn't care about whatever that change is. So we don't have to do the perfect solution here.
C
There may be a very good brute-force, best-effort, duck-typed LCD kind of approach, which is: you can probably reduce most of these down to a best effort that gets things that are mostly compatible, and then just say, look, if you want strict compatibility, you can do this, but most people are just going to be like, yep, I've got a controller, YOLO, here's my minimum CRD, tell me what I've got to go request, and we can hide those problems from the user. Yeah, I think this also...
E
I'm sorry, yeah, because currently, even in what exists with the APIResourceImport and NegotiatedAPIResource, it is precisely that: you reduce, if you opt for reducing the negotiated API resource by calculating the LCD. Mainly, you just define this line, this linear graph, by reducing the LCD a bit more each time. But then you can also have an APIResourceImport that is really incompatible.
A
Yeah, I think there's an opportunity to use this to surface a different kind of incompatibility. Right now, when you try to negotiate the API, if it's incompatible we will block you and say, hey, this requires human intervention, these two types are incompatible. This introduces a new type of incompatibility, potentially, which is: the type that you're trying to change is compatible with both downstream things, but not with the controller that is watching for these things.
B
I thought, when I was talking with David, that if you seed your kcp with resources, the negotiation flow looks different than if it's derived from the underlying clusters. I thought that if we were to have bindings, they would be authoritative downwards, and instead you would fail to join.
C
Yeah, I think bindings could be authoritative, and maybe this is just the question of what's the source of truth. But we might allow two different tenants to create the exact same incompatible resource, because they are distinct things. We'll have to think about this more; there are a couple of other use cases I can imagine that maybe we can skip, or we can simplify down. I don't think most people will hit these problems; it's more for the...
C
There may be some angles where we just say, well, let's just treat these as two completely different versions: the same tools that would help you go from a v1 to a v2 would help you here. I'm using v2 in the CRD sense, not in the kube sense; kube built-ins can go from v1 to v2, but the trick is that v2 has to be compatible with v1, except for certain behaviors.
C
So v1 and v2 really aren't even... most changes like that can only be done partially, so effectively it's like creating a new deployment when you create a v2.
A
Okay, Steve, did you have anything else you wanted to talk about with this, or...

B
I have a demo with, okay, some client stuff, if you want to see it, but it might not be...

A
I love a demo.
B
Oh, I've got to request to share here.
B
This is not what I wanted. Okay, cool, so we have...
B
I attempted to create a bunch of service accounts and namespaces to show... I'm using impersonation right now to attempt to make IAM work, but service accounts are turned off, so we're not going to be doing that. Then we look at the second one, the one that is aware of the larger deployment, and we're just going to get namespaces, and we're going to ask it to chunk at one so that we can look at the continue behavior.
B
And the implementation this far is completely stateless and, I believe, has a minimal amount of deserialization involved; we do not need to mess with the objects that are getting passed back and forth. So if we look, for instance, at one of these namespaces, we'll see that instead of just having a cluster name of admin here, we've also got the shard that it came from, so this 6443, in comparison to, here's one from the other shard, and these are at separate resource versions.
B
The first time we respond, our resource version just shows that we have one shard with one version, or sorry, this call here is not a sharded call, it's, I think, some other controller doing something. But yeah, our first response here has a resource version from one server, and our continue tokens record both that we're in the middle of chunking one of these and that we have yet to do anything for the other one. As we move down, we keep passing that state back and forth, client to server, and by the time we're done here, we've got one shard at 169 and one shard at 179, and this would be the composite resource version that a client would then ask for a watch to start at.
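(For reference, the paging in the demo can be driven with the ordinary Kubernetes list-chunking API; this hedged client-go sketch shows a client paging with a chunk size of one and treating the continue token and resource version as opaque, exactly as an unmodified client would. The kubeconfig path is hypothetical.)

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, _ := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        client, _ := kubernetes.NewForConfig(cfg)

        opts := metav1.ListOptions{Limit: 1} // chunk size of one, as in the demo
        for {
            page, err := client.CoreV1().Namespaces().List(context.TODO(), opts)
            if err != nil {
                panic(err)
            }
            for _, ns := range page.Items {
                fmt.Println(ns.Name, ns.ResourceVersion)
            }
            if page.Continue == "" {
                // Last page: this is the aggregate resource version a watch could start from.
                fmt.Println("aggregate resourceVersion:", page.ResourceVersion)
                break
            }
            opts.Continue = page.Continue // opaque; may encode per-shard state
        }
    }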
C
Yeah, it's good to do this exercise too, because there are a bunch of implications. It was helping me write down, as Steve was going through, what are the assumptions of list/watch in the model, like what data do you need, and what does watch provide, as we go through list?
C
One of the open questions that's been a long time coming is: it's a lot of work to list multiple resources, but listing multiple resources at the same time is roughly analogous to what Steve is doing right here, which is performing multiple list calls over multiple different shards.
C
For a long time we've kind of had this tracking issue in kube: you can do lots of watches and that's fine, you can do lots of lists and that's fine, but when you get to lots of resources with a small number of objects per resource, things get kind of ugly. One of the interesting things this potentially opens the door to is listing multiple resources at the same time, the same way that this is doing it, and then you could build a consistent list/watch against them.
C
It's a little bit different in structure, but it's fundamentally the same problem. So then you could potentially think about ways that you can improve overall controller scaling for things that have to watch lots of resources across lots of shards. You could potentially even get to the point where, for instance, the syncer could say: I want to watch the resources that are in this cluster; I want to get this set of resources across these things.
C
As long as you can get that consistent set of resources, which means the same guarantees around a resource version that increments forward and is consistent, you could use that to extend this model and implement it on the server side, so the client doesn't have to know about it. But the first place to start is as a client.
A
Right, that would help the syncer and our stuff, because right now it's effectively doing discovery: what types do you know about; for each type, set up a list and a watch. Instead we could just say, I don't care what you have, give me everything and tell me about every new thing.
C
You can actually emulate discovery in a certain fashion, but you can't watch it, for instance. One of the things to think about here: if somebody adds a new logical cluster on demand, obviously you want the controller to pick that up, so you already need to have that transactional record of changes.
C
For logical clusters you can see that, but for resources, if you can add an API, you also need to be able to watch and know when a new resource is added, and then you need to go through it. The corresponding other use case that this is starting to open the door to is when you want to move a logical cluster across shards.
C
You want to be able to list/watch everything in that logical cluster, so that you can synchronize it to the other instance with a simple controller pattern and then potentially cut it over. So it's setting the stage for three or four different, very useful characteristics that, in theory, if we're just extending list/watch in one dimension, don't require client changes to support.
A
The issue of moving a thing, moving a logical cluster across shards: do you imagine that being facilitated by the help of this client logic, or should it be completely opaque to the client?
C
There will be some implications on how you build the sharding logic. The implementation of sharding assumes concrete types for the buckets, so workspaces, you know, backing a logical cluster, and it also assumes a concrete way to store where you are and where you're going, a shard, and that concretely assumes a shard resource.
C
There could be many different types, many implementations of that. Whichever one we prototype, we will just be picking whichever one seems the most likely for the set of use cases that we imagine for heavily chunked multi-tenant applications on kube. But there would be some implementation that effectively knows about, in our case, in the terminology we're working through, a workspace, a workspace shard, and API sets.
A
Related to that topic, I spent some time in the last week or so working on namespace-granularity moving and scheduling across clusters, which also necessitated me writing another implementation of "watch all things, discover new things, and watch those things too", exactly the case that we're talking about simplifying and making better and more transparent to the client. So I fully support the idea of making this something that the server can just tell me, without me needing to do all this stuff.
A
But I settled on what I think is a pretty good implementation and a pretty good API for setting up an informer that will notify me with the group version resource and an object and let me do stuff with it. When this lands, however it lands, I'm going to try to also get that stuff out of the syncer, because the syncer does exactly the same thing.
A
It says: watch all things, discover new things, and watch those too. I think that would be a useful thing to get rid of, and then, when the server that we're talking to is able to do this smartly for us, we can just swap out the client and say, go ask for all things and all new things. But yeah, I have some code I will link.
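(A hedged sketch of roughly that shape of API, assuming discovery plus dynamic informers from client-go; this is not the linked code, just an illustration of "watch all things and hand me the GVR and the object". The kubeconfig path is hypothetical, and a real implementation would re-run discovery to pick up new types.)

    package main

    import (
        "fmt"
        "strings"
        "time"

        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/discovery"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/dynamic/dynamicinformer"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        dyn, _ := dynamic.NewForConfig(cfg)
        disc, _ := discovery.NewDiscoveryClientForConfig(cfg)

        factory := dynamicinformer.NewDynamicSharedInformerFactory(dyn, 10*time.Minute)
        stop := make(chan struct{})

        // One-shot discovery of the preferred version of every served resource.
        lists, _ := disc.ServerPreferredResources()
        for _, list := range lists {
            gv, err := schema.ParseGroupVersion(list.GroupVersion)
            if err != nil {
                continue
            }
            for _, r := range list.APIResources {
                if strings.Contains(r.Name, "/") || !containsVerb(r.Verbs, "watch") {
                    continue // skip subresources and non-watchable types
                }
                gvr := gv.WithResource(r.Name)
                factory.ForResource(gvr).Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
                    AddFunc:    func(obj interface{}) { fmt.Println("add", gvr) },
                    UpdateFunc: func(_, obj interface{}) { fmt.Println("update", gvr) },
                })
            }
        }
        factory.Start(stop)
        factory.WaitForCacheSync(stop)
        select {} // run until killed
    }

    func containsVerb(verbs []string, verb string) bool {
        for _, v := range verbs {
            if v == verb {
                return true
            }
        }
        return false
    }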
A
I think I linked it in the Slack, but I'll link it again, and I'll send out a PR for it soon. Stay tuned for future demos. David, did you have items for... oh, I'm not presenting anymore.
E
Yes, well, no, nothing really to demo, but I wanted to give you a quick view of the latest work. I worked on, mainly, fixing a number of bugs, and I'm still fixing a number of bugs, or incomplete behaviors, inside the kcp server, mainly inside the Kubernetes feature branch, especially related to namespace management.
E
So everything that was related to namespace admission was in fact not multi-cluster, so it was working as long as you were working with the admin logical cluster. But then, for example, if you wanted to create an object in a given namespace on logical cluster "user", you had to create a namespace with the same name on logical cluster "admin", because it was searching the default logical cluster for all the namespaces, for the admission and for anything else.
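(A minimal sketch of the shape of that fix: lookups keyed by logical cluster and namespace rather than a single default cluster. The index type and key format are hypothetical, not the actual kcp code.)

    package main

    import "fmt"

    // namespaceIndex stores known namespaces keyed by "clusterName|namespaceName".
    type namespaceIndex map[string]bool

    func key(clusterName, namespace string) string {
        return clusterName + "|" + namespace
    }

    // exists answers "does this namespace exist?" for admission, scoped to the
    // logical cluster the request came from, with no fallback to "admin".
    func (idx namespaceIndex) exists(clusterName, namespace string) bool {
        return idx[key(clusterName, namespace)]
    }

    func main() {
        idx := namespaceIndex{key("admin", "default"): true, key("user", "demo"): true}
        fmt.Println(idx.exists("user", "demo"))    // true
        fmt.Println(idx.exists("user", "default")) // false: not created in "user"
    }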
E
So that's mainly the same type of work that was initially done on the CRD side to add CRD tenancy: completely partition all the CRD management and all the logic of the CRD controllers per logical cluster. And we have to do that also for namespaces, namespace admission, and also the namespace controller.
C
So don't be afraid to do a quick hack for the namespace controller, or the simplest thing, because, as I mentioned yesterday, and I was kind of playing around with some of this, in the long run it may be better for us to do transactional deletes of namespaces at the storage level. I'm not so much contemplating as playing around with mixing the current kube approach with the fact that we already want to track APIs at versions, which is a little bit different from the way APIs are tracked in kube. There's an implication there that may be a valuable property, which is that we would effectively homogenize storage on etcd, and every object would be stored the same way and have the same rules. So what would happen is: namespace deletion would really just be a scan of the list of APIs at a current point in time in the cluster or in the namespace, and then delete all of them, and workspace deletion would be very similar.
C
It would be: find all the resources that are transactionally bound to a workspace, go and delete them from etcd, so there would not need to be a controller implementation. But don't worry about that yet, because we don't want to commit to that kind of approach; it will break aggregated API servers, and the namespace controller is required for that behavior. It might be that there's a reason we actually end up supporting both models or something, yeah.
E
Yeah, by the way, for the namespace controller, also the deletion of objects inside a namespace: I already modified the code, so it's mainly hacky. You just get the cluster name, each logical cluster name, each time, and then create the right client-go request to the right logical cluster, so it sort of works now. It's a bit linked, for example, to cluster roles, because at the post-start hook you also try to...
E
So the role factory, I think, tries to create new roles, right?
C
That is one that I actually don't want us to do. We need to talk about RBAC, but I think an avenue for investigation of RBAC would be: we don't want to create 300 objects per workspace. That's bad. We want workspaces to be cheap, or logical clusters to be cheap. So we need to talk about what the RBAC implementation is.
C
It's probably a layered model where each workspace has its own RBAC, but it's a hierarchy, a little bit like CRDs, where there's an RBAC engine instance that's very lightweight, sitting on top of the default roles and all that. So we need to talk about what that would look like from a caching perspective before we jump in too far on it.
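(A minimal sketch of the layered idea, assuming the standard apiserver authorizer interface: a lightweight per-workspace authorizer is consulted first, and anything it has no opinion on falls through to shared default roles. The workspace lookup and both delegate authorizers here are hypothetical stand-ins, not kcp's design.)

    package main

    import (
        "context"
        "fmt"

        "k8s.io/apiserver/pkg/authorization/authorizer"
    )

    // authorizerFunc adapts a function to the authorizer.Authorizer interface.
    type authorizerFunc func(context.Context, authorizer.Attributes) (authorizer.Decision, string, error)

    func (f authorizerFunc) Authorize(ctx context.Context, a authorizer.Attributes) (authorizer.Decision, string, error) {
        return f(ctx, a)
    }

    // layeredAuthorizer asks the workspace-scoped layer first, then the base layer.
    type layeredAuthorizer struct {
        perWorkspace func(workspace string) authorizer.Authorizer // e.g. RBAC over that workspace's own (few) roles
        base         authorizer.Authorizer                        // shared default roles, defined once, not copied per workspace
    }

    func (l *layeredAuthorizer) Authorize(ctx context.Context, attrs authorizer.Attributes) (authorizer.Decision, string, error) {
        ws := workspaceFrom(ctx)
        if wsAuthz := l.perWorkspace(ws); wsAuthz != nil {
            if d, reason, err := wsAuthz.Authorize(ctx, attrs); err != nil || d != authorizer.DecisionNoOpinion {
                return d, reason, err
            }
        }
        return l.base.Authorize(ctx, attrs) // fall through to the shared defaults
    }

    // workspaceFrom is a placeholder; real code would read the logical cluster
    // name from the request context or URL path.
    func workspaceFrom(ctx context.Context) string { return "default" }

    func main() {
        l := &layeredAuthorizer{
            perWorkspace: func(string) authorizer.Authorizer {
                return authorizerFunc(func(context.Context, authorizer.Attributes) (authorizer.Decision, string, error) {
                    return authorizer.DecisionNoOpinion, "", nil // this workspace defines no extra roles
                })
            },
            base: authorizerFunc(func(context.Context, authorizer.Attributes) (authorizer.Decision, string, error) {
                return authorizer.DecisionAllow, "matched a default role", nil
            }),
        }
        d, reason, _ := l.Authorize(context.Background(), authorizer.AttributesRecord{Verb: "get", Resource: "namespaces"})
        fmt.Println(d == authorizer.DecisionAllow, reason)
    }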
E
Then we have to minimally hack the RBAC aspect as well, at least for it to point to the right namespace in the right logical cluster, because otherwise kcp doesn't start up anymore. So I'm going to propose a pull request with the minimum changes in these related areas, and of course this has to be changed when we opt for more, you know, yeah.
C
And then this is a good opportunity for us to actually go in and start sketching out what the story would be there, right? How do you keep a workspace or logical cluster lightweight? The things you're working through are: which caches do you need, and which subsystems are different in this model? Admission behaves one way, RBAC behaves a different way, aggregated APIs behave a third way, et cetera, because...
E
We already have the pending work on CRD tenancy, because that was our discussion from a long time ago, right, that we should make it dynamic and not controller-based, mainly CRD publishing of OpenAPI schemas and all this. So it's absolutely the same type of problems that we encounter and refactoring that we have to foresee.
C
And maybe it'd be useful, then, to document what we are trying to get to, for the purposes of showing it. Being able to create namespaces in a logical cluster is very reasonable, being able to do some minimal RBAC is very useful, being able to delete a namespace in a logical cluster is very useful, and then everything beyond that is an input to a design that we can start with, given the set of problems we're hitting, yeah.
E
That was a bit the plan as well: to leverage the small document I already did in the feature branch, with the main changes in the commits and also the potential client problems. And I was envisioning a third section about what would be the real and clean implementation or refactoring for each of those main points, yeah.
E
Unfortunately, we cannot demo that, because it's nearly okay but not 100%, but at least it allowed me, on my side, to spot some areas of thinking, especially because it used the same approach as the initial deployment splitter. So, you know, you have an ingress.
E
You create one sub-ingress per cluster, suffixed by the cluster name, and in fact we encountered a number of problems with the DevWorkspace-based controller, because it's mainly watching for ingresses and cleaning the ingresses that, according to its labels, are not really useful or should not be there anymore, and the fact that we just duplicate ingresses with the same labels, for example, just messes up the logic of the controller. And it seems to me that we have that...
E
It opens a wider question about creating additional objects, sub-objects, that we want to sync to physical clusters. The fact that they can be seen by client controllers in a number of cases might be a problem, because you can end up creating an object that a given controller doesn't expect and might possibly just remove, for example.
A
So yeah, I agree. Creating separate sub-objects was a bit of a terrible hack to be able to sync multiple things down. We'd also talked about not having that: if you split a deployment across 100 physical clusters, you don't want to have to store 100 objects and update 100 objects; that turns into a write-amplification problem.
A
So we talked about having some hand-wavy thing of virtual resources, or virtual per-physical-cluster objects, so that when a syncer says, give me a deployment for me, or give me anything for me...
A
...something else would answer that with, here is the slice of the deployment for you, and not have to store 100 copies of subsets of things. That was the last we talked about it; I don't think we went into more detail on it, or maybe I...
C
Certainly there is an element here, and Steve and I were talking about this a little bit: the idea of a virtual workspace, a virtual logical cluster, that's a little bit like an aggregated API, in that it's interpreting what's going on. Having one of those do that transformation would be one option, because within it you'll have a set of APIs: you're going to have the APIs that the syncer expects to see, that it needs to copy down to the cluster, the LCD.
C
So that may be one mechanism, but I think there may be others. One of the things that was striking me is that we should probably draw a diagram of terminology for the different parts. We've been using "logical cluster" and "physical cluster", we've talked about the syncer having its own terminology, and when we talk about controllers, which level they run at, you know, delegated controllers, or whatever it means when you let the underlying physical cluster manage the object.
C
We should probably come up with terms that describe what a higher-level, kcp-level API object is, like control-plane API versus physical-cluster API, or logical API versus physical API. Whatever we come up with, some terminology so we're all using the same phrasing, and then, any time we have a lack of clarity, we show on a diagram what that actually means, because I think we're starting to converge, but we haven't been particularly rigorous about it.
E
Yeah, I had an even more general question about, I don't remember exactly where we are, about labeling, you know, or using affinity and anti-affinity, but anyway: any attempt at changing, for the sake of syncing to physical clusters, the context that the external client controller, which points to kcp and only sees kcp, sees; any attempt at changing anything in this context is possibly error-prone. I mean, possibly.
C
I think there would be a statement, which is, let's just say we're talking about logical APIs and physical APIs: an API that you are creating, that you are getting transparent multi-cluster on, has to be designed to be, or be compatible with, the idea that it is useful at both levels. When we did the exploration work on transparent multi-cluster, a deployment is useful at both levels, because the deployment is a chunk of things, and so the set of changes is minimal.
C
Not every physical API is going to make sense like that, and so we probably need a lexicon, kind of as you're saying, David, to describe what it means to be a physical API that is suitable for transparent multi-cluster, and conversely to define when that is not suitable, such that no matter what magic strategy we come up with in the syncer, it's just not going to make sense, and we say, okay, that may be a scenario where the logical API is actually distinct from what the physical API should be.
C
An example would be: a logical API might be creating a 12-factor app on Heroku, so a Heroku deployment could be a logical API. It's not a physical API, because there's just one controller talking to it.
C
Whenever the syncer gets involved, whenever transparent multi-cluster is in play, there's a certain set of rules that apply to those objects, a minimal set of transforms that transform a high-level context into a low-level context, or a logical context into a physical context, or something like that.
A
I think there's still a problem with that, though, to David's point. If one physical cluster is able to see the details of another physical cluster's split of a deployment, for instance, it could mess with it or mess with its own. Currently, in the demo prototype state, it has a lot of visibility into what it shouldn't, right, because it's going to use that.
E
So yeah, my point here is really about visibility. I mean that a client controller pointing to a logical cluster should not see any change done by anything other than itself, or what it expects from a typical kube cluster to do on the resources it created, and...
C
It's a physical API today, but the argument is that it could be a logical API and a logical controller, because there's nothing about it... all it depends on are deployments or a couple of other objects, but it's depending on an object that is itself a... we really need to come up with a name for this, like a transparent-multi-cluster-compatible API, or one that follows...
C
We're kind of growing the aperture of things; we weren't really talking about controllers before. Now this example, this physical controller moved up to be a logical controller, expands the aperture of the 95 percent, if we consider the use case of moving a controller from a lower level to a higher level valuable because it simplifies end-user behavior. In this case, I think it's a great one.
C
There was an example that was given: for CRW specifically, how much of the behavior of the CRW-style use case depends on looking at the details of what's going on in the pods, versus the summarization provided?
E
Very few, in fact. I still see in the code that it's watching pods, I mean the main workspace controller, but to be fair, I think it's just to, let's say, ping the controller, to push the controller to act a bit more quickly when something changed in the pod, for example, to detect that, finally, your workspace is ready to run.
E
Yeah, but it can, so yeah, sure, exactly. For now, obviously, in what I did, the workspace controller is working at the kcp level, so it just doesn't see any pods, because pods are not replicated from downstream to upstream, and it still works correctly and finally reaches the ready state and everything. And this was, this is...
C
That's actually an example that we were talking about for etcd. The HyperShift team was prototyping... you know, the etcd operator has some long-standing challenges, and they were looking at alternate designs that made it more resilient. One of the ones they were discussing that made sense for etcd was the idea of making a wrapper, a layer that sits around each pod and is kube-aware, in the sense of being aware of DNS and other things injected into the deployment.
C
It's a little bit like what Mesosphere prototyped or did with Copilot; you can think of it as an embedded controller, an embedded operator, from a terminology standpoint. That pattern actually works really well from a logical and physical separation, because, probably a lot like CRW, you put a little bit more logic close to each process, and then you keep the high-level stamping out, controlling deployment behavior, up at the high level. That actually makes it easier to...
C
You
know:
keep
that
controller
completely
detached
from
the
physical
layer.
You
can
just
move
it
up.
Just
like
you
can
move
crd
w
up.
I
guess
the
thing
I'm
looking
for
then
david
is.
We
should
be
looking
at
the
examples
of
what
are
the
missing
things
that
you
know
a
controller
like
that
would
lose
out.
You
talked
about
readiness
td
has
like
recovery
scenarios,
backup,
etc.
C
We
should
look
at
what
are
the
set
of
requirements
for
those
kinds
of
logical
clusters
and
treat
those
as
gaps
in
transparent
multi-cluster
like
we
talked
about
getting
pod
logs
being
able
to
exec
pods
yeah.
C
And David, if you can think about terminology that we would use to describe this type of controller, that would be awesome. Is this an agnostic controller, something like a level-independent operator? We should come up with some terminology that allows us to identify this type of controller versus others.
A
I
think,
for
example,
tekton's
going
to
be
challenging
in
this
way
because
it
needs
to
see
pods
like
it.
It
create
users,
create
task,
runs
tech,
ton,
controller
changes
them
into
ads,
creates
pods
and
then
watches
those
pods
until
they're
done.
C
Most of the time, I think that is a design input. We haven't really done the stepping through the pod example, but both Tekton and, we should maybe look for, one other example where we are going to need to create a pod at the logical level.
C
That's going to be hard, because a pod at the logical level is not the same thing as a pod at the physical level: a pod at the physical level has a finite lifespan, and a pod at the logical level has an infinite lifespan from a truth perspective, and that implication is going to get hairy. But I think that'll be really important for the job- or batch-style workloads, which is, I was talking about this with Rob the other... or was I talking about this with...
A
Ci
of
some
form,
I
think
batch
is
actually
a
really
good
fit
for
all
this,
because
we
don't
have
to
be
so
tight
on
latency.
You
know
like
failing
over
from
one
cluster
to
another.
We
don't
have
to
do
it.
You
know
in
a
millisecond
when
nobody
notices
we
can.
We
can
have
15
seconds
of
downtime
in
your
ci
pipeline
and
in
the
meantime,
we
moved
your
entire
thing
to
a
different
cluster.
You're
welcome,
like
seems
like
a
very
like
an
even
better
fit
than
application
moving,
but
right.
C
And we'll have to define some... we may actually end up having to do things with pods that we don't have today, like at-most-once and at-least-once semantics, because kube is definitely an at-least-once system for some parts, but there are definitely places where you need the at-most-once behavior, and you need to think about what those mean.
A
You're reminding me of a very fun Tekton bug we had where, when we asked for a pod, we got two pods. What do you do? Anyway, we're over time, but this has been very helpful. I will post the recording soon, and if you have any notes or things you'd like to discuss next time, feel free to add those. All right, have a good week, everyone.