From YouTube: Community Meeting, March 8, 2022
A: All right, everyone, welcome to the kcp community meeting, March 8th, 2022. We have a number of items on the agenda, and as I said before, let me know if there are more; feel free to add them. Stefan, I guess you have the first few, if you want to get going.
A: And I will make sure to add that topic to next week's agenda so we can talk about this more, thanks. Yeah, while we're on the topic, does anybody have anything burning a hole in their mind about prototyping in the next phase?
B: Let's go back to this one. Let me share, one second; I just want to briefly show something. As many have seen already, a couple of PRs have merged, so a couple of people were involved.
B: All workspaces, root workspaces. I think there's a PR for the demo script for prototype 2.
B: I think it's nearly done. I haven't seen it merged; I think someone is working on that anyway. So a lot of things have changed if you use the current main branch. Basically, we still have a system admin. Can you see that? No, you can't, right?
B: Can you? I think you can see. Okay, so very briefly, what has changed: if you go to the admin kubeconfig, it looks like this. It has a couple of contexts. Most of them are not so interesting; cross-cluster is nothing you will use as a user. As administrator, you will start in the current context, default, which resolves to system default, to my knowledge, so you land in some organization which is pre-created, called default.
B: So it's always there. There's a root workspace where you can see organizations, as we'll see in a second, and you can jump around between workspaces using the CLI plugins, the kubectl plugin. David updated it; the PR merged yesterday or something, so the plugin should work again. There are still some small things we have to iron out, but basically it works.
B: You can do everything you want, like creating namespaces, and then you can jump to the root workspace, which is a very special, unique singleton workspace. And if you look there at the cluster workspaces, there's just default; default will be pre-created, but of course you can create your own organizations there. So just take some cluster workspace named demo, with type Organization, that's important, and you can create this thing. Then you have a new organization called demo, and you can use the plugin we have as before.
B: So you can say: kcp workspace use demo, and then you land in your new organization. It's not a cluster workspace, which is what's meant for applications, so not everything is in there. You won't find the WorkloadCluster object, for example, so you cannot run applications easily by using the syncer. For that you have to do one more step: you basically kcp workspace create just another one. So we say, I hope I get the syntax right, create meeting, just the name.
B: Now, create was correct, and you can say use meeting, and now you are in the actual, basically, end-user workspace. And if you look at which resources are there, there's also WorkloadCluster. So from this point on, you can basically start to do whatever you did in the past. System admin, as I said, is still there; it's the workspace whose cluster name is basically the empty string.
B
Cluster
name
workspace,
but
it's
not
used
anymore.
For
anything,
a
user
should
do.
There's
a
boots,
a
policy
for
arbuck
inside
that's
only
used,
I
think,
which
is
important
at
the
moment,
but
other
than
that.
Basically,
everything
starts
from
wood.
Hood
is
an
important
one.
They
are
in
what
they
are
all
and
in
orgs
there
are
always
workspaces
for
users
all
right.
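The hierarchy walked through above (root, then an organization, then an end-user workspace) can be pictured as colon-joined logical cluster paths. A toy sketch; the helper names are made up for illustration, only the path style mirrors kcp's logical cluster names:

```python
# Toy model of the workspace hierarchy from the demo: everything starts at
# root, orgs live under root, and end-user workspaces live under orgs.
# Helper names are illustrative, not kcp code; the colon-joined path style
# mirrors kcp logical cluster names (e.g. "root:demo:meeting").

def child(parent: str, name: str) -> str:
    """Logical cluster name of a workspace `name` created under `parent`."""
    return f"{parent}:{name}"

root = "root"
org = child(root, "demo")          # organization workspace under root
workspace = child(org, "meeting")  # end-user workspace under the org

print(org, workspace)
# root:demo root:demo:meeting
```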
D: I'll add one quick comment: if you're playing around with RBAC, creating ClusterRoles and ClusterRoleBindings, and you're confused why things aren't working in the org or workspace that you are fiddling with, make sure that you're not creating them in the default system-admin context, because of that.
A: There's a shell plug-in that I have used in the past that shows you your current kube context, like cluster name and those things. Andy, your comment makes me want there to be another level of that, for which workspace you're currently in, because even just within a cluster it's very easy to accidentally be in the wrong namespace, be in the wrong context.
D: And encourage folks to use one, and find one that works well with root, orgs, and workspaces.
E: I was just saying that in the kubectl plugin you have the current sub-command, where, if it's a context that corresponds to a workspace, organization, or real workspace, then it will answer, and if it's not, then it will just answer that it's not a workspace context, in which case you can just use your kubectl.
A: You need to do it all together, yeah. It's hard enough to get lost when there are two levels, and we're adding like three more levels. But no, this is great, and, you know, more levels to get lost in is a good thing, actually.
A: You also had, let me present again, you also had an item for issues, yeah, milestones. Do you want to go through that, or do you want to do that later? Let me just do the short one first.
B: It belongs to the previous one. There's a doc, if you want to read about the RBAC topic that Andy just mentioned; just read it, we are open for comments. This is a plan, it's not implemented, so Andy and/or David, mainly, he's working on an implementation at the moment.
A: Have we at any point reached out to, like, SIG Multi-Tenancy or SIG Multi-Cluster, just, I mean, not to change what they're doing, but just sort of to give them an idea of what we're doing? This reminds me a lot of hierarchical namespaces in SIG Multi-Tenancy, but, you know, with actual more levels.
F: If you looked at what it would take to do hierarchical namespaces actually in kube, it would look a lot like the things we're talking about here, except a namespace can't change the API boundary and workspaces can, and that's actually almost more important for safety than the hierarchy. A namespace isn't a boundary for safety or tenancy; it's a boundary for names, which is a part of the story, but just not the whole part.
A: Right, yeah. It sounds like not everything, but a lot of what we're doing is making hierarchical namespaces, but for real: to actually make them a safety boundary and not just a name boundary, and to, you know, hack apart Kubernetes to make it possible. Anyway, I think they might be interested. I don't know when they meet, or whatever, but we can.
F: Also, yeah, the hierarchy there assumes that every level is the same. One of the key points, I think, even with what Stefan and Andy are doing, is that you don't necessarily have the same hierarchy: you might have a policy hierarchy that's different from your quota hierarchy, and you might have an organizational hierarchy.
A: Yeah, yeah, cool. Anything else on anyone's mind on this topic? Going once... all right.
A: Thank you. Do we want to go back to presenting? Thanks.
A: It does, it's there, okay, just taking its time. Yeah, I think we'll maybe do milestones and issue stuff with the time remaining; I think we'll have time at the end. RBAC policy use case for organization type, right?
D: Yeah, just a real quick status update: I've started opening up one or two pull requests, and there'll be more coming. If you remember the API inheritance from prototype 2, where you could go into a workspace, set inheritFrom on the spec, and point to some other workspace: that is going to go away and be replaced by the API exports and bindings that we've been brainstorming and designing for a while. On my laptop, what I was able to do was rip out inheritFrom and replace it with API bindings.
D: I basically pulled a CRD schema from one of the kind clusters; I think I tested with Endpoints from core v1. I converted it to an APIResourceSchema, which is basically our way of defining a CRD without it being an actual CRD that users can create instances of. So I created that APIResourceSchema, and I created an APIExport for it in some workspace.
D: I created another workspace and did an APIBinding, so this was testing cross-workspace binding. I modified the CRD code so that it actually would populate discovery and handle the requests correctly, and I was able to get an API bound into a workspace just like the API inheritance demo showed before, but with the new designs. So I'll be continuing to work on that this week.
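The flow described here can be sketched as a toy model. The classes and fields below are simplified stand-ins for APIResourceSchema, APIExport, and APIBinding, not kcp's real types:

```python
# Toy model of the APIResourceSchema / APIExport / APIBinding flow described
# above. All class and field names here are illustrative, not kcp's real API.

class Workspace:
    def __init__(self, name):
        self.name = name
        self.schemas = {}    # APIResourceSchemas defined here, by name
        self.exports = {}    # APIExports offered to other workspaces
        self.bound = {}      # resources made available via APIBindings

    def create_schema(self, name, spec):
        self.schemas[name] = spec

    def create_export(self, export_name, schema_names):
        # An export publishes a set of schemas under one name.
        self.exports[export_name] = [self.schemas[s] for s in schema_names]

def bind(consumer, provider, export_name):
    # An APIBinding in `consumer` pulls the exported schemas from `provider`
    # into the consumer's discovery, without copying any CRD objects.
    for spec in provider.exports[export_name]:
        consumer.bound[spec["resource"]] = spec

provider = Workspace("service-provider")
provider.create_schema("endpoints", {"resource": "endpoints", "group": ""})
provider.create_export("core-endpoints", ["endpoints"])

consumer = Workspace("meeting")
bind(consumer, provider, "core-endpoints")
print("endpoints" in consumer.bound)  # the API is now served in `meeting`
```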
A: Oh, Steve has added a late-breaking item. Oh, and there's more, oh my goodness, coming so fast! Steve, you wanna talk about Cockroach as a backing store? Yep.
G: So I think I've mostly validated the idea that we can use it as a drop-in replacement. Everything seems to be totally functional; we're passing all the unit and integration tests. I'm still currently wading through all the things we broke with our year of hacks on our fork and never bothered to validate, so actually running e2e is proving to be a bit tricky, but we're getting really close. Hopefully this week we'll be able to validate e2e.
G: I guess I was expecting a little bit more of the baseline API semantics to be validated at a less complex level, but I think all of the important "I can actually use watch" sorts of tests happen in e2e and only there. And then I think we should probably reconsider what the next steps are, to figure out where we are.
H: So can I ask some stupid questions? I haven't been intimately involved here, so I'm basically lost. When you talk about using Cockroach, it raises a couple of big questions. One is: the API server has this watch cache that grows with the volume of addressable data. Are you turning that off or letting it grow? The other question I have is: the kube API has these MVCC semantics through the resource version.
G: Yeah, so as far as the watch cache: one of the reasons we're looking at Cockroach in the first place is the implications of the watch cache, and underneath that, the implications of etcd having to store everything in memory for it to be functional. That presents scaling problems for everyone, and so the idea with Cockroach is: can we avoid that?
G: So the watch cache is off, and we expect, if we implement a watch cache and we need one, we'll probably have it at a different layer, just based on how some of the lower-level functionality works with Cockroach. And then, yeah, with resource version and whatnot: Cockroach specifically was chosen because it supports that.
H: Right, and equivalence. They don't use the word resource version, so are you referring to the server timestamps from the time-traveling feature?
F: The hybrid logical clock is a totally ordered, totally ordered per resource type, effective resource version, because that is part of the serializability guarantees that Cockroach offers, and we still have to validate all of the implications of it. I'd probably say we're 99% sure at this point that we can offer all those semantics, but we still have to, as Steve noted, completely validate it.
G: Yeah, so there are obviously some gotchas. For instance, the hybrid logical clock is larger than one 64-bit integer, and so there's a little bit of hackery going on: right now I'm still collapsing everything into one 64-bit int so we don't change the surface of resource version to users.
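One hedged way to picture the collapsing described here: CockroachDB's hybrid logical clock is a wall-clock timestamp plus a logical counter, wider than 64 bits, and a lossy pack into one integer might look like the sketch below. The bit split is an assumption for illustration, not the prototype's actual scheme:

```python
# Illustrative only: CockroachDB's HLC is a (wall-time, logical-counter) pair
# that does not fit in 64 bits; the prototype's real packing may differ.

WALL_BITS = 52      # assumed split: millisecond wall clock in the high bits
LOGICAL_BITS = 12   # assumed: low bits carry the logical counter

def pack_hlc(wall_ms: int, logical: int) -> int:
    """Collapse an HLC into one 64-bit-sized int, preserving ordering
    as long as the logical counter fits in its allotted bits."""
    assert logical < (1 << LOGICAL_BITS), "logical counter overflow"
    return (wall_ms << LOGICAL_BITS) | logical

def unpack_hlc(packed: int) -> tuple:
    return packed >> LOGICAL_BITS, packed & ((1 << LOGICAL_BITS) - 1)

# Ordering of the packed values matches HLC ordering:
a = pack_hlc(1_646_000_000_000, 3)
b = pack_hlc(1_646_000_000_000, 4)   # same wall time, later logical tick
c = pack_hlc(1_646_000_000_001, 0)   # later wall time
print(a < b < c)       # True
print(unpack_hlc(b))   # (1646000000000, 4)
```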
G: Yeah, the downside is that a lot of them are, and there's actually quite a lot of places, even in the core kube code base, where that happens. So I think with kcp, if we break that, it would be unpleasant; the hope is we wouldn't need to.
F: And there are two prongs: there are community-wide changes that would slowly move people away from that, which would unlock it, but also the pragmatic question of how you meet people where they are today, as Steve's noting. You have to take both of those, but they're two separate prongs. We would find ways of doing this in the ecosystem; we've done it once or twice in API machinery, we've taken baby steps towards it.
H: Right, yes. I'm dismayed to hear that there are a lot of violations, but yeah, that clearly implies a two-pronged approach.
G: Yeah, generally, what I found when I was looking at it: the number one most common place where users are currently parsing resource version is when they have an outgoing mutation cache and they want to know, has my object changed since I last updated it? They haven't implemented a generation, so they're effectively using resource version as a hypersensitive generation, where it changes on every update.
G: I think in most cases that could be replaced with a generation that captures the semantically meaningful bits, but yeah, it would take some time.
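The contract Kubernetes documents is that resourceVersion is an opaque string: clients may compare it for equality (plus the special "" and "0" values on list and watch requests) but must not parse or order it. A minimal sketch of the fragile pattern described here versus the safe one:

```python
# resourceVersion is documented as an opaque string: compare for equality,
# don't parse. The fragile pattern below breaks as soon as the server emits
# something that isn't a plain 64-bit decimal (e.g. a packed HLC).

def changed_fragile(cached_rv: str, observed_rv: str) -> bool:
    # Anti-pattern: assumes resourceVersion is an ordered integer.
    return int(observed_rv) > int(cached_rv)

def changed_safe(cached_rv: str, observed_rv: str) -> bool:
    # Contract-respecting: any difference means the object changed since
    # the version we cached; no ordering is inferred.
    return observed_rv != cached_rv

print(changed_safe("4885", "4890"))   # True: object changed
print(changed_safe("4890", "4890"))   # False: still the version we cached
# changed_fragile would raise ValueError on a non-decimal resourceVersion.
```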
H: So I think you're alluding to there really being two categories of, as you say, users here, or clients: there are some particular controllers, and then there are the generic client libraries. The latter is more disturbing. We can address, or have, use cases that don't use certain controllers, but if there's parsing in client libraries, that's bad for everybody immediately.
G: Yeah, and I think the other thing with the ecosystem: I wouldn't be surprised if someone using some strongly typed language somewhere has a kube client that they've written that explicitly parses the resource version field into an integer, and it would be ideal not to have their thing fail to run, even if they're not using it as an integer.
H: Well, there's always the problem that there's the source we can see in-tree, and then there's all the source that we can't see. As far as I'm concerned, there's just no hope of enforcing the desired discipline on source we can't see, apart from getting all the source that we can see to follow the discipline, giving lots of warning, and flipping a switch; people that break the rules suffer the consequences.
F: And that's partially why we're doing this in kcp. I want to be really clear here: the advantage with kcp is that we're going after workloads and use cases that potentially are much larger than kube, even though initial uses wouldn't be. That gives us a plausible mechanism to have a reduced set of constraints that still satisfies most clients, and to specifically test the hypothesis. Which goes, of course, like: oh man, I really want to go to kcp. Oh no, I can't parse the resource version. Okay!
F: Well, here's what I do to fix it. Oh, okay, now I know I fixed it. That then gives us a better footing. That's the two-legged part as well: we can fix in-tree, we can have a really compelling reason to make this change, and then we can individually go work with teams, as you said, having the switch and a rollout plan. We were going to have to think about this anyway, but knowing what semantics the world needs is what we can't be confident of.
F: Steve has a pretty comprehensive list, that I think Stefan you also helped with, of: here are the semantics we think we offer to clients of kube. We've never actually done that exercise in kube, and so that's a document that would ask: do we actually provide these semantics or not? Can we tell whether we support them or not? That's a larger community discussion as well.
H: I think I followed most of what you said, but I got lost thinking about the semantics we provide, because today it is a 64-bit integer and we have no bounds on what client code is making use of any details of that.
F: So you can reason about, when you dispatch a write to the server, when you've seen that write. We don't actually support that very well in kube, and our published semantics don't support it. If we were to commit to supporting it, what changes would we need to preserve in the next version of resource version? What would be the evolution of that that supports the semantics we need? That's the key point: why people are parsing resource version is very important to understand before we change the parsing of resource version.
H: And also compare with zero and compare with empty string; those are two special values that get called out. But apart from that, comparing for equality, and I totally agree, I think it would be great to identify an abstraction that says what it is we support. And I think you put your finger on exactly the important one: ordering. If we can get ordering, I think that's a huge leg up. I mean, I've written controllers that follow the equality-only rule, and it's a real pain in the ass.
A: Yeah, so I'm sure there's a lot more on this topic we can, and undoubtedly will, go into. I want to make sure Stefan gets a turn; he has had his hand up for a little while.
B: Yeah, I wanted to quickly talk about next steps, Steve's question. I think I asked Steve in private already, and Mike also mentioned the watch cache: if it is disabled, we lose performance on things like label selectors, I guess, these kinds of things which are not in memory anymore. They will scale linearly with the database size.
F: Filtering is bad, so yeah, anywhere you've taken a lot of data and filtered it down would theoretically be bad, although we're losing performance over a workload we don't support today. The primary one in kube is nodes selecting by field name, and a few label-selection queries that people do at scale.
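The access pattern at issue can be sketched as follows: without an index, a label-selector list is a linear scan over every stored object, which the watch cache currently hides by holding objects in memory. A toy model, not kcp code:

```python
# Toy model of server-side label-selector filtering: without an index, the
# cost of each filtered LIST is a linear scan over every stored object,
# which is what the watch cache hides by keeping objects in memory.

def list_with_selector(store, selector):
    """Return objects whose labels contain every key=value in `selector`."""
    return [
        obj for obj in store                      # O(total objects) scan...
        if all(obj["labels"].get(k) == v          # ...per filtered request
               for k, v in selector.items())
    ]

store = [
    {"name": "pod-a", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "pod-b", "labels": {"app": "web", "tier": "backend"}},
    {"name": "pod-c", "labels": {"app": "db"}},
]
print([o["name"] for o in list_with_selector(store, {"app": "web"})])
# ['pod-a', 'pod-b']
```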
B: I see two next steps, or two directions. One is basically to do research into index support of some kind from Cockroach, so changing the storage stack in a way that those things get faster again, and the other direction is basically to have partitioning, so again have a watch cache, probably per partition.
F: The watch cache, I would honestly say, is a very, very, very targeted solution to a very specific set of general problems. The watch cache was intended to mitigate etcd watch performance on an earlier version of etcd; that is why it exists in kube. Today we have continued down that path by making small incremental improvements to it. There is something fundamental here; I would maybe put it another way, Stefan: what we're trying to do is reframe our access patterns in terms of efficiency.
F: The list of access patterns we have is different from kube today, but no one can say yet how much they need high-cardinality filtering; the syncer is probably the worst of the high-cardinality filtering, we already know that. I would say we should be designing to have the right set of trade-offs. If Cockroach is bad at watch, that's a reason to have things like a watch cache, but I don't know that the solution shape should just be assumed; I don't think we should jump to it.
F: We will hold things in memory for specific patterns from clients no matter what; controllers will hold things in memory, that's how reconciling controllers work. Their working set is defined by the largest machine they can run on. But the other stuff is open. I'm really worried about being too myopic in the partitioning discussion, just because there's only a handful of access patterns that we actually have to support, because the clients are still basically trying to read and hold everything in memory.
F: A workspace is implicitly a better in-memory thing than the set of resources across multiple workspaces, the same way in kube a namespace is. If you only need a subset of things, a single resource or a namespace is better as the set of things to hold in memory than everything in the cluster. But I think we're already saying we will have scale dimensions outside of memory for resource types.
A: So, this is amazing. This is fantastic, and I love it, and I want to start telling people about it, because I think this is an example of something, like a science project that kcp has done for kcp's purposes, that may be very interesting to the broader ecosystem and community of folks; like the number of people who cared about Kubernetes being able to run on SQLite, just that, and in a different dimension. I want to tell people about this, but I don't want to...
A: Yeah, yeah. I guess what I'm getting at is: what do we need to satisfy before we can start saying that this works, more widely than in this meeting? Also, a lot of the discussion we're having is: what does this solve? What problems does it practically solve, and what problems doesn't it?
A: To Clayton's point about data residency and latency to the client and stuff: it will not completely solve those things, but it will give a new set of tools to people who are interested in that. And I guess I want to push on us to have those answers, so we can share and say: look at this cool thing that Steve has done.
G: I think the scale one is a pretty easy sell, and then the resident set size is a pretty easy sell, but yeah. Let's pass the API machinery and e2e tests, and then I have a local kind setup that anyone could use, with enough hacks, to try it out, which would be cool.
F: I also want to add a caution, just so folks are aware: there was a discussion going on in steering where there are concerns about etcd being effectively unmaintained, just because of difficulty and people moving on from their roles in the etcd community, and so there's a separate discussion that's going to be happening at the same time. We very, very much don't want to describe this as: we're doing this because we don't want to support etcd.
F: That is a completely orthogonal thing; most of the folks involved have no intention of using this as a solution to that problem. This is more about opening up scale options. But if it is asked, Steve, if you get asked, or if anybody here hears this, I think people should just say: no, this doesn't replace etcd, because that would break the community. This is opening up options.
H: One of the questions, Steve: Cockroach has an enterprise tier and a free tier, right? Are you using enterprise features, or only free features?
G: That's a really good question. There are certainly a number of things that the enterprise tier makes a little bit easier, but everything's available in the free tier, so I'm not using anything in the enterprise tier right now.
F: We should put that in the design document as well, Steve. Yeah.
A: Yeah, I think, Clayton, to your point about how we position this vis-a-vis etcd: there's precedent already, right? Like kine is a different etcd... replacement is already a loaded word, but a different option for not using etcd, which is interesting because it has different scale and maintainability and whatever characteristics.
A: It has some good, some bad characteristics, but yeah, thank you for that context; that's useful to know in the general sense, so that we don't step on a beehive or something. And I don't think we've even committed to using it ourselves, right? We're not even sure we want it, but at least we've, Steve's, gotten it to the point that it's plausible that it could work, which I think is super exciting.
H: So, yeah, again, one basic stupid question here. I've also seen that there's talk about sharding the kcp servers. Would the idea be that every shard talks to the same Cockroach, or each shard talks to its own Cockroach?
F: That's the issue we were bringing up before: if everybody talks to the same Cockroach and an admin fat-fingers a DROP TABLE, then we don't have operational resilience. A key part of a control plane is understanding where those requirements would be, so it's possible that, no, we would anticipate there being either... you know, you can set up a hierarchy of control planes that have different APIs and different use cases and different audiences.
We
haven't
settled
on
the
exact
requirements
that
you
need,
that
we
believe
that
a
global
control
plane
would
need,
because
we
don't
understand
all
of
the
use
cases
that
would
lead
someone
to
like
an
up
very
concrete
example,
if
you're
a
large
enterprise,
you're
deploying
stuff
on
multiple
clouds,
and
you
want
a
control
plane,
you're
a
little
bit
worried
about
blast
radius.
If
someone
compromises
that
data
store
and
gets
access
to
right
access
that
could
change
everything
in
every
cloud,
you
may
wish
to
have
actually
very
hard
physical
boundaries
between
parts
of
your
control
plane.
F
F
F
F: We're not there yet, in terms of understanding all the requirements.
A: Jason, we all wanted to solve all problems, yes. Yeah, that'd be great. Unfortunately, it's those stupid fat-fingered humans that are the real problem, so when Skynet comes along and takes care of that, we'll be fine. But until then, anyway, yeah. I guess, Steve, I will also mention that I think the CFP for KubeCon in October is opening at the end of March.
A: We have, I think, one more item. Unless, I mean, we can talk about Cockroach more, we still have 20 more minutes, but there was one more thing I think Stefan added.
B: In addition to Andy's topic: we have a pretty minimal API at the moment which is merged in main, and that's what Andy is implementing, APIBindings, APIExports. There's nothing advanced yet in main about evolution and checking of API changes, like: is this breaking if you remove a field, something like that. That's all in this PR, so if you want to take a look, this is basically the future.
B: The value-adds on top of APIBindings, which CRDs don't have: you can create schemas, which are basically snapshots of CRDs, and the system will tell you when you do something bad which is potentially breaking clients. You can override; you can acknowledge certain things, like when you make a certain change, it's breaking, we know that, but maybe it's something a user, or not a user but a CRD author, wants, and you can say: yes, this is something I know is happening.
B: I accept that and I want to roll it out. Basically, this is a pattern in this API proposal: you get warnings, you get rejections; admission will actually do those checks against APIResourceSchemas, but you can override, and potentially in the future you can maybe migrate or have different ways to solve problems of API evolution. So, shout-out: please take a look, read that, give comments, ideas, everything that is missing in CRDs today which you would love to have. This is a chance to add it.
A: Awesome. That is also a topic that I want to watch a KubeCon talk about, so we should. I think it's interesting that we are solving all the problems, or doing all the features, in CRDs that we wish we had, and we're doing it using CRDs, effectively; we are building the layer on top of CRDs. Anyway, yeah, Carlos, you have your hand up. I don't know if this was about this topic or the last one or another one.
I: It was for the Cockroach topic. I have a question: will the database be locked to a single cluster, or, being a regular SQL database, can it be accessed and modified by another resource?
I: Yeah, so my use case is mostly academia, and we have been having this question in conversation over the past year, mostly on: what if I can have Kubernetes running alongside academic resource managers, but then be able to be friends, right? Like: hey, I'm using this node, okay, I will leave it to you; oh, I'm using this one. Having a single state manager where they can communicate, like: I'm using this resource.
F: So, if the workload could benefit from having a SQL store that was close to the control plane, I think there are things to think about. Cockroach certainly offers the ability to break up and move that data around, so just the fact that you have two tenants on the same database doesn't necessarily mean that tenant A or tenant B even have to be co-located.
F: However, there is a trust domain: giving someone even a limited set of SQL rights to write alongside that store might impact the security mental model of it. It's not that that's a hard blocker; it would just be a point of caution. But I do think it's much closer than, say, etcd: you wouldn't recommend that someone reuse the core cluster's etcd today, except for things that are shipped and supported by the same teams, where you can reason about both of them.
F: You wouldn't necessarily open that up to end users. I think there'll be a similarity to that, but I also think there's a door, potentially, and this is something that Cockroach has shown with their multi-tenant layer on top, which isn't open-sourced at this point but might actually be amenable if there's enough interest: some of those kinds of separations could potentially lead to the idea that you could co-locate control-plane-like and data-plane-like capabilities.
F: I just don't think we're ready to know, so I think it's worth exploring. There's a 10, 15, 20 percent chance that it turns out to be a really bad idea for some reason we haven't figured out yet, but it could be a very, very good idea. That could be a huge benefit not just for academic systems but, actually, if you have a control plane and that control plane is already a single point of failure for your entire enterprise.
F: And this is a great follow-up topic that we should put in the database design as a sub-thread, so that we don't lose track of it. Maybe that's something we can add to that design doc. Do we share that with kcp-dev yet, Steve? Okay, I'll share that with kcp-dev and then I'll create some subsections in it, Carlos, so we can add some of those notes and some of the things that other folks discussed.
A: Cool, thank you, yeah. Before we move on, does anyone have any questions or comments about the future of API evolution? I think that's a very interesting area. Otherwise, yeah, take a look at the PR, poke around, see what you think about it. All right, we have about 12 minutes left. Oh, thank you for filing an issue on that. Do you want to go over issues filed since last week?
A: Yeah, I guess we can go through the test flake. Andy, did you have ideas about this? I saw in the Slack: not really, just some sort of slowness or race.
D: Some weirdness somewhere, yeah. We have somebody who's gonna work on it. Now that they commented, I can probably assign them.
F
Right, as long as there's a way to list logical clusters from an admin context, that may actually be an out for us. It may be that there's just another semantic we can use that's not the regular kube delete but acts like it: a virtual workspace available to a root admin that allows them to see all things across all child workspaces and delete them, potentially, or to create a virtual workspace that maps to a logical cluster that does magic discovery at the store level.
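A tiny sketch of that idea, as a flat in-memory store plus an "admin view" spanning every logical cluster. The key layout, class names, and workspace paths below are invented for illustration and are not kcp's actual storage schema or API:

```python
class Store:
    def __init__(self):
        # key: (logical_cluster, resource, namespace, name) -> object
        self.objects = {}

    def put(self, cluster, resource, namespace, name, obj):
        self.objects[(cluster, resource, namespace, name)] = obj


class AdminView:
    """A 'virtual workspace' that spans every logical cluster in the store."""

    def __init__(self, store):
        self.store = store

    def list_logical_clusters(self):
        return sorted({key[0] for key in self.store.objects})

    def delete_cluster(self, cluster):
        # Delete everything under one logical cluster, regardless of resource type.
        doomed = [k for k in self.store.objects if k[0] == cluster]
        for k in doomed:
            del self.store.objects[k]
        return len(doomed)


store = Store()
store.put("root:org:team-a", "configmaps", "default", "cm1", {})
store.put("root:org:team-a", "secrets", "default", "s1", {})
store.put("root:org:team-b", "configmaps", "default", "cm1", {})

admin = AdminView(store)
print(admin.list_logical_clusters())            # ['root:org:team-a', 'root:org:team-b']
print(admin.delete_cluster("root:org:team-a"))  # 2
```

The point of the sketch is that because the logical cluster name is part of every key, a cross-workspace list or delete is just a prefix scan rather than a per-workspace API call.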
B
It's a very easy option, and this is a good first bug for somebody. We have the convention that "system:" is a prefix for workspaces which are local, just for this shard, and they are never accessible to a normal user. So we could change the authorizer that he added, or find some other place where we check.
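A minimal sketch of that "system:" convention: names with the reserved prefix are gated to privileged identities, everything else falls through to the normal authorization chain. The function and group names here are illustrative, not kcp's real authorizer:

```python
SYSTEM_PREFIX = "system:"

def authorize(user_groups, workspace):
    """Return True if access to the workspace is allowed at this layer."""
    if workspace.startswith(SYSTEM_PREFIX):
        # Only privileged system identities may touch system: workspaces.
        return "system:masters" in user_groups
    return True  # defer to the normal authorization chain otherwise


print(authorize({"system:masters"}, "system:shard-local"))  # True
print(authorize({"developers"}, "system:shard-local"))      # False
print(authorize({"developers"}, "root:org:team-a"))         # True
```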
F
I'm not really sure that that's how we wanted to separate logical clusters and storage, though. I mean, it's kind of interesting, because, yeah, the colon makes sense for workspace names. But would we ever want to use that colon to do composition of keys, for instance? Would we ever potentially have a workspace naming scheme?
H
Okay, also, you could reserve the first colon for one purpose and leave the rest of the string open, yeah.
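That suggestion can be sketched in a couple of lines: split only on the first colon, treat the prefix as reserved, and leave the remainder opaque. The function name is made up for illustration:

```python
def split_reserved(name):
    """Split a workspace path on the FIRST colon only.

    Returns (reserved_prefix, remainder); prefix is None if no colon exists.
    """
    prefix, sep, rest = name.partition(":")
    if not sep:
        return None, name  # no colon at all: no reserved prefix
    return prefix, rest


print(split_reserved("system:admin:scratch"))  # ('system', 'admin:scratch')
print(split_reserved("team-a"))                # (None, 'team-a')
```

Under this scheme, later colons stay available for key composition without colliding with the reserved "system:" meaning.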
D
I was thinking, like, if we needed to say, for whatever use case, "give me a list of all logical clusters," or "I need to go find these logical clusters so I can go delete some things that need to get deleted." We don't currently have a way of registering and tracking logical cluster names, and the only way that I can think of to do that would be getting in the handler chain and tracking it that way. But my question is, do we want to do that?
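The handler-chain idea might look roughly like this: a wrapper around a request handler that records every logical cluster name it sees, so an admin can later enumerate them. The handler and request shapes below are invented for illustration, not kcp's actual server internals:

```python
class ClusterTracker:
    """Middleware that records each logical cluster a request targets."""

    def __init__(self, next_handler):
        self.next_handler = next_handler
        self.seen = set()

    def handle(self, request):
        # request is assumed to carry the logical cluster it targets
        self.seen.add(request["cluster"])
        return self.next_handler(request)


tracker = ClusterTracker(lambda req: "ok")
tracker.handle({"cluster": "root:org:team-a", "verb": "create"})
tracker.handle({"cluster": "root:org:team-b", "verb": "get"})
print(sorted(tracker.seen))  # ['root:org:team-a', 'root:org:team-b']
```

The open question from the discussion stands: this only sees clusters that receive traffic after the tracker is installed, which is part of why it may not be the right mechanism.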
F
At some level, all writes to the store need to happen on a resource schema, so at a minimum a resource schema needs to have existed to create a resource. And with the current store structure we have, anyway, a workspace delete call is just verifying that you can perform a delete across all of the valid resource schemas. There is a question: a delete that just lets you say "for all of these, a delete works" might be another option. It could be that there's just a way to represent this as a normal resource whose type is a logical cluster key, you know, a tuple of schema, workspace, name, namespace, and value, and the delete call actually just operates over that. So for discovery there's only one type of resource, and you can do something like a field selector on the resource schema type or the workspace schema type.
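Modeling every stored object as one record type keyed by that tuple makes both the field-selector and the workspace-delete semantics fall out naturally. A sketch, with schema and workspace names invented for illustration:

```python
# Every object is one flat record: (schema, workspace, namespace, name).
records = [
    {"schema": "widgets.example.dev", "workspace": "team-a", "namespace": "default", "name": "w1"},
    {"schema": "widgets.example.dev", "workspace": "team-b", "namespace": "default", "name": "w1"},
    {"schema": "gadgets.example.dev", "workspace": "team-a", "namespace": "prod", "name": "g1"},
]

def select(records, **fields):
    # Crude stand-in for a field selector over the tuple's components.
    return [r for r in records if all(r[k] == v for k, v in fields.items())]

def delete_workspace(records, workspace):
    # A workspace delete is just "drop every tuple whose workspace field matches".
    return [r for r in records if r["workspace"] != workspace]


print(len(select(records, workspace="team-a")))            # 2
print(len(select(records, schema="gadgets.example.dev")))  # 1
records = delete_workspace(records, "team-a")
print(len(records))  # 1
```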
F
And so, I think this is a use-case-driven problem too. We're not really crisp on the use cases we need, and we want a generic thing that's not so generic that we spend all our time on genericism, but we want to concretely solve a really key problem. This key problem, I think, basically comes down to the thing you raised, Jason, which is: an admin creates a resource in a namespace that doesn't exist in a workspace, or in a workspace that doesn't exist, and that workspace is created.
F
How would you clean that up as an admin, if somebody accidentally created a whole bunch of stuff in a logical cluster? Either you don't allow it, or you give a person a way to recover from it. And creating a workspace that you can then delete the stuff out of, just to then delete the workspace, feels like the wrong approach if you have this lower-level abstraction.
A
Yeah, Andy, go ahead.
D
You can have PVs, whatever you can put in that's cluster-scoped or namespace-scoped; it goes into a logical cluster, and between two different logical clusters there is isolation. So I can have a default namespace in logical cluster A, and I can have a default namespace in logical cluster B, and the contents are completely independent and unique. The reason that we distinguish between logical clusters and workspaces is that we want to provide some additional functionality via the workspace concept, such as typing and inheritance and other functionality.
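The isolation described above can be modeled in a few lines: because the logical cluster name is part of every storage key, the "default" namespace in clusters a and b never collide. A toy model, not kcp's real storage layout:

```python
store = {}

def put(cluster, namespace, name, value):
    # The logical cluster is baked into the key, so clusters cannot collide.
    store[(cluster, namespace, name)] = value


put("a", "default", "configmap-1", "from cluster a")
put("b", "default", "configmap-1", "from cluster b")

print(store[("a", "default", "configmap-1")])  # from cluster a
print(store[("b", "default", "configmap-1")])  # from cluster b
print(len(store))  # 2 -- same namespace and name, but isolated by cluster
```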
D
So I'll give you a concrete example. You could say: I work at Acme Corp and I'm going to provide a service to all my developers, and whenever they create a workspace, they're going to automatically get widgets and cert-manager and all sorts of other things. That is codified in a workspace type, along with API bindings and maybe some other things. A lot of this is still in development.
D
So it's visionary, but the idea is that just creating a logical cluster, or just storing stuff in a logical cluster, doesn't give you any of that. It's basically like an empty Kubernetes cluster. And so the workspaces give us organizations, and a hierarchy: organizations contain workspaces, workspaces contain namespaces, and then all of the policy and defaulting that you get with the type system.
D
Thanks for the in-depth explanation.
A
D
Can you hear me? Yeah, now I can, yeah. I think it answered most of my questions. One of the weird things, and I think that bug kind of shows it, is that there are kind of two places to go when you're talking about these logical clusters. And I suppose the persona for the top level is really admins of the super cluster, or the kcp server, the aggregate of everything.
A
Okay, yeah. I think so far the decision has been that admins can literally do everything, and so they shouldn't be blocked from creating workspaces, or from creating resources in workspaces that don't exist. Though that is confusing, as evidenced by the fact that we are talking about it.
A
Cool. Someone was putting their hand up, Alay, I think. Did you have something on your mind?
J
A
Yeah, that's a good question, though, and I think it comes up a lot: what is the difference between these things, and why? With that, I think we have run out of time. We made it through two of our issues, but it was a good discussion today on a wide range of topics. So good work, everyone, and we'll see you next week. I'll try to get this recording up later today. See you, folks.