From YouTube: Community Meeting November 23, 2021
B: Excellent, we'll see if it actually is. All right, welcome to the kcp community meeting, November 23rd, 2021. We have a few items on the agenda. It tells me that it's recording, so we'll see if that gets cut off. Yeah, I wanted to share and go through — Stefan was asking, I don't know if he is here; well, it'll be recorded anyway — a sort of walkthrough of how the namespace scheduler works, or mostly works.
B: Today it's pretty simple. It's not terribly deep, though there's a couple of little tricks it plays. So this will become a PR — or this is the PR that's out right now — but basically it has a couple of tricks. One of the tricks is: I kept needing to translate a GVK into a GVR, and so I built this little doodad to do that for me. This is another trick. Can everybody see this — is this big enough?
B: I guess it's taking up half the screen. One thing it needs to do is — well, let me step back. What the namespace scheduler does is: any time it sees a namespace in any workspace, it tries to pick a cluster for it. Right now it's very random: it says, list all my clusters, pick one at random, assign that namespace to that physical cluster — across all workspaces. And the other thing it needs to do is, any time—
B: It needs to do this for every resource of every type in that workspace, and so one of the things it does is it has this dynamic discovery shared informer factory. It's basically a collection of shared informer factories — dynamic shared informer factories — that it starts up. Inside Start it discovers all the types in this workspace, and then it polls every—
B: I forget how often — a minute, something like that, or 10 seconds. It polls types every minute, discovers new types that have recently been added to this workspace, and starts up informers for those too. And then it tries to look exactly like a shared informer factory to everybody else, with AddEventHandler and these handler funcs — we'll show later how that actually looks to use. But basically, every minute it will look up the types using discovery and filter out some that it knows it doesn't care about — namespaced versus non-namespaced things, cluster-scoped things—
B: —that it knows it doesn't care about. If it can't list it or can't watch it, it doesn't care about it. If a type is new, it starts a dynamic shared informer factory for it with the event handlers and runs it. It doesn't currently forget about types — there's a TODO somewhere in here to forget about types: when a type goes away, it doesn't need to keep an informer open for it — but for now this works fine.
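The polling behavior described above — discover types, diff against what's already known, start informers only for new ones — can be sketched as a small diff function. This is a toy model under stated assumptions: the real factory operates on discovery results and starts dynamic informers, and (per the TODO mentioned) the complementary diff would handle forgetting removed types.

```go
package main

import "fmt"

// diffTypes returns which discovered GVR strings are new since the last
// poll, recording them in known. The real factory would start a dynamic
// informer for each entry in the returned slice.
func diffTypes(known map[string]bool, discovered []string) (added []string) {
	for _, gvr := range discovered {
		if !known[gvr] {
			known[gvr] = true
			added = append(added, gvr)
		}
	}
	return added
}

func main() {
	known := map[string]bool{}
	// First poll: everything is new.
	fmt.Println(diffTypes(known, []string{"v1/pods", "apps/v1/deployments"}))
	// A later poll that finds a newly added CRD reports only the new type.
	fmt.Println(diffTypes(known, []string{"v1/pods", "apps/v1/deployments", "example.dev/v1/widgets"}))
}
```

In the real controller this diff would run inside a ticker loop on the poll interval mentioned above.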
B: This is the namespace controller. Andy was completely correct that it should not take a rest config; I will fix that before too long. But basically it sets up a discovery client, a dynamic client, a typed client, and a cluster client to look at clusters.
B: These are — this is starting to get confusing — these are physical Cluster objects, from the Cluster CRD that we have, and not logical clusters, which is something we will also come up with later. So it sets up the dynamic discovery thing and just says: any time a new resource of any type, or of any newly discovered type, shows up, enqueue it in a work queue of resources to care about; any time a cluster change happens, tell me; any time a namespace change happens—
B: —tell me. It doesn't currently care about deleting stuff — sorry, it does care about deleting clusters, but it doesn't care about deleting objects or namespaces, because there's nothing to do when those happen. It filters — this was also Andy's recommendation — before actually enqueuing anything, if it can't parse the key.
B: This was something that was recently broken that I think we're fixing — I'll get to it later — but we only care about the admin cluster for now. It needs to become fully multi-workspace, multi-logical-cluster aware, but for now it should only care about the admin cluster. And then it skips anything in the kube-system and kube-public namespaces that it shouldn't sync down.
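The kube-system/kube-public skip can be sketched as a blocklist check applied before anything is enqueued. A minimal sketch — the function and variable names are illustrative, not the actual kcp identifiers:

```go
package main

import "fmt"

// namespaceBlocklist mirrors the description above: system namespaces
// are never scheduled to a physical cluster.
var namespaceBlocklist = map[string]bool{
	"kube-system": true,
	"kube-public": true,
}

// shouldSchedule reports whether the scheduler should consider a
// namespace at all; it runs before the key is enqueued.
func shouldSchedule(namespace string) bool {
	return !namespaceBlocklist[namespace]
}

func main() {
	for _, ns := range []string{"default", "kube-system", "team-a"} {
		fmt.Printf("%s: %v\n", ns, shouldSchedule(ns))
	}
}
```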
B: Sure, sure. It's currently using one queue for everything, so it's putting namespaces in the same queue as resources and cluster updates. I don't know if this matters, I don't know if we care, but that's something that we might want to do later: have three different queues for these.
B: Some error handling stuff. When it processes a resource — all the interesting stuff is in processing resources, basically — it splits everything out and then passes in the logical cluster name of that resource, the unstructured object, and its GVR, the group-version-resource.
B: Oh, here's that namespace blocklist — yeah, yeah, keep going to the next thing. The interesting part in here, in reconciling a resource: it looks up, in the lister cache, its namespace in the same logical cluster. This was nice.
B: If it's already assigned to the right cluster, it doesn't have anything to do. Otherwise it patches — it patches with a merge patch. It should probably use a JSON patch, but for now this works. For namespaces: when it reconciles the namespace, if it doesn't have an assignment, it will list all clusters — there's a TODO to filter out unready clusters — and set the assignment.
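Setting the assignment with a merge patch, as described, only needs a tiny JSON body. A sketch of building that body follows — note the label key is an assumption made for illustration, not necessarily the key kcp uses:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// clusterLabel is a hypothetical stand-in for the label key the
// scheduler sets; the real key lives in the kcp codebase.
const clusterLabel = "kcp.dev/cluster"

// assignmentPatch builds the JSON merge-patch body that pins an object
// to a physical cluster. A merge patch suffices here because the
// scheduler only ever sets this one label and never reads the old value.
func assignmentPatch(cluster string) ([]byte, error) {
	patch := map[string]interface{}{
		"metadata": map[string]interface{}{
			"labels": map[string]string{clusterLabel: cluster},
		},
	}
	return json.Marshal(patch)
}

func main() {
	body, err := assignmentPatch("us-east-1")
	if err != nil {
		panic(err)
	}
	// The body would be sent with types.MergePatchType in a real client call.
	fmt.Println(string(body))
}
```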
B: This is something else — sorry, Steve says: explain why JSON patch. I don't have a strong preference either way, but my understanding is that JSON patch works for everything and merge patch only works for some things. Is that—
A: It's strategic merge patch that only works for some things. You should be able to do either JSON or merge patch for this. — Fantastic.
D: But any place where you're basically trying to make updates over time, this would be an interesting place to try it. It's not in any way critical, but it would allow you to basically dispatch and say: hey, this is what I expect it to be after it's gone through all the internal calculations of whatever you decided the object is — and that becomes the server's problem. So in theory, syncer is a perfect use case for—
B: —server-side apply. So it doesn't care what the value was, ever; it's only ever setting a label on an object, and so I don't think it matters if it still—
B: Right — if a human comes in and sets it to something else, then this reconciler will put it back. Right, and again, it's a useful input to the next kind of iterations; it's not critical now. Gotcha, great. So: if a namespace is not assigned to a physical cluster, it picks one at random and patches it. It doesn't currently check—
B: —whether that cluster is ready. And then — this is all going to get moved elsewhere, but basically — as soon as the namespace gets assigned, it goes and looks up all objects in that namespace and then also assigns them to that same physical cluster. This just enqueues items for all of these, so that reconcile-resource is responsible for it. So it's not blocking, doing this inline for every resource in the namespace.
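The fan-out just described — enqueue a key for every object in the newly assigned namespace rather than patching them inline — can be sketched with a toy queue standing in for client-go's workqueue. The names are illustrative, not the actual kcp code:

```go
package main

import "fmt"

// queue is a toy stand-in for the controller's work queue
// (client-go's workqueue in the real code).
type queue struct{ items []string }

func (q *queue) add(key string) { q.items = append(q.items, key) }

// enqueueNamespaceObjects mirrors what happens after a namespace gets
// its cluster assignment: every object in the namespace is enqueued so
// the resource reconciler labels each one later, instead of the
// namespace reconciler blocking while it patches them all itself.
func enqueueNamespaceObjects(q *queue, namespace string, objectKeys []string) {
	for _, key := range objectKeys {
		q.add(namespace + "/" + key)
	}
}

func main() {
	q := &queue{}
	enqueueNamespaceObjects(q, "team-a", []string{"deployments/web", "configmaps/app"})
	fmt.Println(q.items)
}
```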
E: May I ask you a question, Jason? — Yeah. — In the existing way API negotiation — I mean, initial API negotiation — is implemented regarding types imported from physical clusters, we now maintain on each physical cluster object a list of the APIs that should be synced, in case, for example, a given physical cluster had a change on a given API that made it incompatible with the, you know, accepted, negotiated API schema in the logical cluster.
E: So now we have — I mean, we changed the granularity of what is supported or not by each physical cluster, and of what should be synced to each physical cluster — we brought the granularity, sorry, down to the level of the API itself. Is that something that is taken into account here, and how would we manage that, knowing that the whole point of this controller is to sync all the objects of all the types of a given namespace? I mean, how does it relate?
B: Yeah, currently this doesn't take any of that into account. This is just going to go through everything, whether or not it will end up getting synced there according to the syncer's logic or API negotiation. It could be smarter and take into account only types that are negotiated to end up on that cluster. Like you said, if it's not going to be valid down there, then we can avoid the work of assigning it there, because nothing will sync it down. Yeah.
E: Yeah, and my question was: how will— because for now, the idea of syncing all the content of a given namespace to a given physical cluster is mainly to simplify the question of, you know, the relationships between objects and the interdependency of objects inside the namespace. So I assume that we would still want to keep this constraint of moving either none or all of the objects of a given namespace to a given physical cluster. So maybe it would be—
D: So once it's been assigned, everything that moves that object is a two-phase commit from then on, like in the classic database sense: you have to indicate your desire to move, and then, if you have two participants, you basically have to wait for that other participant to act — to move it off. And so even in that case, the thing you have to do at the top level is say: hey, this is no longer a candidate. For anybody who's already assigned, it's still a move, right? There is no—
D: There is no, like, stop. So that's got to happen at the higher level, and then the syncer would treat that as no different from any other remove. But in the big sequence diagram in the sky of the state machine here, once you're assigned, you are owned by that cluster, and you have to wait for all that. So it's still a loop outside of the syncer, roughly like the scheduler—
D: —has to detect that, indicate it, and that has to propagate. Ideally that's completely opaque to the syncer and the scheduler — or maybe it's not, because the scheduler does need some bit of feedback there. But somebody has to make that pushback decision, and it's probably that the syncer knows when it fails on an individual cluster. But the syncer's also responsible for telling the API server about the updated schema of the object, right? Like, the source of truth for a type is on the physical cluster.
D: So the syncer has two roles: one is to keep those API types up to date on the kcp side, and the other is to act on the flow-through. If we fail here, that back-pressure mechanism is a back-off failure — that's an unexpected failure. When the API type changes, that's got to go up, propagate over, come back, and then that should result in the syncer seeing fewer things — seeing things be moved off. Yeah.
B: Yeah, so this doesn't currently take into account API compatibility — or, you know, type compatibility. It literally just randomly selects a cluster, sends the namespace into it, and hopes it works. But that is an interesting case: when it does take into account the types available on a physical cluster, it can still schedule whole namespaces, and if a Foo object shows up, and that Foo object in this namespace is not compatible with the cluster that the namespace is assigned to, that should — I think this is what we're saying — trigger an unschedule of that namespace. Yeah.
B: Yeah, it's currently very dumb and is going to just do the whole namespace — you know, schedule it — and the syncer will try to apply it, and it doesn't get any of that back pressure back up to say: oh, you're going to give me a Foo — I don't know Foo, or my definition of Foo is incompatible with your definition of Foo. Unschedule the whole namespace and reschedule — redo the scheduling, with the compatibility check, on some other cluster to find where it can go.
D: So maybe we can save this for the end of this section, but I was going to ask: for the simple one, has it reached the maximum amount of nutrients we can extract from the bone, and now we're ready to go to the next set of questions? Or — what's your thought process, Jason, on where you go from this to the next?
B: Where the next is type-aware scheduling, or— yeah. Currently it also needs to take into account cluster health. So after this I'm going to start working on the syncer side of things. The syncer doesn't currently rename the namespace when it comes through: if two logical clusters have a default namespace, it's going to try to end up as default on the same physical cluster, and that's not going to work. So I need to have the syncer—
B: —do the namespace mapping, and then also take into account at least cluster health when it tries to schedule — do more than just randomly select a cluster. And demoing deleting a cluster, or marking it as unhealthy — or, you know, tricking it into unhealthiness — and having the namespace pick up and move somewhere else: that's probably next. And then that gives us a nice foothold for — oh, it's starting to make decisions about a cluster before it sends resources there.
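The namespace-mapping problem described above (two logical clusters both having `default`) is typically solved by making the downstream name a function of both the logical cluster and the namespace. A hypothetical sketch — the hashing scheme here is an assumption for illustration, not kcp's actual mapping:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// physicalNamespace sketches one way the syncer could rename namespaces
// downstream so "default" from two different logical clusters does not
// collide on the same physical cluster. A short hash suffix keeps names
// unique while staying within the 63-character DNS label limit.
func physicalNamespace(logicalCluster, namespace string) string {
	sum := sha256.Sum256([]byte(logicalCluster + "/" + namespace))
	return fmt.Sprintf("%s-%x", namespace, sum[:4])
}

func main() {
	// Two logical clusters, same "default" namespace, distinct results.
	fmt.Println(physicalNamespace("org:team-a", "default"))
	fmt.Println(physicalNamespace("org:team-b", "default"))
}
```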
D
So,
at
least
when
you're
thinking
about
like
execution
on
this,
is
that
also
a
a
a
a
second
stage
prototype
that
will
then
be
replaced
by
a
third
stage
thing,
or
do
you
think
you're
ready
to
start
making
fundamental
decisions
about
the
architecture
of
the
loop?
Do
you
want
to
do
the
loop
first
before
you
do
it,
because
I
don't
think
there's
anything
wrong
with
proto
like
say
this
is
prototype
one
and
there's
prototype
two
and
then
potentially
saying
like
we'll
wait
for
phase
three
to
go
and
look
at
longer
term.
B: —design. I think phase two is a good spot for that. I mean, aside from cluster health and API compatibility — I think we can have a very compelling prototype with just those two things. Beyond that is things like cluster capacity — remaining compute capacity or whatever — and I feel like that's a much larger problem. I mean, useful, very important, but not— yeah, exactly, basically.
D
Like
one
of
the
things
that's
going
through,
my
head
is
like
at
what
point
do
we
evaluate
the
existing
scheduler
and
the
scheduler
framework
in
cube
and
how
we
use
it
for
nodes
in
the
context
of
its
usefulness
for
this
problem,
and
how
do
we?
How
do
we
get
to
the
comfort
level
that
we
can
either
say
three
quarters
of
the
problems
overlap?
We
can
reuse
that
infrastructure
and
that's
a
net
win
or
you
know
one
quarter
of
the
infrastructure.
Overlaps
and
we'd
end
up
rewriting
three
quarters
of
it
anyway.
D
So
we
need
to
take
the
bones
of
it
versus
the
core
algorithm
and
the
core
loop
is
actually
fundamentally
different,
because
the
cube
scheduler
is
a
one-shot.
B
Scheduler
yeah,
I
think
I
think
my
the
the
more
I
have
looked
into
this
the
more.
I
am
convinced
that
this
is
a
different
problem
than
the
like
existing
coupe
scheduler
right.
D
But
you're
still
going
to
have
to
deal
with
things
like
bulking
priority
weighting,
modeling
capacity
and
all
that
so
like.
When
do
we
do
the
about
like
what?
What
what
phase
is?
Where
we
evaluate
the
parts
of
that,
because
certainly
you
are
absolutely
right:
you're
going
to
have
to
have
a
reasonably
efficient
scheduler
if
we're
doing
per
type
scheduling
or
per
object,
scheduling,
because
you're
effectively
doing
one
right
for
every
object.
D
That's
in
a
transparent,
multi-cluster,
workspace
and
so
you're
talking
about
potentially
holding
large
chunks
of
the
whole
cluster
in
memory,
so
that
we
start
getting
into
the
sharding
working
step
problem
and
so
that
that
gets
into
like
the
efficiency
of
the
caches
that
you
need
to
support.
Making
some
of
these
decisions
they're
going
to
be
probably
simpler
than
cubes
in
some
dimensions.
D
B: Yeah, I mean, I think we can also cheat and have the syncer— you know, if this cluster doesn't want to have— we can reuse the actual kube scheduler logic by virtue of it being involved in the physical clusters, and the syncer can say: oh, this cluster didn't want more than 100 Foos in this namespace — and bubble that error up rather than re-implement it up here. Have it try that cluster, and when it fails, learn from that to move it somewhere else. That way we get the best of both worlds.
D
And
that's
actually
a
really
fair
point
because-
and
this
is
like-
I
think
this
is
like
the
this-
is
a
design
dock
that
comes
off
transparent,
multi-cluster,
which
is
like
at
least
a
proposal
for
what
this
the
scheduler
architecture
is,
and
what
the
flow
architecture
like
transfer,
multi-cluster
design
dock
has
most
of
this,
but
something
that's
very
clearly
like
we
expect
to
take.
D
This
is
input
make
this
decision
flow
it
out
to
here
when,
like
the
state
diagram
or
the
flow
dagger,
maybe
it's
just
in
addition
to
the
existing
tmc
design,
but
like
that
clarification
is
really
important
for
when
we're
committing
to
it,
something
that's
maybe
six
months
to
a
year
of
lifespan,
we
don't
have
a
commitment
yet
for
prototype
2
to
live
that
long
or
to
to
be
the
source
of
continuous
evolution.
I'm
just
trying
to
every
problem
we're
looking
at
is:
does
it?
D
D
B: Yeah, something else that has occurred to me: you said that this will fundamentally need to do a write for every object in every namespace, in order to set that label — in order to set scheduling. Is this something else we can use a virtual workspace view for? Where, instead of going through and labeling each object in the namespace with its namespace's cluster assignment — and storing each of those updates — we could have something that the syncer talks to that says—
D: I think that's a fundamental syncer/scheduler/kcp transparent-cluster design point, which is: if we don't write the object and we write something else, then we have to map that write to the object, or we have to stop watching objects. And so: does an end user need to see where every object is in a transparent multi-cluster?
D
Maybe
there's
transactional
integrity
concerns
there,
which
is,
if
you
ever
want
to
run
two
objects.
At
the
same
time,
someone
deletes
the
object
out
from
underneath
you
and
it's
not
modeled
in
the
same
object.
So
I'd
probably
say,
like
my
gut's
telling
me
just
looking
at
the
problem
space
we
need
to
get
formal
about.
It
is
we're
still
right
into
the
object.
It's
okay
to
make
one
right
to
every
transparent,
multi-cluster
object
if
the
right
rate
is
ridiculously
low
and
it's
probably
likely
that
the
right
rate
is
ridiculously
low,
except
on
failure
transitions.
B: Yeah, that's exactly the case I'm worried about: when a hundred thousand objects are scheduled to a physical cluster and it goes away — or worse, flaps. It's fine if it just disappears; it really sucks if every five minutes it shows up and says: I'm healthy, I'm unhealthy, I'm healthy, I'm unhealthy.
D
Yeah,
so
so
this
is
that,
like
we
getting
to
the
point
where
we
start
from
protect
one
prototype,
two
I
need
to
I'd
like
to
see.
We
really
need
a
really
crisp
like
either
one
proposed
flow
and
that
we
can
then
pick
apart
and
talk
about
like
order
of
magnitude
where
the
rights
go
like
rights
on
object
probably
is
fundamental
just
because
of
all
the
other
things
we've
said
so
far,
but
we
should
test
the
we
should
make
sure
we
have
a
design
thing
that
we
can
talk
about
alternatives.
D
I
guess
I
was
just
asking
prototype
2,
where
you're
going
to
try
these
designs
or
prototype
2,
where
we're
going
to
be
like.
We
know
enough
now
to
go
through
design
and
to
hammer
it
out
in
words.
We
don't
need
to
go
out
through
in
code.
Do
we
all
do?
Do
we
have
enough
breadth
of
understanding
of
the
problem
now
to
really
attack
and
come
up
with
a
draft
design
that
we'd
be
comfortable
with
for
a
year's
time
right?
So
go
ahead,
david.
E
No,
no
sorry,
I
have
another
question
on
the
pr,
and
maybe
it's
related
also
to
prototype
2
or
you
know,
execution
steps
is
how
this.
How
does
this
peer
relate
to?
You
know
the
sinkers
watching
objects
in
any
logical
cluster
or
any
workspace.
E
That
is
that
this
physical
cluster
joined
too
and
yes,
I
mean,
because
I
mean
I'm
mainly
asking
the
question
related
to
the
way
we
discover
the
various
api
through
discovery,
because
here,
if
I
understand
correctly,
you
are
mainly
pointing
to
the
discovery
published
by
a
given
logical
cluster,
but
a
single
one,
and
so
obviously
from
you
know,
in
what
we
are
discussing
regarding
api
modeling.
There
is
also
all
this
question
of
api
view,
calculating
either
the
lcd
or
the
contrary.
E
You
know
to
to
try
to
merge,
or
at
least
negotiate
a
number
of
apis
that
have
some
variants,
so
I
mean
how
do
how
does
this
work
relate
and
how?
How
much
does
it
integrate
the
the
the
current
work
or
and
the
goal
of
having
a
thinker
that
would
be
able
to
watch
across
a
high
number
of
logical
cluster
or
workspaces
to
sync
to
a
given
physical
cluster?
I
mean
that's
not
clear
for
me.
B: As a silly hack, I can have something that just lists all logical clusters and does watches on all of them, but that's not gonna— I mean, that will work, but it will not scale. We need a better view — a better way of serving: I am a syncer, and I'm interested in objects for me, across all workspaces.
D: And, to be fair, this is a fundamental security thing, right? The end goal — it's just like a node: if a syncer is compromised, should the syncer be able to see workloads that aren't scheduled to it? No. Should it see the secrets not scheduled to it? No. So it may not be a hard requirement from a modeling perspective, but odds are it's probably going to be something that lines up—
D
There
will
be
some
sort
of
virtual
workspace
for
a
sinker
that
looks
similar
to
the
controllers,
but
it's
not
identical
and
that
virtual
workspace's
job
is
to
prevent
the
disclosure
of
information
and
potentially
to
reduce
the
amount
it
shouldn't
have
to
read
every
object
in
every
workspace
on
every
shard
to
do
its
job.
That
is
the
heart.
Like
we
kind
of
like
punted
on
workspace
index
right,
we
said
like
well,
we
can
probably
keep
working
in
a
single
shard.
It's
not
that
bad.
E: It would just, you know — it points to a given cluster, and then it just points to a virtual workspace that will gather everything required — be it at the API level or at the instance level — that has to be seen by the syncer, according to the view options we want.
D: The access pattern of a syncer— so the access pattern of a controller across workspaces and the access pattern of a syncer across workspaces are similar in one part. In all cases, the access pattern is what's really critical, right? In kube, all controllers basically have the same access pattern, and there's only, like, three types of queries: everything on a cluster; something in a namespace—
D: —and something by a global field selector, like pods — the pods for a host name. The three access patterns that we were talking about making work through virtual workspaces are: the pod spec host name, which is a field selector on the cluster — for the syncer, that's what the syncer label is: the field selector on where you're placed; the controller, which is all API types; and then we haven't identified an exact equivalent of the third one, but it's probably a controller either on a shard or on a single workspace, which are fairly trivial.
D: We don't know if we have a fourth access pattern yet. Some of the stuff — maybe, like, Stefan's probing it — and we'll probably get to some questions where we might actually come up with some new access patterns.
A: But just to recap: the idea is just a prototype for saying, I have a workspace, and it can inherit APIs from some other workspace. And this is really more of an exercise in figuring out what sort of changes we potentially need to make to discovery, to custom resource handling, and at some point probably OpenAPI as well, to see what happens when we want to say that a CRD that lives in a source workspace is accessible in a target workspace — and, as a consumer of that target workspace, you're none—
A
The
wiser
that
you
didn't
actually
create
that
crd
in
your
logical
cluster,
and
so
I
got
discovery
working
for
slash
apis,
which
is
done
by
the
aggregator
and
slash
apis.
Slash
a
specific
group
which
is
also
handled
by
the
aggregator
where
I
ran
into
problems,
was
at
the
version
level
which
is
handled
by
the
api
extensions
code.
So
the
custom
resource
discovery
and
handlers.
A
Custom
resource
discovery
is
implemented
via
a
controller
that
basically
feeds
static
data
structures
that
live
and
are
referenced
when
someone
asks
for
discovery,
and
the
controller
makes
it
fairly
difficult
for
me
to
try
and
do
what
I
did
with
aggregation,
where
I'm
injecting
a
decorator
that,
basically,
if
you
don't
inject
anything
if
you're,
the
normal
cube,
aggregator
and
whatnot
everything
works
normally,
but
if
you're
kcp
and
you
inject
a
decorator,
we
can
do
things
like
say.
Oh,
you
asked.
A: —for /apis in your workspace; we'll go see if there are any other APIs that we need to add from your inherited workspace. Same thing at the version level. Stefan, you want to ask now? Are you—
G: Is there a technical reason that this is a controller in upstream, and not just in a real-time computed discovery handler?
D
I
I
gotta
be
honest,
I'm
pretty
sure
it
was.
We
were
like
hey.
We
got
some
controllers,
we'll
do
some
more
controllers,
with
some
controllers
and
get
some
controllers,
there
were
reasons
for
it
like
reconciliation,
but
at
the
time
controller
pattern
I
would
probably
say
my
ninety
percent
guess
on
what
david's
answer
is
gonna
be.
Is
it
was
the
simplest
pattern
that
matched
all
the
other
problems
we
were
solving
and
there
was
no
need
to
go
anything
further.
E
Yeah
I
mean
initially,
it
was
mainly
just
accommodating
the
existing.
What
was
existing
in
cube
and
I
think
in
the
current
state
of
things
we
might
sorry.
E: Sure — I'm just saying there is something that probably complicates things, which is the requirement, if I understood correctly, initially in kube to publish discovery for both CRDs and, you know, aggregated API servers, and so to sort of merge both. There is the CRD registration controller that mainly takes the CRD and creates the APIService objects for the aggregator to take them into account, and so it's—
E
Exactly
we
could
really
get
rid
of
it,
because
now
in
in
in
kcp
we
just
don't
have
the
the
aggregator
I
mean
we
have
it.
D: David might have other reasons that he might remember. I feel like, ultimately — there are like three places where CRDs plug into the infrastructure — we did the, I don't want to say lowest-cost, but the practical thing. We were working from TPRs and their things; we reorganized some of the handler chain, we organized the serving chain, and I think we picked the abstraction that worked fine for a thousand CRDs, but we did not—
A
Yeah
so
like
fast
forwarding
to
the
actual
problem
so
like,
let's
say
that
the
controller
thing
is
not
an
issue
and
we
don't
have
controller
based
crd
version
discovery
anymore.
The
thing
that
I'm
challenged
with
is
figuring
out
what
sort
of
interface
that
I
could
optionally
inject
into
custom
resource
handling
that
would
allow
us
to
serve
inherited
crds
from
a
different,
logical
cluster.
D: It's early, so I would probably get down to the point of: what's an interface that hides the problem, that is reasonably abstract for the problem we are fixing in kube — and then think about composing that. I think the problem is — and I was thinking about this with storage and registry — they're both existing APIs. It's a big hurdle to change those APIs; it's something that somebody has to go—
D
Take
time
away
to
review
and,
like
I
think,
there's
a
is
there
any
problem
that
overlaps
with
it
is
usually
my
existing
one
is
like
there
any
problem.
We
know
about
with
how
the
crds-
and
that
is
like
it's
an
efficiency
thing
right,
like
crds,
are
still
a
big
chunk
of
memory
in
the
api
server.
You
know
you'd
want
to
come
up
with
an
interface
that
actually
let
you
you
know,
solve
the
problem
or
improve
something
else.
Those
are
the
easiest
ones.
I'd
say
the
rest
of
it
is
we're.
D
Gonna
like
these
are
great
questions
to
ask
andy
it
comes
down
to
is
this
a?
Is
this
a
code
cleanliness
that
benefits
everybody
and
the
things
that
sig
api
machinery
is
comfortable?
Doing
we
need
to
like
be
actually
asking
ourselves
like
what
makes
our
long-term
maintenance
of
the
cube
api
servers
easier.
D: So you need to know what the interface looks like, and then a bunch of people who probably know the code are going to ask: does this interface have to look like this? And then ask those questions. So at least get to a point where you have an interface. The other bit is — and this is, like, minimal API server: a minimal API server is a goal that probably most people kind of agree with, and a lot of what we're doing is a very specific combo of those APIs.
D
D
Cut
out
all
the
other
stuff,
like
any
of
those
things,
if
you
can
come
up
with
a
100
line,
example
where
the
interface
plug
points
are
reasonably
justified
for
the
are
reasonably
justified
for
the
also
solves
logical
cluster
injection
in
places,
because
the
interface
is
coarse
enough,
where
you
can
compose
it,
that's
probably
better
right
now
cube
is
not
composable
in
any
way.
D
We
started
that
way
and
then,
over
time,
like
we
kind
of
as
we
got
more
complex,
like
the
composability,
just
wasn't
composability
by
itself
is
not
a
goal,
and
so
you
know,
even
though
early
cube,
probably
pre
1
0,
there's
a
lot
of
places
where
we
actually
did
have
very
nice
interfaces
where
we're
like
yep
bring
these
10
pieces
together
in
a
100
line.
A
Okay,
yeah
so
I'll
get
back
to
trying
to
do
some
interface
work
next
week
after
the
holiday,
and
basically
that
means
I
we're
probably
going
to
end
up
throwing
away
aggregator
based
discovery
and
the
current
crd
discovery
implementation
and
replace
it
with
one
of
our
own.
A
For
the
time
being,
and
I
have
to
see
what
I
can
do
for
actual
handling
of
crs,
because
that'll
need
the
same
sort
of
abstraction
to
figure
out
where
the
crd
is
coming
from,
but
the
storage
would
obviously
differ
because
you'd
want
it
stored
in
the
desired
workspace,
not
in
the
workspace
that
owns
the
crd,
so
I'll
work
through
that
or
start
to
work
through
that
more
next
week.
I
think
that's
all
I
had
for
this
topic.
D: Going to be really fast. As we were kind of working through sharding, one of the big questions is: is etcd really the optimal store for some of the global geo problems? So I think it's useful to say we started down a path — we were asking about etcd — and we said: okay, is this the only path? The sharding doc has two options, and I added a third recently. Option one is the current one: a bunch of small etcds, with logical clusters living on each etcd.
D
We
reuse
a
lot
of
the
cube
code
as
it
is
today.
Don't
have
to
think
about
too
many
changes.
Each
of
those
shards
is
a
failure.
Isolation
unit.
There
are
some
fixed
limits
on
how
much
those
charge
can
grow
million
million
to
10
million
keys,
and
then
they
fall
over,
because
that's
what
ncb
that's
what
sc
roughly
gets
to
today,
but
that
also
corresponds
roughly
to
the
amount
of
memory
that
the
api
servers
use
for
watch
caches.
D
It also corresponds to — it's roughly proportional to — the garbage collector, which keeps every object in memory as well. So there's a bunch of components that keep every object in memory, such that the natural working set of a kube cluster, or of a kube problem, is something around a hundred thousand to ten million keys that would fit on a single server in memory.
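A rough back-of-envelope for the figures above — the per-object size here is an assumed value for illustration, not a number from the meeting:

```python
# Back-of-envelope sizing for holding an entire working set in memory,
# as discussed above. avg_object_bytes is a guessed typical serialized
# object size, not a measured one.

def working_set_bytes(num_keys: int, avg_object_bytes: int = 2_000) -> int:
    """Approximate memory needed to keep every object of a cluster in memory."""
    return num_keys * avg_object_bytes

# 100k keys at ~2 KB each -> ~0.2 GB; 10M keys -> ~20 GB.
low = working_set_bytes(100_000)
high = working_set_bytes(10_000_000)
print(low / 1e9, high / 1e9)
```

Under that assumption the low end of the range trivially fits on one server, while the high end is at the edge of a single machine's memory — which matches the "and then they fall over" observation.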
D
So then, when we talk about sharding, we were like: okay, can we stitch those up consistently — a bunch of little shards? How do you do lists across them? That means you've got some other shard that's got a consistent source of truth, and then you can hit that source of truth, but that's another actor that you have to talk to.
D
So you have to talk to the consistent source of truth and then talk to all the individual shards. And if the consistent source of truth is down, we need to come up with special rules about how long you wait, whether that shard can be out of date. There are certain properties that controllers depend on to keep their caches updated correctly, which Steve has been enumerating. All of that basically led to the question: this is a lot of work.
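A minimal sketch of the scatter-gather list being described — the shapes and names here are invented for illustration, not the actual kcp design:

```python
# Toy scatter-gather list across shards, coordinated by a "source of truth"
# that holds the consistent view of which shards exist.

class Shard:
    def __init__(self, items):
        self.items = items            # key -> object

    def list(self):
        return dict(self.items)

class SourceOfTruth:
    def __init__(self, shards):
        self.shards = shards          # consistent shard membership

def list_all(sot: SourceOfTruth) -> dict:
    """Fan a list out to every shard the source of truth knows about.

    If the source of truth is unreachable, the caller must decide how
    stale a cached membership view it will tolerate -- exactly the
    'special rules' question raised in the discussion.
    """
    merged = {}
    for shard in sot.shards:
        merged.update(shard.list())
    return merged

sot = SourceOfTruth([Shard({"a": 1}), Shard({"b": 2})])
print(list_all(sot))  # {'a': 1, 'b': 2}
```

The hard parts the speakers allude to (a consistent revision across shards, cache-coherence guarantees for controllers) are precisely what this toy omits.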
D
We understand why it might be very valuable to have individual failure domains, because individual failure domains that match the working set of a problem — that's basically all of computing, right? Even a database: if your database stops being able to fit its working set, you're likely to transition from a very happy regime to a very unhappy regime, and suddenly there's really no practical way around it. Only a certain number of workloads don't really fit in memory.
D
So the alternative is: could we move the data problem out to a more capable store, something that actually gives us some of the same properties, with geo-replication, distribution, and failure modes?
D
The question I think we're considering for option one versus option two is: if you moved all storage out, then there's more work at the lower layers of kube that we'd have to do to change that, but you might be able to get some wins, such as not having to model move ourselves. If you had a geo-replicated database, of which there are a few out there, you would potentially have to deal with the fact that that database is a single point of failure.
D
So you need to understand the failure characteristics of that, and then you need to think about: are there things it would give you that option one does not, that are a meaningful trade-off? Where you'd say: okay, maybe we'd still have the working set in memory in the kcp instances, but in the back-end store the working set actually is distributed across a bunch of servers, and that database does a better job of distributing the working set and failing over
D
when your working set falls out of memory. Then we can short-circuit a whole bunch of work duplicating complex database theory, catch up to the state of the art, and maybe come back to it three or four years down the road. So concretely, there are really two options that are open, broadly supported, and accessible in a way that would work in multiple cloud environments. There's a number of options, and I'll have a doc that goes through some of these and explains the reasoning.
D
But in all of this, I kind of rejected out of hand all of the existing databases that don't implicitly support a multi-geo, strongly consistent approach, because ultimately those are a step backwards. For all of etcd's faults, it is a strongly consistent, survives-one-replica-failure store with almost effortless three-instance resiliency. In the seven years of kube, the problem has almost never been etcd.
D
In any practical scenario I can think of, when we are within the bounds of usage. So if we brought in Postgres, which could be much faster single-instance, it wouldn't actually help us, because Postgres multi-cluster, or multi-HA, while possible, is very complex; we would essentially be duplicating the same problems as in option one. As for a geo-distributed database, CockroachDB and YugabyteDB are roughly there — Cockroach is maybe about a year ahead, maturity-wise. They're kind of getting to the point now where they're credible, and so I've spent some time just familiarizing myself with the database.
D
The access patterns that we use — I was looking at Kine, which is an etcd adapter: it implements the etcd gRPC API as a shim process and sits in front of a database. Using that in front of Cockroach, there are some trade-offs that Kine had to choose in order to work generically across SQLite, Postgres, and MySQL, which I think is a completely different set of trade-offs than we're exploring. So they made some choices in how they modeled watches. Roughly, basically, to model a watch
D
they had to do something fairly inefficient, because Postgres by default does not implement a strongly ordered history; Cockroach does by default. So I've been going through Steve's list of watch semantics, which is actually very useful — I came up with a couple of other properties that I'll add into some of the docs — understanding the access patterns and what consistency guarantees we need. I'm fairly certain at this point that Cockroach actually now, finally, fully supports (though I've not tested it) the semantics
D
we would need for a repeatable historical list, without an additional construct in the database layer. Which is, roughly: at the underlying level you can receive a total, consistently ordered history of writes within the GC interval — which is 24 hours by default on a Cockroach database, and is tunable — and you can list as of a timestamp and then get a change feed, which is what they call it, from that timestamp.
D
You can repeat that timestamp, or you can repeat that change feed, consistently, and then, when the feed ends, you can effectively use the timestamp of the last entry as the starting point of the next one, if you needed to. There are some subtleties there that need to be tested. As I said, kube has compaction intervals, where basically kube is telling etcd to do the compaction, which clears old history
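A sketch of the list-then-resume pattern being described. In CockroachDB terms this would be a historical read (`SELECT ... AS OF SYSTEM TIME`) followed by a changefeed started from a cursor timestamp; the toy below only simulates the resume bookkeeping over an in-memory ordered log, and every name in it is illustrative:

```python
# Toy model of "list as of a timestamp, then consume a change feed from
# that timestamp, resuming from the last entry's timestamp when the feed
# ends". The log is a list of (timestamp, key, value), ordered by timestamp.

def list_as_of(log, ts):
    """Snapshot of the keyspace as of timestamp ts (inclusive)."""
    state = {}
    for entry_ts, key, value in log:
        if entry_ts <= ts:
            state[key] = value
    return state

def change_feed(log, cursor):
    """Entries strictly after the cursor timestamp, in order."""
    return [e for e in log if e[0] > cursor]

log = [(1, "a", "v1"), (2, "b", "v1"), (3, "a", "v2"), (5, "c", "v1")]

snapshot = list_as_of(log, 2)        # {'a': 'v1', 'b': 'v1'}
feed = change_feed(log, 2)           # the entries at ts 3 and 5
cursor = feed[-1][0] if feed else 2  # resume point for the next feed
print(snapshot, cursor)
```

The subtlety flagged in the meeting is exactly the part this toy hides: the snapshot timestamp must still be inside the store's retained (un-garbage-collected, un-compacted) history, or the resume fails.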
D
every five minutes. Cockroach scales much better in that dimension, but I'm still kind of looking at some of the trade-offs and the running footprint of simple writes. So in theory watch is fully supportable, even things like bookmarks in the watch stream: there are CockroachDB equivalents today, where you can have the server keep a liveness signal that tells you the timestamp even if there have been no updates on a change feed. All of that needs to be tested — probably something along the lines of, Steve, your consistency testing of some of those watch semantics.
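The bookmark analogy can be sketched as follows. CockroachDB changefeeds can emit periodic "resolved" timestamps — no row data, just "nothing at or before this time is still in flight" — which map naturally onto watch bookmark events; the message shapes below are invented for illustration:

```python
# Translate a stream of changefeed messages into watch-style events.
# A message is either ("change", ts, key) or ("resolved", ts); the latter
# promises no further changes at or before ts, like a watch bookmark.

def to_watch_events(messages):
    events = []
    for msg in messages:
        if msg[0] == "change":
            events.append({"type": "MODIFIED", "ts": msg[1], "key": msg[2]})
        elif msg[0] == "resolved":
            # Even with no updates, the client's resume point advances.
            events.append({"type": "BOOKMARK", "ts": msg[1]})
    return events

stream = [("change", 3, "a"), ("resolved", 4), ("resolved", 7)]
print(to_watch_events(stream))
```

This is the property that lets idle watchers keep a fresh resume cursor, which is what the "liveness even with no updates" remark is about.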
D
Watch performance — oh, performance. So the biggest challenge, I think, for a straight-up, apples-to-apples comparison between etcd and Cockroach, or other databases: in practice a Postgres instance might actually be faster than etcd on linear, single-key, contended writes to the same object, if you take all indices out of the picture. Right? In theory Postgres should be as fast or faster, because it's a much more optimized write path on single-key updates to a single row in the database. And with both Postgres and etcd
D
you can probably get something on the order of three thousand writes per second on a single key — in etcd, in a Raft quorum. It's probably going to come down a bit, because you still have to do round trips; single-key writes ultimately will be contended the more scale you have. So, probably, all things being equal, I think Cockroach is going to be within striking distance on performance, but there'd be a bunch of work
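A rough illustration of why round trips cap contended single-key throughput — the latency figures are assumptions for the arithmetic, not measurements from the meeting:

```python
# If every write to one key must serialize behind a quorum round trip,
# throughput on that key is bounded by 1 / round_trip_latency.

def max_serialized_writes_per_sec(round_trip_ms: float) -> float:
    return 1000.0 / round_trip_ms

# A ~0.33 ms quorum commit would allow ~3000 writes/sec on one key,
# while a 5 ms cross-region round trip caps the same key at ~200/sec.
print(max_serialized_writes_per_sec(5))
```

This is the sense in which geo-distribution trades single-key write throughput for resilience: the bound comes from physics (round-trip time), not from the store's implementation.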
D
we'd need to do to validate that, on single-key writes. I'm not worried about reads right now. There's probably some overhead there that we need to think about — again, kube has overhead on the write path that is probably more than what an equivalent database would have.
D
So it's likely that kube's latency on writes is going to dominate whatever the data store is, but that needs to be verified. And then there are the failure modes of Cockroach. At the end of the day, option one is about us solving failure modes ourselves and being able to model them — so you're like, hey, these workloads, these workspaces want to deal with this geography, have these constraints, and sharding puts them in the right places, and then you can say: oh well, your high-level control plane —
D
you only lose that when Europe goes down, or something. What are the equivalent trade-offs for Cockroach — that is, what is the way that it works under the covers? There are a lot of similarities there. The goal would be to write something up, so we could see: if we go down the option one path, we get these benefits but have to solve all these problems; with option two
D
we would be trading for a different set of short-term things. Which one is operationally more valuable for us in the short run, and how much work is it in the short run? Option one kind of has an advantage there, at least right now: we think we could go to prototype phase and early development without really having to do all the hard scaling problems of sharding. But the data storage one would be —
D
you would get some things just for free, like movement, which, you know, is not trivial but needs to be tested. So at the current time there's a bunch of open questions; I'm trying to get the doc together. It does seem like there's a promising set of trade-offs, but my gut's kind of telling me that it probably just makes more sense for us to continue on the option one path for right now, no matter what.
D
The moment we have to start tying those instances together, we probably only need a minimal piece of the sharding story, which is the root shard — probably based on what Stefan and folks have kind of sorted out. We probably think we can fit the scale dimensions into a single-master etcd, or control-plane root shard, and then have a bunch of smaller shards to get to.
D
Maybe we actually would like the sharded data to stay in etcd, and then anytime you have an API problem that's like "I need millions of these" or "I want this globally distributed", we put an adaptation layer on top of it. That might actually allow us to do both in parallel, and then keep option two as a "maybe they could converge" once we've developed some work there. So that's a set of trade-offs for APIs that don't quite fit the current kube model.
D
But when you start getting into tens of millions of keys in a problem, maybe that actually would become more reasonable. And obviously indices across tens of millions of things are what databases eat for breakfast, so some of the syncer problems might actually make more sense modeled as option two. It's just that the syncer is one particular problem, and it wouldn't necessarily help all the other controller patterns that we might want to work. The syncer is very important, though; transparent multi-cluster is important. So, okay, that's it.
D
Better — everything that we're describing in option one, like how we would make the access pattern of a high-cardinality API work, is basically something a distributed database at least solves. Which is: if you have a core model that works well, and you can build a secondary index, and you're willing to accept the trade-off of the secondary index, you'd say: yeah, you just add the secondary index and, boom, your access pattern is roughly solved in practice.
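A toy version of the "just add a secondary index" point — in a SQL database this would be `CREATE INDEX` plus the write-amplification trade-off; all shapes here are illustrative:

```python
# Primary store keyed by name, plus a secondary index by label.
# The trade-off: every write must also maintain the index.

primary = {}    # name -> object
by_label = {}   # label -> set of names (the secondary index)

def put(name, obj):
    old = primary.get(name)
    if old is not None:
        by_label[old["label"]].discard(name)  # unindex the old value
    primary[name] = obj
    by_label.setdefault(obj["label"], set()).add(name)

def list_by_label(label):
    # O(matches) instead of scanning every object in the primary store.
    return sorted(by_label.get(label, set()))

put("a", {"label": "web"})
put("b", {"label": "web"})
put("c", {"label": "db"})
print(list_by_label("web"))  # ['a', 'b']
```

The question the speaker raises next is whether this is the hard problem at all, given that kube-scale working sets mostly fit in memory and can be indexed there.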
D
Is that really the most important thing we have to solve? Or can we, like kube, just hand-wave it a bit? Right — every problem in kube roughly fits in trivial amounts of memory, as far as hard computer science problems go, so we may get a little bit of flexibility. I want to get more clarity on: do we have examples of problems that are hard from a distributed-scalability standpoint that we can make substantially easier?
B
Yeah, cool. Thank you for sharing that, and I look forward to hearing more, either about how it definitely works or how it definitely doesn't. Oh, I forgot — I realized how much...
C
We will see you all next week. That's like debating how to say etcd, or "kube cuddle" for kubectl. See you all, bye.