From YouTube: Community Meeting, February 8, 2022
A: All right, hello and welcome to the kcp community meeting, February 8th. We have a couple of items on the agenda, and I think a couple more I will add while we're talking about the first two. Is Paul here? ("I am.") He is amazing. Yeah, I don't know; like you said, it doesn't need a discussion: P3, prototype 3, scoping to be completed this week. Do you have anything else you want to talk about on the topic, or anybody else feel like chiming in?
B: Nothing to add here. If you have questions, please let me know. I guess the...
A: The output of that is in that doc. I know that's probably obvious, but can somebody add that doc link to this? Or I can in a little bit.
B: Steve, okay. So I already migrated the pieces in the doc, so that just pushes for us; we'll get there. ("Fantastic.")
A: Steve, do you want to talk about sharding and Cockroach? Wow, that just appeared at the exact right second.
C: Okay, I tried to put some thoughts in here, but basically I've spent the last two and a half weeks looking at some of the nitty-gritty implications of sharding kcp, and I guess I'll just try to run through what I have written here.
C: For the last two or three months we've been looking at option one for the storage engine for kcp, which is assembling a bunch of discrete etcd clusters. That's something we need to do because etcd holds everything in memory, so we have a scale limit for one, right? And it's been clear from the beginning that if we do that, we have a bunch of problems to solve: how do we provide consistent list/watch? We do need to have these different parts of our key space talk to each other, and be able to say things about them together. So the rabbit hole I just went down was: consider data movement between shards, providing reasonable guarantees on the resource version exposed to users. At least in the implementation we were looking at, that at some point requires us to start locking writes and being able to cut over. Then we started going down the rabbit hole of what it means to lock writes, and arrived at...
C: So there's a bunch of behaviors that we think are going to be really nice in kcp: being able to mark workspaces read-only, not only for the movement case but also for things like audit; enforcing some sort of strict quota, just so that the multi-tenancy here is bounded; and then having a performant and simple mechanism for storage version migration, since we expect that to be a much more fluid concept. All of these problems require distributed consensus about the state of some data in one etcd cluster.
C: Even in our sharded case, generally we're talking about one cluster. And there are solutions, or attempted solutions, for this on top of etcd: they're using CRDs, they live in the Kubernetes user space, and all of them are pretty complicated. Because ultimately, for instance in the locking case, there has to be an understanding of: I have an HA API server, and all of its members understand that this part of the key space is locked. And what is the point at which that happens, right? That's the distributed-consistency problem that shows up in all of these. So while there are solutions on top, you end up with a bunch of complexity about who saw what, when; what happens if one of them falls out; what if we restart an API server; what if one of them dies; what if we scale up? All of that comes into play.
C: So those solutions are pretty complex, and one insight was: since we have a transactional data store, and we're talking about transactional statements about the data, what if we just use the transactional data store? The implementation there looks very simple, right? Being able to say "I want to write this key as long as it's not read-only" is a very simple thing as soon as you're at the etcd level. But one of the scary things there is that we're talking about changing the storage layer.
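For concreteness, that guarded write maps naturally onto an etcd transaction. A minimal sketch in Go with etcd's clientv3, assuming a hypothetical per-workspace read-only marker key; the key layout is illustrative, not kcp's actual scheme:

```go
package guard

import (
	"context"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// writeUnlessReadOnly writes key=value only if the workspace's read-only
// marker key is absent. The check and the write commit atomically in a
// single etcd transaction, which is exactly the property that is hard to
// rebuild above etcd in user space.
func writeUnlessReadOnly(ctx context.Context, cli *clientv3.Client, workspace, key, value string) (bool, error) {
	marker := "/markers/" + workspace + "/readonly" // hypothetical layout
	resp, err := cli.Txn(ctx).
		// CreateRevision == 0 means the marker key does not exist.
		If(clientv3.Compare(clientv3.CreateRevision(marker), "=", 0)).
		Then(clientv3.OpPut(key, value)).
		Commit()
	if err != nil {
		return false, err
	}
	return resp.Succeeded, nil // false: the workspace is marked read-only
}
```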
C: For etcd, which is right now at the very bottom of kube, every character in that file has a pretty large implication for the performance characteristics of kube, basically, and it's a pretty complicated set of code to change. So, bubbling up the stack: not only do you potentially have extremely complicated stuff at that layer; even if you're not doing that, you have pretty complicated stuff in the user space.

Sharding opens up a bunch of questions. For instance, Stefan and I were thinking: if you're watching discovery, and some part of that discovery is coming from data that you're replicating from the org shard, what is the resource version on an object that exists somewhere else but is also mirrored? You have a bunch of these very quirky little cases, and a lot of thinking needs to go into it.

So, bubbling up another layer of the stack: we're doing all this work because we're thinking about option one for the storage engine, which is this multi-etcd situation. There are other options. Namely, option two says: what if we just use a geo-distributed database? We get to ignore all of that.
D: A question: I don't understand this jump from "option one, everything is complicated." Why should option two solve any of this in a better way? Everything we described here is basically on one shard; the only exception is maybe the movement, but everything else is one shard. Why is etcd worse, like touching etcd storage, the storage abstraction layer in kube? Why is this worse than doing option two in the same code?
F: Cockroach, or... maybe a way to say it is: they're both actually very large. If you're already opening the door for a large set of things on the first side that have to be correct, and you have a higher layer of problems, maybe I'd go another way. It's not that it's necessarily worse.
F: It's the error bars: the magnitude is large and the error bars are high on the complexity. And that's kind of leading to where Steve was going, which is: "worse" is different from "there's a bunch of moving parts." Are we over-solving one part of the system while not actually contrasting it to the alternative?
C: I mean, they are by implication, right? Every problem that requires cross-shard things goes away. I think Mara and I had a long conversation about how the author of a controller that's watching something across shards handles the fact that they have a potentially incomplete set of state that gets synced behind the scenes. You even have an entire set of documents about what it means for us to give incomplete data, and how we talk about that with the user.
D: I don't believe that. I think we are exchanging the dragons we saw before for new dragons. I'm not talking against option two; it's not that we shouldn't explore it, but I...
C: So I guess my conclusion here is that we understand the cost of option one at least better now than we did before, and it's pretty large. So I think it seems appropriate to delay explicit work on that until we understand the other options better. I was going through the thought process of what it means to not do this. Why are we sharding? We're sharding because we have so much data that we can't fit it into one etcd. Is that a problem we have today?
C: No. When do we think we might have that problem? There are cases with kube where you're looking at, you know, 10k namespaces with a bunch of users; somewhere in that range it starts to fall apart. We expect a pretty different access pattern for workspaces: we're not exploding the number of secrets and service accounts and all this stuff in every single namespace, or every single workspace. So we might even see better performance, or we might see worse.

I don't know, but in any case I think the scale problem seems far enough away that we can risk waiting until we understand better what option two looks like. I'm not saying give up on one or the other, but maybe postpone the work for now. And then some of the other stuff: does this hamper our ability to provide some of the other things that we're really hoping to show? The transparent multi-cluster stuff doesn't really have any impact; they're fairly orthogonal, as far as I can tell, apart from the consistent list/watch across workspaces sort of situation.
C: I think it'll be much easier if we assume that we don't have an HA kcp and it's not sharded. Basically, all of these consensus problems boil down to "do I have consensus with my n of one?", and the answer is always yes. So we can move quickly to show these features, and then understand better later which of the two options we need to take and which one we're going to take. The cost of...
C: This is not just me saying this, right; obviously I'd love to hear other people's thoughts on this. But I think this gives us a pretty good amount of breathing room so that we can understand options two and three better from the storage level.
F: Some of the other problems... Steve raised a good point when we were going through this, which is that this is probably a measure-twice, cut-once thing: the access patterns of transparent multi-cluster, the access patterns on workspaces, any additional operational requirements, like anticipating running a multi-tenant control plane service, which, to be fair, is kind of underdeveloped right now. We haven't even really tackled things like: would we actually want a snapshot of a workspace for reason X, and be able to restore it holistically to its exact state?
F: We will probably have a better idea two or three months from now on some of the non-functional and functional characteristics. It's not that we don't have to understand them; we're just changing the order around, so we'll have some experience that can ground the exploration, and more time to accumulate trade-offs as well.
A: So I think I agree with Stefan that the framing of this sounds good, which is basically: this is not our biggest, most urgent fire. We can put innovation into other stuff for now, while we figure out the access patterns of the stuff that's coming down the pike. Do you have specific things you want to do to investigate option two? So now it makes sense to investigate option two; we've investigated the hell out of option one.
C: So Clayton has some ongoing... I think, from my understanding, the biggest worry there is the performance of watch. I don't know; he's thought about this a lot more than me. But I would assume that at some point some sort of large-scale simulations might be useful.
C: One thing that I noticed when I was doing performance simulations while hacking on the etcd layer is that the setup is really important. If etcd was not already the bottleneck in your kube cluster, it did not matter: you could multiply the number of transactions you issued to etcd by 10 and not see a blip in how long it took to do a patch.

Similarly, a small number of writers can increase the p99 for a large number of readers, but maybe not the other way around. So I think it's kind of tricky, and you almost have to have a little bit of an understanding of what your access pattern looks like, because it's so intertwined. But I guess that would be...
F: It doesn't have a total ordering, but we've already kind of started poking at whether total ordering is even possible, and I think, roughly, we don't even really guarantee total ordering if you're behind an informer today. So we've already weakened some of the key guarantees just by investigating. So, at the fundamental level...
F: It's more that we'd have to go test those assumptions, right? So a test model, and then some kind of verification. With the Cockroach team, I've got some background contacts that I've been poking questions at; we haven't gone to the next level, the watch. Ultimately, the access patterns really matter, and we already know that: we've talked about access patterns above multiple etcds.
F: We actually haven't talked about the use-case access patterns. So we actually need to formally define what the access pattern for the syncer and for a cross-instance resource would be, and define what resources we actually have to go fetch. That'll help determine the scoping of how much has to be held in memory. And then we have the scalability of watch, which is, you know, the combination of etcd already supporting, on average, about 10,000 watchers being reasonable, and the watch cache.
F: On top of that, we also have to ask the question: the watch cache does not scale past memory. That is a characteristic of sharded etcd that you inherit, but we would have to deal with the implications of that on Cockroach. So I'd probably say we've gotten halfway through the first phase, and then we would need to spend probably at least two months working through maybe two or three of those different characteristics to get an understanding of whether they hold up.
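For concreteness, a "watcher" in this discussion is one streaming RPC against the store. A minimal, self-contained sketch of a single etcd watch in Go (the endpoint and key prefix are assumptions for illustration); the scalability question above is how many thousands of these streams one backend can serve at once:

```go
package main

import (
	"context"
	"fmt"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"127.0.0.1:2379"}})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// One watcher: a single streaming gRPC call over a key prefix.
	// kube-apiserver's watch cache exists to fan one of these out to
	// many API clients instead of opening one etcd watch per client.
	wch := cli.Watch(context.Background(), "/registry/", clientv3.WithPrefix())
	for resp := range wch {
		for _, ev := range resp.Events {
			fmt.Printf("%s %q -> %q\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}
```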
F: Do we understand the access patterns enough that we can model the working set of what we would need to do to be client-compatible, right? Like tens of thousands of small watchers on workspaces. We need to be able to answer the question: do we need a watch cache to satisfy that? If you need a watch cache, then we have to have sharded kcp instances. So a bunch of the topology stuff trickles out of that. I think that's probably something...
F: Maybe we could even try to get the exploration documented. Maybe that's something, Steve, you and I could work on: try to get the topics for exploration, have those up for review, and that would give us a rough estimate of what still needs to be understood before we could have a broader discussion, or make a broader decision, about the different avenues.
A: And so in the meantime, while this exploration is going on, the proposal is that we just run kcp as a single, non-HA service backed by a single etcd, and that should meet our scale for the immediate future: to be able to prototype stuff, build stuff on top, understand usage patterns, things like that. Is that okay? It's not contested, I think.
F: If we believe there are gaps, we can potentially paper over those. But the option left is: if you need to basically be able to close the gap as the service ramps up, it's just a matter of time until you need some protection there. And so it's: do you start by saying "we're not going to have any production, and we need the complex approach," or do you leave room for "we can do a more straightforward, simpler approach that fits within the bounds of the problem"?
F: Steve and I were just bouncing numbers and stuff around. At least in the proposed initial ramp-up, you get some experience with 10,000 workspaces. If we were to be at 100,000 workspaces, we'd potentially be changing some of these time points; likewise if we're at 5,000 workspaces, or 2,000, or we actually have lots of workspaces but the key size per workspace is small.
F: That may be a different set of factors. But HA really just comes down to ease of implementation versus what you actually need. And then we had the discussion that you don't actually need HA etcd to hit your SLAs, as long as you understand how long it takes to update, how long it takes to start and restart, and your expected interruption. That's what we're kind of working around: use the error budget for what it's worth, which is time to upgrade and time to recover.
F: Yeah, and then that gives us the option to implement those options as and when we need them. If we need those implementations sooner rather than later, we have some options around that. And again, there's always the possibility of doing the more complex layer on top or in front, and we have the mechanisms within kube; I think you brought that up, like we could use resource quota. We could even do specific, targeted fixes to improve resource quota.
F: We could even do a specific series of hacks. It's just about making sure we have the most options to hit the SLA, but don't try to blow the SLA out of the water with the perfect service; that's not what the ramp would look like. The ramp would look like: get the experience and feedback to make a good decision about the more-than-one-etcd scale. That's probably the most important problem a control plane has to solve, scaling past a single instance, and I think jumping there now would be a priority inversion.
C: Yeah, I think I'll try to summarize there, and then I think that's a good place for us to split out avenues of investigation for option two.
A: Do we have any idea what option three is? We've spent a lot of time figuring out that option one is huge, and maybe we should investigate option two. If we figure out in a couple of months that option two is huge and we should try something else, do we have any idea what something else might be?

F: Well, we already know option two has... So the problem with option two is: you can't break apart the security domain, failure domain, or upgrade implications of that underlying geo-distributed database.
F: So if somebody accidentally does a delete, you know, a DROP TABLE *.*, you've just taken down your entire global, geo-resilient control plane. So that's a use-case discussion.
So
then
the
question
would
be:
do
you
actually
still
want
to
be
able
to
break
it
up,
in
which
case
that's
there's
an
option
three,
which
is
a
hybrid
of
one
and
two,
which
is
you?
F
Could
you
know,
focus
on
option
two
more
for
the
super
high
scale
use
cases,
which
is
somebody
who
has
tens
of
millions
of
keys
in
a
single
workspace
you
could,
but
you
may
still
potentially
need
workspaces
to
be
able
to
do
cross-shard
listing
then
there's
maybe
an
option
for
which
I
don't
think.
We've
really.
I
haven't
spent
a
ton
of
time
on
option
four
yet,
but
it
basically
comes
down
to.
F: I may end up with one root of trust for identity, but I want two specific control planes, and I want to hard-fence one control plane to a set of users. I want to put all of the problems, like who has root access to my cloud accounts, in control plane A, and control plane A can only see these users. I get warm fuzzies from that, because I know that unless somebody roots AWS IAM, for instance, they can't get access to the service.
F: That's really, I think, what determines one versus two versus three, which is the scale one. If we solve the scale one and we've missed the other non-functionals, that's a real problem, because ultimately someone may come to us and say: I want to break on data protection and privacy; I want to break on security boundaries; I want to have this chunk in AWS cloud, or in a GovCloud that's independent, and then I want to have an on-premise version. We've talked through a couple of scenarios: the on-premise one is the one that is the controller for all the rest of them. It doesn't participate in global sharding; it doesn't participate in cross-shard queries except from the inside, or something like that. Some of these get into what we need to do.
F: We need to be very pragmatic about what use cases we're going to target. I'm a little worried about jumping to the scale one, although we know that we're going to hit the scale one almost right away. That is the beauty and the curse of kube: the problem for kube is a local failure domain with a bounded working set inside memory, and we're trying to find the minimum set of trade-offs that get you some of the key properties there outside of the working set. So I think this is something for the next month.
F: We should be articulating a couple of trade-off questions and what to go search for; we should be accumulating data from stuff beyond the scale trade-offs; and, as we start ramping up the service side, we should be able to get good exploration points. Those may help figure out option four: maybe we don't actually need some of the cross-shard stuff for security reasons, in which case option two does become more palatable. Conversely, you might find that option two is not successful.
A: No, I completely agree. So it sounds like, at a high level: we don't know how people will use us, either how users will use us or how our own components will effectively use the other components. And we've done enough work on option one to know here's how it falls over, here's how it works well, here's where it will be operationally difficult. Go explore option two. That sounds fun.
A: I agree, thumbs up. If option three is some hybrid of option one and option two, where customers who fit inside option one use option one, customers who fit inside option two use option two, and that's what we call the hybrid option three, that makes me a little nervous.
A: I don't know if that's actually what we'll do, but that makes me a little nervous, because now we have the operational overhead of understanding and operating both option one and option two, and deciding which users go into bucket one and bucket two. And when do they grow out of option one into two? We've got to migrate. And that's fine, like...
F: Three is actually an execution option, which could be: you might actually be able to, for instance, get an option two off the ground faster than getting the cross-shard stuff right, in which case you can go from, you know, 10,000-workspace scale possibly all the way to million-workspace scale with some cheats and some hacks, without actually attacking sharding. And if you can still run multiple instances of that, you've given yourself a little bit more headroom to go look at option one more seriously. That's one way to think about option three.

And then option four could be: as a result of exploring this, when we understand use cases, something else may come up, because we haven't really defined the span of control for a control plane. And that's where, honestly, we need a prototype-3-style, homogeneous user experience, to be able to say: hey, here's a control plane, it will scale to this.
F: What are some ways that you'd use a control plane like this, from actual consumers, users, adopters, people who are like: "Oh, we've always wanted this, but we definitely would never run it that way; we would want two hard-isolated components, and we want these guys to be the overseer of the developers, and that's just how we roll"? That's a different use case than what we've talked about so far with global control.
A: Now's the time to step back, let users actually use it, see how they use it, and make decisions based on real data. Is anyone else listening, following along and having burning feedback to give to the conversation? It's mostly been the three or four of us; I don't want to leave anybody out.
A: All right, I guess not. Excellent. How was the... how's the wall?
A: Good, good. Yeah, I think we'll learn a lot from putting this in front of actual people, and, like all things: "Don't push that button. Why are you pushing that button? That's not the button for you to push." We will learn, after users touch it, that that was the button they push.
F: Yeah, and also I'll just add one more note on option two: I've had a lot of behind-the-scenes informal interest from a lot of different people who have kube-like problems, or single-cluster scale problems, that there's some appetite for. So that's at least something to consider: it's definitely coming up that there may be some interest in option two purely in the kube sense. That is why kine exists, and that's one of its advantages; Rancher uses it for its fleet manager at, you know, 100,000-subdomain scales.
A: My understanding of kine might be incomplete: is kine mostly an alternative to using etcd that uses SQLite, or is it any SQL?
F: Kine is an adapter that emulates the specific etcd APIs, the gRPC APIs that kube calls, and it applies a generic SQL abstraction so it can work against multiple SQL types. As a consequence of that, it has to simulate watch in its data model, which means you are effectively layering multiversion concurrency on top of SQL, and your data model in SQL is layered on top of whatever the underlying database's multiversion semantics are. The reason Cockroach is interesting is that...

Cockroach actually exposes the multiversion semantic, which means you've cut out that abstraction. Kine would probably also be... and this may actually be a short-term option too, which we can do, like a two-A and two-B: kine would actually potentially help us get to a hundred-thousand or two-hundred-thousand scale for a single instance. Doing that at a thousand workspaces would probably give us close to at least half an order of magnitude, or an order of magnitude, so that might actually be a time extender.
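Because kine speaks the etcd wire protocol, a stock etcd client (and therefore kube-apiserver, or kcp) can point at it unchanged. A minimal sketch, assuming a kine instance listening locally on the usual etcd port; the endpoint and key are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Point a stock etcd client at kine instead of etcd. Nothing else
	// changes: kine translates these gRPC calls onto its SQL backend
	// (SQLite, MySQL, Postgres, ...).
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // assumed local kine
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if _, err := cli.Put(ctx, "/registry/example", "hello"); err != nil {
		panic(err)
	}
	resp, err := cli.Get(ctx, "/registry/example")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", resp.Kvs[0].Value)
}
```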
F: While we evaluate other options... I think Cockroach is potentially the only one I've seen, though, that still gives you full geo-distribution, as well as geo-replication, as well as the ability to transparently move data, as well as the underlying semantic being the database semantic, which means watch is as efficient as the underlying store.
G: I have been working, along with lots of folks on the call, on getting the code base up to date and workable for prototype 2. For a demo, I have a pull request open that starts adding a script for running the demo. It is not in any way done. What I've discovered in writing it is a series of issues, some small, some big: either flakes or issues in the test code, or flakes or issues in some of our controllers and logic. So I want to thank everybody.
G: So in the short term I'm essentially entirely focused on that, and again, I appreciate everybody who's been helping out. As soon as we can close out prototype 2 (and I know, as Paul said, or Paul wrote at the beginning of the meeting, we will be talking about prototype 3 tasks), some high-priority code things that we'd like to get in are: getting the Kubernetes 1.23 rebase in. I've been periodically going back to my branch and trying to keep it up to date as we make changes to the 1.22 branch, although I probably don't have the latest stuff with the refactoring you did for starting things up. And after the rebase is in, I will try and get my scoping work finished, and then hopefully everybody can take advantage of all of that goodness.

So that's where the demo stands right now. Maybe we could try and hold off on merging any major refactorings until we get the demo script closed out, because if refactorings come in, it may take longer for me to get the demo finished.
A: Yeah, Andy, thank you for your heroic work in getting this into a state where it's demo-able at all. I know that we got rid of it in favor of end-to-end tests, and I think that was also a good idea, but for a while we had the demo one script running as part of our CI, just to make sure that as we were making improvements to the system we weren't breaking the demo, because we broke the demo a bunch.
A: Is there any interest, either from you or from anyone else, or disinterest, in putting it in as a CI check? It is effectively an end-to-end test; it's the most end-to-end test we have, which means it will both cover the most and break the easiest. How does everybody feel?
D: I'm not a fan. I mean, we had the cmd tests for a long time, which were like that: they are hard to maintain, and when things change, they break. It's easier to fix code in a space which is typed than to have a script which is not typed.
H: This is something where we want to be able to have a good experience for people who are kicking the tires, and we need some way of having that be as easy and automated as possible. In code? Sure, some of it, maybe. I don't really care whether it's in code or in bash (bash sucks), but you need something that's comprehensible to the average person, and it needs to be validated so it's not constantly breaking.
F: There are options; you still need to explain somewhat. The reason we did the shell script was to actually automate doing the demo recording, because we were struggling to get a repeatable demo recording, and being able to use the shell for that was very valuable. I would probably say, I might actually suggest: we need a good doc that explains the demo in clear human language with the minimum number of dependencies.
F: To the earlier point, I might suggest as an option: put it in markdown, then go through the steps, run those steps, and look for all the places where someone has to inject an external dependency; those are usually the places where end users fail. That might be another one. We did this in kube a couple of times, where we read the markdown, parsed out the objects, and validated them. At the end of the day, it doesn't matter how good the system is if no one tries it.

The primary user you're trying to serve is the person who comes to that page and says, "I want to understand what this does," and if you don't get your message across, there's no point in having the project, right? The project is an abject failure regardless of everything else. So I think...
F: If we want to test that end users are successful, there are multiple ways you can observe that, but a very concrete way is making sure you don't break an actual sequence that should reliably work. If you want to have a doc and keep the Go code in sync, great. If you want to have Go code that looks at a structure and does certain steps, sure. But I think the shell code is not the important part.
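A minimal sketch of that "validated markdown" idea in Go; the file name, the `sh` fence language, and the bash flags are assumptions for illustration, not existing kcp tooling:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"regexp"
)

// fence matches fenced ```sh blocks in a markdown demo doc.
var fence = regexp.MustCompile("(?s)```sh\n(.*?)```")

func main() {
	// The human-readable doc itself is the thing CI validates: extract
	// each shell step from it and run them in order, failing fast on
	// the first broken step.
	doc, err := os.ReadFile("docs/demo.md") // hypothetical path
	if err != nil {
		panic(err)
	}
	for i, m := range fence.FindAllSubmatch(doc, -1) {
		cmd := exec.Command("bash", "-euo", "pipefail", "-c", string(m[1]))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Printf("demo step %d failed: %v\n", i+1, err)
			os.Exit(1)
		}
	}
}
```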
A: Yeah, I mean, for what it's worth, I also died a little bit inside when I even lightly suggested having our tests be in bash. I don't want that either. My reason for wanting anything like that at all is, sort of like Clayton says: it works, we get a demo, we record that demo, we put it in the readme, everyone's happy. And then a week later a totally unrelated change breaks it, and the next user that wants to go try it out is frustrated and just...
C: Is there a translation layer between... if you increment the verbosity of client-go far enough, is there something that will take that and... make kubectl?
F: I bet you could write a... actually, okay, let's take this back. If you write a good enough Go flow, and that actually is your demo, then someone reading it... you could just take that Go file and turn it back into a markdown script: turn the comments into paragraphs, and you're explaining what each step does. That might be another option: generate the markdown from the Go code of the test, or something.
G: Yeah, I mean, I am entirely happy not to continue to hack together a script that has to be flawless and maintained and whatnot. I want to record an asciinema. I would like to have human-readable language, as you said, Clayton, that makes it easy for people to understand and try it out. But if we're not going to maintain the bash in perpetuity, I don't think it makes sense to check it in, because we have done zero jobs of keeping it functional as the code has evolved.
F: The only metric that matters, from the perspective of the repo, the readme, and the flow, is: do you get that idea across? So the metric here is people succeeding at that; everything else we do is irrelevant, or rather, everything else we do is in support of that metric. We're optimizing for the metric of: you come to our readme and you successfully see the potential, either by watching, reading, or trying. And the trying is the one that we need to make sure doesn't break.
A: Maybe the most scalable solution is: have some script, don't check it into the repo, or, if we check it into the repo, say "this was last seen working against commit abc; to try this yourself, check out commit abc and run it against that." You won't get recent changes since then, but that's probably good, because we probably broke the demo script since then.
F: Yeah, I didn't want to... the demo's magic script, I think it'd be fine if that dies. That was only so that I could step through recording a video as we led up to May, and it just hung around because we had nothing else. Let's make the "nothing else" replace it.
A: No, no, I won't. All right, anything else?
C: By the way, Stefan and Andy, did we ever make a decision on whether we should auto-detect the fsync thing for Macs? That seems like a really heinous thing that everyone on a Mac trying this out will hit.
G: To explain, for those who aren't familiar with what Steve was talking about: etcd uses fsync by default to avoid data corruption. The implementation of fsync on Mac was corrected several Go versions ago to do a full sync, which makes it slower, but it is correct; functionally it's working the way it needs to. It just means that if you're trying to run etcd on a Mac, it tends to run a little bit slower than it might otherwise. And there is a flag you can set, especially if you are on a single-node, single-member etcd, to tell it not to use fsync. It's marked unsafe for a good reason, but it does speed things up. So I don't...
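The flag being discussed is etcd's `--unsafe-no-fsync`; when etcd is embedded, the equivalent knob is on the embed config. A minimal sketch of a throwaway single-member dev etcd (the data directory is an assumption), with the usual caveat that a crash can lose or corrupt data in this mode:

```go
package main

import (
	"log"

	"go.etcd.io/etcd/server/v3/embed"
)

func main() {
	// Throwaway single-member dev etcd. UnsafeNoFsync mirrors etcd's
	// --unsafe-no-fsync flag: writes skip fsync, so a crash can lose
	// or corrupt data. Only for local development, e.g. on a Mac where
	// fsync is a (slow but correct) full F_FULLFSYNC.
	cfg := embed.NewConfig()
	cfg.Dir = "/tmp/dev.etcd" // assumption: any scratch directory
	cfg.UnsafeNoFsync = true

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()
	<-e.Server.ReadyNotify()
	log.Println("dev etcd ready")
}
```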
C: Does it cause those...?

G: ...faster. Even if you're doing, you know, one parallel one, it's still significantly faster.
C: Gotcha. I was really surprised that fsync on Mac takes like 100 times longer than on Linux, but whatever. Okay, cool; it doesn't sound like we need to call that out in the review. I just didn't want a new user starting kcp and having etcd fall over, and being like, "what is going on here?"
G: No, I mean, I think we're better off having a correct, data-corruption-free version of etcd running by default than otherwise.
A: Cool. We have a few minutes left, but David also added an item two minutes ago.
I: Yeah, just a heads up that some of you saw a pull request at the end of last week, Thursday, where I was proposing having a dedicated kubeconfig for the virtual workspaces.

That was mainly related to the fact that, until now, the kubeconfig generated by kcp was mainly using the loopback client certificates, which cannot be shared with any other external component. So it was not possible to share the certs between both components, kcp and the virtual workspaces command.

Now that I've fixed that, it's possible to have a very transparent way to access the virtual workspaces, even if your current kubeconfig is the admin kubeconfig generated by kcp. So even if you're on the admin kubeconfig and admin context, if you try to access, for example, the kcp workspace plugin without any change, you can get the list of the workspaces. It mainly takes the current workspace and just changes the port, or path, accordingly, according to the need, and everything works. So that should, I assume, enable us to simplify the prototype 2 scenario in a better way. So yeah, feedback welcome, probably offline.
offline.