From YouTube: Community Meeting, April 19, 2022
B: Cool, so I just wanted to give a quick update on the CockroachDB benchmarking work. Super quick background in case people weren't in some of the other meetings: I'm looking at what it takes to use CockroachDB as the backing store for Kubernetes and, as a follow-on, kcp. Specifically, we're looking at high-scale use cases where previous kcp iterations would have thought about tessellating multiple etcd instances together.

B: I've got a couple of links in the comments. One is a repo where I have my benchmark setup and data-acquisition sort of stuff. Oh sorry, let me also add a link to the document where I'm putting down what kind of testing I'm doing and why.

B: I got some feedback on the doc. If you have thoughts on different benchmarks that you want to see, or specific things you want me to investigate, please, please let me know.

B: So the first test, while I'm figuring out screen sharing. The setup is basically: start a server. We're not going to have the store in in-memory mode; you know, we're not storing anything in memory. There are all these different options that make the store look a little bit more like a test or a fake rather than a real thing, so I'm trying not to use any of those. So: start a Kubernetes API server, start either etcd or Cockroach in HA mode with all the best bells and whistles that we know about turned on, and then the first thing that the test does is load five gigabytes of data into the storage.

B: In this case, if I remember correctly, we're using 100 kilobytes of filler data inside of a pod, and then we're just posting however many, 500,000 pods or something, and then once we have a database size, we start doing churn.
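A minimal sketch of what that load phase might look like with client-go; the kubeconfig path, annotation key, and pod shape are illustrative and not the benchmark repo's actual code.

```go
// Sketch of the load phase: create N pods, each carrying ~100 KB of filler.
package main

import (
	"context"
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the real harness drives the containers directly.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	filler := strings.Repeat("x", 100*1024) // ~100 KB of filler data per pod
	for i := 0; i < 500_000; i++ {          // post pods until the target database size is reached
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:        fmt.Sprintf("bench-%06d", i),
				Annotations: map[string]string{"bench.example/filler": filler},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "c", Image: "registry.k8s.io/pause:3.9"}},
			},
		}
		if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
}
```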
B: We just do a bunch of churn, and what I'm measuring is actual client request-response times and also the server load required to do that. In this case, on the left, you can think of these basically as histograms: create performance is fairly equal between the two; on delete, you know, CRDB has a slight edge on the long tail; on gets, CRDB is much faster than etcd; on updates, CRDB seems to be a little bit slower. And then on the right, because the CRDB gets were so much faster and we're doing a one-to-ten write-to-read ratio, the CRDB test actually finished first, so the x-axis is time here. We're seeing that, you know, Cockroach is pretty good at keeping its working set under control, but it trades that off for a bunch of CPU-intensive work, whereas etcd...

C: ...of the memory sizes of these processes? Okay, for both, okay, thank you, yeah. So this is the sum for both? Now that doesn't make sense to me, because if you said five gigabytes of data, wouldn't the sum be 15 gigabytes?
B: All I can tell you is that this is the data that cAdvisor gave me about that. Okay, here's... this is on my workstation, oh.

C: Well, the other question, of course, was: at least with etcd, and I guess logically with CRDB as well, there's the history trimming or compaction. What's the compaction behavior in this test?

B: I believe this test ran for just under five minutes, and so it was not readily apparent. That's something that I intend to focus on in a future test. Thank you, yeah. Also, to be super clear, please jump into that repo that I linked if anyone wants to reproduce these. Everything, from starting the containers to getting the data to plotting it, all of it should be sort of one button push in there, and if it's not, then please let me know. So one thing I'll note: I'm still looking at a potentially better visualization for this. As the actual size of the database grows, the edge for CRDB increases.
B: So when we're looking at single-replica deployments with 100 megabytes of data, etcd is very efficient; as we start to get to these enormous sizes, CRDB comes out on top, at least in my testing. So I'd like to capture that in a different visual, but for now we have this. Stefan?

B: I can only do tests up to that scale until I start working on, you know, trying to automate an AWS bring-up or something; and then on top of that, etcd wouldn't exist at those sizes unless you found an instance that had a terabyte of RAM.

B: It's a good question, and there are a bunch of tools for asking Cockroach to distribute ranges across geographic areas. I have not looked into that yet. Mike, you had your hand up.

C: Okay, right, yeah, so again, I'm just learning about Cockroach. Is there some automatic shifting of data, or is it explicitly managed, the sharding, in Cockroach? There's like...

B: ...a manual the size of a book on it, so I don't really want to poorly try to summarize anything. But you have quite a lot of tunability as an administrator.

B: Perhaps, yeah. I think also intelligent partitioning of data and access would help there. So if we're lucky, okay, great. Oh sorry, before we go on: any other questions or comments on this benchmark?
B: Cool, so the next thing I looked at was basically: Cockroach, as a SQL database, has indices, so what happens when we actually use indices? What I did for this test was fairly simplistic, and definitely not what I think the final solution should look like. But, you know, in the Kubernetes API there's a set of interfaces that are implemented by both the storage layer and the upstream caching, and the caching has indices, and the constructors are all the same.
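For reference, those shared index constructors boil down to client-go style index functions. A hedged sketch of the shape, with an index name and pod example that are illustrative rather than the exact indices the API server registers:

```go
// Shape of a watch-cache style index: a named function that maps an object to
// the values it should be indexed under (client-go's cache.IndexFunc).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/cache"
)

func podNodeNameIndex(obj interface{}) ([]string, error) {
	pod, ok := obj.(*corev1.Pod)
	if !ok {
		return nil, fmt.Errorf("expected *corev1.Pod, got %T", obj)
	}
	return []string{pod.Spec.NodeName}, nil
}

func main() {
	// The same Indexers map that feeds the watch cache is what the experiment
	// plumbs into the storage implementation.
	indexers := cache.Indexers{"spec.nodeName": podNodeNameIndex}
	store := cache.NewIndexer(cache.MetaNamespaceKeyFunc, indexers)
	_ = store
}
```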
B: So I basically took the indices that are currently used in the watch cache and plumbed them into the storage, and then Cockroach can look at it and say: okay, I'm writing a pod, I know that the pod has this index, so I'm going to go ahead and explicitly write a separate table that also has this index set up.
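Roughly, the "separate table" idea looks like this; the table and column names are made up for the sketch, and this is not the actual storage code:

```go
// Alongside the raw object write, store the value the index function computed
// in a side table, so the database can answer selective queries itself.
package main

import (
	"context"
	"database/sql"

	_ "github.com/lib/pq" // CockroachDB speaks the Postgres wire protocol
)

func writePodWithIndex(ctx context.Context, db *sql.DB, key string, obj []byte, nodeName string) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback()

	// The raw key/value write the storage layer does anyway.
	if _, err := tx.ExecContext(ctx,
		`UPSERT INTO k8s_objects (key, value) VALUES ($1, $2)`, key, obj); err != nil {
		return err
	}
	// The explicit index write, computed from the object at create/update time.
	if _, err := tx.ExecContext(ctx,
		`UPSERT INTO pods_by_node (node_name, key) VALUES ($1, $2)`, nodeName, key); err != nil {
		return err
	}
	return tx.Commit()
}

func main() {
	// Illustrative insecure local CockroachDB DSN.
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257/bench?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	_ = writePodWithIndex(context.Background(), db, "/registry/pods/default/bench-000001", []byte(`{}`), "node-0001")
}
```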
B: Using implicit indices based on the structure of the JSON, that's an interesting avenue of investigation; I haven't looked at it yet. So what we're looking at here is very explicit: these are indices that were created at create or update time. And then we're looking at, if I create a whole bunch of data and then I'm doing lists against it, and my lists have a field selector, then depending on how selective my field selector is, what does my performance look like? Here I think we're selecting everything from one item out of 500,000 up to, whatever, 40 percent of it, and as you can see, the more selective your query ends up being, it works out orders and orders of magnitude faster than the API server on etcd, because we're able to offload all of the selection onto the database instead of having to load all of it into memory and then do the filtering. And then at enormous sizes, when we're talking about a list with a field selector that selects 40 percent of all pods, here we're talking about like 200 megabytes of data, and we start to see the times converge. I imagine that's mostly data throughput.
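The lists in question are ordinary field-selector lists; a sketch with an illustrative selector value (spec.nodeName is one of the selectors the built-in pod storage supports):

```go
// A selective list: the field selector lets the storage layer (or the database
// behind it) do the filtering instead of the API server loading everything in.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=node-0001", // the more selective, the less data comes back
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("matched pods:", len(pods.Items))
}
```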
B: I could do that, yeah. Okay.

F: Curious if the transfer then starts to outweigh it. This is neat.

B: Also to be clear, we're looking at performance without paging and without the watch cache. Obviously, if you let the API server hold the entire data set in memory and you're serving the whole thing from memory, then the watch cache is fast; but you're looking at Cockroach performance here with zero memory overhead on the API server.

B: Cool, so one really exciting thing about this test for me: there's a whole bunch of conversation from many years ago about allowing people to define generic field selectors, especially on CRDs.
B: With, you know, a database that's somehow aware of schema, this sort of thing at least has legs; in the etcd case, obviously, the solution is to hold everything in memory and do an index there. Derek, you have a question?

F: Yeah, I'm sorry, I wanted to make sure I understood. I'm assuming that all content, eventually, at rest, will need to be encrypted by some customer-managed key. So are you saying that, that being the case, we would still get these performance benefits, and it's only in the case where it wasn't encrypted that you could get even better? I just want to make sure I understood.

B: So today, Kubernetes has a very unique approach to encryption at rest, where kube is encrypting the data at the API server level and passing it down, as I'm sure you know. Right, so I think a customer-managed encryption key at the Cockroach level would not stop Cockroach from being able to do implicit indices or, you know, inverse indices on, like, JSON data there.

B: In any case, this works even as an explicit thing: what we're doing here is basically taking the cache functions, computing the indices at the kube level, and just doing raw inserts.
C: Yes, I was also wondering, again, the question about the CPU, right, you've offloaded it. So did you look at what happened to the CPU as you offload this work?

B: So, more concretely, the watch, the actual watch part of the watch cache that's holding on to 100 events per type, there's no reason that that shouldn't be possible. So I would imagine that both of those being tunables would allow you to get, you know, whatever performance you were looking for, based on my understanding of where the watch cache is most useful today; but as far as, like, resuming watches and correctness semantics, I don't imagine we'd have any impact.

B: Yeah, I actually remember reading that now. Especially, yeah, on gets, that's the current behavior.
E: Yes, so last week, and maybe the week before, I spent some time fleshing out some of the additional functionality that's in the epic for API schema exporting and binding. So I have a demo to show; let's see if this is going to work.

E: Okay, can you all read this? Okay, yes, all right. So the demo that I want to show today starts off by putting on my API service provider hat, so to speak. The example that we've been using throughout our planning has been cert-manager. I'm not demoing cert-manager today, but imagine that I wanted to provide something like certificates, and APIs for certificates, to other users, and I don't want them to have to worry about getting cert-manager or any operator installed. They just want the APIs. And then, from my perspective, since I'm writing a controller or an operator, I want to see all the instances of just my type, or my types, that I'm exporting; I need some way to do that, and so I have a virtual workspace for that. David has put together a really good starting framework for making that pretty easy to set up.
E: So let me start. I'm going to switch to a workspace that is called foo, and if we look in here, I have an APIResourceSchema. This one I was playing around with, trying to make sure that we couldn't have any sort of identity hijacking, so the name here is a little strange. There's an APIResourceSchema that represents the API group andy.io, and the resource is called dashed-endpoints; I happened to be testing whether you could have a dash in the resource name.

E: So if we take a look at this in YAML... let me skip over that. What you'll see is that it looks a lot like a custom resource definition: the spec has a group; it has names for the kind, the list kind, the plural, the singular; what scope it is; how many different versions it supports. This is the schema for the core v1 Endpoints resource, just because it's relatively small and it's what I was playing around with, and if we get down to the bottom here, that's just the end of the schema; there's not much else. So it looks very much like a custom resource definition, it's just a different type and a couple of minor changes here and there. So that's my APIResourceSchema. You'll notice that in my workspace I don't actually have any CRDs.
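Not the real kcp type definitions, just a Go sketch of the shape being described, to show how closely it mirrors a CRD; all field and kind names here are illustrative:

```go
// Rough shape of an APIResourceSchema as described in the demo: a group, the
// CRD-style names, a scope, and one or more versions carrying an OpenAPI schema.
package main

type Names struct {
	Kind     string
	ListKind string
	Plural   string
	Singular string
}

type Version struct {
	Name    string
	Served  bool
	Storage bool
	Schema  []byte // the OpenAPI v3 schema, as in a CRD version
}

type APIResourceSchemaSketch struct {
	Group    string    // e.g. "andy.io" in the demo
	Names    Names     // e.g. Plural: "dashed-endpoints"
	Scope    string    // "Namespaced" or "Cluster"
	Versions []Version // how many different versions it supports
}

func main() {
	_ = APIResourceSchemaSketch{
		Group:    "andy.io",
		Names:    Names{Kind: "DashedEndpoint", ListKind: "DashedEndpointList", Plural: "dashed-endpoints", Singular: "dashed-endpoint"},
		Scope:    "Namespaced",
		Versions: []Version{{Name: "v1", Served: true, Storage: true}},
	}
}
```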
E: So I have an APIExport here. If you take a look at it, it has a name, dashed-endpoints, and in the spec it references one or more APIResourceSchemas by name; this is a direct name reference to the APIResourceSchema.

E: It happens to have the same name as the APIExport that we were just looking at, but these names don't have to match. What you'll see is that the controller has come in and said: okay, there's an andy.io group, there's dashed-endpoints, and then there's a schema that has a UID, an identity hash and the name, and we keep track of the storage versions from that APIResourceSchema.

E: So, if it works... so I've created a dashed endpoint called test-one, and if we take a look at what this is actually doing, you'll see that it's going to the root:default:andy logical cluster and it's asking for dashed-endpoints inside of my default namespace.

E: Okay, here we go. So we have a new virtual workspace at /services/apiexport, and then there's a series of path segments that represent the workspace or logical cluster that the APIExport is in; dashed-endpoints is the name of the APIExport; this hash is the identity; and then the rest of the URL is a normal URL for any sort of query that we might want to do. So I can do a wildcard get or list against all of the dashed endpoints in all of the workspaces, and what you end up with is a list. There's only one item in it right now, but you see the cluster name is root:default:andy, and nowhere in my query did I ask for root:default:andy. So what this is doing is saying, because of the wildcard character here: go to any workspace within kcp, in the control plane, that has andy.io v1 endpoints matching the identity hash and APIExport, and the name or the workspace that it came from. If somebody else were to export a dashed-endpoints resource, it would not show up in this query; but this will show all of the instances across all the workspaces that have bound this particular export into them.
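Pieced together from the demo (the segment order and the wildcard /clusters/* form are my reading of it, not a verified spec), the URL looks roughly like this:

```go
// Assemble the APIExport virtual-workspace URL described above: /services/apiexport,
// the workspace the export lives in, the export name, the identity hash, and then
// a normal wildcard list path for the resource.
package main

import (
	"fmt"
	"strings"
)

func main() {
	segments := []string{
		"services", "apiexport",
		"root", "default", "andy", // workspace / logical cluster of the APIExport
		"dashed-endpoints", // name of the APIExport
		"<identity-hash>",  // the identity hash shown in the demo
	}
	base := "/" + strings.Join(segments, "/")

	// Wildcard list: every workspace that has bound this export.
	fmt.Println(base + "/clusters/*/apis/andy.io/v1/dashed-endpoints")
}
```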
E: So that's the bulk of the demo here. What we have left to do is getting a couple of PRs in place in front of this before I can open up a PR to do the virtual workspace, and I haven't addressed anything related to authorization yet, so everything that I've been testing is just running as the root user, who has full access to everything. We definitely need to address that. And then one other thing that we'll want to do: it's extremely likely that a controller will need to pull in everything that has been exported, plus secrets, config maps and whatnot, and ideally have it be a subset of whatever types they're asking for, so that we don't give people read-all-secrets across all workspaces when they may only need just a certain subset of them. So that's what I've been working on. Expect the PRs to get updated over this week, and hopefully done this week too, and that's what I've got. So Nick, I saw your hand had gone up briefly.
G: Yeah, more of a comment on a parallel with the problem that we have been trying to work out with authorization, really configuration in general, for operators in OLM against, like, a regular kube cluster. The problem is, as I sort of expand the constraints of the operator, or expand its view over a cluster in terms of the namespaces it can watch and the things it can do as more and more users onboard: how do I provide that information to the operator? I guess the cluster admin, or whoever, is the one expanding those privileges. So we really generalized the problem to any sort of configuration: an operator author can provide, basically, a template, and it templatizes their config as a bunch of other kube resources that get applied to the cluster. So I am very interested to see how that really generic approach could pair up here, where it's just, like, a template of resources that you apply in the operator, or whatever controller. Let's not use the word operator, let's just say controller. Whatever the controller requires, because you could imagine it requires some secrets in the workspace to be generated in order for it to work, or it needs more than just RBAC, et cetera. So I might put up an issue.

E: Yeah, I think it would be helpful if we could set up a time for you to walk me through that; I'm not super familiar with it. If we can take any lessons learned and carry some of those designs forward and get them merged in here, I think that would be pretty beneficial.

A: Ideas... I mean, if the names are static, known in advance, we can of course put that in some policy or some permission claim or something like that. The interesting part comes when a CR or an object reference points at another one, like a secret ref; so we might want to point, with JSON paths or something like that, at the secret reference, and the pointed-to object is automatically visible, or something like that. So if you have ideas along those lines, they're welcome. Let's see what you have done there; maybe there's a fit here as well.

A: I think we should probably talk about that in the modeling session. What we definitely don't want: a controller, just because it has to read one secret, shouldn't see all the secrets, because there might be a thousand customers behind it, right. So we know what we don't want, but we have to find something to formulate that, to specify that.
G: Yeah, the trap that I think we fell into very early in the design of OLM, that we had to fix later on, was really: we thought it was all about RBAC, and scaling out RBAC, and maybe secrets, and it was all doing that up front. Like, you configure your tenancy up front, and then that RBAC would be generated and spit out to whatever namespaces that configuration had. But we kind of realized that it's not just permissions, it's not just RBAC; it's whatever resources a controller might need to be generated to work with a specific namespace. So you take that up one level to workspaces and logical clusters, and it could be applicable. All I'm saying is that we probably shouldn't over-specify; we probably want it to be as generic as possible.
A: So the background, I'm not sure everybody got this: we wanted to get rid of wildcards; Steve especially mentioned that a couple of times, because of those reasons, that you can hijack data by creating a CRD in your workspace. Because of that, we were forbidding wildcards without admin or system:masters permissions, actually. So this is a path where we keep wildcards, but in a secure way. The identity is something... it's a secret, intentionally, and you didn't mention that: the secret is just a random string, a securely generated string, put in a Secret. You as a controller also know that, so you have to keep it along, make a backup of it, and you can even move your APIExport to a different place, copy the secret, and you would get access again to the same old data everywhere where we replicate this.

A: It's a private key concept, basically, yeah: the public one is used in the etcd keys, and we can use the private key... we haven't spelled it out completely yet, but we can use the private key for authorization, of course. So if you know the private key, you will get access; if you don't have it, if you lose it, the data is gone. And of course in etcd, you haven't shown that, right, in the table. Yeah, that's interesting, I think.
E: So this is what one of the keys looks like, just for what Stefan was talking about. There's the Secret that was automatically generated, called dashed-endpoints, and it's just base64; you know, it's the encoded RSA private key, and then if I do a SHA-256 on that, it's the identity. But so, yeah, if we run etcdctl, you'll see a few things in here. I was fiddling around with my Kubernetes code, so ignore the first one; that's not actually how it's stored.
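So the identity is just a digest of that generated key. A sketch with crypto/sha256; whether the hash is hex-encoded is my assumption, it isn't stated on the call:

```go
// Identity = SHA-256 of the generated RSA private key material stored in the Secret.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func identityHash(privateKeyBytes []byte) string {
	sum := sha256.Sum256(privateKeyBytes)
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(identityHash([]byte("-----BEGIN RSA PRIVATE KEY-----\n...")))
}
```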
E: This one that's currently highlighted is how this gets stored. It's /registry, and then the group name, and then the resource name followed by a colon with the identity hash, and then the rest of it is the normal cluster name, namespace and name of the resource.
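A sketch of that key layout; separators beyond what was said on the call are assumptions:

```go
// etcd key layout as described:
// /registry/<group>/<resource>:<identityHash>/<cluster>/<namespace>/<name>
package main

import "fmt"

func storageKey(group, resource, identityHash, cluster, namespace, name string) string {
	return fmt.Sprintf("/registry/%s/%s:%s/%s/%s/%s", group, resource, identityHash, cluster, namespace, name)
}

func main() {
	fmt.Println(storageKey("andy.io", "dashed-endpoints", "<identity-hash>", "root:default:andy", "default", "test-one"))
}
```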
E: And then, if anyone is creating just normal CRDs, not using the APIExport and APIBinding APIs here, those CRDs do not use this prefixing or suffixing with the identity at the end, and they would just show up as normal entries. So, like the one I have highlighted right now for the APIResourceSchema: that is a CR for the APIResourceSchema CRD, so you don't see any colon with the identity hash there.
A: There's another idea we are exploring: if you have breaking API changes and, as a controller author, you want to have two informers against the old and the new world, you might be able to use a different identity, a different export. Those have the same resource, the same group-version-resource, but the identity is different, so it's actually a different resource. For the user it doesn't matter, the user doesn't see it, but as a controller author you can partition your key space, and you can run an old controller manager, or controller, operator, whatever, and a second, new one, and the new one just sees the new objects. So those things can be done here. But of course it's in etcd, so it's stored on disk, so you cannot easily switch over.
E: Or conversions of some sort. Like, we don't have conversion webhooks enabled, so we're still working on the story for how we do conversions between API versions, and whether that's within an existing identity or across two different ones; it's kind of the same problem.

E: Yeah, it took me under two hours, and that was going from not having looked at anything that David had written for virtual workspaces. He pointed me to a couple of places to start, I copied the syncer virtual workspace that he's got in a pull request and just ripped out all the syncer-specific bits and started plumbing in the APIExport and identity things, and it was pretty straightforward.
E: So we just need to have a discussion about all of this. I would benefit from a conversation overviewing the proposed APIs for locations and scheduling, and then seeing how that ties into negotiation domains, and then the API resource imports and negotiated API resource CRDs that we currently have: do they fit in, do they need to change, do they go away, and what all that looks like. Yeah, so...

E: Yeah, this one we discovered yesterday or Friday, because...
E: The leader election code is written in client-go; it assumes that it can look at the in-cluster namespace file and that that's where the locks need to go. So we just need somebody to explore how to make this work. A workaround is, in kcp, you create a namespace that has the same name as the automatically created namespace in the workload cluster, the one that's "kcp" followed by the hash of the locator. But that's not elegant, because you end up just creating extra namespaces in kcp that you don't necessarily need.
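For context, client-go style leader election derives its namespace from the in-cluster service-account mount, which is why the lock lands where it does. The helper below mirrors that lookup and is not the exact client-go code:

```go
// Where an in-cluster controller thinks "its" namespace is, and therefore where
// the leader-election lock object gets created.
package main

import (
	"fmt"
	"os"
	"strings"
)

func inClusterNamespace() string {
	// Well-known file mounted into pods alongside the service-account token.
	data, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/namespace")
	if err != nil {
		return "default"
	}
	return strings.TrimSpace(string(data))
}

func main() {
	// The workaround described above: pre-create a namespace with this same name
	// (the kcp-<hash-of-locator> one from the workload cluster) in kcp so the lock
	// has somewhere to live.
	fmt.Println("lock namespace:", inClusterNamespace())
}
```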
A: So it's a cleanup; when we have the external generators, we can get rid of that, right.

D: Yes, basically, we need to create the tests for the downstream SA feature. We are missing those tests, but I guess there is already another issue about that.

E: I just want to mention that there were a lot of things in the 0.4 milestone that were not blockers, and also there was just no way that they were going to get done in the next week and a half. So I did clear the milestone from several issues, and we should find a time to review all issues without a milestone.

E: And then finally, I don't know that we're necessarily going to get all 42 of these open issues and PRs in for 0.4. So if you've got something that you see in here and you want to change the milestone, because it's not required or it's not going to make it, please add a comment and we'll try to chat about it.

E: Should we do a bug scrub session, maybe next week? I think it would be helpful. We've just, you know, accumulated a bunch of stuff, and we need to do some housekeeping.