From YouTube: Community Meeting, November 9, 2021
A: We have some discussion topics, some quick, some PRs to go over, and a doc that was shared. Clayton shared a terminology ADR. ADR is an architecture design review document (somebody can tell me if that's wrong), but basically it's some terms that we usually throw around as if we know what they mean. This is a place to sort of argue about and agree on what those words mean. If you're curious, take a look. I think that will be useful.
A: Andy also sent a PR recently to refactor how the kcp server starts up. It seems in general good to me, and I think Steve has also reviewed it. I don't know, Andy, if you have more you want to say about it.
B: Yeah, I did just push up another fairly large commit to it, so what you've seen was about half of what's there now. It elevates the creation of some clients and the shared informer factories higher up in the stack, so that the clients and the informers can be shared by multiple controllers. I also plumbed through the done channel from the context in as many places as I could find quickly this morning. So one positive effect there is that, if you're just running the kcp process by hand...
B: You hit Ctrl-C to quit it, and it takes much less time to shut down now. I think it was the namespace controller that was keeping it open, because it was basically coded to never stop. So that last commit is a bigger refactor, but it's really just moving stuff up. There's a chance that I got the order of something wrong, because when you do it this way, you've got to make sure that the informers get created, the factory gets created...
B: Then the controllers get created, where they're referencing the informers; then you start the informers, wait for the cache sync, and then start the controllers. So I think I got the order right. The startup seems to look the same in terms of the log output, but definitely some eyes on that would probably be helpful, just to make sure I didn't miss anything, and, you know, if you all have tests that you run just to fiddle around with stuff locally.
A: That's good to know, thank you. Yeah, as Steve said, the classic bait and switch: you get a couple of approvals, and then you completely rewrite it from the ground up.
B: I'm also happy to split this third commit into a separate PR, if you all want.
A: Fine to me. So we do have some tests. I don't see them running, which is weird, so I'll look into that. But we do have some end-to-end tests that run through the basic demo scenarios, so that should at least prevent you from breaking, you know, everything.
A: Nice, nice. And last night I sent out a PR that is also pretty complex, and probably in need of some simplification, which basically just reconciles namespaces. We want to be able to schedule more fine-grained than just namespaces, but in the meantime, being able to schedule whole namespaces' worth of things is better than what we do now, which is almost nothing.
A: So this is an attempt to match up a namespace to a cluster: label that namespace for that cluster and have it synced and scheduled there, eventually, someday. And for any new resource that comes into a namespace, assign it to the same cluster its namespace is assigned to. It also does some very basic health checking on a cluster, so if a cluster becomes unhealthy, it will shed all of the namespaces attached to it and have them reassigned somewhere else.
A: I will not say that it is any kind of bulletproof, useful, good-for-production anything, but it is, you know, miles ahead of what we have now. I need to write some tests for it; that's mainly why it's work in progress. I think just some basic tests, to make sure that things work like we think they work, would be useful.
A: It keyed on the namespace name and not the logical cluster, and that's not going to work if we have two logical clusters that share the same namespace name. And there's also filtering: it shouldn't write kube-system namespace stuff, and it shouldn't write... you know, there's a lot it shouldn't do that it currently would do if you tried to make it, so we should avoid that. But this is at least a step forward.
A: One moment... Right, I wouldn't say that we have a formal policy now. I think in general we have not self-merged changes. I would say it depends on what the change is: if it's some typo change and it's approved, I don't have a problem with somebody self-merging, but if it's like Andy rewriting startup, or me writing hundreds of thousands of lines of new controllers, I think somebody else should bonk the button for that. Yeah.
D: Obviously, up to this point I think it has been: approve the PR, ignore it for like a month, and then actually...
A: A very scalable and high-velocity policy we have: approve it, ignore it, approve it later. I like it.
A: Yeah, I tend to not prefer policies until they are necessary, but if we are reaching the point where we think a documented, regimented, you know, actual automated process is useful, I would welcome it. But, yeah, does that sound... we're not there yet.
A: Like me, I am open to entertaining the idea that we are there, if we think we are, but definitely the policy of approving and ignoring for a month is not something we should do. So maybe we should automate not ignoring things for a month. But yeah: are there any other recent PRs or changes people want to talk about, or upcoming changes that people want to talk about?
D: I guess one of the conversations I was having with Andy is, like, for the sharding work...
D: I think it's possible to break it up into discrete chunks that perhaps aren't very useful in and of themselves, but at least are thematically similar. So there's no reason for the PR to be one monolith. If we want to merge it, and we think that smaller PRs would help, I'm happy to do that.
D: I could try to break it up. My kube fork PR does have things broken out. I was also not entirely clear what we expected on the Kubernetes side: there are definitely shortcuts I took, and I'm not sure if we're trying to write commits that are one-for-one going to be things we can take upstream.
A: I think if you can make them something that is easy to upstream, if it is trivial or relatively easy to do that, it would be nice. But I don't think we have to, like, spend a week crafting the absolute best upstream PR in order to get it merged into kcp or our fork. I think, at least so far, what we have been doing in our fork of Kubernetes is whatever we have to, and so far we've made, like, one...
A: What... David made one good pass of, like, rebasing it on top of recent Kubernetes and cleaning it up a bit. I still don't think those changes are exactly one-for-one ready to be upstreamed and everything, but yeah, I wouldn't worry too much about it.
A: Don't make garbage, but don't try to make it pristine and perfect either, because no matter what, at the end of the day, someone will make you make a change upstream also to be able to merge. Yeah. So...
D: I'd had to do a lot of hackery in the end-to-end suite, but it's at least separate, and those are ready for reviews. Some of the changes, like, for instance, the manner in which I refactored client-go semantics to allow for clusterless clients scoping down to a cluster, only really make sense once they're layered on top of some of the other changes. So it's currently in one place, but, you know, if we want to review pieces of it and merge them individually, that's fine too.
A: Yeah. Stefan, did you... because...
C: Yeah, I would like to see, not in this meeting but in general, some discussion or some meeting where we go over the changes and identify those we want to push upstream, maybe early. I remember, Steve, you had the question about porting controllers upstream to be independent of resource version semantics or compaction.
A: Are we also generating functional, non-behavior-changing but cleanup changes that will lay the groundwork and help us along our path, but are not necessarily needing of a KEP, and not necessarily needing of a, you know, discussion about where we're going with all of these? I think small changes could all be upstreamed relatively easily, if we have them.
C: And those are great things new people who want to contribute and help can just pick up. They can just pick one of those things and try to get them upstream, yeah. I mean, it's a bigger topic, but I remember things like splitting up the aggregator upstream from the apiextensions-apiserver.
A: Yeah, especially things that are just cleanups and sanity changes that don't change functionality or, you know, behavior or anything. As we identify those, we should put them somewhere, maybe in an issue of things we want to upstream, or a doc of things we want to upstream. There's no reason to hold off on those, right, if they're small, if they're not behavior-changing, if they're just cleanups, especially cleanups that will help us later. Like you said, it takes a while anyway to get them upstream.
B: Sorry, I was gonna say: let's create a doc and start to enumerate the, you know, full set of things that we've done, probably with an eye to what we'd like to do, and then start creating those tasks that you were talking about, Stefan. And then maybe later this week, or at the next community meeting, we can review them and start to...
A: All right. I wanted to bring up a topic we had talked about in some meetings over the last week, which is virtualizing resources.
A
I
think
I
think
we
in
general
think
this
will
help
a
lot
with
either
exposing
different
views
of
the
same
object
and
storage,
because
we
can
have
a
virtualized
layer
that
sort
of
transforms
it
on
the
way
out
to
the
client,
or
I
think,
one
way
it
came
up
in
discussion
that
was
new
to
me,
which
was
when
forking
a
workspace.
I
think
I
had
mainly
thought
that
we
would
create
a
new
logical
cluster
and
copy
all
the
objects
from
previous
workspace
into
new
workspace
and
now
boom.
A: You forked. But I think it was Clayton who was saying that copying objects both takes too long and costs too much, and so instead we should have some virtualized, like, resource pointer that says: okay, now I've created a fork, and it is literally just pointing at the same objects, and as you change the fork you overlay on top of those, or something. I'm not quite sure I understand; maybe others understand.
B: So I don't know about forking workspaces specifically, but I know from talking to Clayton earlier this week, which I guess was yesterday, that he feels very strongly that creating a new workspace is a zero-cost, or as close to zero-cost, operation in terms of storage, at least. So, if you're going to create a new workspace that is of a specific type, where the implication is you get a whole bunch of APIs pre-installed in your brand-new workspace...
C: Yeah, I think the CRDs... I think we talked about it before, something like a CRD we'd need, but it could be, of course, something like, I don't know, a workspace type which was synced to a cluster and then shared by many workspaces. This would work, yeah.
B
So
there's
probably
at
least
a
couple
different
use
cases
like
I
want
to.
I
have
an
existing
workspace.
That's
got
a
whole
bunch
of
deployments
and
other
things
in
it,
and
I
want
to
fork
it.
I
would.
I
haven't
talked
to
clayton
about
it,
but
my
gut
feeling
is
like
that's
a
copy
like
you're
copying
the
data.
That's
in
there,
the
the
alternative
would
be.
B
You
do
a
copy
on
right
type
of
optimization
so
that
you
somehow
don't
have
actual
storage
copies
until
there
is
a
change,
and
I
think
that
the
workspace
forking
is
probably
a
different
use
case.
Then
I'm
creating
a
brand
new
workspace
and
I
want
it
to
have
some
apis,
pre-configured
and
pre-installed.
A: Yeah. So is the idea, as you understand it, that I create a new workspace of some type, which means, to me, it looks like it comes pre-installed with these hundred CRDs or whatever, but in reality those are just virtualized and pointing to some CRD definition elsewhere, maybe in another workspace?
B: I mean, the use case that I would think of for that is, like, there's some service operator that is giving you, I don't know, cert-manager as a service, so you don't have to worry about installing it and operating it. And so, if that operator entity is creating CRDs that represent issuers and certificates and things, the schema, the actual CRDs, are read-only to you as a consumer...
B
Who's
got
them
bound
into
your
workspace,
like
that's
my
gun
feeling,
and
so,
if
you
wanted
to
modify
one
of
those
crds,
it
would
either
be
rejected,
or
maybe
you
are
sort
of
forking
it
at
that
point.
But
I
don't
know
I
I
would
tend
to
say
like
if
it's
operated
by
somebody
else,
you
shouldn't
be
mucking
with
the
schema.
A: Go ahead, Stefan.

C: Yeah, they could be virtual; that's what Jason said. And just as background: we have CRDs which are near the object size limit, like a megabyte of data, so 100 of them means 100 megabytes of JSON per workspace. And if you think that one etcd cluster today has 8 gigabytes max of memory, this limits the number of workspaces considerably.
A: Yeah, virtualizing these things seems important; it seems like we have to. But it also seems like a fairly large, unspecified area of work: where and how is virtualization implemented? How does it look to users? I think...
B
That
I
would
caution
us
to
be
careful
about
using
virtualization
as
an
overly
broad
and
generic
term,
because
somebody's
got
to
go,
write
some
code
somewhere
to
do
whatever
task.
It
is
you're
trying
to
do
so.
If
you
want
to
say
there's
a
crd
in
logical
cluster
a
and
I
want
it
to
magically
be
available
in
logical
clusters,
bcd
to
z,
like
there's
specific
code,
that
has
to
be
written
to
do
that,
there's
different
code
that
you
have
to
write.
If
you
want
to
create
a
resource
that
represents
the
workspaces
I
have
access
to.
B
So
I
think
at
least
from
for
me
personally
talking
about
just
virtualization.
I
struggle
with
it
because
it's
it's
a
very
useful
concept,
but
it
has
specific
implementations
for
each
thing,
you're
trying
to
do,
and
so
I
would
prefer
to
talk
about
like
each
specific
use
case.
C
And
we
have
examples
like
projects
in
openshift,
so
we
have
experience
with
that.
Publishing
the
same
object
in
different
places
is
pretty
simple:
it's
it's
pretty
much
a
shortcut,
an
api
endpoint,
nothing
else.
If
you
project
and
you
think
about
updates
and
writes,
and
so
on.
This
is
getting
complex,
so
the
big
topic
of
virtual
workspaces
with
lcd
gcd
and
maybe
maybe
other
things.
This
is
a
big
thing,
making
something
visible.
That's
the
easy
part.
C: The apiextensions-apiserver has a place for this, basically a handler, an API handler. It checks the HTTP path for APIs it knows and then calls the right handlers. That's pretty simple. That's what I said: making things visible in the API surface, this is the easy part. We can put up a task if somebody wants to play with that; it shouldn't be too hard.
B: Yeah, that was something that David and I were starting to talk about yesterday, and I think he was going to take a look at it, but he's gonna be out for a little bit, so I may pick that up in his absence. And, Jason, to your question: I think there's probably a difference between hacking with some of the existing infra, like CRDs, versus "I just want to create a new resource type and back it with custom code", which is typically what you do with an aggregated API server in standard Kubernetes.
B
I
think
kcp,
that's
where,
like
I
don't
know
where
in
the
code,
we
do
that
other
than
it's
in
the
server
somewhere,
and
so
maybe
we
can
work
with
staphon
or
somebody
else.
To
just
give
an
example
like
here's,
where
you
would
inject
some
code
for
a
virtual
resource.
B: Like workspaces: as a relatively unprivileged client, I want to do "list workspaces" and have it return all the workspaces that I'm able to access, and that is a virtualized or materialized view on the full set of workspaces, scoped down based on permissions.
B: Okay. And I think Clayton has extended that to scope things similarly that are potentially cross-workspace, but where you need to limit to what you have permission to. So, going back to my cert-manager example: I don't know that a client would necessarily want to do this, but, you know, if you're a person and you say "show me all my certificates across all my workspaces", that is another example where you need to only see the ones in the workspaces that you can access. So that probably falls under Steve's cross-shard list and watch to a large extent, but I don't know how much is filtered by, you know, RBAC in there yet, if anything. Nothing, yeah. I mean, given that we don't have the workspace as a type, we don't have workspace RBAC. I think that'll probably be the thing that comes after Steve's prototyping on cross-shard list/watch.
B: I don't think it's merged yet, and I think we can go ahead and just merge when it looks good. And, you know, if it's good enough and we want to follow up real quick with some minor changes, we can do that. But having the types exist in the tree is better than having the PR stay open forever. Yeah.
D: Yeah, we can either merge it now and then just change it afterwards, or just open a separate PR. I mean, running the generators is not that hard. Do we want to have a conversation today about the actual shape of that, like binding the workspace to a shard, or scheduling it, rather?
A
Absolutely
sure
yeah
I
mean
what
what
is
the
I.
B: Well, no, I mean it could, but any time you have to recreate something from scratch, then, if something made a determination for placement, or, you know, for some sort of calculated value, and it's not recoverable based on what you can observe... So, you know, the scheduler has made a decision: I'm going to put this pod on some node.
C: I mean, a pod gets an identity the moment it's scheduled, and then it's owned by this kubelet. It cannot be moved again; it's in this scheduled state: end state, final state, terminal state. Same thing for a workspace. It doesn't make sense to reschedule the workspace elsewhere, because of the etcd data belonging to it.
B: Yeah, I think if you're in the middle of a move and you, like, lose etcd, I mean, you're recovering from a backup. That's a very complicated edge case to work through, I think, in the interest of making progress on stuff.
D: I think you'd have to, like, when you create it, you have to set the target, and then something would have to update spec when it's done. And then it also changes the semantics of how you read the status, right? Because right now, in status, you record current and, like, first and last resource version, the resource version boundaries that it was present on the specific shard with, which is...
B: Yeah. Then you create a new CRD, which is like a WorkspaceMove CR, and that can contain the information that says: this workspace needs to move from here to there, and here are the resource versions that I need to capture. And then you can have a controller operate on that and go and update the actual workspace when the move is done.
D: Right, and so this was something I was thinking about previously. Maybe we can take this offline, but, you know, since the entire point of the workspace object right now is to store this shard mapping, everything that we choose to do with that object informs how we create the index and what the access pattern for this data is. We should probably nail this down before we... like, it makes sense to think.
B: But those would have workspace CRs in them for the actual workspaces. And so I don't know what the top-level index would be, but assuming you can figure out what organization you're in, you go find the shard that has that org, and then you can go look at the workspace CR on there to figure out what shard the workspace is on. But again, it feels like we need to actually design all of this.
C: I mean, we could postpone merging this and get a design for the whole big thing first, but I don't think that makes sense.
B: Yeah. Well, one thing that Clayton was interested in having David explore, after the workspace types get merged, is adding just a placeholder field on the workspace spec to say, like, "my type is some string", where the value is the name of another workspace. And so then, basically, you'd have the code that would say: okay, here's this new workspace...
B: Yeah, I think we should try to get Clayton's time today. And basically, we don't need to merge it until somebody needs to do something with a workspace CR, which could be the CRD prototype three.