Description
#sig-cluster-lifecycle #capn #capi #kcp
A
Cool, good morning everybody, this is the May 11th CAPN office hours. As always, this is recorded and posted to YouTube afterwards; the link will get posted into the agenda doc as well, and I'll drop it into our Slack channel, just to make sure everybody has access to go view it. So don't say anything you wouldn't want to be posted to the rest of the world, and, yeah, we'll get started.
A
So if you want to get started on that. Also, since there are a bunch of new folks, if anybody wants to give introductions to who you are to the group, that would be good.
B
I can go. I'm a current grad student at RIT, interested in the CAPN project. Nice to meet you.
D
Well, that's Clayton! No, I'm just kidding. I will introduce myself: I'm Jason. I work at Red Hat, and I have been working with Clayton and David on the kcp project, which I imagine we'll talk about a little bit. Prototype, prototype, yep, you got me, so.
C
Easy, to say I'm not Clayton Coleman. So I've been working on the Kubernetes project for a very long time, kind of pushing things along, you know, like evil-mastermind kinds of plots in the background, for how we can all get more value out of the ecosystem and how open source can evolve to support, you know, users' needs, is kind of the thing. And so this is just the latest in a series of kicks I give to the Red Hatters among us, trying to make people's lives better.
A
Cool, all right. So basically, thanks Clayton for coming in here and chatting with us, over Twitter, about this. I think there's gonna be some good overlap here, especially from the last call. It sounds like there are a lot of similar goals that we're trying to attack from a multi-cluster as well as a multi-tenancy perspective within Kubernetes, from what we're doing. To give a little background:
A
I don't know if you all are well up on what virtual cluster is, but it originally started from the folks at Alibaba, and basically started in the multi-tenancy repo within kubernetes-sigs, as a way to nest Kubernetes within Kubernetes, as pod-based control planes, and being able to take that and then sync workloads from those single-tenant control planes into a large scheduling domain, in essence. So it uses some parts of Kubernetes; at the lowest level, it doesn't use all of Kubernetes.
A
It basically treats those top-level clusters as a way to shard out into those lower-level deployments, or into those lower-level super clusters. That sounds very similar to some of the things that you all are approaching with the syncing logic that you've built into kcp and so on.
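The downsync pattern A describes, where each tenant's control plane is sharded into one shared "super cluster" scheduling domain, can be sketched roughly like this. This is a toy illustration, not the actual VirtualCluster syncer; the function name, object shape, and label key are all invented for the example:

```python
def downsync(tenant_name, tenant_objects, super_cluster):
    """Copy workload objects from a tenant control plane into the super cluster.

    tenant_objects: list of dicts with 'namespace', 'name', and 'spec'.
    super_cluster: dict keyed by (namespace, name), standing in for the
    shared runtime cluster's object store.
    """
    for obj in tenant_objects:
        # Prefix the namespace with the tenant name so objects from
        # different tenants shard cleanly and cannot collide.
        target_ns = f"{tenant_name}-{obj['namespace']}"
        key = (target_ns, obj["name"])
        super_cluster[key] = {
            "namespace": target_ns,
            "name": obj["name"],
            "spec": obj["spec"],
            # Record provenance so status can be synced back up later.
            "labels": {"tenancy.example.com/tenant": tenant_name},
        }
    return super_cluster
```

Two tenants can then both have a `web` workload in their own `default` namespace without conflicting in the shared cluster, which is the core of the "large scheduling domain" idea.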
C
Yeah. So kcp is kind of like a prototype. We've known for a long time that single-cluster tenancy in kube has challenges, and, you know, even since the very early days, like federation in kube, there was this question: you know, how do we...
C
How
do
we
take
the
benefits
within
a
cluster
which
is
a
single
failure
domain
and
either
lift
them
up
or
spread
them
out
or
cut,
put
cut
lines,
and
so
I'd
almost
say,
there's
like
a
parallel
track
for
most
of
the
last
seven
years,
which
is
like
how
do
you?
How
do
you
cut
cube
up
and
we
we?
We
have
there's
lots
of
good
ways
to
do
it.
Actually,
the
best
part
is
the
core.
Primitives
are
okay.
C
We have some fault lines that don't quite cut, or, like, you know, you make different trade-offs at different levels. And so, yeah, kcp from a prototype perspective was: could we go one step further than virtual cluster? And there was another dimension, which is: could we take the cluster out of the equation? And part of this is, like, so I don't know how much folks deal with this in their day jobs or their personal experiences.
C
One
of
the
things
that
I
see
a
lot
of
is
people
have
multiple
clusters
and
most
people
at
a
large
enough.
You
know
enterprisey
scale
or
whatever
they
absolutely
have
the
problem
of
like.
How
do
I
keep
a
workload
running
even
if
I
bork
one
of
the
clusters,
and
so
today
most
people
have
some
layer
on
top
which
is
like
take
it
and
move
it
like
in
git
ups
and
ci,
cd
bash
scripting
whatever,
and
then
you
know,
there's
the
various
projects
in
cube
which
do
it.
C
Wouldn't it be awesome if most people creating kube apps never had to know about the cluster they were on? There have been other projects, like virtual kubelet and the Fargate stuff on EKS, and all these things that kind of map to those ideas. So it's really looking at it from a different perspective. So yeah, I do think there's a fair amount of benefit if we could figure out: do the use cases match, and do the solutions match?
A
Very cool. Yeah, I agree, 100% agree. Leeds, you wanted to bring up anything specific? I know you had some ideas on where you were thinking that the vc integration, or where we could match up.
E
Right, so yeah. When we were originally working on virtual cluster, and I think Fei also mentioned this before, we were heavily focused on the hard multi-tenancy model in Kubernetes, and obviously a straightforward way is that we provide an isolated control plane to users, and then we use the syncer to actually sync their workloads and resources, the ones that concern the runtime, to the underlying runtime cluster.
E
So that is the original goal, but we were recently also looking at a new scenario. So you can think of it like this: okay, you can provide a generic hard-tenancy model to whole-cluster tenants. Everybody will have a small virtual cluster to interact with. So this is the generic model, which is exactly what we are doing today.
E
So I'm also actually thinking about another scenario where, you know, for most of the users, they actually do not need to have a virtual cluster; they just talk with the underlying runtime cluster. But for certain workloads, especially for CRD-based workloads, then we can create a small virtual cluster for them.
E
You know, integrating into that approach. So we'd basically package kcp for the users out of the virtual cluster, but we do the same thing to make the workload run in a runtime cluster. And in that case, the virtual cluster applied to that tenant just needs to only care about the CRD-based workloads, as well as other resources as needed. So this is a new scenario I'm thinking about, which is kind of different from the generic multi-tenant model we were heavily focused on before.
C
Yeah. And I think one of the questions, and this is probably a longer-term thing, but, so, logical clusters: the logical cluster idea would be, I think, most valuable if most people deal with it most of the time. So then the question would be, you know: in a single cluster today, everybody has the same versions of the API.
C
So controllers have an easier time. Except that's not actually true, because nodes are controllers, and nodes actually have different views of what they're capable of, right? So as you're going through upgrading machines, a node is implicitly determined by what it can offer. And so there's an angle here: we never really, in kube, tackled the problem of what happens if you go layer 15 different extensions onto a node, which everybody roughly does, right, whether it's storage or networking or policy or security or monitoring tools or security tools.
C
A big pain point, if you upgrade a single cluster, is upgrading all of the APIs at the same time. We're not really pushing any of that work to the controllers, right? Because the mechanism in the controller could very easily look at two or three different versions of a resource, as people write integrations and extensions. It's actually reasonably easy to put some of that load on the controller; we just don't do it today. So one of the thoughts that kind of informs this would be: in the long run, if you've got a lot of integrations, and your integrations don't deal with version skew and drift and aren't upgraded in lockstep...
C
You run into some challenges. And so logical clusters, actually, to me, might be something that, I think, belongs in Kubernetes in the long run, or something like it, where you might actually say which versions of the objects you support, and it's up to the other controllers in the cluster to deal with the implications of it. Really, in kube today, we've never dealt with breaking a core API, nor do we really have plans to do that.
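The point above, that a controller could absorb version skew itself by looking at two or three versions of a resource, can be sketched as a small normalization step in front of the reconcile loop. The `Widget` resource, its group, and the `size` to `replicas` rename are all invented for illustration; no real API is implied:

```python
def normalize_widget(obj):
    """Accept a hypothetical Widget in either v1beta1 or v1 form and
    return one internal representation the reconcile loop can rely on,
    so the controller tolerates clusters at different API versions."""
    version = obj["apiVersion"]
    if version == "example.com/v1beta1":
        # The older version used a flat 'size' field.
        replicas = obj["spec"].get("size", 1)
    elif version == "example.com/v1":
        # The newer version renamed it to 'replicas'.
        replicas = obj["spec"].get("replicas", 1)
    else:
        raise ValueError(f"unsupported version: {version}")
    return {"name": obj["metadata"]["name"], "replicas": replicas}
```

The reconcile logic then only ever sees the internal shape, which is the "push the work to the controllers" trade-off being discussed: the cluster no longer has to upgrade every API in lockstep.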
C
We have some machinery that does an okay job of it, but every new one, like Events v2, actually has kind of been cut, or the Events evolution has been kind of painful. We've got a couple of other examples; CRDs, that's still painful. And so I was kind of looking at: what's the machinery I would use to break the lifecycle connection between a cluster and an end-user application? That, to me, is what logical clusters are, and CRD tenancy is one dimension of it.
C
So there's kind of an angle of: not everybody wants to make that trade-off. The cluster is still a pretty useful boundary, but actually, the more stuff you pack into a cluster, the more challenges you have operationally as lifecycles diverge. So the thought would be a new tool in the toolbox that could be used in many different ways. That's the logical clusters idea, separate from kcp the prototype, which has other concepts in it.
C
So logical clusters might actually be something that, down the road, we start off with at a higher level, but that then comes down and could actually benefit, you know, people doing app or schema evolution, or workload evolution, on a single cluster. I don't know if that trade-off is worth it. But I also think a single kube cluster probably does too much today, and there's no real boundary between user plane and data plane, or control plane and data plane. So.
C
The root, like the unmodified, normal kube behavior, would still be there, but is there a way that we could all kind of come up with a slight iteration of kube that still feels exactly like kube, but cuts a couple of those hard boundaries? That's another thought for where logical clusters might go. So I don't know if those ideas resonate; these are pretty far out. But I wanted to kind of reach everybody who's doing some level of hard tenancy:
C
If we could have more tools that let us do hard tenancy, trying to find out which tools, what tools can we take and modify and make more useful? Because the cost of some of the tools is high. Like, you can't really use the admission chain effectively today in lots of little clusters, but the admission chain, so, like, building into the admission chain of kube is where, like, webhooks are kind of a failure mode of admission; admission is much more powerful than what webhooks can do.
F
Yeah, yeah, yeah. I think I agree with most of Clayton's statements, because when I look at the kcp project, so, I don't, to me, I'm more interested in, you know, the virtual...
F
I think that idea makes me more excited about multi-tenancy, because if you think about it, multi-tenancy is about how you, you know, separate the control plane. So I think even with kcp, even with our current vc model, you know...
Chris knows we are trying to make, you know, virtual cluster an arbitrary type; there's no fixed type, it can be any type of virtual cluster. That's our goal! That's the reason we have CAPN here, right? We have a design, the interface; we just designed the API.
F
We
never
you
know,
fix
implementation
like
it
has
to
be
when
you
know
kubernetes,
so
to
me,
kcp
phase
to
vc
model,
there's
no
big
challenge,
because
we
already
supported
sync
crd
already,
so
it's
all
about
you
know
we
just
limit
the
sum
of
a
building
minimum
set
of
building
objects
and
the
rest
of
the
objects
may
make
it
at
crd.
So
I
think
the
current
vc
thinker
and
everything
probably
will
just
with
minimum
modification.
We
can
support
that.
F
So I like the idea about the logical cluster more, because that seems to be, you know, just think about it: it just gives a minimum set of capabilities to the user. It's not a full set of capabilities, which can limit things; it's easier to use, more secure, more or less error-prone, because you don't have that fancy API that anybody can just call, right? You just focus on the API that you care about. So I like that idea more, and especially the logical cluster, I think it should be lighter weight, right?
F
I
I
if
I
look
at
the
reaper
as
soon
as
you
put
edc
as
a
pro
set,
so
you
just
make
one
binary,
including
everything
right.
It's
not
that
you
have
kind
of
like
another
complicated.
You
know
micro
service,
like
type
of
architecture,
it's
just
simple
binary
and
make
it
as
simple
as
another
benefit
of
it.
So I see kcp, I'm more interested in it in that regard: it's like a simplified interface. You give a minimum set of APIs that people only care about, and probably a lightweight resource saving.
F
My point, so, I don't think, you know, the multi-tenancy part has a lot of challenges. It's just, you have a single, everybody just trying to sync their kind of thing to a root cluster, the single logic. Honestly, I've already seen about two to three versions of the syncer, which may all work already. So it's not a big, big challenge.
F
So, oh, I think, if this kcp project wants to, you know, collaborate or whatever, Chris and I can think of, you know, how to just make kcp another type of virtual cluster in our terms. So, just another type of cluster, so it just follows, yeah.
F
Exactly the same thing: you can say, just put in a virtual, logical cluster. In this stage everybody gets a logical cluster, and the rest of the things, because of the syncer part, you may not know: we are trying to make it as a framework. It's not exactly tied to a particular implementation or even API type. So we try to make it as a framework.
F
You
just
support
kind
of
what
kind
of
kubernetes
supported
object,
synchronization
and
maybe
next
we
can
try
to
make
it,
as
some
extension
point
make
people
easy
to
extend
like,
like
you
know,
plug-in
something
like
that
right.
You
you,
when
you
do
synchronization
you're,
trying
to
modify
the
object
due
to
mutation,
making
it
easy
like.
Maybe
that's
another
way
that
chris
and
I
can
think
of
how
to
move
the
campaign.
C
Right. And a key point that you brought up, and I think this is one, is that the set of APIs that are visible inside a cluster defines what you can do, and today there's just one set, which is everything.
C
It should be easier to get different sets of APIs; everything we can do to make chunks of APIs feel more put together. So, like, sometimes, you know, a key example for me in tenancy is having a set of implicit RBAC rules that you can't change, right? So, like, today in kube, you can do that, but there's always the risk that someone cuts across the lines. You know, a defense-in-depth approach makes sense; running multiple kube API servers makes sense.
C
Then
it
gets
down
to
like
how
efficient
is
that
and
all
that
and
I'll
be
honest
like
for
some
of
the
things
we've
been
thinking
about
for
a
kcp
like
evolution
would
be.
You
know
we'd
like
to
run
app
little
app
clusters
as
a
service
and
we'd
actually
like
to
get
into
the
millions
range
charted
of
course,
but
like
that
kind
of
scale,
be
like
okay,
well,
where's
the
fat
come
from
and
it
comes
from.
You
know:
99
of
people,
don't
change
the
default,
cube
definitions,
but
everybody
gets
a
copy
of
those
resources.
C
Crds
don't
change,
but
they're
basically
the
same.
How
would
you
offer
apis?
Let
someone
extend
it,
but
then
not
overwrite
it.
So,
like
you
know,
imagine
a
you
know.
An
end
user
has
a
version
of
a
pod
that
actually
is
malicious
or
breaks
through
some
security
rules
trying
to
avoid
trying
to
solve
that
problem
is
or
from
a
different
way.
C
I
want
to
open
the
door
for
going
and
solving
some
of
these
problems
in
different
ways.
So,
like
there's
the
when
we
were
thinking
about
kcp-
and
we
were
talking
about
this
in
the
community
meeting,
like
imagine-
cube
api
server
as
a
library,
you
can
pick
and
choose
what
apis
you
want
is
one
problem.
C
It's
valuable
even
for
cube
to
some
degree.
It's
just
you
know,
there's
not
a
use
case
demanding
it
today
so
be
like
bringing
the
use
cases
together.
So
we
have
that
library
once
you
have
kind
of
a
library
mentality,
then
there's
like
what
are
the
other
features,
so
one
of
them
is
like
logical
clusters.
C
One
of
them
is
how
do
we
support,
watches,
more
scalably,
client,
tooling,
and
all
that
that
would
be
kind
of
what
we
were
thinking
about
starting
with,
and
then
we
can
kind
of
play
around
with
the
ideas
and
say,
like
you
know,
if
we
had
this
library,
we
were
all
converging
and
we
wanted
to
cut
apis
out.
What
would
that
look
like?
Would
we
actually
have
like
virtual
cluster
as
a
project?
Could
you
know
provision
these
or
whatever,
making
it
easier
for
people
to
have
forks
of
the
chunks
of
code?
F
Yeah, you know what, when I first looked at kcp, I thought it was just, you know, a major refactoring of the Kubernetes API server, you know, because that's something that I think we should do, but nobody did that. But I have to, because for the syncer we have to leverage whatever upstream has; we have to use it, no matter how complicated it is, because we see how they do the API server. Although we claim it's cheap, in fact it's not very cheap.
F
You
know
it's
still
very
big,
complicated
chunk
of
thing,
but
most
people
just
doesn't
need
those
minutes.
C
Certified, but I mean, like, as a project, what we actually want is: if you do an app in one spot, it should work in another spot, and then we should have variations on it. And, to be fair, very little of what conformance should be about is, like, boring features in kube that, like, one person uses. Like, that's cool, but I don't think it's really the point. What we would really, and this is again my personal opinion, you know, I go to the conformance working group, you know, almost every week; I skip sometimes, it's my fault. But that mindset would be:
C
If you can do something conformant, the underpinning of it should really be up to you. We never wanted kube to be a monoculture in library or in implementation; it's just, in practice, it kind of is. So there's an element here, too, of: if we can get the library side of the project going, get the use cases, and get people who are motivated, there's not really a lot of resistance.
C
I was trying to kind of build some of that consensus, because API Machinery is ultimately going to have to do a lot of it. And so, talking to Dan Smith and others, we were kind of like: okay, well, if we could get enough people motivated, and get some cut lines agreed and some use cases agreed... The problem has always been, we didn't have enough use cases to really argue through what a KEP would look like. But I kind of think, just based on even what I've heard in the last couple of weeks...
F
Yeah, I think, to me, the real challenge is the use case, because that is the biggest motivation, right? So, I don't know, so, yeah, it's based on my personal experience, so I think they may have more experience. When we talk about the interface that we expose to the user, the use case is always the primary concern; otherwise we have big resistance to doing anything.
F
So
I
thought
I
I
I
bet
this
project
also
may
face
the
same
problem
because
the
the
so
the
problem
is,
although
you
know
we
have
so
many
problems,
but
we
already
has
a
kubernetes
api
server,
that's
often
running.
So
how
do
you?
How
do
you
convince
people
that
switch
to
a
simple
version?
Because
you
know
it
still
works
with
a
bad
heavy
version
right?
So
I
think
we
we
nee.
F
We
we
need
to
clear,
define
some
use
cases
for
that,
so
yeah,
because
my
just
my
personal
opinion,
because
when
I
first
look
at
your
description,
you
you're
talking
about
putting
the
parts
into
the
controller
right.
That's
one
of
the
put
put
in
the
crd
and
and
move
parts
into
the
controller
right.
That's
that's!
The
kcp
claim
right
so.
G
I mean, the thing is that in kcp today, also based on the demo that was done at KubeCon, there are obviously several layers that are, you know, gathered together. You spoke about the logical cluster part, which is much more on the Kubernetes side of things, a hacked Kubernetes fork; and then there is the syncer, or cluster manager, which is mainly, you know, an example of what it could be. Precisely, even on the cluster custom resource, it's just, you know, an example domain.
G
So
obviously
those
various
type
of
things
are
are
bundled
together
in
kcp
for
now,
because
you
know
to
just
at
least
to
showcase
what
it
can
do
and
what's
the
the
purpose
could
be,
but
obviously
there
are
different
things
that
possibly
should
should
learn
in
you
know
different
places
or
even
some
of
them
in
sub
projects,
so
yeah.
G
I
think
we
have
to
be
that
in
mind
and
not
take
kcp
as
a
you
know,
monolithic
thing
it's
it's
mainly
just
the
minimal
api
server
and
su
and
and
resource
tenancy,
not
not
to
say
crd,
because
it's
mainly
resource
tenancy
resource.
This
was
resource
type
to
necessary,
and
that's
that
that's
to
me
more
the
core
of
it
and
then
the
cluster
manager
on
sinker
are
mainly
just
examples
of
things
that
you
can
build
upon
that.
G
According
to
the
use
case,
you
have
yeah
and
and
probably
in
the
future,
even
on
on
the
kcp
documentation
and
and
and
repository
it
would
be
much
more.
You
know
explained
as
as
long
you
know,
while
other
use
cases
are
being
explored
and
maybe
other
demos
or
other
examples
are
being
added
to
be
more
common.
F
I think, one more thing: I've kind of tried to get more ideas, but when I first glanced at it, if you claim that, okay, we are trying to get rid of pods and make them into controllers, I was thinking kcp was not fitting your Kubernetes scenario; you were just leveraging the Kubernetes machinery, trying to solve other, you know, non-Kubernetes framework problems. So that was my first impression, but now, getting more and more into it, I try to understand, because that's not your intention, so...
C
And, like, another example of something, like an idea, would be, one of the things is, like, transparent multi-cluster. So, like: could you take a kube app and literally move it across two clusters without anyone being the wiser? Thinking through that, like, that's a thought experiment; that's a hard problem.
C
Some of the tenancy use cases and others, it would have let you set it up so that you couldn't actually physically change the APIs, but you could see them. So, like, an example would be: you go into a logical cluster and you can get foo; no matter what you do, you can't actually change it. So you can set up those rules, like, you might have APIs that come from other places, APIs that come from real clusters, or you can mix and match, so, like...
G
This would somehow leverage also the CRD work that has already been done, but probably making it even more generic, you know, because for now, CRDs are quite limited, and even in Kubernetes, you know, issues: there are a number of requests for being able to add additional, you know, subresources, for example. Other people already had those types of problems, so being able to declare it...
A
Interesting, yeah. The syncing, mirroring objects back that you can't change, is really an interesting thing that I think we've dealt with in some regards with just very simple back-porting of things back through. But it really ends up being that you still end up having two versions of the object that we're then maintaining, which, it sounds like, you're kind of approaching slightly differently, where you're actually trying to make it so that it's the physical objects.
C
Challenge
as
well
like
the
v1
federation,
and
then
it's
tough
because,
like
you,
can't
do
wristwatch
on
it,
like
you,
can't
get
the
controller
watch,
but
another
idea-
and
this
was
like
if
we
could
like
we
need
to
figure
out
a
way
to
bulk
watches
together,
it's
kind
of
a
general
acute
problem
for
a
lot
of
controllers
like
it's.
Just
it's
really
ugly
to
build
a
bunch
of
caches
and
the
client
side.
It
works.
But
it's
not
obvious.
C
One
of
the
thoughts
would
be
like
if
we
could
make
getting
a
bunch
of
watches
to
start
at
the
same
time,
more
efficient,
there's
like
three
dimensions
of
it:
watching
multiple
resources,
watching
chunks
of
resources
like
three
different
namespaces
at
the
same
time,
or
watching
three
different
clusters
with
different
criteria,
but
having
something
that
kind
of
pulled
those
together
again.
Nothing
like
there's
no
prototype
code
for
any
of
this,
but
it
was
kind
of
that.
C
...a set of pretty different use cases. That one's a little harder to see, could they all overlap, but I think it's worth trying, just because, like, the use case for kcp that we were trying, like, okay, let's try to make this use case work: transparent multi-cluster. Create a deployment, sync the deployment to the clusters; don't pull ReplicaSets or pods back.
C
It
it's
it's
funny
because,
like
we
use
so
an
open
shift,
we
use
aggregated
apis
heavily
and
I
use
them
for
two
things:
virtual
apis.
So,
like
the
pod
metrics
api
is
kind
of
a
virtual
api.
It's
translating
from
a
different
domain.
We
also
use
it
to
front
real
resources
so,
like
we
have
virtual
a
virtual
resource
shim
projects
instead
of
in
front
of
namespaces,
so
you
can
create
delete
projects.
C
That
was
like
a
it's
good,
but
it's
it's
actually
still
pretty
hard
to
do
it
so
like
there's,
maybe
a
second
goal
of
api
server,
which
would
be
aggregated
apis,
are
still
just
a
little
too
darn
painful
to
run
operationally
at
scale
just
like
web
hooks
are
one
thought
would
be
if
you
wanted
to
do
an
aggregated
api
and
have
it
on
the
same
life
cycle
or
you
wanted
to
do
some
of
these
like
tricks.
Could
you
mesh
them
more
cleanly
with
existing?
C
So
could
you
take
a
crd
and
combine
it
with
an
aggregated
api
for
all
the
bits
that
aren't
crd-based?
And
the
answer
is:
yes:
did
you
make
all
the
client
tools
properly
handle
it
generically?
There's
still
bugs
and,
like
you
know,
people
just
assumed
like
I
was
actually
looking
at
the
smithy
documentation,
the
other
day
for
aws's
idl
and
there's
a
concept
that
we've
been
missing
in
cube,
which
is
like
this
object.
C
This
object
is
actually
just
a
representation
of
this
object
from
open
api,
but
something
like
the
smithy
idl
actually
had.
I
was
like
oh
that's,
a
thing
that
we
would
hit
if
you
made
it
easier
to
do
some
virtual
resources
through
aggregated
apis.
So
it's
not
that
I
think
aggregated
apis
are
a
part
of
this.
It
was
more
like
what
are
the
tools
like
come
up
with
a
use
case.
C
That
kind
of
we
think
is
aspirational
where
we'd
like
to
get
better
at
for
hard
tendency
and
crd
tenancy
work
through
the
the
common
tools
and
and
problems
in
cube
and
see.
If
we
can
get
anybody
else
interested
in
any
of
those
and
be
like
hey,
let's
work
together
so
like
a
logical
cluster
could
be
one.
The
discussions
we're
talking
about
here
about,
like
library
of
cube,
a
cube,
api
server
is
a
library,
so
you
could
slim
stuff
down
for
virtual
and
still
be
conformant,
like
that's
other
stuff.
C
So I guess, like, and I don't want to monopolize the whole meeting; we'll probably just keep continuing to poke at some of these examples. It sounds like, for v-cluster, maybe we want to do a follow-up and chat about some of the places where... Like, I'll try to work on, we're going to try and work on, getting logical clusters described out a little bit more effectively.
C
Maybe
we
can
spend
some
time
come
back
and
then
say
like
here's
like
three
ways
that
virtual
clusters
and
logical
cluster
could
actually
benefit
from
each
other,
based
on
other
feedback,
we're
getting
from
other
people
as
well
as
then,
a
separate
topic
would
be
for
minimal
cube
api
server.
C
I'm getting too good at pressing that mute button. What I can try to do is set up a follow-up; I'll probably try to set up a follow-up time where maybe we can just walk through some things, maybe on Slack or whatever, or chat through it, and then we can just, like, talk about, you know: if you guys have a specific cut line or use case, or you can point me to some of those existing docs.
C
I can definitely work with y'all on this topic specifically, and Jason can support me.
A
Sweet, that sounds fantastic. Yeah, we'd really appreciate that. The conformance side of things is going to be really interesting, especially because we're implementing ourselves as a CAPI implementation. But as of right now, we actually don't pass conformance, which is kind of against their spec, because of simple things that are, like, if I recall correctly, Fei or Chao or Charles, correct me if I'm wrong, but it's like we don't support, we can't support subdomains in the service namespaces, or the...
C
To do multi-cluster stuff, eventually you need some of that stuff specked out and a little bit less restrictive, because there are going to be limitations. I bet you there are other people who have those limitations, and so it's like: we don't want conformance to be a chokehold.
F
Yeah, also, yeah. I think this kcp project is kind of complementary; we help each other. That is, you know, with the vcs we focus more on a single object, but I think kcp is helping; it's focusing more on making the vc, or logical cluster, more efficient, or, you know, more customized, you know, fitting customized needs, kind of thing. Because, yeah, that's the most part.
F
I
like
this
kind
of
collaboration,
I
think
next,
maybe
I
can
one
thing
I
can
do
is
I
can
try
to
connect
to
some
of
our
internal
use
case
of
vc.
I
can
understand
what
are
the
apis
they
use
because
they're
coming
from
the
production
team,
I
don't
know
exactly
which
api
they
use.
I
can
collect
some
information
and
share
you
guys.
Maybe
chris
can
share
your
internet.
You
know
experience
about
how
you
guys
supposed
to
use
the
capability
of
the
virtual
cluster.
G
Yeah, sorry. And maybe one point is that we are currently working, and that should be reflected quite soon in the documentation as well, at least in the development documentation, on, you know, separating those layers, so that you can just run kcp, the minimal API server, without anything, without the cluster manager or the syncer; or only with the cluster manager, but then start your own syncer.
C
Actually, that's a really good point. So it would be really useful, David, for you to study what virtual cluster is doing with its syncer. And so, like, maybe that's actually an angle, which is: we probably need to categorize and look for commonalities among the existing syncer implementations and try to get, like, a taxonomy going. That would be great, actually, because I think that helps other folks who have syncer-like problems, and ideally that may be something that is, again, like, a carved-out piece, which is the broad effort of sync.
C
What library and shared tools do we all have? Which implementations can we bring together, and how do we make those implementations more efficient? That is definitely something that has been very much on our minds.
G
Yeah, because I was wondering, hearing what we discussed just before about Cluster API: so, I assume you have some, you know, CRDs to represent the cluster, or stuff like that? I looked into it some time ago, but I don't remember. And, as I said, for now, you know, our own cluster manager is just, you know, based on an example CRD that is mainly just a kubeconfig. So it's really...
G
You
know
minimum,
and
so
it
could
be
very
interesting
trying
to
to
hook
into
I
mean
into
kcp
attempts
of
other
projects
at
defining.
You
know
cluster
customer
resources
and
also
thinking
mechanism
I
mean,
maybe
maybe
we
could
also
try.
You
know
to
experiment
this
and
and
and
try
to
see
what
we
can
bring
from
other
projects.
Like
virtual
cluster
into
you
know,
a
very
minimal
kcp.
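The minimal cluster registration described here, a CRD that is essentially just a kubeconfig, could be sketched as a Go API type. This is an illustrative stand-in, not kcp's actual API: the type names, the `kubeConfig` field, and the pared-down metadata are all assumptions for the sketch.

```go
package main

import "fmt"

// ObjectMeta is a pared-down stand-in for metav1.ObjectMeta,
// kept minimal so this sketch stays self-contained.
type ObjectMeta struct {
	Name string `json:"name"`
}

// ClusterSpec holds just enough to reach a physical cluster:
// an embedded kubeconfig. A real implementation would more
// likely reference a Secret than inline credentials. The field
// name is a hypothetical choice for this sketch.
type ClusterSpec struct {
	KubeConfig string `json:"kubeConfig"`
}

// Cluster is a hypothetical minimal "cluster" custom resource,
// mirroring the example CRD discussed above.
type Cluster struct {
	ObjectMeta `json:"metadata"`
	Spec       ClusterSpec `json:"spec"`
}

func main() {
	c := Cluster{
		ObjectMeta: ObjectMeta{Name: "us-east1"},
		Spec:       ClusterSpec{KubeConfig: "apiVersion: v1\nkind: Config\n"},
	}
	fmt.Println(c.Name) // us-east1
}
```

A cluster manager built around a type like this only needs to watch for these objects and open a client connection using the embedded kubeconfig, which is why the speakers describe it as "really minimal."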
G
...work with kcp. I think this would be one very interesting thing: seeing the common parts between Cluster API and kcp.
F
I totally agree. In my opinion, the syncer is at least a very valuable part of kcp, because there's nothing new there: it's just a bunch of engineering work, and we already have a good package. So if you want to use the syncer, if you think our syncer can fit your requirements or you need some changes, just bring up issues and we can try to resolve all of them.
F
So
so
I
tried
to
you
know
minimum
the
the
work
in
your
side
for
the
single
side,
but
you
can
put
all
your
efforts
on
the
you
know,
other
cool
concepts
you
know
other
than
a
single
part.
So
that's
my
hope,
because
so
so
chris-
and
I
know
actually
you
know
making
all
those
things
support.
Ksps
should
be
minimum.
I
mean
these
minimum
changes.
D
Yeah, I think the reason we started with our own syncer was that we wanted something we knew would work against kcp, but that isn't the intention. If we delete it all, that's fine. We want something that's good, more mature, and better; at least that's what I think.
C
Yeah, I think there's a separate topic of syncer coordination and collaboration in the ecosystem: what are all the syncers that are out there? Let's make sure we have a good description of the problems. Maybe we pull this into a new investigation, or a sub-item of the transparent multi-cluster effort, because transparent multi-cluster kind of depends on syncing and some flexibility. So it sounds like that's another good one.
C
Are there any other topics you all would like us to take back, any other areas?
A
As of right now, I can't think of any. I think the way that you structure your cluster controller, and how you actually address what clusters are, will be really interesting, because I know we're currently tackling that with two different types. We have our original, which was virtual cluster, and then we have the new one, which is CAPI, which has its own types, and there are going to be a lot of different...
A
The
types
of
those
implementations
like
how
that
actually
integrates
together,
like
thinking
about
maybe
what
k
native's
done
with
duck
typing
and
like
if
there's
ways
that
we
can
make
this
more
more
fluid
between
all
of
those
types,
I
think,
could
be
really
interesting.
Yeah.
C
And actually, on the tenancy side, one thing would be whether we can inject APIs selectively. Part of the mental model of tenancy is: I have a set of APIs I inject, and I don't necessarily want to inject the same APIs everywhere. This is kind of what we do with virtual resources and aggregated API servers in a few places today: there's the fundamental one, which is an admin one, and then there's an end-user one that has restricted scope. Looking at what the different facets of a cluster are.
C
Like bringing everybody around: Karmada has one, KubeFed has one, and two, three, five, seven projects have all done variations of it. I know that you all have a couple of different bits of it. Can we get everybody together and say: okay, if this is what we agree on, and it's mostly transparent, what does that look like for the physical resource, and for the logical one that a human might see, which is more about scheduling? What are the different scheduling options? And maybe there isn't just one.
C
I actually think there would probably be multiple, but could we find some common ground? What is the representation that makes sense if you don't care, and what's the representation that makes sense if you do care, and do we have tools that can make that surfacing easier? Because that's the big thing about logical clusters for me: you might have one resource in one namespace that only admins, or a set of admins, can see for physical clusters.
C
Everybody else gets a very slim, minimal one that maybe doesn't even tell you what the name of the cluster is. Maybe it's a mapping that tells you something relevant to you, like: this is your location A. You don't care what location A actually is, and if an admin comes along and re-maps you to a different location A, you still don't know; it's a detail under the covers. I'm just trying to think through what that would look like. That's a deeper dive on the transparent multi-cluster one, but we can come back to that.
A
I would say the only thing that we haven't talked about, which you kind of called out a little bit in the way that you've changed things, is what we're doing with how we interface with nodes, and how we basically proxy everything through: we proxy through virtual nodes to your tenant control plane and can address them via a proxy, where we do some rewriting of namespaces. I don't think we've talked about that yet; basically, how to do tenancy within the clusters.
A
Unlike, say, vcluster from the Loft folks, where everything from one virtual cluster goes into a single namespace, we actually re-namespace everything and prefix it, yeah.
C
And so we want to do that as well, and Jason and I were talking about that: could we come up with a standardization of that pattern, so we could say, if you've got this problem, you use it, and then everybody just picks the same convention? That would be amazing.
A
Yeah, keep us involved in that, please, because that will directly reflect, or directly change, a lot of what we're doing, and it is a big problem to solve. That's the main reason why we have to handle our actual node connections the way we do: we run a really transparent proxy that just looks up the namespace you're supposed to proxy through and then rewrites the request so that it goes to the...
C
I think, you know, of all the projects, what I could probably frame this as is: how can we help you be more successful, and where are the places where any of the goals we have don't quite align? Why don't they align? Is it a difference in mindset? Teasing that apart will actually help us; we can take that on our side and focus on it. So it sounds like we have a lot of stuff to go do.
F
So indeed, I think kcp covers a lot of the areas for me. The convenience of multi-cluster is a big thing, because I see so many requirements, and in my opinion there's no single solution that can fix everything; I'm pretty sure about that. I think it has to be case by case, but at least we can summarize the usage patterns and best practices. That would be very valuable for many of the users.
A
Yeah, I agree. And just to add a note: if you all don't already have this on your plans, I would reach out to the Loft folks as well and get their take, because it is interesting. It's a different use case, or it has similar use cases but a little bit different, and they just open-sourced vcluster itself, what, two weeks ago or something like that?
A
And then I guess the last question I would have is: what's the long-term plan? This is a little bit specific to a couple of us who are from the Apple side of things.
A
What are the long-term plans on repos and ownership? If we were ever to try to help out in any of these places, I'm going to get a bunch of pushback, and I'm curious whether the long-term plans are to try to bring this into a Kubernetes lens, or I guess I should say a Kubernetes org, at least from a GitHub perspective, because I know myself and Wei and James are basically not going to be able to contribute or help out in much deeper ways unless it's under the Kubernetes org. It's organizational things.
C
Yeah. So the thing is, and this is what I was kind of talking to a couple of folks within the community about: one of the challenges we have is that we're trying to avoid generating too much churn on the core projects. So I would say: can we build consensus with folks from within the orgs? I don't think the intent right now is for kcp, which you can think of as a prototype, to...
C
...the mental model is probably that everything in there either ends up as a Kubernetes KEP, or may have a life as those evolve, like as a feature branch. We haven't quite sorted out all the details, but we didn't want to go to project stage without figuring out what we're actually doing first. That's probably the biggest wrinkle, I'd say: KEPs and finding alignment on those is easy; a feature...
C
...branch on kube is easy, and that'll probably stay as it is. Then for the other aspects that don't fit, whether the home is kube or CNCF or something else, I don't know yet, and honestly I'd probably wait to see what does come out of it, because most of it is: what are the things we cobble together from projects like Karmada or others that aren't quite CNCF or kube? I don't want it to be just kube projects, but I understand there'll be some challenges for some organizations, for sure.
C
We had this challenge for a while at OpenStack, where the only way you could get anything done was to go create it as a new OpenStack project, at which point everybody would cut your project apart until it was just the smallest sliver of an idea. One of the things we're looking for is: if there's something that's not quite kube, it's kube-adjacent and kube is a library, how does that change things? The home for it should definitely be something that's amenable to contribution, so we'll take that into account for sure.
F
For the CAPN side, we can just sync up on the Slack channel about the new things that we need, but they're small. I don't think we have anything else we need to discuss.
A
Yeah, I think the biggest thing that I wanted to do is just a little backlog grooming and try to get some more issues filed for what we're doing next: moving on to how we integrate the two, and how we actually make something that works with virtual cluster. So we can just file those issues async, and then we can collaborate over Slack. Sounds good? Cool.