From YouTube: 2021-06-01 Community Meeting
A
All right, welcome to the kcp community meeting, June 1st, 2021. We have some topics already on the agenda. If you have anything else, feel free to add it either here, or interrupt me, or anything else you want.

We talked last week a bit about scheduling for multi-cluster, and I think the summary of that was that it was a bit too focused on scheduling deployments exactly to clusters, and less general. So I did some thinking and exploration and sort of note-writing in a discussion. I don't know if a discussion is a good spot to do that, but I did it.

Basically, to summarize some of the stuff we talked about last week with scheduling strategies: one was split, so a deployment would have a scheduling strategy of "split across any number of clusters" — or locations, or spots, or spaces, or whatever we end up calling them — whereas a volume strategy might not split it, because it doesn't make sense to split a persistent volume. It would make sense to just assign it to "any": figure out which clusters or locations make sense for this thing and give it to any one of them. There was also — yeah, pods. A daemon set's scheduler might have "all of them."

If I give a daemon set to a kcp connected to five clusters, I would want it to give it to all five clusters, and then each cluster would assign it to each node, like a daemon set does. And a lot of things are probably just going to be "as needed" — and namespace is sort of special anyway — but a service account is "put me in this cluster if something needs it." Like, if you give a pod to kcp and it sends it to cluster A, also send the service account that that pod uses to cluster A. Does that make sense?
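[Illustration, not from the meeting: a hypothetical Go sketch of the strategy vocabulary being discussed. The names mirror the conversation — split, copy, any, as-needed — but none of these identifiers existed in kcp at the time.]

```go
// Package placement sketches the scheduling-strategy vocabulary from the
// discussion; all identifiers here are hypothetical.
package placement

type Strategy string

const (
	// StrategySplit divides an object across locations, e.g. a
	// Deployment's replicas spread over several clusters.
	StrategySplit Strategy = "split"
	// StrategyCopy places a full copy in every location, like a
	// DaemonSet lands on every node.
	StrategyCopy Strategy = "copy"
	// StrategyAny places one copy in any single suitable location,
	// e.g. a PersistentVolume that cannot meaningfully be split.
	StrategyAny Strategy = "any"
	// StrategyAsNeeded copies an object (say, a ServiceAccount) to a
	// location only when something already there depends on it.
	StrategyAsNeeded Strategy = "as-needed"
)
```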
A
Is everyone tracking what I'm saying so far? Because it gets worse very quickly.
A
Yeah. And so we also talked last week about letting CRD authors choose the strategy for their type, because we will probably have to guess, in most cases, what the strategy is going to be. But if a CRD author says "hey, a Knative Service should be split," or "a Knative — I don't know — some other daemon-set-ish type of thing should be copy," they should be able to tell us that somehow.
C
And the thing — you've got this idea — I didn't see where you mentioned it explicitly — where there are dependencies between objects; there are objects that need to stick together. It doesn't do me any good to say "hey, I've got these pods" and "hey, I've got these service accounts — go put them wherever," because they have dependencies on each other, right?
A
Right. So this section I wrote before the weekend, and then over the weekend I thought a lot more about the dependency and scheduling-constraint stuff and added a bit more notes this morning about that. But not only do we have to have strategies for how to schedule them, we also need a mirrored strategy for how to collect the status back, right? We sync object specs down to clusters, but we also need to see their statuses and merge them back.
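[Illustration, not from the meeting: a toy Go sketch of what "merging status back" could look like for a split Deployment — summing the per-cluster statuses into one aggregate view. The function is hypothetical; the `appsv1.DeploymentStatus` fields are real.]

```go
// Package status sketches a hypothetical status merger for a split
// Deployment synced to several clusters.
package status

import appsv1 "k8s.io/api/apps/v1"

// mergeDeploymentStatus coalesces the statuses synced back from each
// cluster a split Deployment landed on.
func mergeDeploymentStatus(perCluster []appsv1.DeploymentStatus) appsv1.DeploymentStatus {
	var merged appsv1.DeploymentStatus
	for _, s := range perCluster {
		merged.Replicas += s.Replicas
		merged.UpdatedReplicas += s.UpdatedReplicas
		merged.ReadyReplicas += s.ReadyReplicas
		merged.AvailableReplicas += s.AvailableReplicas
	}
	return merged
}
```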
A
So
a
copy,
a
copy
strategy
object.
Might
I
guess
a
copy
and
a
split
would
probably
have
a
similar
strategy
for
coalescing
an
aggregating
status
back,
but
that's
sort
of
something
we
also
need
to
think
about
when
we,
when
we
write
these
things.
The
idea
is
that
at
no
point
should
the
general
object.
Schedulerizer
know
what
a
deployment
is
or
know
what
a
demon
set
is
or
know
what
it's
you.
B
Jason, I don't know if you captured this in the doc, but one thing that might be — and somebody else brought this up, and I don't remember who it was; apologies.
B
It might actually be a thing, going forward, that when a certain strategy is applied to a CRD — and this is more of a "might" — it's certainly possible that the transparent multi-cluster use case could actually ensure fields exist on CRDs for the purposes of satisfying the strategy, rather than vice versa. Because there's nothing that actually says that all of the fields on the object at the aggregation layer have to exist on the underlying layer, right? That's already kind of an implicit story, from one direction, for CRD normalization.
B
But if you think about it the other way: what happens if you have an object that doesn't have a status field? The next one is, it doesn't have conditions. And then another question would be, well, could we just add fields to conditions to carry data? And so there's a bunch of trade-offs that would have to be thought through.
D
Expected replicas and observed replicas: the JSON paths that you specify in your CRD must correspond to really existing fields in the schema. So this type of check already exists for the two subresources — for the scale subresource, which is the main one implemented in CRDs today.
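[Illustration, not from the meeting: the scale-subresource wiring David is describing, using the real `apiextensions.k8s.io/v1` Go types. The two JSON paths must point at fields that actually exist in the CRD's schema, which is the validation he mentions; the variable itself is just for illustration.]

```go
// Package crdschema shows the existing scale-subresource declaration.
package crdschema

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// scaleSubresource declares the two replica paths that the API server
// already cross-checks against the CRD's schema today.
var scaleSubresource = &apiextensionsv1.CustomResourceSubresources{
	Scale: &apiextensionsv1.CustomResourceSubresourceScale{
		SpecReplicasPath:   ".spec.replicas",   // "expected" replicas
		StatusReplicasPath: ".status.replicas", // "observed" replicas
	},
}
```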
B
Yeah
yeah,
like
a
shot
like
because
jason,
I
think
the
way
you
were
saying
it
really
highlighted
this,
which
is
like
a
resource
that
is
chartable,
has
certain
characteristics
of
what
the
resource
means,
but
on
the
flip
side
of
it,
a
resource
that
is
chartable
could
also
like
by
taking
deeper
control
over
the
crds
and
the
normalization
process.
We
can
actually
open
up
the
door
for
use
cases
like
you're,
saying
david
for
for
scale.
B
The
presence
of
scale
implies
something
on
the
crd,
but,
conversely,
choosing
the
scale
the
shard
strategy
might
actually
result
in
characteristics
like
we
never
really
had
this
option
before
with
things
on
a
cluster
right,
because
you
can't
go
to
cube
and
say,
add
three
new
fields
to
cube.
There's
some
there's
some
danger
in
that,
like
taking
resources
and
taking
fields
in
a
name
space.
B
On
the
other
hand,
like
think
about
the
power
of
that,
where
you
are
adding
a
facet
to
a
resource
type
at
a
level
that
is
only
ever
seen
at
the
aggregated
type
as
long
as
you're
still
compatible
with
what
someone
would
do
on
a
because
implicitly
like
if
you've
got
a
helm
chart
and
you
want
to
have
it
be
transparently
multi-cluster.
Obviously
you
know
a
cube
thing
moving
to
it,
but
then
you
want
to
start
tweaking
it
everyone
today
to
do
that
has
to
have
new
objects.
New
objects
are
great
like
there's.
B
We
shouldn't
take
away
from
that,
like
policy
objects
and
all
that
sit
alongside,
but
we
have
a
capability
that
no
one's
really
ever
had
before,
which
is,
we
can
add
fields
to
pods
in
a
responsible,
mature
way,
blah
blah
blah
blah.
If
we
can
fit
that
into
our
loop.
Think
about
how
much
opportunity
that
is
for
declaring
something
is
shardable
might
result
in
these
fields
showing
up
which
exposes
a
generic
status
resources
you're,
saying
david.
That,
then,
could
also
be
the
place
where
spec
field
starts
to
show
up
and
there's
some
there
here
be
dragons.
E
So I have a question regarding the strategy here: it's mentioned in the form of an annotation. I was thinking, how can we black-box a standardized strategy, so that it's defined with APIs, and people could tweak that black box, but yet have that API where we can query and decide where we should schedule, based on that particular strategy?
B
I think that definitely aligns. You could start with an annotation and then surface it as a status resource. But then you could do the flip side and say, at some point, instead of the annotation, you add fields magically to all CRDs that show up that are part of the transparent multi-cluster use case. Now, the flip side of that — the downside, and this is something we have to do — is you still respect the API, and it has to not collide with everything, and you're effectively—
E
—[adding a] field later. I'm thinking on the level of — if we look at CNI, CRI, whatever: these standards have operations like add, remove, whatever, and that's kind of a black box; how you implement it doesn't matter, but there are plugins that implement it. And I think of CRDs, or custom resources, as having things behind them that know how to implement that kind of requirement.
E
Yeah, that's fair. Yeah.
A
So, Clayton, to your point about being able to add fields to a type — maybe pod is a bad example, but we could add fields to at least the status of things, to say "hey, you're a shardable resource; this is the status of your shards." Instead of cramming them into one condition, or one array of conditions, we could have a map of locations to the conditions at that location, or whatever.
A
It
seems
a
lot
easier
to
do.
Well,
it
seems
a
little
fraught
to
do
with
with
status,
because
messing
with
the
types
means
the
client
that's
talking
to
kcp.
To
get
that
status
has
to
understand
the
structure
of
that
right
and
we
can.
We
can
do
with
conventions,
and
we
can.
We
can
probably
play.
B
—on every client, right? Like, older kubelets don't understand new pod fields. It's a thing that most people don't think about in their day-to-day in kube, and when it breaks, it breaks you in surprising and novel ways. So every field has to be optional; but an optional field doesn't mean it has no measurable impact. And so there are things that are fundamentally hard.
B
I
think
thinking
of
it
as
a
tool
that
we
could
use
damon
said
lacking
conditions
might
be
a
great
example
like
when
damon
sets
lacked
conditions.
The
idea
of
saying
well,
no
every
status
should
have
a
condition
field
is
reasonable.
Then,
thinking
about
the
scenarios
that
would
happen
when
you
upgraded
to
a
cube
server
that
had
statuses
on
deployments
damage.
You
have
to
think
about
that,
so
normalization
might
be
one
way
of
using
that
a
separate
one
would
be
the
difference.
B
If
you
said
you
know,
if
you
come
up
with
a
really
clever
short
name
for
transparent
multi-cluster
having
that
on,
spec
is
not
out
of
the
realm
of
possibility.
I
do
think
that
that
is
a
composition
like
thinking
about,
like
you
know,
cube,
is
resources
that
compose
well
and
fields
on
objects
that
solve
a
large
enough
section
of
the
problem
that
most
people's
problems
are
addressed.
B
Right,
80,
20
rule
having
a
balance
of
that
in
the
transparent
multi-cluster,
I
think,
is
important,
which
is
there's
just
enough
to
have
the
source
of
truth,
for
where
things
are
annotations
can
do
that
as
well
as
anything
there's
the
interfaces
that
other
people
expect
out
of
objects,
status,
conditions,
replicas
scale,
metadata
labels
etc
and
then
like-
and
I
guess
we
talked
about
this
last
time,
like
topology,
keys
or
or
anything
that
anything
that's
kind
of
part
of
scheduling
where
you
could
add
stuff
and
then
strip
it
off
like
those
are
all
our
tools.
B
When
is
a
tool
appropriate,
we
should
not
be
afraid
to
say
maybe
there's
some
new
tools
we
need,
but
we
should
definitely
absolutely
not
cross
into
the
bounds
of
making
things
not
mean
the
thing
they
mean.
I
think
that's
you
know.
That's
the
thing
that
we're
really
trying
is
status
means
status,
transparent,
multi-use,
transmit
parent,
multi-cluster
means
a
cube.
Object
behaves
like
a
cube
object
most
of
the
time,
and
the
places
that
are
the
exceptions
are
where
we
should
say
it's
for
a
good
enough
reason
that
it
justifies
it.
A
You get more structure and typedness out of it — and the typedness—
B
Is
probably
the
strongest
benefit
of
it,
although
there's
nothing
that
actually
prevents
you
from
doing
validation
on
annotations
right,
so
you
know
we're
kind
of
looking
for
the
we
shouldn't
get
overly
fixated
on
the
tool.
We
should
have
the
tools
yeah.
A
Yeah,
but
so
I
think,
adding
editing
fields
gets
gross,
though,
because
most
of
the
clients
will
be
coming
from
client
go
with
or
or
or
a
generated
a
generated
go
client
where,
in
order
to
be
able
to
see
or
set
anything
in
our
namespace
of
typed
fields
in
the
in
the
object
you
have
to
like
regenerate
your
client
with
the
with
the
traits
that
we
injected
in
there
yeah.
I
agree
with
that.
C
Yeah,
the
other
thing
to
look
at
is
that
there's
gonna
be
operators
all
over
the
place
that
are
interacting
with
these
crs
and
one
of
the
things
that
they
well
behave
for
some
definitions
of
well-behaved
operator
does
is
look
and
see
whether
your
cr
is
different
from
what
it
expects
and
then
just
replace
it
just
blow
it
away,
thereby
blowing
away
our
our
field
right
right.
Well,.
B
That
that's
not
necessarily
true,
though,
because
if
you're
writing
an
operator
for
the
control
plane
that
takes
advantage
of
like
multi-cluster
you're
already
going
to
have
to
do
one
small
thing
and
that
one
small
thing
is
probably
going
to
be
something
like
you
have
to
be
aware
of
the
presence
of
multiple
clusters
anyway.
So
I
I
agree
like
I
was
actually
gonna
say
I
think
we're
the
we
shouldn't
overly
fixate
on
changing
crds.
B
What
we
should
probably
say
is
a
crd
is
not
at
the
cluster
level
is
not
the
same
thing
at
the
kcp
level
or
at
a
control
plane
level.
They
have
different
purposes.
We
want
them
to
overlap
as
much
as
possible,
so
you
can
pull
all
of
your
intuitions
over
and
then
we
should
specifically
puncture
those
intuitions
on
the
places
where
it
actually
makes
sense,
but
no
no
more
and
that's
a
key
tenet
of
transparent
multiples.
B
The
flip
side
of
that,
though,
is
for
someone
who's,
I'm
trying
to
think
of
like
a
great
example
here
off
the
cuff,
and
I
may
name
this
one.
When
you're
talking
about
transparent
multi-cluster,
you
are
effectively
layering
on
top
of
the
objects,
and
so
whatever
that
layer
is
on
top,
you
want
it
to
be
mostly
agnostic.
B
That
user
still
has
to
roughly
use
the
same
primitives,
and
so
we're
going
to
have
to
be
careful.
It's
like
yeah.
I
don't
think
I
don't
think
we
should
go
too
far,
but
you're
going
to
absolutely
have
to
go
from
the
transparent
case
and
solve
that
well
to
the
slightly
less
transparent
and
then
be
mostly
not
transparent,
but
useful
and
then
hand
off
to
a
much
more
complex
solution
which
should
still
compose
well.
If
we
do
our
jobs
right,
that's
going
to
be
the
hardest
problem.
We
don't
have
to
do
that
today.
A
Yeah
yeah,
I
do.
I
do
agree
that
I
mean
this
is
this
is
why,
last
week,
I
dove
more
into
completely
completely
how
to
how
to
tune
your
transparent
multi-cluster,
even
though
that
means
it's
not
transparent,
anymore
yeah,
so
so
the
the
next.
The
next
thought
with
this
was
right.
You
you
might
also
need,
though,
a
pod
is
pick
anywhere
for
me.
A
If
a
pod
depends
on
a
persistent
volume
claim
that
has
been
scheduled
to
protect
to
a
particular
cluster
that
pod
should
actually
get
scheduled
to
where
that
thing
is
already.
The
alternative
to
that
is
to
unschedule
the
persistent
volume
claim
and
then
together
schedule
them
somewhere
or
something
like
that,
but
I
think
that's
probably
not
the
direction
we
want
to
go
at.
First.
A
Whether
this
is
an
annotation
or
a
field
that
we
inject
or,
however,
we
do
that,
I
think
annotations
would
be
easiest
for
now
at
first,
I
was
going
to
play
with
detecting
trying
to
detect
what
objects
depend
on
what
other
objects
among
core
types
among
like
common
core
types
and
then
even
like
popular
crd
systems
like
k-native
or
tecton
or
others.
A
If
people
have
other
ideas,
but
this
seemed
like
a
fairly
good
coverage
of
a
heuristic
look
for
fields
called
service
account
name
or
things
that
have
volumes
a
field
called
volumes
is
a
good
hint
that
something
depends
on
a
volume
owner.
References
are
really
good
because
they
depend
on.
You
can
make
the
whole
web
of
things
as
long
as
people
have
owner
references,
local
object,
references,
object,
references
and
then,
when
we
detect
it
store
it
somewhere.
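[Illustration, not from the meeting: a minimal Go sketch of the dependency-detection heuristic just described, walking an unstructured object and recording references implied by well-known field names. The field names (`serviceAccountName`, `volumes`, owner references) are real Kubernetes conventions; the `Dependency` type and function are hypothetical, and a real version would also handle nested pod templates.]

```go
// Package deps sketches heuristic dependency detection over arbitrary objects.
package deps

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// Dependency names another object this one should be co-scheduled with.
type Dependency struct {
	Kind, Name string
}

// Detect applies the field-name heuristics to a single object.
func Detect(obj *unstructured.Unstructured) []Dependency {
	var out []Dependency

	// serviceAccountName on a pod spec implies the ServiceAccount must
	// land in the same cluster as the pod.
	if sa, ok, _ := unstructured.NestedString(obj.Object, "spec", "serviceAccountName"); ok && sa != "" {
		out = append(out, Dependency{Kind: "ServiceAccount", Name: sa})
	}

	// A volumes list is a good hint that PVCs must travel with the pod.
	if vols, ok, _ := unstructured.NestedSlice(obj.Object, "spec", "volumes"); ok {
		for _, v := range vols {
			if m, ok := v.(map[string]interface{}); ok {
				if pvc, ok := m["persistentVolumeClaim"].(map[string]interface{}); ok {
					if name, ok := pvc["claimName"].(string); ok {
						out = append(out, Dependency{Kind: "PersistentVolumeClaim", Name: name})
					}
				}
			}
		}
	}

	// Owner references let you walk the whole web of related objects.
	for _, ref := range obj.GetOwnerReferences() {
		out = append(out, Dependency{Kind: ref.Kind, Name: ref.Name})
	}
	return out
}
```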
A
This might be just for us — something we just have for our own internal use. We might want to have a separate, user-visible blob. We could base64-encode it if we want to make it clear, like, "hey, you're not supposed to see this; we're not going to keep it from you, but you're not supposed to see it." And then, extending this: a user might want to explicitly say "these two pods should always end up in the same cluster together."
A
This also goes into our non-cluster-resources stuff we were talking about — like, "put this pod in the same cluster as its database," where the database would also have its own scheduling logic and heuristics and weird stuff.
B
Jason, how deep did you get into the move? Because I do think we want to spec out what a move must do, in order to be safe, for something reasonably complex, and then make sure we use that as a constraint. I think this is directly to that previous point: it could be that hard things are actually hard, and the easy thing is what we want to make work, like with transparent—
B
We
could
maybe
make
the
argument
early
on
that
we're
going
to
bias
towards
not
having
to
think
about
this.
The
vast
majority
of
the
time
such
that
having
a
higher
bar
for
what
you
want
to
do
to
do
something
complex
is
actually
okay,
so
like
whether
it's
you
know
an
entire
transparent
cluster
or
an
entire
logical
cluster
has
the
same
scheduling
domain
or
everything
in
a
namespace
is
always
co-scheduled.
Unless
you
tell
us
not
to
or
like
those
are
all
reasonable
trade-offs
that
we
could
put
in
the
way.
A
Yeah. So I think, for moving: the way that we are [set up], in order to move, you have to have some force applied — some change has to happen that perturbs the system and moves it into a different place. If we don't want to go down the path of signaling explicit scheduling constraints — like, one way to make things move is to say "I know I had told you to only schedule in cluster A;
A
I
am
updating
your
annotation
to
say,
scheduling
cluster
b,
that
triggers
a
move,
rather
than
do
that
which
sort
of
opens
the
door
to
explicit
scheduling,
taints
and
tolerations
and
affinity,
and
all
of
that
stuff,
which
I
think
we
do
want
to
get
to
eventually.
A
But
not
yet
the
thing
that
we
will
do
to
instigate
a
move
is
to
delete
a
cluster
or
to
make
a
cluster
unscheduleable
somehow
which
we
could
also
do
by
changing
its
crd
type
in
that
cluster,
to
say,
the
type
you
gave
us
is
no
longer
like
you
know,
compatible
and
normalizable
with
the
one
that
we
have,
so
you
have
to
get
out
of
here.
So
that's!
A
That's,
I
think,
what
we're
going
to
do
to
make
move
happen,
and
that
will
just
look
like
in
my
mind
at
least
I
haven't
written
any
of
this
code,
but
this
would
look
like
cluster
b
that
you
had
been
scheduled
on
is
gone
on.
A
Annotate
all
the
objects
annotated
to
schedule
to
cluster
b.
This
will
restart
the
reconciliation
of
oh.
No,
we
need
to
find
new
homes
for
all
these
things
that
are
no
longer
are,
you
know,
are
currently
homeless,
assign
them
to
clusters
that
do
exist
that
do
have
compatible
types
and
sync
them
down
and
see
what
happens.
So,
that's
like
at
a
very
tiny
molecular
level.
That's
how
rescheduling
would
work.
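[Illustration, not from the meeting: a minimal Go sketch of that rescheduling trigger. The annotation key is hypothetical — the prototype's real key and controller wiring may differ — but the shape matches what was just described: clearing the placement makes objects "homeless," which re-runs the scheduler.]

```go
// Package reschedule sketches the move-on-cluster-delete loop.
package reschedule

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// placementAnnotation is a hypothetical key recording which cluster an
// object was assigned to.
const placementAnnotation = "experimental.kcp.dev/placement"

// onClusterDelete clears placements that pointed at the vanished cluster,
// re-triggering the scheduler's find-a-new-home reconciliation.
func onClusterDelete(gone string, objs []*unstructured.Unstructured) {
	for _, obj := range objs {
		anns := obj.GetAnnotations()
		if anns[placementAnnotation] != gone {
			continue
		}
		delete(anns, placementAnnotation) // the object is now "homeless"
		obj.SetAnnotations(anns)
		// A real controller would now update the object and enqueue it,
		// letting the scheduler assign any live cluster with a
		// compatible type and sync it down.
	}
}
```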
A
What
you
would
actually
see
is
my
deployment
is
scheduled
across
three
clusters.
One
of
the
clusters
went
away,
but
the
status
is
reported
as
rebalancing
or
finding
a
new
home
or
something
and
then
ends
up
getting
put.
You
know
crammed
into
the
two
clusters
that
were
made
yeah.
That
was
a
lot
of
words
for
that,
but
no.
B
And
I
think
so
that
previous
comment,
then
about
like
what
we
want
with
transparent
multi-cluster,
is
a
user's
intuition
about
the
behavior
to
be
principle
of
least
surprise,
probably,
which
is,
you
should
have
a
reasonable.
You
should
have
being
a
kubernetes
user
today
and
I
think
eric
to
your
point
about
like
fields
and
all
that,
that's
like
another
good
heuristic,
we'll
use,
is
to
a
reasonable
kubernetes
user.
The
expectation
is,
is
that
things
mostly
behave
like
you'd
expect.
B
If
you
are
a
reasonably
like
within
the
aligned
to
what
cube
is
supposed
to
do
now.
That's
not
always
the
case
right
there'll
be
exceptions.
Where
cube
doesn't
do
what
most
people
think
it
does.
Vice
versa,
we
could
probably
say
you
know
there'll
be
a
few
constraints
around
like
safety
right
so
like
if
a
cluster
goes
away
today
when
a
node
goes
away.
B
There
are
some
very
subtle,
but
very
key
rules
around
like
what
a
node
is
allowed
to
do
who's
allowed
to
delete
what
what
does
it
mean
for
it
to
still
be
there,
we'd
probably
want
to
mirror
those
up
to
the
higher
level
right,
like
a
replica
set
by
virtue
of
being
structured
for
replicas
and
being
explicitly
not
stateful,
because
we
have
a
stateful
set
that
says:
nope,
I'm
the
opposite
of
a
replica
set
in
many
ways.
Certainly
a
use
of
a
replica
set
in
that
fashion
of
the
cluster
went
away.
B
I
expect
a
new
pod
to
be
scheduled
for
a
replica
set.
Therefore,
I
expect
the
replica
set
to
be
moved
to
a
new
cluster
is
perfectly
reasonable,
whereas
with
volumes
and
staple
sets,
we
might
have
have
expectations
and
intuitions
that
are
consistent.
B
A
staple
set
or
a
persistent
volume
would
not
move
unless
these
rules
and
these
criteria
are
set.
So
I
think
that's
a
good
principle,
maybe
like
I
could
put
that
down
in
the
use
cases
as
we
go
but
yeah.
I
think
that
I
think
that's
a
good
way
to
framing
kind
of
what
you
described
as
the
our
first
use
case
is
making
replica
sets
at
a
transparent
level.
Behave
like
replica
sets
at
a
cluster
level,
so.
C
To be clear: who's responsible for setting this metadata, and for setting it correctly?
A
For
setting
the
strategy
for
a
type
sure,
I
think
we
would,
I
think
we
would
probably
hand
annotate
types.
We
know
about
of
the
core
types
we
we
would
say.
Replica
sets
should
do
this
and
damon
said
should
do
this.
If
you
bring
us
a
new
crd
type,
we
don't
know
what
that
is.
A
We
could
guess
something
I
don't
know
which
of
these
strategies
would
be
best
as
a
default,
but
I
mean
like
the
types
that
I'm
thinking
of
the
crd
types
that
I
know
of
some
would
definitely
want
to
be
the
any
type
like
I
don't
care
which
cluster,
but
only
make
one
copy
of
me
and
put
it
over
into
any
cluster
modulo
asked
to
risk
all
of
the
scheduling
stuff
we
just
talked
about,
and
some
of
them
would
definitely
be
split
like
a
k-native
service
would
need
to
be
split
across.
A
You
would
expect
it
to
be
split
across
multiple
things.
Any
is
probably
the
safest,
because
what
you
get
is
something
that
works
but
isn't
optimal.
If,
like.
If
you,
if
you
any
strategy
a
canadian
service,
it
would
work,
it
should
work,
but
it
wouldn't
be
optimal.
It
wouldn't
give
you
like
the
transparent,
multi-cluster
failure
domain
niceness,
but
then,
if
somebody
annotated
it
with
hey
when
applied
to
a
kcp
split
it,
then
that's
a
fairly
small
change
to
put
on
k-native
developers
that
would
make
it
really
super
powerful
when
it
comes
to
kcp.
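[Illustration, not from the meeting: a hypothetical sketch of what that small CRD-author opt-in might look like, embedded here as a Go string for consistency with the other sketches. The annotation key and value are invented; kcp had not settled on any names at the time.]

```go
// Package example holds an illustrative CRD-author opt-in.
package example

// knativeServiceCRD sketches a CRD author declaring a strategy for their type.
const knativeServiceCRD = `
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: services.serving.knative.dev
  annotations:
    # Hypothetical: ask a kcp-style scheduler to split this type across
    # locations instead of placing a single copy on any one cluster.
    experimental.kcp.dev/scheduling-strategy: split
`
```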
B
Where are we going to spend the bulk of our time? Making sure default kube applications behave in [expected] ways. There are deep use cases — and, to your point about density: spreading, run-once, run-many pods, exec-probe pods. You could certainly use a pod as a way to get an exec session on a cluster, and that is valid — you can do anything in Linux in a pod; therefore, the pod use cases are unbounded. I would probably bias us towards—
B
We
should
be
spending
like
you
know.
Let's,
let's
make
up
some
numbers.
Eighty
percent
of
the
time
we're
focused
on
core
cube
objects.
We
go
eighty
percent
of
the
way
down
the
reasonable
cube
use
cases
for
a
pod,
but
we
we
may
not,
in
the
short
run
prioritize
the
other
twenty
percent.
If
eighty
percent
of
the
things
we
expect
our
core
cube
resources,
how
do
you
get
that
20
percent
work?
B
Well
so,
like
transparent
for
a
crd
author
may
be
less
transparent
than
transparent
for
a
cube
user,
because
the
expectation
is,
you
know
like
an
ncd
object
like
an
entity.
Workload
crd
is
the
expectation
that
there
should
be
a
minimal
step
for
that
crd
author
or
someone
adapting
that
crd
author
for
transparent
multi-cluster
to
do
the
right
thing.
Yes,
we
should
focus
on
the
initial
piece
of
experience
and
then
say
you
know,
like
kind
of
the
the
two
options,
there's
the
easy
option
and
the
complex
option
for
an
ncd
operator.
B
We
should
make
the
easy
option
really
easy,
because
we
want
to
capture
that
20.
You
know
90
of
the
20
of
all
the
other
types
of
resources,
and
then
you
pick
a
strategy
and
go
tell
us
the
field
that
should
be
started
on
whatever
and
then
the
deeper
cases
probably
are
it's
best
solved
with
code
down
the
road
by
someone
else
update
your
operator
to
work
better,
which
probably
is
what
we
will
do.
B
So
I
think
that's
a
really
useful
of
saying,
like
the
first
user
is
the
cube
user
coming
in
deploying
their
app?
The
second
user
is
the
admin
or
crd
author
trying
to
guess
at
which
strategy
makes
sense
for
transparent
multi-cluster.
B
The
third
user
is
someone
bridging
from
the
transparent
case
to
the
non-transparent
case
and
what
is
the
core
set
of
use
cases
we're
going
to
target
there
like
the
works
by
default,
and
then
I
make
one
tweak
and
I
can
get
h
a
and
then
works
go
from
h,
a
to
singleton
or
dense
mode
or
whatever
you're
talking.
You
know.
Different
types
of
packing
modes
are
also
in
there.
C
Maybe
a
better
idea
is
to
have
this
be
configurable,
like
100
configurable
out
of
the
gate,
and
then
for
the
cases
that
you're
talking
about
to
get
to
the
80.
We
provide
the
default
config
for
you
know
for
whatever
that
80
for
whatever
we
can
and
then,
if
somebody
has
their
use
case,
they
can
go
in
and
twiddle
it
to
their
hearts
content.
A
I think — I think you and I disagree less than I thought. That was roughly it.
A
What
I
was
trying
to
get
with
this
style
of
annotation
is
is
to
get
something
that
when
we
see
it,
if
you
didn't
give
us
any
any
config,
we
will
try
to
guess
your
config
and
annotate
with
our
guests
and
say
that
it
was
a
guess
and
if
you
want
to
override
my
guess
and
or
give
me
more
information,
you
can
and
then
you
say-
or
this
is
for
this
thing,
but
you
could
say
for
my
crd
type:
the
strategy
is
going
to
be
split
and
it
was
detected
or
I
guessed
that
it
was.
A
It
was
split
and
if
you
want
to
make
it
copy,
then
you
can
make
it
copy
and
we
trust
you
not
to
set
detected
is
true,
and
then
you
could
even
do
that.
I
think
the
next
level
of
that
decision
is.
You
could
do
that
for
explicit
objects
per
deployment.
You
could
say
this
deployment
object.
I
know
you
normally
split,
but
I
would
like
you
to
copy
or
this
pod.
I
know
you
normally
any,
but
I
would
like
you
to.
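[Illustration, not from the meeting: a hypothetical sketch of the guess-then-override flow on one object. All annotation keys and values are invented for illustration; the convention shown — kcp records its guess with a "detected" marker, and a user override replaces the strategy — is just one way this could look.]

```go
// Package example holds an illustrative per-object override.
package example

// overriddenDeployment sketches a user overriding kcp's guessed strategy.
const overriddenDeployment = `
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # What kcp would have guessed and recorded for Deployments:
    #   experimental.kcp.dev/scheduling-strategy: split
    #   experimental.kcp.dev/strategy-detected: "true"
    # The user's explicit override for this one object (and, by
    # convention, the user does not forge strategy-detected):
    experimental.kcp.dev/scheduling-strategy: copy
`
```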
B
They expect transparent multi-cluster to work. There's a second level, which is that the administrator or the service provider has set up a policy that should work for the set of resources they're offering. Sooner or later, for anything that's not part of the standard set of resources, somebody has to configure something to make that happen. The policy and how those things are installed kind of go together, so that fits into the end-user case. And then maybe there are a couple of other use—
B
Cases
here,
which
are
we
haven't,
talked
about
this
a
lot
but
like,
for
instance,
should
it
it
probably
should
be
possible
for
should,
as
in
air
quotes
here,
to
have
locations
per
name,
space
and
locations
for
a
logical
cluster
that
are
distinct
so
that
you
could,
for
instance,
have
a
techton
flow,
where
promotion
from
dev
to
stage
to
production
happens
in
a
logical
cluster
and
all
you're
doing
is
able
to
use
a
pipeline
that
has
access
to
use
cases
because,
again,
like
from
a
composition,
perspective,
the
end
goal
that
we're
hoping
to
get
to
with
kcp
as
a
control
plane.
B
That's
your
solution,
ideally
we'd,
be
able
to
empower
so
that
multi-cluster
is
as
much
a
capability
as
single
cluster
is
so
that
something
like
a
tecton
or
a
jenkins
or
a
gitops
solution
can
actually
not
have
to
deal
with
multi-cluster
itself
completely.
A
Yeah. I think there's a question in the back of my head of: what are namespaces for, if we have logical clusters the way we expect them to show up? Do we still need namespaces? Rather than have a cluster with 100 namespaces, do we just have 100 logical clusters, each with a single namespace? Or—
B
Logical clusters are kind of a step in between namespaces and clusters that works for APIs. The best outcome would be that you can still get all the benefits of namespaces if you need them, but you don't have to contort either namespaces or physical clusters into unnatural spots to work around their limitations.
B
Instead,
we
have
more
of
a
a
middle
concept,
which
is
the
set
of
apis
that
come
together
at
a
logical
cluster
that
that
also
powers
up
namespaces,
because
the
namespace
can
now
have
less
double
duty
with
tenancy,
and
you
know,
there's
some.
I
think,
there's
some
big
ideas
here
that
need
to
be
explored
before
we
can
say
they're
really
worth
it.
We'll
have
some
time
and
room
to
go.
Do
that
and
we
just
want
to
make
sure
we
don't
back
ourselves
into
a
corner
too
early.
A
Yeah
in
particular,
you
mentioned
something-
maybe
maybe
I
miss
her
do
so.
If
it's
a
miscommunication,
that's
on
me
was
one
that
that
a
delivery
pipeline
might
have
dev
namespace
staging
namespace
and
prodnamespace,
and
the
prodnamespace
just
happens
to
be
mapped
to
a
you
know.
Multi-Cluster.
You
know
three
three
cluster
real-world
scenario,
so
the
pipeline
just
says:
I'm
copying
objects
from
namespace
a
to
b,
to
c
and
behind
the
scenes,
we're
doing
the
magic
to
make
that
actually
deploy
to
different
clusters
around
the
world
or
whatever.
B
It's
it's
kind
of
ibm
yeah.
It
really
depends
on
the
semantics
of
what
transparent,
multi-cloud
cluster
offers.
Is
that
adult
kind
of
asks,
which
is
at
some
point?
And
if
you
had
one
location
or
two
locations
for
prod?
It's
really
not
the
end
user's
case
some
organizations-
and
I
think
this
is
like
the
trick-
is
some
organizations,
their
applications
and
their
use
cases
and
their
design
patterns
and
their
processes
will
lead
them
towards
certain
types
of
solution.
B
Like
two
clusters
for
aha
workloads
spread
between
them.
That's
a
human
process
imposed
on
top
of
technology.
Ideally,
we
remain
flexible
to
different
types
of
processes,
while
giving
people
more
tools
that
more
closely
model
the
actual
problem
they're
trying
to
solve
you.
Don't
you
want
aha
clusters
because
you're
trying
to
model
against
the
problem
of
most
failures
or
correlated
failure,
domains
for
config
change
or
software
change,
or
geography
failure,
and
so
what
we're
trying
to
offer
is
the
best
primitive
wouldn't
be
one
that
we
like.
B
That
might
be
a
way
of
you
really
want
to
do
all
your
debugging
and
dev.
So
you
want,
you
may
actually
end
up
in
trans,
even
in
transparent
multi-cluster,
wanting
to
get
access
to
dev
pods
and
to
see
what
node
they're
on
and
to
exec
through
and
to
see
logs
as
you
flow
through
a
maturity
pipeline.
I
do
think
there's
an
argument
that
coming
up
with
a
way
to
take
away
some
of
those
powers
while
still
giving
you
the
same
tools
and
it's
just
a
permissions
or
a
control
issue,
it's
worth
discussing.
B
Is
it
the
thing
that
we
should
all
be
fixated
on
right
now?
Now
I
just
wanted
to.
I
wanted
to
bring
that
one
up
there,
because
it's
the
it
is
one
of
those
things
that
is
a
bit
challenging
to
the
current
model
of
you
know.
Logical
cluster
has
two
locations,
and
if
you
want
to
have
multiple
different
types
of
scheduling,
you
have
to
have
multiple
logical
clusters
totally
reasonable
place
to
start.
A
Yeah,
I
think
I
think
the
the
trick
is
going
to
be
giving
we
want
to
be
transparent
so
that
you
don't
have
to
make
any
change
to
your
config
to
get
value
from
us.
But
at
some
point
you
can
get
more
value
by
doing
a
little
bit
more
right.
B
[We] want to gently tease people's fingers off of nodes, and once you can do that, you can gently tease them off of clusters. If you can do that in a way that keeps all of the benefits that they really depend on, they become primed to accept other kinds of flexibility that they might not have needed. Everyone's going to be at a different point on that progression; we're kind of just trying to offer something that's both useful and leads in a direction, even if they can't get there today.
D
Yeah. Would it make sense, in the current status of the prototype, to complete the cluster — our example Cluster custom resource — to add resource strategies, I mean, information, inside it? If we consider that as a first step — you know, having sync-strategy directives—
D
Scoped
to
a
logical
cluster
could
be
a
first
step,
then,
maybe
just
adding
some
some.
You
know
resource
strategies,
fields
or
structures
in
the
cluster
definition,
which
is
for
now.
What
is
our
location.
D
In fact, I said it wrong: I was meaning having a custom resource for logical clusters, because, for now, logical clusters exist implicitly, based on the URL you are pointing to, or the header that you set in your HTTP request. But then, at some point, we could introduce a custom resource for a logical cluster — mainly, just: if the custom resource doesn't exist, then you don't have any settings—
D
And
then,
if
there
is
a
customer
source,
logical
cluster
customer
source
that
exists
for
a
given
logical
cluster
name,
then
it
would
allow
adding
the
such
type
of
strategies,
like
you
know,
the
default
merging
strategies
according
to
to
a
given
type
of
resource,
for
example,.
B
Maybe
I
would
probably
I
would
probably
be
inclined
to
say
we
could,
but
I
might
be
inclined
to
say:
let's
I
think
cluster
is
a
target.
Logical
cluster
is
a
concept
and
policy
transparent,
multi-cluster
placement
policy
are
three
distinct
concepts
at
least
right
now,
so
we
could
join
them
and
actually
there's
part
of
this
is
to
get
enough
experience
so
that
we
could
say
yeah
in
theory.
Having
them
split
is
awesome,
but
in
practice
we
might
actually
want
to
couple
them.
I
just
don't
know
that
we're
there
yet.
D
Yeah,
because
I
mean
I'm
saying
that,
because
even
the
the
physical
cluster
custom
resource
that
we
have
today
is
just
you
know
an
example
one.
We
just
know
that
that
it's
a
dummy
one
so
as
a
very
first
step,
just
to
be
able
to
play
with
this
type
of
concepts
and
and
this
type
of
parameterization,
I
I'd
say
of
strategies.
A
Had
a
question
in
the
chat
about
progressive
deployments
of
applications
through
kcp,
or
is
it
too
early
to
consider
that
use
case?
I
think
if
kcp
does
its
job
right,
big
if
big
asterisk,
but
if
it
does
its
job
right,
the
progressive
deployment
pipeline
is
still
the
job
of
the
pipeline
system
and
the
pipeline
author
to
either
have
no
awareness
of
kcp
and
kcp.
A
Does
a
smart,
a
reasonably
smart
kind
of
smart
thing
or
to
have
more
visibility
into
what
kcp
can
you
know
it
knows
it's
talking
to
kcp
with
this
many
clusters
behind
it
and
can
tell
it
promote
to
staging
which
means
this
and
this
arrangement
of
clusters
or
promote
to
prod
or
slowly.
You
know
slowly
roll
out
to
prod,
but
I
think,
ultimately,
that
logic
is
still
encoded
in
some
pipeline
executed
by
some
pipeline
workflow
thing
yeah,
it
is
not.
It
is
kcp,
agnostic
or
kcp.
Is
it
agnostic.
B
The
transparent
multi-cluster
concepts
that
would
exist
within
a
control
plane,
a
kcp
like
control
plane,
are
intended
to
be
actuatable
by
your
deployment
pipelines
more
effectively
than
multi-cluster
is
actuatable
by
deployment
pipelines
doesn't
mean
you
can't
we
are
trying
to
like
looking
at
argo
is
a
great
example
is
like
every
looking
at
the
set
of
problems
that
have
been
solved
with
an
argo
approach.
What
are
the
things
that
we
could
add
that
would
a
improve,
self-service
b,
add
resiliency,
see
allow
these
things
to
happen
more
transparently.
That's
the
goal
of
transparent
multi-cluster.
B
Adele
to
your
question
on
cross-cluster
overlay,
so
I
would
say
an
important
component
of
this,
and
I
almost
called
this
out
before
is
an
important
component
of
transparent
multi-cluster
is
the
goal
is
to
have
a
good,
transparent,
multi-cluster
use
case
that
feels
natural.
That
works
like
cube,
does
doesn't
surprise
you
too
much
and
gives
you
a
couple
of
strong
benefits
and
gives
the
operations
team
a
couple
of
strong
benefits.
B
The
second
one
of
the
levers
we
have
to
work
with
is
those
underlying
clusters
are
implicitly
under
the
control
of
the
same
people
who
want
to
accomplish
these
objectives.
What
are
the
set
of
standardizations
configurations
or
conventions
that
would
exist
underneath
a
kcp?
They
could
make
transparent,
multi-cluster
better
so,
for
instance,
having
a
service
mesh
configured
in
federated
service
mesh
mode
on
a
bunch
of
clusters
with
the
right
you
know,
dimensions
set
up
would
make
transparent,
multi-cluster
service
mesh
much
easier.
B
So
it's
kind
of
a
balancing
act
between
the
two,
which
is
the
goal
of
kcp,
is
not
to
work
completely
agnostic
of
all
clusters.
It's
to
enable
someone
who's
setting
up
who
has
a
bunch
of
clusters
to
do
the
right
thing.
I
think
you'd
want
someone
to
be
able
to
drop
in
a
kcp
and
run
their
stuff
on
a
bunch
of
clusters.
That's
one
of
our
goals,
but
it's
not
the
it's
a
40
goal
or
a
30
goal.
The
dev
case
that
try
it
out
to
kick
the
tires.
B
I
think
in
the
production
case,
it's
very
reasonable
to
say
we
should
expect
that
it
is
totally
allowed
for
us
to
push
back
and
say
like
if
the
right
thing
for
cube
in
the
long
run
is
to
let
two
cube
clusters
talk
to
each
other
directly
via
pods,
which
there's
definitely
reasons.
That's
not
always
desirable.
B
Then,
if
you
configure
your
clusters
like
that,
we
should
think
about
how
transparent
multi-cluster
can
be
better.
But
if
that's
not
desirable
in
all
cases,
thinking
about
places
where
we'd
actually
restrict
that.
So,
for
instance,
if
you
could
program
service
to
service
communication
to
accomplish
99
of
that
or
98
of
that,
it
might
be
reasonable
to
say
no
overlay
is
not
a
requirement
of
where
we
go,
but
we
do
expect
that
services
could
be
programmed
on
both
clusters
to
be
able
to
talk,
which
would
be
an
acceptable
trade-off.
B
We're
still
kind
of
I'd,
say,
I'd
say
the
goal
of
this
is
to
get
enough
of
the
use
cases
that
we
can
start
getting
prescriptive
about.
What
transparent
q
native
would
need
what
transparent,
istio
would
need,
what
transparent
liquid
would
need?
What
transparency
insert
five
other
use
cases,
what
spiffy
would
need
or
aspire
to
do.
You
know
these
are
longer
term
things
that
we
tee
up
by
kind
of
going
through
some
of
these
first
discussions.
E
Does it — well, if I just focus on kcp to kind of give me the — as you said, the lever — to configure how that data path would react, while also allowing for providing certain policies on how to do so: that would be, at least for me, enough, given that I assume that the underlying multi-cluster whatever-mesh does its job. Yeah.
B
And
a
great
example
of
this
is
something
like
a
data.
Plane
cares
about
networking
and
policies
that
tell
it
how
to
connect,
and
you
want
to
enforce
those
policies,
kind
of
broadly
part
of
what
we
could
potentially
offer
in
the
idea
of,
like
you
know,
tecton
not
having
to
know
about
like
just
the
example
of
tecton
not
having
to
care
about
what
the
cluster
is.
Perhaps
techton
is
installed
on
zero
physical
clusters
and
the
controllers
are,
on
a
cluster,
pointed
at
a
control
plane.
B
Imagine
that
as
a
potential
like
it's
a
it's
a
level
up,
it's
a
way
to
power
up
tecton,
to
make
it
more
powerful
and
more
useful.
Just
by
changing
these
three
small
things
where
tekton
still
may
need
to
create
pods
or
jobs
or
launch
things,
but
it
it
has
less
privilege
across
that
as
well,
which
allows
us
to
separate
out.
You
know,
clusters
have
less
privilege,
there's
less
things
being
installed
on
clusters.
B
There's
more
things
that
span
clusters
the
shape
of
what
techton
would
need
would
challenge
us
to
say:
oh
well,
you
know
this
is
part
of
that
80
we
mentioned
before
in
launch
and
pods.
If
tech
time
can't
do
it,
that
means
you
can't
decouple
techton
from
the
data
plane,
move
it
up
to
the
control
plane.
Therefore,
that
means
we
have
it
in
transparent,
multicluster
made
singleton
pods
getting
run
by
a
techton
job
launcher
work
well,
for
instance,.
A
Yeah
tekton's
also
sort
of
a
weird
one
there,
because
it's
its
data
plane
and
its
control
plane
are
not
as
cleanly
separated
as
other
things
and
maybe
as
cleanly
as
it
should
be.
I
I
wanted
to
with
it
with
just
four
minutes
left
I
wanted
to
mention.
A
The
other
thing
on
the
agenda
was
scott
nichols
over
the
weekend
from
vmware
working
on
canadian
stuff
was
tweeting
about
playing
with
kcp
and
trying
to
get
kcp
to
run
as
a
canadian
service,
which
I
think
is
a
very
unique
and
exciting
use
case
for
that
kind
of
thing,
and
I
think
he
hit
a
lot
of
it
sounds
like
he
hit
some
stumbling
blocks,
divorcing
kcp
from
its
embedded
lcd
and
was
looking
to
split
that
part
out.
B
I
I
think
it'd
be
totally
reasonable,
just
like
listen
address.
If
somebody
could
define
the
minimal
set
of
flags
that
would
allow
you
to
point
to
an
ncd,
and
the
problem
is:
is
that
minimal
sets
at
least
five
different
flags?
So
if
we
can
do
that,
I
think
that's
totally
reasonable
to
have
right
now.
You
might
want
to
couch
it
with
a
the
prototype,
is
not
the
project,
and
so
the
prototype
is
doing
that
to
be
suitable
for
playing
and
it
will
leave
an
asterisk
in
the
code
that
says
we'll
come
back
to
that.
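[Illustration, not from the meeting: a sketch of the kind of flag set being discussed. These five are the real upstream kube-apiserver etcd flags; whether kcp would adopt them verbatim was still open at the time, so the values shown are placeholders.]

```go
// Package example lists the upstream etcd flags an external-etcd mode
// would roughly need.
package example

var externalEtcdFlags = []string{
	"--etcd-servers=https://etcd-0.example.com:2379", // comma-separated endpoints
	"--etcd-cafile=/etc/etcd/ca.crt",                 // CA bundle for etcd's serving cert
	"--etcd-certfile=/etc/etcd/client.crt",           // client cert for mutual TLS
	"--etcd-keyfile=/etc/etcd/client.key",            // client key
	"--etcd-prefix=/registry",                        // keyspace prefix inside etcd
}
```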
B
And I think the prototype is about showing the ideas. When we get to the point of the transparent — or the minimal API server, the minimal control plane — that's a place this kind of fits in as well. Yeah, like we've said, we're not as fixated right now on replacing etcd.
B
If
somebody
wanted
to
take
the
library
of
let's
call
it
kcp
the
library
it
for
the
lack
of
a
better
name
and
replace
the
ncb
implementation,
I
think
that's
a
goal
that
we
should
list,
as
I
think
we
listed
it
on
the
minimal
api
server
goals.
Maybe
I
haven't
added
that
yet,
but
I
can
go
back
and
add
that,
so
that's
a
certainly
a
reasonable
answer.
We
could
I'll
I'll
tweet
back
to.
A
And
then
there
was,
let
me
get
his
name,
michael
vorberger,
who
said
he
couldn't
make
it
to
this
sent
a.
I
don't
know
if
it's
a
pr
but
was
working
on
splitting
kcp
the
minimal
api
server
out
from
the
cluster
controller
and
deployment
splitter,
because
we
had
all
previously
bundled
them
all
into
one
binary
together,
and
this
is
sort
of
a
re-splitting
of
that
out.
I
don't
know.
I
don't
think
that
these
changes
are
bad.
A
I
think
we
just
don't
know
how
we
want
to
bundle
these
things
together
exactly,
and
this
is
just
more
signal
that
some
people
want
certain.
You
know
one
star
from
the
constellation
and
we
want
the
whole
thing
together.
Sometimes.
B
I think it would certainly be reasonable to have another command entry point under the kcp repo that starts in a different mode, just to have some package with, like, literally the simplest possible interface. That also kind of signals that we need to be thinking about how you would reuse this as a library, because library reuse is a key goal of all the parts of this so far.
A
Yeah, I think that's roughly where that work will end up: just being another command in the kcp binary. That's kcp — no, really, just kcp. But yeah — we're out of time. Thank you; this has been a very good conversation. I will post notes to the issue. If you have anything else you want to talk about before next week, feel free — bring it up in a discussion, or an issue, or a pull request, or Slack, or a carrier pigeon, or — I don't know — any other thing. But yeah, thanks for the discussion, everyone; we'll see you next week. All right, thank you — see you, everyone. Thank you.