From YouTube: KCP-Edge Community Meeting, November 17, 2022
A: Hi everybody, and welcome. This is the SIG kcp-edge community meeting for November 17, 2022. Thank you all for joining. We have quite a few items on the agenda today, so we'll get started right away. I wanted to give everybody a little update on a few things. I've got potentially three things I'd like to discuss, then we'll switch the floor over to Jun Duan, who has some items he'd like to discuss with the community having to do with ways of working, or logistics, as well as, as you know, the potential creation of an edge workspace. Then Paolo has some more information about multi-cluster scheduling, which is starting to develop, and he would like the community's input there. If there are any questions and open discussions, I'll be following up afterwards with an entry into the Google group, as well as some items for the Slack group, for others to get involved. So if you have any input that you don't share now, you can share it there. Okay, let's get started.
A: Thank you very much, Andy Goldstein, for all your help in the past week and a half getting our repo up and running. We've got the CI automation working now, so that when we add pull requests or issues, they get added to our projects, and I'll share what that looks like in just a second. We've got an initial README up and in place; as a matter of fact, somebody has already contributed, and had approved, a fix for a spelling typo that was made in that README.

So that's cool. We were able to use the automation to mark it as okay-to-test and looks-good-to-me, then eventually approve the request and watch it move through the system, so that all seems to be working well. We've also put in place a number of different issue types. Some of them I've lifted directly from what kcp had, in terms of kinds and areas, so I thought that was good to reuse. We've also got a few new statuses that have been created as well.
A: On open pull requests at the moment, we'll be going over this one in particular. This has just the content of what we thought might be the beginnings of an investigation doc that could be used on the kcp.io page for investigations. We envision there would be something called "Edge multi-cluster" denoted here, and clicking on that link would open the document that's surfaced here. That's on the agenda for today, to discuss in more detail. Under projects, there's a public project now listed called kcp-edge, and it has a few categories.

We have a category here called Closed. It's not visualized here, but it's an intermediary step after In Progress, between what's done and "done done" — so you can envision Done as "done done" at the moment.

The items that are in progress here are most of what's listed in today's meeting, so it's cool that that's working and moving ahead. We're using a kanban board for the most part, because that's what I feel most comfortable with, but there are other options here as well. I've copied some of kcp's items, such as good first issues; incoming items wind up in the New status, then In Progress; and of course the main pillars — or, you know, the possibility of these being the main pillars — of focus for us: edge workspace, edge-mc placement, and edge-mc scheduler. They're just placeholders for the moment, anticipating the possibility of having work there, but we don't know for sure; we'll see how the community steers us. And that, I believe, about does it for that portion of the agenda. Any questions, comments, or concerns about the repo structure and the project structure?
A: Okay, all right. I'll flip the order of these two, because I think this one might be a little quicker to discuss, and it opens us up to the community to discuss it: should, and can, we adopt a code of conduct for the calls themselves? I noticed that on many of the CNCF calls, even the Kubernetes IoT Edge working group, they mention the code of conduct at the beginning of each call, and I was wondering if the kcp maintainers or anybody else had thoughts on that.

It's just more or less saying that, you know, this is the CNCF code of conduct, and it governs how we discuss things on the call, so that we don't discriminate against or exclude people from discussing things, or engage in bullying or anything like that. I think those are kind of high level, but I've heard —

C: Yeah, just to clarify and to understand your question: we already have this — what Stefan mentioned — and we have also another project. So you are asking whether we should have some quick announcement of the code of conduct at the beginning of each call; is that the question?
D: We don't do it for kcp. I'm in favor of doing it. I used to run the Cluster API community calls and, like Andy was saying, at the beginning of every meeting — I could probably repeat it verbatim, because I used to say it every week — it was basically: "This meeting is governed by the CNCF code of conduct," which basically means be nice to everybody. And that basically covers it.

A: I'll take a note. Great, okay. Are there any other comments or questions about this specific item?
F: On a different topic — it's not here in the agenda — but do we also want to give a quick update on the call we had yesterday with the IoT Edge working group?

A: Okay, all right. If there's no further — if there's no objection to this, then we'll make it, summarily, a part of each and every call that we have from this point forward. Thank you very much; ratified. All right, let's move on: the kcp.io investigation. Or maybe, instead, I'll just do the readout first, as Paolo suggested, for the IoT Edge working group, and then we can get to the interaction on the kcp.io investigation.
A: So, the IoT working group for edge, out of Kubernetes: we attended their meeting yesterday, introduced our edge-mc repository and the kcp-edge SIG, and gave them a little information about where we'd like to operate and the kinds of things that we're looking to zero in on, in terms of augmentations and additions to the existing kcp upstream. There was some preliminary interest, and there was also some curiosity as to what kcp was, so we gave a brief description of what kcp was focused on and where it was operating.

I think it might even serve Andy and Stefan, or whomever else here is from the kcp group, to schedule some time. I know Dejan — Dejan Bosanac, I believe his name is — is part of that community, and he was the person we used as our hook to get in there and introduce ourselves. It might be useful for kcp proper to do the same; I couldn't do it the justice that it needed, but that's just a suggestion.
F: No, I think — yeah. My feeling is that the attendees didn't really know about kcp, at least most of them, but in general I think there was interest in hearing new topics, so maybe a kcp topic would be interesting for them. We didn't really present that topic, because we wanted to focus on the edge side of things, but we felt that they could have benefited from at least having some initial background on kcp before we even talk about kcp for edge.
A: All right, okay. The final item for me today is the kcp.io investigation document. I put together a preliminary PR so that people can jump on and take a look at what we're proposing, in terms of the items that are listed here.

If anybody has had some time to already review or preview this — probably not — but I intend on dropping this link in the Slack channel for kcp-dev at the conclusion of this call, so that folks can get involved, edit to their hearts' content, and find those little spelling mistakes, grammatical mistakes, and other things that maybe are not germane to, or necessary for, this document. So if you'd like to take a look at it and give us some feedback, we'd love your input.
D: It's often a little bit easier to do GitHub reviews on prose if the lines are wrapped at, I don't know, 100 characters, 120 characters, or something like that. So if you wanted to, you could go back and just do the word wrapping; the editors should be able to automate that for you.

D: I mean like word wrap — sorry — like make sure that each line is no longer than, say, 150 characters. Because if, say, line 25 is a whole paragraph, and I wanted to comment on some part in the middle or at the end, you know, I basically have to go —

A: Yeah, there's no direction in there for the browser to take any — yeah, it's not like a BR or any break.

A: Did I start this? This is in my — yeah.
A: Jun, would you like to open up with ways of working, and then we can move on to edge workspace?

H: Okay, thank you, Andy. Let me share my screen for a while. I want to get started with this one first. This one is basically regarding the relationship between the two repositories, kcp and edge-mc.
H: I wrote this text roughly a week ago, and I had some initial discussion with Ezra. I realized that this text might be confusing, so I came up with a picture here. Okay, so the question basically is: what should be the exact content inside this repository?

Should we just put some edge-specific APIs plus controllers there, similar to this controller-runtime example, or should we, in addition to that, also import the existing assets from kcp, including the APIs, the controllers, and especially the API server itself?

This question might look very simple, but we do need some confirmation or suggestions from the community before we can really get hands-on and push code. That's why I put this question here. I'm happy to hear from the community which way we should go.
D: Yeah, thanks. So: not option two. We don't want people to embed all of kcp just so they can use it. There are legitimate and valid use cases for embedding portions of kcp if you want it — it's not easy to do today, and this is something we will be working towards. If you wanted an embeddable API server that supports workspaces or API bindings or whatever, there are valid use cases for that, but for edge-mc, I don't think that's one of them.

H: So this scenario would be very similar to the controller-runtime example. I just want to confirm, because that example assumes that there is a kcp server already running, and it adds some controllers on top of that.
F: Before we close, another question here. Of course, we are now assuming that any controllers — like, in this case, the edge-mc ones that we had — will require a pcluster to run, right? I wonder if there is any plan or roadmap to somehow also give the ability to start controllers within the kcp binary — a kind of plugin mechanism.
D: There may be a flavor of a kcp binary that incorporates the TMC bits, but honestly we'd rather not. We don't want to give special privileges to things that are outside of the core, if we can avoid it. So I would anticipate that we'll have kcp core deployed, we'll have the front proxy, and then things like TMC, once they're split out, will be deployed separately.
I: Obviously there is some legacy here — or, you know, history — in how TMC started: really deeply integrated into the kcp core, before a number of new primitives and new features in kcp core existed. So now we might have to identify, and cope with, some places still in TMC where it's still highly coupled with the core and, for now at least, requires it to be collocated — especially in terms of being privileged with respect to permissions. But clearly we inherit from some history here, and the direction is to decouple TMC from the core. So, starting a new project like edge-mc, obviously the goal would be to directly adopt the future approach.

In this regard, the state of TMC is not necessarily one hundred percent the example to follow, because it has to cope with the history of how it started and evolved until now.
C: A quick question, although it's not related to this at all — it's more of a kcp question. If you do that change, what's the vision for, for example, SAP's cloud service or services? Like, you know, you give everyone everything, right — TMC and everything. So would I be able, for example, to just consume the core from the service and have my own TMC running somewhere else, or will you supply both of them coupled? What's the vision here?
B: The idea is that everything is composable. You can use core; maybe there's TMC on the cluster as well, on the SAP side, but if you want to bring your own, that should be possible. So, as Andy said, basically those services should not be privileged — that's a rule of thumb we follow — which means you can provide your own. You can even run the same TMC under your own identity, if you want to; even that should work. Or a dev version plus a supported version.

No — the API bindings are separated through the identity concept. You can provide the same CRDs, the same API types, as a different project, and you get your own identity, and people can bind to your variant or to the other variant.
H: Just curious: is there a milestone or a plan for when this will happen? I mean, when will TMC be decoupled from the core?

B: It's part of the enhancement documents that Paul wrote; basically, it's a plan for 2023.
H: I see, yeah.

I: Sorry — as I mentioned, it could probably happen in several steps: isolating the repos first, and then isolating also in terms of, you know, runtime coupling, as we mentioned, to avoid having to build a TMC-enabled kcp. But obviously there would be several steps in this process, some of them quite a bit more impactful or harder than the first ones, obviously.
F: I assume that will have implications for the way you also bootstrap kcp with TMC, so you will have to start with a physical cluster initially — some place where we deploy the TMC controllers. I think it will be the same for edge-mc: you're going to have your kcp binary, you're going to have your cluster, and you need to bootstrap these two together in a way to get that to work.
H: Okay, I think the inputs are really helpful to me. I guess I got this question answered. I have another question: it's number 29.
H: This is a simpler question. I'm thinking about whether we need a specific workspace type for edge use cases, because with that we can have some control over the behavior of that workspace, including specific initializers and API bindings. But because I'm still learning, I'm not sure whether I fully understand what the benefits and possible drawbacks are if we have a specific edge workspace type, so I'm happy to get some help from the community.
B: Maybe you want to take this one? Yeah — so, conceptually, the type is just convenience; it's just something to get the right set of bindings into your workspace. You can do that totally manually, and during development you will do that manually. If you run `kcp start`, you will get a couple of types by default, but consider those just as — we call that "batteries included," right? It's just a set of configuration which makes sense for playing with kcp.

D: Thanks. And just to point out, regarding the initializers: they generally don't, or shouldn't, have any more permissions than what you, as a regular user, have permission to do. And just highlighting what Stefan said about convenience.
F: Okay, so initially I was thinking that we could have sought to achieve the same thing — oh, it looks like with the cluster workspace type you can add your own, right? You can define the API bindings there. So as long as you have an API export in root, it looks like you can actually pretty much put in the APIs that you want, right?
D: Yes, you can use that. So if you know that every time an edge workspace is created you need seven API bindings to be created, go ahead and create a custom type, and in the type's spec you list out the API exports that you want to bind to. Then, whenever an instance — whenever a workspace of that type — is created, the API bindings will automatically be created, pointing at those exports. So that's definitely a valid use case.

If you need to do more than that — if you need to pre-populate things besides API bindings — then you just have to weigh: do I do it manually in the short term, or do I want to go write, you know, an initializing controller that has to go and watch for these things?

Yeah — so we do the API binding initialization for you; that's built into kcp.
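A rough sketch of what such a custom type could look like — the group/version, field names, and paths below follow kcp's ClusterWorkspaceType as understood around this time, but treat all of them as illustrative assumptions, not a confirmed schema:

```yaml
# Hypothetical workspace type that pre-binds a set of APIExports.
# Kind, apiVersion, and field names are illustrative only.
apiVersion: tenancy.kcp.dev/v1alpha1
kind: ClusterWorkspaceType
metadata:
  name: edge
spec:
  defaultAPIBindings:        # bindings created automatically for new workspaces
    - path: root:edge        # hypothetical workspace path holding the export
      export: edge.kcp.dev   # hypothetical APIExport name
```

With a type along these lines in place, creating a workspace of type `edge` would result in the listed API bindings being created automatically.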
F: Yeah — and yes, I have some other point about the workspace, some behavior around it. Maybe it's something we can talk about once we talk about the scheduling; I think it is more related to this overall problem of the coupling.

G: Okay, so Jun, have you received enough information on this issue? Yes?
F: The idea here was that we started a document now really focusing on the scheduler for edge multi-cluster.

This is a doc I shared also with the kcp-dev distribution list, so I think most people should have access, or may have had a glance. The doc is still very early, but the idea was to sort of raise questions that we may want to answer, look at options, and try to focus — in this particular case — just on the scheduler part, not necessarily tackling all the other problems that we mentioned when we talk about edge.
F: We reference the document here where we talk about the difference from transparent multi-cluster. Of course, there are many areas there that we mentioned, but for now the idea was to start at least by looking at the scheduler, because this seems to be probably the simplest piece that we may start working on. There may also be some implications, or some impact, on the syncer, as we discussed previously, but for now, again, this is really focusing on the specific edge scheduler.

So the idea was to start this doc and hopefully get some comments and some feedback from the community. There are some open questions here, and maybe I want to quickly summarize some of the points that we have here and possibly get some answers, if possible — or you can also comment offline; you don't necessarily need to give all your feedback now.
F: So, first of all, the user story really focuses on sort of the difference from the current scheduling in TMC. We have some of the parts that are already in TMC — for example, the idea to write, as part of the policy, a namespace selector for namespaced objects. We would also like to tackle at least the ability to handle some non-namespaced resources: I should be able to have a selector for those, and that's probably one difference from the current placement in TMC. And the other difference here is —
F: So again, the idea here is to select all the matching locations, rather than one location; to select all the matching sync targets within a location; and also to be able to provide a selector for non-namespaced resources. So it looks pretty much like the current placement API for TMC, with some differences. We are still debating the name of this API: here we put EdgePlacement, but then we realized that this API may actually also be used in general for other multi-cluster use cases, so maybe "edge" is not necessarily the best term.
F: There was some comment that maybe we should call this multi-placement or something like that, so I'm open to suggestions here, if anybody has some thoughts on what would be the right name. In any case, we thought we also want to have a different API group here.

Unfortunately, right now there is just the API group "scheduling". Maybe we add something like "edge.scheduling" to distinguish that this is really the placement for edge. But we're also debating — maybe just for the kind — whether we want a different name, to avoid potential confusion. And we talked about these ideas of having this workspace of type edge, so that when a user creates a workspace, it will have the API binding for this API.
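Since the group, kind, and fields are still under debate in this discussion, a purely hypothetical sketch may help make the shape of the proposal concrete — every name below is an assumption, not a settled API:

```yaml
# Hypothetical EdgePlacement sketch; the group, kind, and all fields are
# placeholders reflecting the discussion above.
apiVersion: edge.scheduling.kcp.dev/v1alpha1
kind: EdgePlacement
metadata:
  name: sample-placement
spec:
  locationSelectors:           # select ALL matching locations, not just one
    - matchLabels:
        env: prod
  namespaceSelector:           # namespaced objects to distribute
    matchLabels:
      app: sensor-agent
  nonNamespacedObjects:        # selector for cluster-scoped resources
    - apiGroup: rbac.authorization.k8s.io
      resources: ["clusterroles"]
      names: ["sensor-reader"]
```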
B: Just as background about scheduling: this was meant to be totally use-case independent. I'm not sure we succeeded in that, but scheduling has nothing to do with workloads, and you see it in the Location resource there — and we also have that, I think, in the placement, right? It points to a resource, which then gives meaning to what a location, and what a placement, actually is.

Today we only support, I think, one resource or one kind, but this is just because we didn't generalize it. So anything which is in scheduling should be use-case independent. If you have a different kind of placement which is not around edge — some idea like that — then yeah, maybe a kind in scheduling makes sense. But if it uses the word "edge," I think then we need a different API group.
F: Okay. By the way, what was the reason — because this is also a question that came up in our own internal discussion — what was the rationale for having this potentially different type of location resource here? I mean, is there —

Right, because it also has some implications for the location itself, right? That has to be general, too — I mean, the Location CR has to basically be able to indicate other types of resources as well, and I don't think that is the case today. Yeah, so the question was whether we should keep on inheriting this location resource here or not; that's pretty much the discussion we are having among ourselves for now.
F: So let's talk about some of the issues that we have here. One challenge we have now is that in the current placement for TMC we select pretty much only one location — the scheduler will select only one location — whereas in the general case for edge —

The idea is introducing this concept of a placement-slice resource, where you could potentially have multiple of these slices, to get around potential limitations by sort of partitioning this. In this resource, basically, we keep track of the different locations. So, essentially, there will be some reference to this separate resource, and hopefully this will allow it to scale. But this is still, I think, an open discussion; I think we also isolated some other comments here, but I don't think we can resolve them during the call. Okay.
F: So the other point here — a challenge — is really about the syncing strategy. We've been debating what the best strategy is to deal with this one-to-N scenario, where I have to deliver the same resources to a large number of clusters, potentially. We know that today in kcp the scheduler, at the end, will label resources with a label that has the prefix `state.workload.kcp.dev`, and then there is a hash of the sync target name and the workspace name. The syncer — the current syncer — will look for, and try to find, this label across resources, across workspaces, to sync that resource, assuming that you have the "Sync" value there. And the problem is, if you use this approach, if you want to sync a lot of stuff —
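As an illustration of the labeling mechanism just described — the exact key format and hash are internal details that vary by kcp version, so the values below are placeholders:

```yaml
# Sketch of a resource labeled for syncing to two sync targets. Each key is
# state.workload.kcp.dev/ plus a hash derived from the sync target and
# workspace names; the value "Sync" tells that syncer to pick the object up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    state.workload.kcp.dev/a1b2c3: Sync   # placeholder hash for target 1
    state.workload.kcp.dev/d4e5f6: Sync   # placeholder hash for target 2
```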
F: First of all, you may end up with some limitations there, right? You can't really have thousands of labels — there are some limitations in the label data volume there. And the other problem — I think this is actually the bigger problem — is that if you have multiple syncers trying to sync the same resource, they all also try to update the status, and they are actually doing this for different resources.
I: Yes — surely it's not the case anymore now, and it should not be. In fact — sorry — in fact, the syncer, and more precisely the syncer virtual workspace, which does the link between the syncer and kcp, maintains a view of the status per syncer.

So there is a controlled list of fields that the syncer can bring back to kcp — typically the status, but it could also be other fields in the future — and those fields are, by default, set as overrides in an annotation. So, by default, they would not be updated directly on the object. And that's the whole point, and the whole goal, of the upcoming coordination controllers: to be able to get the view, if we can say so, of the object related to each syncer.
I: So, including the overridden fields — you know, the fields that have been overridden for each syncer, like the status — and to summarize that, and to be able to set the main status of the main object on kcp. That's typically the deployment-splitter use case, which was, you know, founding for kcp at the start, where you mainly spread replicas across two sync targets and then, finally, you get the two statuses, which are maintained on the kcp object, but in annotations.

In fact, based on those two statuses, you can just calculate the main status by summing up the available replicas. So there is already this mechanism in place, but obviously the main problem here, I assume, would be — once again, like for labels — the scaling: the fact that maintaining, let's say, thousands of annotations to express the various fields that are overridden —
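The summing step described here can be sketched in a few lines. Purely for illustration, this assumes each per-syncer status view is stored as JSON in an annotation keyed by a prefix plus the sync target name; the real annotation keys and layout are internal details of the syncer virtual workspace and may differ:

```python
import json

# Assumed annotation key prefix; the actual internal format may differ.
STATUS_PREFIX = "experimental.status.workload.kcp.dev/"

def summarize_available_replicas(annotations: dict) -> int:
    """Sum availableReplicas across per-sync-target status views stored
    in annotations, as in the deployment-splitter use case."""
    total = 0
    for key, value in annotations.items():
        if key.startswith(STATUS_PREFIX):
            status = json.loads(value)  # one syncer's view of the status
            total += status.get("availableReplicas", 0)
    return total

# Two sync targets reported 2 and 3 available replicas respectively.
annotations = {
    STATUS_PREFIX + "target-a": json.dumps({"availableReplicas": 2}),
    STATUS_PREFIX + "target-b": json.dumps({"availableReplicas": 3}),
    "unrelated/annotation": "ignored",
}
print(summarize_available_replicas(annotations))  # prints 5
```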
I: — for every single target would mainly bring some storage problems. On the other hand, how we store those status-specific fields — those syncer-specific fields, sorry — is mainly an implementation detail. For now, it's an internal annotation on the object, but it could also be worked on in the future, to put that in some dedicated storage. So there would be, I mean, in a quite long-term future, ways to bypass this scale problem, but that's the main problem for now.
F: This is very helpful because, of course, I didn't know that this capability was actually available, so we were making some other assumptions for scaling. So I'd say that we should probably talk more about this offline, David, if you are okay with that. Yeah — the main question is, you're talking about annotations, so the annotation is on the object itself, and the resource being synced, not on the sync target, for example, right?

I: No, no — it's on the object itself. The object really carries all the information: the kcp information, and also the information of how the object is visible in each syncer, and the status that was reported by each syncer. Okay.
F: Yeah, yeah — again, it could be an issue, as you said, for scalability reasons, when you have, let's say, thousands of syncers all trying to update the same object, and you have all these annotations as well. So, okay — for now, of course, we didn't know about that, so we —

I: It's interesting that there is a mechanism in this virtual workspace that is mainly just, you know, managing some transformations on the fly — transformations when the objects are returned by, or, let's say, synced back by, the syncer. There is one type of transformation that does this maintaining of the status in an annotation, but obviously it could be done differently; it could be done in a dedicated object that you link to it. That's mainly the mechanism.
F: So yeah — and then, definitely, we should talk more offline. For now, I presented what our initial thinking was, and then maybe we can also — we should explore this avenue as well. I think it's promising, and there may actually be some advantages or some disadvantages; we have to see now how this works, especially for this case. We also mentioned aggregation: summarization is not really the focus of this doc, so it's not in the scope of this talk; this would be another design doc. And customization is, as well, something we're going to tackle in a different doc.

For now, we were assuming — one initial, tentative idea, before we actually had this — we thought that we could also do this with a possible model, with a syncer, that you, David, suggested: to use this mailbox-model idea.
F: Essentially, we said we have a separation: a workspace that is dealing with the workload-management side of things, and then there are mailboxes, essentially, where we replicate the resources. Each one, essentially, is connected to a different cluster; the objects will be somehow dropped into this mailbox, and they will be synced, using the current syncing mechanism, into each one of these clusters — and then aggregation, things like that.

Of course, you could implement this in different ways. You could think about namespaces as mailboxes, but this could be a problem if you think about the non-namespaced resources; or you could have a mailbox workspace, and that's pretty much the idea that we are considering here. I'm going to skip this option one, which is probably not — yeah, go ahead, Andy.
C: I think our initial goal was to try, at least for mid-range scale, to test the possibility of using a workspace per edge location, and see: can that be done at all? Because at least in principle, kcp supports, you know, ten thousand or a million workspaces and so on, so you'd say we can do a lot with workspaces.

B: You can do that. I'm just saying one shard gives you roughly some hundred writes per second, and —
F: Okay, I see, I see. Yeah, okay, yes — because, of course, in the model where we have one single sort of spec that we replicate, and we just do the status updates — this is, I think, what David was suggesting originally — we actually, yes, have fewer objects to replicate. The statuses are still going to need to be synced, but the objects are certainly much fewer.

Okay — you have a question, David?
I: Do I understand correctly that it's mainly cascading the workspaces, with, you know, syncers between them, until you reach the level where the number of sync targets is low enough — no, sorry: so that at each level the number of sync targets is low enough to be able to use, you know, the standard syncing and scheduler?
C: Yeah. And then remember — we didn't bring it up in this call, but the whole connectivity pattern that, you know — and I see that we are out of time — and the fact that we also want to maintain this model of syncing and all of that, but also support physical clusters that disconnect for a week and come back: that will have big implications on the design, right?
A: That also closes up the call for us. Thank you all for your participation today. There is an outstanding item on this same subject, about scalability, that will come up, I believe, in the next call. Our next call is on December 1st. Braulio is going to, I believe, be bringing forward some information in the Slack channel for kcp-dev, having to do with questions about, you know, how kcp proper is testing scale for a million workspaces.

The community for edge-mc — for kcp-edge — is also doing testing at scale, so maybe there's an opportunity to combine efforts, or potentially to influence each other and inform each other, to bounce ideas off of each other. So look for that conversation to take place in Slack soon. Any other business that we need to discuss today, from any of the members that are on the call?