From YouTube: KCP-Edge Community Meeting, November 3, 2022
A: Hi everybody, and welcome to the inaugural meeting, the first ever, of the kcp Edge community. Thank you all for joining; we're hoping to have a very vibrant, thriving community started as a result of this. We're focused here on edge and multi-cluster related use cases, and we're basing most of what we're working on today on the kcp technology that's come about in the open source community. We feel that's a strong basis to start from, and we'd like to continue to embellish and augment that as much as possible, for purposes of upstreaming to kcp; and, you know, if in the future it gets absorbed into Kubernetes, we wouldn't object to that either.
A: In today's meeting we'll be discussing kcp Edge, its inception, and its driving concerns; that will come in the bottom half of the hour. But up front I wanted to get some feedback from everybody here on the call on some basic logistics. Once these are out of the way, I don't suspect these will be recurring topics.
A: Then we can get to the meat and the subject matter that we'd like to cover. So, first up in logistics, we have the name, or the modifier, "SIG" in our title, for SIG kcp Edge. I wanted to know if that's something we feel is acceptable, because it connotes that we're part of the Kubernetes community, or if we should drop the "SIG" tagline and just default to "the kcp Edge community".
B: SIG just stands for "special interest group", so I don't feel like there's a problem including that as part of the name; just my two cents.

A: Okay.
A: So let's put it to a vote. By raise of hand, does anybody object to using SIG in our title?
A: All right, ratified and approved; we'll continue to use the term SIG in our title. Thank you. Okay. Secondly, and these are a little bit more involved: the GitHub location for code. We wanted to know, since there could be the potential for several different efforts ongoing simultaneously, including the work that we'll be talking about with EMC, with sharding and all that comes with EMC, and potentially schedulers, different syncers, etc. So I wanted to get a feeling for the community's suggestion here in terms of GitHub.
A: All right, thanks, Ezra, for the questions. Andy has his hand up.
B: Yeah, so we're going to create a repo called TMC for the transparent multi-cluster bit. I wouldn't call what you're doing with edge "EMC", just given the company name; I would just call it "edge" or something like that. And so we'll do another repo for this as well.
D: I don't know; I don't think we need to take a position on that. I think the obvious choice is another repo under kcp-dev, and I think "edge-mc" would be a great choice.
C: So we can vote on the name, but I think the bottom line about the repositories is that we can open a repository under kcp-dev for the edge lane, right? Is that what you meant, Andy? Yes? Okay, thank you.
A: Ezra, is your hand still raised, or are you done? Okay, great, thank you. All right. So, if I've captured that correctly, TMC is going to be carved out of the kcp repo, and an edge repo is to be started; at which time the edge repo is started, I'll start putting these agendas and the epics in that repo. That'd be great. Okay, any other comments on the GitHub location? Okay, all right. Slack location: so, we know there's a #kcp-dev.
A: There's a kcp-dev Google group that we were asked to use to remind others, and to add to this email distribution for this meeting, which I think worked very effectively. So thank you very much for lending us that. Can we continue to use it, and simply qualify with "kcp Edge", for those of us that will be sending out messages on our chairs' behalf?
B: Same response for the Slack conversations: like the mailing list, it's not high traffic right now, so we'll split if we need to.
A: Excellent, we appreciate that. Thank you. All right, the logistics business, I believe, is concluded, so we can turn our attention over to the driving concerns for edge. And with that, I'll turn the floor over to Mike, Ezra, and Paolo to give us an overview of the driving concerns. If any of the three of you would like to share the document that you've prepared for this presentation... I think it'd be good if you shared the document.
D: I can bring it up, yeah; I'll do that. Also, by the way, I don't have permissions to create repos under kcp-dev.
D: All right. So, you know, we tried to organize some thoughts and shared them with the community, and we've started to have some discussion already. I tried to organize this as some driving concerns, and then there are consequences; there are really a lot of consequences to tease out, but I think it's important to recognize that there really are some fundamental differences that are driving this. And, you know, I'm still learning about transparent multi-cluster; I'm not that close to it, and I don't fully understand it.
D: So I'm just going to kind of use that as a springboard. Really, I think the important thing is just to be clear about what we mean by edge computing, so I might call out some differences that are not really differences. But anyway, here we go. I'm not quite sure I really got the right phrasing or framing of all these things, but my impression of transparent multi-cluster...
D: ...is that it really focuses on a scenario of supporting a user who basically wants to use a Kubernetes-as-a-service, and they're developing a containerized workload, and that's all well and good. In edge computing, customers, of course, have the full-stack problem; they own it. Typically, you know, the customer is an enterprise that brings edge locations.
D: So there's kind of a different slice on it: the customer, in some sense, in the first instance, owns the full-stack problem.
D: Another is that the edge locations may be small in terms of the compute, network, storage, and memory, that kind of underlying resource involved.
D: In fact, you know, when you talk about edge, people will also often put IoT in the same breath, and IoT really stresses a kind of small device.
D: I think here we don't want to go all the way to the smallest possible device. I think that for edge multi-cluster we want to terminate at the cluster point of view and say: okay, we're going to say that each edge location can run a cluster. It might be a small single-node Kubernetes cluster, but it's still a cluster. In TMC, I believe there is, again because of the focus on Kubernetes-as-a-service...
D: ...the idea, and I may be getting this somewhat wrong, is that the user focuses their attention on one or a few namespaces. I noticed, you know, the Placement, for example, refers to namespaces. Again, in the edge scenarios it's really a full-stack problem, and there may be non-namespaced resources that are important here too.
D: So, yeah, okay, anyway, let's move on. As I said earlier, customers bring their own business and physical hierarchy; it's not like we are going to design a service that works for all the customers. We might be able to work on some central thing, some central service, that some customers can use. A lot of customers actually have, for various sovereignty issues, you know, reasons they won't want to use a service; they'll want something on-prem all the way.
D: Some may be able to use a shared, remote central service. Customers doing edge computing generally have a more complicated arrangement of roles and responsibilities. In TMC there's kind of a focus on a DevOps sort of scenario, where the primary user is developing and operating something; but in edge computing it tends to be, not always, but it can be, much more complex.
D: Where there's, maybe, you know, an auto manufacturer or a chemical refiner over here, they have many plants. There may still be a corporate engineering team that's doing some level of central engineering, in some sense at the top of the pyramid, but there are also plant managers that have authority and need to be involved in decisions about what happens in their plant.
D: You know, think of your laptop or your phone; smartphones are, you know, really common.
D: Think of all the things that are involved if you've got an iPhone that you use for work, right? Apple's got some authority; you probably bought that phone from a telco like Verizon, and they have some authority; your employer has some authority; and you have some authority. So, you know, we often have these more articulated, complicated scenarios. And now we get to one of the really important driving concerns: the number of edge locations can be large.
D: It may not be; in fact, there are a lot of customers that don't have a lot. But we do aim to ultimately cover cases where there are a lot. Again, if you think about, say, cars or cell phones, you quickly get to very large numbers.
D: We need to crawl and walk before we run, but that is part of the vision. All right. Now, also coming with that multiplicity, the next concern: in TMC, each Placement object directs one copy from the hub workspace to some physical cluster ("pcluster") in the edge, and we wouldn't want to have to create a placement object for each edge destination or edge location.
D: I also want to say that an edge location may have more than one cluster; I'm willing to draw a line that says it has at least one cluster. But anyway, we wouldn't want to say that the higher-level stuff, whether it be administrative users or higher-level automation, has to directly maintain a placement object for every edge destination that they want something to go to. You want a more compact statement.
D: You notice, for example, in transparent multi-cluster, the Placement has a predicate that says what locations are acceptable, and the semantic is "pick one of them". Here you want something more like "pick all of them": you want to be able to say, compactly, "this is all the places I want this stuff copied to". Now, that has some knock-on implications that, you know, TMC does not attempt to address. One of them is the status.
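As a rough sketch of the "pick all of them" placement described above, an edge placement object might look something like the following; the API group, kind, and every field name here are hypothetical illustrations, not an actual kcp API:

```yaml
# Hypothetical sketch only: group, kind, and fields are assumptions.
apiVersion: edge.kcp.dev/v1alpha1
kind: EdgePlacement
metadata:
  name: store-rollout
spec:
  # Predicate over edge locations. Unlike the TMC Placement, the
  # intended semantic is "distribute to ALL matching locations",
  # not "pick one of them".
  locationSelector:
    matchLabels:
      env: production
  # What to distribute: not limited to a namespace, since edge is
  # a full-stack problem and may include non-namespaced resources.
  downsync:
    - namespaces: ["store-app"]
    - apiGroup: rbac.authorization.k8s.io
      resources: ["clusterroles"]   # cluster-scoped example
```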
D: Obviously, each destination has its own status that needs to get reported back somehow, and the standard status sections of the standard object types are not intended to represent status from many copies; they're intended to represent the status from one copy. So there's a semantic mismatch there.
D: Also, again, the higher-level stuff typically doesn't want to just deal with, you know, thousands of individual statuses; it wants to deal with a summary of some form. So there's an additional issue of defining how to do the summarization and how to represent the summaries.
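To picture the summarization problem, a summarized status might report aggregate counts instead of thousands of per-copy status sections; this shape is purely illustrative, not a defined API:

```yaml
# Hypothetical sketch only: a possible shape for summarized status.
status:
  matchedDestinations: 4200   # edge locations selected
  synced: 4100                # copies successfully applied
  available: 4050             # copies reporting healthy
  failed: 37                  # copies in error; per-destination
                              # detail would have to live elsewhere
```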
D: Also, there's a more subtle but, I think, really important thing that comes out on the spec side, which is, again, with this one-to-many distribution that you often find in edge scenarios: the customer wants to do a bit of customization for each edge destination, so the copies need to vary a little bit and be personalized, or individualized, to the destination. So those come out of there.
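One way to picture the per-destination customization requirement is a single object in the center whose fields get filled in differently for each destination at distribution time; the templating syntax and references here are illustrative assumptions only:

```yaml
# Hypothetical sketch only: one template in the center, many
# individualized copies at the edge.
apiVersion: v1
kind: ConfigMap
metadata:
  name: store-config
data:
  # Resolved separately for each edge destination, so one object
  # in the center yields a slightly different copy per location.
  storeID: "{{ .location.name }}"
  region: "{{ .location.labels.region }}"
```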
D: Now, when you combine those two previous issues, you get a lot more things that come out. When you deal with the fact that there's this one-to-many distribution, and the "many" is large, some of the solutions that would work if the many were small are just completely unworkable, and you have to deal with some additional issues. So I try to list them here. As I mentioned, one of them is the need for a compact representation of where the customer wants...
D: ...one thing from the center to go to, and I kind of outlined my thought. Basically, it's a takeoff on the TMC Placement: we can make an edge placement that is a lot like the TMC Placement, but the meaning, the interpretation, of the predicate is "pick all", not "pick one", of the things that satisfy it.
D: But again, when we have a large number of destinations, that becomes unworkable. The customers really need a way to describe, with one pattern, how something from the center is to be customized for each of the many destinations that it goes to.
D: Let's see. Again, the summarization of status becomes critical; nothing higher-level wants to deal with thousands of individual statuses in the first instance. Of course, when there are issues, problems, typically there will be some need for an ability to (a) find which destinations have trouble and (b) investigate, so there often is a desire to have this somewhere in the center, as well as to have a summary in the center.
D: Another thing that comes when you have this compact description, saying one copy in the center gets distributed to many destinations, is that you don't necessarily want all those copies to change at the same time. Particularly when you're dealing with changes, you'll often want to do some kind of, you know, rolling update, and this is where things like canary testing and blue-green testing appear, which enterprises are typically doing.
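The rollout-control concern could surface as fields that limit how many edge copies change at once, and which ones change first; again, these field names are hypothetical, not a proposed API:

```yaml
# Hypothetical sketch only: rollout controls for a one-to-many
# edge distribution.
spec:
  rollout:
    strategy: Rolling
    maxUnavailable: "5%"      # cap on destinations updating at once
    canary:
      locationSelector:       # destinations that update first
        matchLabels:
          ring: canary
```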
D: So there's a need to, or this just brings in the issue of, controlling that rollout. And then, also a little more subtly, and this is not a matter of the interface to the higher levels, but just in comparing with TMC: as most of you are probably aware, in TMC there's a distinction between the scheduler and a syncer, and the way they communicate is with labels on namespaces. So, for each destination that a namespace should get propagated to...
D: ...there is a label put on that namespace, and that works well when there are a few. But if you get to the thousands, that just is unworkable, so we need a different sort of interface between scheduling and syncing, if we even maintain that distinction. And I think it's plausible to maintain that distinction, because it's a useful kind of modularity, but we need a new kind of interface there.
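For context, the TMC scheduler-to-syncer interface described above amounts to one label per destination on the namespace to be propagated, roughly like the following (the exact label key is an approximation of kcp's convention at the time):

```yaml
# Approximate illustration of the TMC mechanism: one label per
# destination. With thousands of destinations this would mean
# thousands of labels on each namespace, which is what makes it
# unworkable at edge scale.
apiVersion: v1
kind: Namespace
metadata:
  name: store-app
  labels:
    state.workload.kcp.dev/cluster-east-1: Sync
    state.workload.kcp.dev/cluster-west-2: Sync
```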
D: The next concern is operation independent of the center. One motivation for this is the connectivity: the networking may not be consistently good. Sometimes it's absent, sometimes it's slow; in some scenarios it is sometimes slow and sometimes faster, sometimes present, sometimes absent. Generally, in edge computing, you can't make the assumption that there's usually good connectivity between edge and center, or between edge locations.
D: There are other motivations for this too, for example data sovereignty, or, you know, general regulatory concerns. So I think the driving desire here is to enable each edge location to be able to operate independently, and again, it's a driving concern. This has some implications: the health checking that's in TMC is not appropriate, because it assumes that the connectivity is usually there. And also, TMC wires each container in a pcluster so that, when it talks to an API server...
D: ...it's talking to the API server back in the hub cluster, and that's not what we want in edge. We want all the communication local within an edge location. So I'm going to stop there; I've probably gone on way too long, but that's just a brief, zoomed-out overview.
D: And I did make some specific API proposals for some of this, so people can look at those; I shared them with the kcp-dev mailing list. Anyway, I'm going to shut up and take remarks.
C: No, I think that all kind of boils down, I see it as kind of an incentive for why we think we indeed probably need a completely different layer, right? We need a different syncer, a different scheduler, different placement and policy, and so on. We already discussed it last call, and when we go to each of those, we will probably need to come up, with this community, with a more specific requirement list, right?
C: What do we require from the syncer in that case? What exactly do we want to sync, if we take into account that we are disconnected, and so on? So I see that list as kind of an initial requirement list, plus an incentive, in case someone actually thought that we might be able to go with the existing TMC components. So this is kind of a lot of the reasons why we published it, and I think...
E: I agree with this. Probably what we really want to do is to somehow boil this down to a sort of set of requirements that really impact the existing pieces, so you can see actually what we need to add to build the new edge multi-cluster service. So we'll probably have a parallel set of components: I can imagine that we're going to have a scheduler for edge, and we're going to have a syncer, maybe; I don't know.
E: Maybe we can reuse the existing syncer, or modify it for what we need to do here, but it will probably be a different syncer, or maybe a configurable syncer. And then, of course, we're going to have maybe this new set of APIs for placement and so on, because of course the behavior is different. I think we discussed already that we probably don't want to use the existing Placement, for example, to deal with edge, because the behavior will be different, and so maybe it doesn't make sense to try to extend that.
E: We really want to have a list of these components, and what each one of them does, in more detail, before we start implementing them.
C: You mentioned that we can start with just copying, kind of duplicating, the TMC there, and starting from there as our layer. I think the main question along the whole work will be: what are the required changes in the core kcp layer, if any, and how to try to minimize them. And for that we need to interact with the kcp people. But, you know, any specific comments from Andy, Stefan, someone? I think it's high level currently, right? Not there yet.
B: Yeah, I mean, I'd say at this point it's kind of wait and see. As there are more explorations that the group does, if you run into problems with the core of kcp, we'll see what changes may need to be made at that point. But it's really hard for me to say "yes, I foresee X changes" right now, right?
E: A point we didn't touch on: we talked about scale, right? So at some point, I think, we also need to think about how we start some sort of testing for scale. I don't know how much of this kind of testing was done so far, at least for the TMC.
E: For example, I know that the goal is to have a lot of workspaces; we're talking about a million, potentially, with sharding and so on. But we will need, at some point, to be able to have a larger number of physical clusters, or maybe a way to simulate these physical clusters, so that we can also connect, to a set of sharded kcp API servers, a large number of syncers, with these syncers connected to those physical clusters or edge clusters. So I wonder if there was any effort so far in the community to try to somehow simulate a large number of syncers, for example, to do some kind of stress testing on the kcp side of things.
C: Let me just add to that even another question, because this is something we were planning to work on even on the normal side, without the TMC, right? I was looking at the code, and the initial promise was, you know, workspaces are lightweight, close to zero cost; you can open millions of them quickly. But as time goes on...
A: I remember Dave Festel brought this up in another call that we had, I believe with a smaller group of folks from the community, about using workspace-specific controllers, schedulers, etc.
F: There's a big interest to do it. We are at a stage where you must know what is just not implemented, so there's not much value in just blindly testing what we have. Like the sharding work: it's very much work in progress; we're getting nearer to something, but we still have steps in front of us. Testing workspace creation is something where, yeah, just try it and find out where the problems are, and I think everybody is happy to see improvements.
E: And, in any case, it looks like there is value in doing some kind of stress testing to find bottlenecks and issues, which of course also helps to fix them. But in order to find them, we still need to provide this kind of stress testing, or large-scale testing, in some way, right?
E: And also, I assume that, especially for edge, we'll end up having potentially a large number of physical clusters connected to at least one shard, or multiple shards. And in that case, I don't know, that's also a scenario that may be slightly different from TMC, where we may have a lot of syncers somehow hitting the same API server instance, and that's something we need to figure out: how that will behave in that case.
D: Yeah, part of the question here for testing, I think, is: are we talking about a test that somebody runs once, or is there some CI framework to contribute this to, so that it's run regularly?
D: So, in Kubernetes there are a few different kinds of tests. There are some tests that are run on every PR; of those, some are required to pass and some are not required to pass. There are other tests that are run periodically, not on a PR basis, and I think there are some that are not even in that category.
D: So, you know, that's part of the question here: what's the story, and what's the prognosis for the coming story, in kcp for these things? And the other part of the question is the infrastructure to run the tests on. It's one thing to have a Makefile target, but what are we assuming runs that? And does that Makefile target, or whatever, just use the one node it runs on, or can it spin up, you know, VMs or whatever, to make a large system?
E: Right, because all we want to stress is the interaction between, I mean, the syncer itself and the kcp API server at the center; that's what we're really stressing. We are not trying to do anything really real on those physical clusters, except maybe deploying, or simulating the deployment of, some resources there.
E: The idea was to see if we can come up with some kind of framework, or some simulated syncer or pseudo-syncer. I think we did this in our own internal PoC: we actually have these simulated agents, and we can run a lot of them, and they put stress on the center, and that allowed us to figure out what is going on there. And I think Mike made a lot of interesting findings there, finding bottlenecks and performance issues and stuff like that.
E: But, I mean, it can be a sync, it can be even a deployment; but the target doesn't need to necessarily react. It doesn't need to have a...
D: I think there's a range of possibilities here, right? I mean, we all looked at a blog post from Rancher a few years ago, where they went through basically two levels of simulation to get to their most extreme scale. You know, they did not do the real thing out at the periphery, away from the center, at all.
D: They had something that just simulated the behavior of the peripheral thing; before that, though, they did something that was a little more real. And I think we will probably want to end up doing both: one for really high fidelity, and one for really high scale.
D: And I think, you know, we will also need our own placement data type, you know, from the start, and modify it to what we need.
E: There are a few other resources also that are relevant to TMC, right? There is the scheduler part, and there are also API definitions, like the current Placement and Location, for example. Maybe we're going to reuse Location; we don't know about the Placement; that is something that maybe...
B: Somewhere, yeah. So, right now it's on a case-by-case basis, because there's no easy, deterministic, repeatable way to identify what belongs to what workspace versus physical cluster. And how do you resolve potential conflicts if you've got two cluster-scoped things that you're trying to sync down to the same physical cluster?
B: So we know that we want to do it with persistent volumes, and there are design efforts to make that work between workspaces and physical clusters. I think we would need similar use cases and R&D to figure out what other cluster-scoped things we might be syncing back and forth, rather than trying to open it up to everything.
F: It's not really about edge; I mean, I was mentioning that in, I think, two talks, but there's nothing concrete about that.
F: In general, I think we had good feedback from many companies, even big ones from the cloud business, and it really helped to show kcp not as TMC, like this bundle of things, and also not with hierarchy, but really as a very generic tool which can be used for many, many things. And we heard from a couple of people about just moving out compute and replacing it with something else, like this edge topic you are working on, or something else; that resonated, also in the multi-cluster area.
F: When we ask them independently, "What do you think kcp is, and what does it bring to you?", we get different things, like completely different things. That's why we generalized the discussion into this. So I use this kcp-machine metaphor, right? It's like a Turing machine, but it's kcp: it's distributed, and it has controllers, so there's controller logic for the workloads.
F: That was the style we presented it in. And then, basically, the task for the audience was: if you have this distributed thing, scalable and everything, with bindings, of course, in workspaces, what can you build? That was a question to make people think about what their use cases are.
A: Okay. And I think any of the other topics we could choose to cover at this point would be too big in nature to fit within the time we have left. So if anybody has any closing remarks they'd like to make... otherwise, I'd like to close the meeting. Anybody have any remarks they'd like to make?
A: Okay, folks. So thank you, Andy and Stefan, for creating the repository for us: edge-mc. Wait...
A: Yeah, sure, certainly, yes, I was going to remark on that. So, we have a bi-weekly meeting cadence that we've established; it'll be every other Thursday, and the next meeting will be on the 17th. And so, Mike, just as you were making that comment, I was leaving it in the chat.
A: There is an issue link for the new repository, with issue number three. So I've copied over issue one, which was the original epic; issue two, which was today's meeting agenda; and issue three. Feel free to add any topics or discussions you'd like to put there, Andy. As a note, I'll be closing out the two tickets that I have residual in kcp-dev, or in the kcp repo.
B: So I'm guessing you'll get the recordings, Andy, and want to post them up on YouTube. Let me send you a link to get you access to our YouTube channel. We can either create a new one just for edge discussions, or we can just put them in the channel that we have for kcp community meetings; either is fine with me. Up to you.
B: All right, let me... I'll DM you a link to get in there.
A: Thank you, everybody, for your attendance, and I look forward to hearing from you again in two weeks. Please contact the team on Slack if you need to discuss any subject matter between now and then. Thank you.