From YouTube: KCP-Edge Community Meeting, January 26, 2023
A
Hi everybody, and welcome. Today is the January 26th edition of the SIG KCP-Edge community call. Today we have a few items on the agenda having to do with a public demonstration we'd like to do in the community in the coming months. I'll start off the call.
A
We have a contributor code of conduct: as contributors and maintainers in the CNCF community, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. Basically, just be nice to everybody on the call. I really appreciate you all joining today. I'll go ahead and share the agenda on the screen.
A
And let me get rid of... all right, so let's get started here. First up, we have some discussion that we'd like to take on about issues and PRs and how they are being processed. So Mike, I think you had a vested interest in this; would you like to kick off the conversation?
B
Yeah, it's just standard stuff, and I have a PR, you know, putting a proposed statement in the CONTRIBUTING.md document. So the issues are: one is just this git history style thing that was linked to it. In some sense there's no right and wrong here; each project just needs to choose for itself what's going to work best for it.
B
We may take inspiration from what Kubernetes and the main kcp project are doing. So the issue is: when you make a PR, what does it look like in terms of git commit structure? In the Kubernetes project they enforce the restriction that a PR come from a branch that points to a commit...
B
...that's built from a commit, that's built from a commit, that's built ultimately from something in the main line of development. And it's that simple: just a simple, strict chain of commits that diverges from the main, and ideally it's one commit rather than a long chain.
B
You can have more than one if it makes some sense to maintain a distinction for some reason, but normally it's expected to be one commit that diverges from the main line of development, and that's all. In particular, one of the things that we have done that is forbidden in Kubernetes is to merge the main line of development into the branch that's being submitted in the PR.
B
That's not necessarily wrong; I mean, git supports it. The other reason I think Kubernetes doesn't want that is that it makes each PR in some sense self-sufficient and independent of other PRs. You can understand the development of the main line as a strict serialized sequence of contributions, each of which comes before its successors and after its predecessors, which just makes it simple to understand.
B
I like that, I advocate that, and I think we should choose that. Let's see, another issue is just timeliness of response. Kubernetes, I believe, makes a statement; I don't think it's actually lived up to very well, but I think somewhere there's a statement that each PR gets some level of response within one business day, so I put that in my PR as well.
B
All right, there's this question of approvers, or what level of assent is needed: how many pairs of eyes do we need on something before it gets merged, independent of the contributor's? In OpenStack they insist on two pairs of eyes besides the contributor's; in Kubernetes, I think, the formalities require one pair of eyes.
B
With what we inherited from kcp, we don't seem to have that automatic approval, but we can still do a manual approve of our own work. So we just need to be clear and decide what we're going to do in terms of that part of the policy.
B
So I'm going to stop here and invite further comments.
C
Well, I think at least approval from at least one other person doing the review makes sense to me. I don't know if, at this phase, we need to go as far as OpenStack does and start requiring two people; that seems like too much, indeed.
C
So to me it looks reasonable, and by the way, what kcp does is also one additional person for review, generally, in core kcp.
D
What the basic kcp project is doing, mainly, is that there are two levels: the one that, you know, sets the lgtm flag, and then approvers; and approvers are a limited set of people, according to the OWNERS files inside the repo. So you need to have both the lgtm and the approval in order for the PR to be automatically merged.
E
I think that we should really be dynamic here. We should face the facts: we are currently a small group of people. Some things are desirable but not practical; we cannot have two pairs of eyes, of course, because we don't have enough people. I think everything here is reasonable, and when this community becomes bigger, we may choose to change some of this stuff.
E
But at this point, the regular thing: everyone should of course try to avoid approving their own work, but if it's a simple script, I wouldn't go and enforce that and disallow it, for example, only because we still haven't written a lot of code yet; currently it's just review and approval of configuration things, Makefiles and such. So I think just an lgtm from someone else, with some deep review, and approve.
E
Everybody can approve it, and if it's a simple thing, you can just go ahead and approve it yourself. That's my suggestion.
B
All right. When talking about this, we have to be careful about the word "approve", because it is both a technical term that has a narrow meaning here in the formalities, and a general term that is broader.
B
So when you said "don't approve your own work", I think you meant it in the broader sense. I think we are proposing here that someone who is in the OWNERS can do a /approve on their own contribution.
E
He can, technically. I think that, as time goes on, instead of enforcing two pairs of eyes: if you're working on something very complex and big, you can either try for yourself to get another reviewer, or we could switch, as time goes on, to two lgtms and so on. I'm just saying that at this stage I don't really care.
B
So, if I try to pick apart your words carefully, I think you've been inconsistent, but I think you mean what's said here, which is that we're going to insist on only one other pair of eyes, and that is conveyed through the formalities by the lgtm. We expect contributors to not lgtm their own work.
B
We allow contributors to /approve their own work if they're in OWNERS, of course, which is the only case in which it would be effective. And by the way, it doesn't so much matter whether it's a lot of code: even a little bit of hacking around in the Makefile or the CI scripting is sufficient to break the build, and that's a really big deal. So I do want another pair of eyes even on small, central, foundational stuff.
A
Thank you. All right, so what we can do is leave this as lazy consensus, unless other people have strong objections; not next week, but in two weeks, we'll approve and merge. We'll give people two weeks to absorb the information in the document, and if you have any changes or updates you'd like to make, please do so between now and then, I think. There's also some matter of history style that we'd like to discuss here, so Ezra, you brought it up.
B
I think I did. It's in here, it's in the first paragraph, and I did discuss it already.
B
Yeah, no, actually I think we don't have a good example of this; it's something that comes with a larger volume of code. It's just a matter of dependencies: right now the edge-mc repo depends on a particular commit of the kcp repo, and we're going to need to move that forward from time to time. But every time we do that, it involves all of the dependent code being updated all at once, and as there's more and more of it, that becomes less and less feasible to do.
B
But in some sense that's second order. Testing will help us find mistakes, but even with testing in place, it remains the case. My concern here is that I expect edge-mc to be depending fairly heavily on the kcp repo, and that is moving fast and breaking things. It's still in a zero-point-something release, so there's no great expectation of stability. Normally between repos you expect backward compatibility, but with the rapid pace in the kcp repo there's no promise or expectation of that.
E
So what is the proposal? Do you maybe want to suggest that we stick with a specific official release? Because we are developing, right; we are developing a concept. We don't really depend on very specific last-minute features from kcp, so we can do that, and maybe put some process around what we should do if we want to switch versions, where we'd need to discuss it on the call before we do that.
B
Right. I have a few ideas; certainly the most obvious one is to update infrequently. Right now we're depending on a commit that I picked up last Thursday morning, and it's working for us. I don't think we can afford to stay too far behind, because again, in the kcp repo they're moving fast and breaking things, and they don't have a lot of concern with supporting older stuff.
B
So I expect, since we're going to be depending fairly intimately and heavily on it and needing support, that we will not be able to stay very far behind. So there's a bound on that; I don't know what that bound is going to be. All this is very forward-looking and speculative, so I don't have any sharp answers here, but that's one of the concerns that I have. So, for example, right now:
B
The thing in progress is that big refactoring of the logical cluster work. Once that's done, my understanding is that the plan is that the next thing to do is to rebase on Kube release 1.26. We will probably want to pick that up just for practical reasons, and then we might stop and wait for the v0.11 release.
B
We might break things up into smaller pieces that can have their own go.mod files, so those can be independently updated. Whether those go in different places in one repo or in multiple repos is another question, but that's just another technical thing that can be done to deal with it. End of my thoughts.
E
For this proposal: in order to do this, do you need to do that in kcp, or do you have a solution where you just have multiple go.mods in our repository and you leave kcp as is? Can you do that?
B
What I meant is: yes, in edge-mc we could have multiple parts, each with their own go.mod, and so, when it comes time to update, these different parts of edge-mc could be updated independently.
B
That would depend on us actually being able to break edge-mc into parts that could advance independently; I'm not sure what that would be.
A
Let me pause this here, because I know David's on the call. David, do you have any comments or points of information, like things that you've learned along the way?
D
Not really for now. It's right that we are still at a quite high pace of change, so I'm not sure I have much to say there, apart from maybe the things that were mentioned by Mike. Probably sharding should be added to that, as something that might require some changes, according to how you would want to include sharding into your controllers. So that would also be a next step, I assume, in terms of changing...
D
...you know, adapting to the kcp repo. But apart from that, I completely agree with the idea that splitting into several pieces on the edge-mc side might make it easier to pick up the changes on the kcp side that you only need in some parts of your work.
A
So how do we want to proceed here? Do we want to come up with another document that describes how we should approach this?
B
At this point, we haven't got enough code that this is a serious, big problem. I was just hoping for a good answer to come out of the wisdom of the crowd, but I don't think I've really heard much that we didn't already know. So I think we've just gone as far as we can for now.
D
Yeah, and maybe, by the way, that could be an interesting topic to raise on the kcp community call, as a more general question about how we take into account depending projects. I don't have the answer here, but I surely think that it could be useful to bring that to the kcp community call.
B
I expect the answer will be that it's too early, but if you think it's worth raising, then I'll raise it. Sure, sure.
A
Okay, let's move on to the next order of business. We're out of housekeeping and into next steps. So Mike, before you go to number four here, I'm going to just mention number five: I did open a PR for the creation of a dedicated kcp-edge dev Slack channel. I'm waiting for approvals; not sure if we'll get them. If not, Andy Goldstein has mentioned he's happy to have us pollute the kcp-dev Slack channel with the PRs and issues and approvals cycle that we require, so that's something to fall back on.
A
If we don't get approval here, I'll keep monitoring the progress on this specific PR and update the community as, when, and if information is forthcoming.
A
Any questions about that? Sounds good, okay, all right. Thank you. Next up: we have a date set of March 30th; it's a Thursday, around the same time.
A
That may not be explicitly mentioned with either one of those sub-themes. So with that: I know Mike has been doing some work in this space on trying to draw up an initial architecture, and has some ideas and thoughts I think he wanted to share here today. So I thought we'd use the balance of this meeting to discuss in more detail what the Q1 POC for 2023 looks like for KCP-Edge. Mike?
B
Okay, sure. So I do have a proposal; let me bring it up and share it. Let's see here, if I can do this reasonably.
B
Yes, I plan to get there, but I really want to focus on the goals and the scope first, because it's important to understand that this is just a fragment of what we're overall talking about. It's easy to get overambitious, and I want to not do that. I want to do some experimenting with designing interfaces for the concerns that I think are distinctively edge, and some implementation to get some reality behind that.
B
But it's a limited implementation, and in fact it's limited in some critical ways; it fails on some critical criteria. But I want to start with interfaces that don't build in those failures, and I expect and hope that this will prompt discussion of better implementations, of future revisions of the interfaces, and of what implementation can be shared with TMC as we go forward to better implementations. But that's beyond this initial concept.
B
So let me just go through the points that I think we do and do not want to work on in this first one. One of the things that I think is important in the edge scenario is modularity; it is a key goal. At the very highest level there's a big layering distinction: people often make a distinction between infrastructure or platform versus workload, and I think it's important to follow that.
B
I think a lot of organizations have very idiosyncratic ways of managing their infrastructure, and we need to have modularity so that the workload management is not married to one particular way of managing infrastructure. The workload management should simply read or import the information from the infrastructure; here I talk about it in terms of an inventory: the workload management needs to import an inventory of the infrastructure.
B
The proposal here is to reuse the Location and SyncTarget object types from TMC to represent that inventory.
B
We define our own EdgePlacement object type to direct that propagation. Like in TMC, it references Locations, and implicitly the SyncTargets in those Location objects; so it has selection criteria for Location objects, as in TMC.
B
Unlike some other work, in edge we really are explicitly concerned about a large number of edge clusters, so these interfaces are designed with a large number of edge clusters in mind: thousands to millions.
B
Another thing is our vision for edge: the edge clusters can tolerate intermittent connectivity, and there may be data sovereignty and other sovereignty concerns. For those and other concerns, we want each edge location to be able to operate independently; by location I mean a geographic location, which may in general be a collection of clusters. In TMC, Location is more vague; it can mean a number of things. Here I think we want to specifically purpose Location to mean a geographic location.
B
In general a location can have multiple clusters, but for now we're starting with one cluster per location. Each location operates independently of other locations, so it does not require constant connectivity; it does not require regular or constant communication with service providers or anywhere else.
B
So those are the things that I'm suggesting we do address, and that leaves some important things not addressed. I think for this one I'm willing to not address actually supporting the intermittent connectivity in the implementation; the interfaces are designed for it, and it's okay if we start with an implementation that does not support it.
B
Also, again, the interfaces are designed for a large number of edge clusters; it's okay if the implementation at first does not support that. Another simplification is one SyncTarget per location. I think if we want to bring in a concept of location that really corresponds to things like IBM Cloud Satellite...
B
...that brings in an extra level of organization, and that's important, but just in terms of scoping we'll get to that later. Another really important thing that I believe edge ultimately needs is more than two layers in the hierarchy: more than just center and edge, with some intermediate vertices in between. But again, just to scope this down to something we can do quickly and easily: not yet.
B
Okay, one more limitation, or maybe roadmapping: I'm going to suggest that we start with not transporting non-namespaced objects. That implies we're not transporting custom resource definitions, which implies we're not transporting custom resources defined by user-supplied custom resource definitions. I hope we can get to that; I hope we can take a two-step roadmap here, a first step without that and a second step with it.
B
Or maybe we just redefine it so we could go there immediately; this is open for discussion. It obviously requires a somewhat different implementation; we can talk about how to do that.
B
So now we can go to the overview picture. Yeah, that's a really terrible picture; I have a better view of it here, just a minute, a Google drawing of it. Okay, the key idea is to reduce each edge placement problem to a collection of TMC placement problems, one for each edge cluster involved. I can go through more details, but maybe I should just stop and get feedback on what I've said so far.
C
I didn't test longer disconnects; so far, on reconnect, things still continue to operate as expected. I don't know if this is something that we want to test further or to show in the POC, but it seems to be not an issue for now.
D
Yes, maybe just to give an element: there is an argument, when you start kcp, that allows setting the delay you wait before considering a SyncTarget unreachable or non-ready; that's the heartbeat delay. By default it's about one minute, I think, or something like that. I'm not sure we can completely disable that, so for now you would put that to... maybe, I don't know, but we obviously could.
D
Yes, that's the sibling issue; that's the other part of the question. And there, obviously, this change is down in the syncer: by mutating the deployment on the fly when we put it downstream, you know, by just replacing the environment variables and all this. So here it would be...
D
...quite... you know, it's just a feature I did on top of the normal syncing, so it could be quite easy to just have an option to disable this in the syncer, or maybe have a way to build another syncer that does not add these things. Okay.
D
Yeah, I mean, in this specific regard it might be simple; I don't know. What is harder to answer right now is: will there be, in the TMC syncer, other specificities which are much more specific to TMC, in which case the second option of having a dedicated edge-mc syncer might also make sense. But in any case, I assume that quite a big part of the code could be shared, at least the source.
C
These things are changing very rapidly, so for us, I think, at least in this phase, I wouldn't be for trying to maintain a fork of the syncer, because it goes in pair with what is going on with the virtual workspace: the syncer has to keep working with the virtual workspace that you have. So for now, at least, I think the option of a switch turning this feature on and off will be much easier. But then maybe later on, when things hopefully are more stable and maybe the virtual workspace API is not changing, maybe we can look at...
C
...this other problem. Maybe we have other requirements, so maybe we need our own syncer of sorts, maybe even sharing APIs or libraries that you have right now. I don't think we want to maintain a huge fork of that syncer as it is.
D
No, no, clearly no; surely. I mean, that would possibly be in the future: if we really see that on the syncer side there are incompatible parts of the code due to diverging use cases, then obviously the idea would be, I think, to factor out the code on both sides so that we can optimize the quantity of code that is shared. But not forking; I don't think it would be...
C
For example, I understand that you have these kinds of frameworks, basically transformers, to apply transformations and stuff like that. So I wonder whether that eventually can be made more pluggable, in a way so that, for example, for a manual transformation you don't have to basically copy all the code, but just have to plug in some piece of code there.
D
Yes, yes. I think it's a bit hard for now to foresee exactly what would be required in the future, but from sharing the source code to sharing complete libraries, we have some sort of freedom here to choose as time goes. Okay. And yes, as for the current syncer, there is still some refactoring coming, not small refactoring, in order to introduce sharding support into the syncer.
D
So obviously it would be easier for you to keep using it, for now at least. But I think we can still open an issue on the kcp side to just allow disabling the...
B
...that issue in the kcp repo.
E
Why does it need to be a syncer-wide option? I would argue that it's kind of a per-controller or per-deployment option. I could think even of a single deployment with multiple pods and controllers, where some of those interact with the local API server and some of those want to interact with the kcp API server.
D
Well, I think this has to be discussed, because possibly this might bring some security concerns: if the end user somehow can decide, or hook in, that "I want my deployment coming from kcp to be able to talk with some other components running on the physical cluster", or even to point to the physical cluster API server, that's very impactful in terms of security. So leaving that to the end user without a...
E
Fully agree with you that we need to take into consideration all the security aspects, the permission aspects. Maybe it's something that the physical cluster needs to explicitly allow you to do, and so on. But in any case, for the use case I'm now looking at, a syncer alone will not be enough; that's my only input. If you think it's valuable, we can bring it to the kcp call as well, to see what the plans are for that.
B
Oh yes, right, implement it, sure. The question is, as you guys just discussed, that it brings up some non-trivial issues. So yes, the question in my mind is that we don't know whether there'll be consensus on what that PR should do. All right, let's see. I can go into more details on this design; I did write up a little bit about each of the pieces, so I can go over that.
B
I'm a little unsure what to do with the word "eventually", because I think eventually we want to stop using this architecture. I think we want to eventually stop transforming EdgePlacement into TMC and just directly implement the edge interfaces, whether...
B
Look, I believe in making incremental progress, right? That's why I've only outlined this POC, explicitly expecting that things will change after it, based on what we learn in it. I have only begun to sketch this design; I have not worked out all the details of the design.
B
I expect things will be realized and learned in the course of that, as well as in implementing it and trying to experiment with it, and my plan is to do that learning before outlining the next step. I expect the next step will be different, will not involve transforming to TMC problems, but exactly what it looks like I'm not prepared, and I don't think we should try, to say. I will add one other thing: this use of mailbox...
B
If I may, let me just say one other thing: one of the things I do anticipate is that the next turn of the crank will not involve mailbox workspaces; they are a really inefficient way to deal with stuff. But I may be wrong; maybe we ultimately need them. You know, it looks pretty expensive to me, so I'm not real enthused about it. But I'll stop there.
E
The performance was first of all why I mentioned that; my comment was parallel to this. I'm completely fine with the POC, and I fully embrace the incremental steps and so on. I'm just saying that, in parallel, there's always a question in general about whether we should add something here that allows, you know, not setting the controller to talk to the wrong server. But this is kind of a longer-term thing; it's something that we need for the future.
B
I don't know that in the farther future, after this POC, we'll still have something like a syncer; I think it's an open question how much code can be shared.
E
One point about the mailbox workspaces, though David probably knows about this much better: I looked a little bit at the changes on sharding that are coming in, and for sure we need to look into that, because one of the aspects that we look at is scale, and there are approaches there to address scale even in a single region. Maybe in POC 1.5 or POC 2 we need to see how to leverage the sharding that they are introducing into our model.
B
We need to look, yes, definitely. Sharding is a concern, and it's just for a matter of scoping, and making this something we can complete relatively quickly, that I wanted to put it out of scope of this POC. But I am very concerned about sharding for scale, and I am happy to pursue it even concurrently; I don't really want to put off thinking about sharding for scale for a quarter. But I do want to scope it as work independent from this POC.
D
Sorry, I didn't hear that you were speaking to me. Part of it is already there, and current work is mainly on updating all the end-to-end tests as well, so I think in the next few weeks things will clear up.
D
Yeah, surely. And it's quite interesting to go into the various types of infrastructure deployment topologies, because you can have a number of shards which are hidden behind the same external URL, mainly through the front proxy, and then effectively this becomes sharding for scaling. Not now, but in the future, the idea is to be able to move even a workspace from one shard to another, and it would be possible as long as the shards are...
A
Taking it back: thank you for yielding, I appreciate that. So folks, in the next two weeks I expect to see a flurry of activity around issues and PRs related to this POC that's been put out there for discussion and comment, and I look forward to reviewing those next time we meet, in two weeks' time. We also should have some feedback.
A
We've got some things cooking on the side from the point of view of observability, and another use case that has come to us for a hierarchical control plane, which begs the case for a multi-tiered hierarchy and different places for command and control and for viewing status summarization within it. That would be, of course, post the Q1 POC, but we should start lining up those discussions, so I expect to bring them to the table.
A
So that's the development; we'll continue to track that. Thanks, everybody, for joining the call, and we'll talk to you again in two weeks.