From YouTube: Community Meeting, October 11, 2022
A
B
Totally cool. I think the recording's on and we can go ahead and start.
B
Oh my God, okay! We're up there. Awesome, perfect, awesome. Thanks everyone for coming. This is the kcp community meeting for October. Just a reminder: we are recording.
B
So, let's see. Maybe you can start off with your comment on the reverse permission claims.
A
This is actually a whole presentation with like 12 slides — or 15 slides, even. If that's okay, we can start with that. Oh no, it's not public and I have to set up the permissions; I will do so as a follow-up. Is it okay if I start with the presentation straight away?
A
To talk about reverse permission claims — and hello everybody, my name is Serge, by the way, for those who don't know me. I'm just a person interested in Kubernetes, and especially nowadays kcp, and currently focusing on the authorization subsystem in kcp, for the folks who are sort of new on this channel. And I would like to talk about a topic that is called reverse permission claims. I don't know if this is a good term; if it's not, let us know and we'll change everything on the fly. Naming is hard.
A
So what are we talking about? Probably many of you already know the concept of API exports and, obviously, API bindings. That is, you have some API — some resource schema — which you want to be available in other workspaces, and what you usually do as a service provider is define a resource that is called APIExport to export the thing, in this case.
A
You know, stuff. So that should be a little bit familiar, at least for the core maintainers. So, given an APIExport that wants to export this resource type — sheriffs.wildwest.dev, or whichever group it's declared in here — what we can do today, what you can do already today, is declare so-called permission claims inside this spec. And what is a permission claim? A permission claim is the intent of the service provider to have access to some other resources next to the one that is being exported.
A
If any one of you knows things like the OAuth flow: whenever you authorize, I don't know, GitHub, Google or whatnot, there are usually questions being asked like "do you accept that your name is being queried", or some other metadata about your account, on behalf of the claimer. And the idea here is sort of similar, right? So you set a list of resources in the export which you, as a service provider,
A
would like to access. The canonical example being: imagine you're a database provider and you're exporting, I don't know, a database resource which users can then create, and you provision a database for them. And you claim a secret — or Secrets — inside that APIExport, which you then, you know, after the database is provisioned, create on their behalf: you create a secret in the workspace of the user, and thus Secrets as a resource type is explicitly mentioned here as a claim. What is missing today is sort of fine-grained settings.
A
Today, what you can do is claim a whole resource — all or nothing, nothing else. So we need a little bit more fine-grained permissions here, and the proposal is to set verbs on the resources that you are claiming.
A
Why is it called reverse? I will come to it in a minute. It's because usually, when you claim a permission to a concrete resource as a service provider, you want to have access to it. Given the example I just mentioned a minute earlier, you want to have access also to Secrets, in order to provision a secret so the user can access the database that is being provisioned by the service provider. You also want to restrict the consumer, potentially, from overriding that secret, right?
A
You want to have some control, as a service provider, over the resources that you claim; but also, on the other hand, you want to restrict the consumer of the API from messing around with those resources.
A
So the proposal here is to add two lists of verbs, not just the one you usually know from Kubernetes when you create Roles and RoleBindings. One is called claimed, and that declares the list of verbs that the service provider claims to have — when, you know, the service provider in this case wants to create, delete, write and so on, Secrets — but restricts the consumer of that APIExport to just be able to read those secrets, all right.
A
So this is exactly what the proposal here is. And concretely, when it comes to the exact semantics of these lists: this is allow-list semantics, right? So exactly like Kubernetes, what you know from Roles and RoleBindings — whatever you have in this set of verbs here,
A
whatever is in the claimed set for the service provider on the one hand, and whatever is in the restrictTo set for the user on the other, is allowed to be executed, right? And exactly like Kubernetes, star means all verbs are allowed; in this case, only get is allowed. Obviously, the verbs are arbitrary, and if it's an empty list, no verbs are allowed. Note that this deviates a little bit from Kubernetes: you cannot have an empty list of verbs in Role or ClusterRole definitions.
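The allow-list semantics just described can be sketched in a few lines. This is a hypothetical illustration, not kcp code: the field names `claimed` and `restrictTo` follow the talk, and the star and empty-list rules are as stated above.

```python
# Sketch of allow-list verb semantics for a permission claim: "claimed" is
# what the service provider may do, "restrictTo" is what consumers may do.
# "*" matches every verb (as in Kubernetes RBAC); an empty list allows nothing.

def verb_allowed(requested_verb: str, verb_list: list[str]) -> bool:
    """Return True if the requested verb is permitted by the allow list."""
    if "*" in verb_list:
        return True  # star matches every verb
    return requested_verb in verb_list

# Example claim: provider may do everything, consumers may only read.
claim = {"resource": "secrets", "claimed": ["*"], "restrictTo": ["get"]}

assert verb_allowed("create", claim["claimed"])         # provider: allowed
assert verb_allowed("get", claim["restrictTo"])         # consumer read: allowed
assert not verb_allowed("create", claim["restrictTo"])  # consumer write: denied
assert not verb_allowed("get", [])                      # empty list: nothing allowed
```

The empty-list case is exactly the deviation from Kubernetes mentioned above: here an empty list is legal and simply allows nothing.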
A
In this case, we are allowing it, because we don't have a separate resource type for declaring those verbs. So, tl;dr: claimed is the set of verbs the service provider is allowed to do; restrictTo is what the users, or consumers, are allowed to do. Questions so far on the general proposal?
A
Nope? Okay, cool, so next slide then: how would the usage pattern look? So for this, given the get example: if you have an APIExport which restricts to just get verbs on Secrets, and a user tries to access, you know, a secret in a workspace where the user declared a binding towards sheriffs, that will work, but a post will fail, right. So in this case, restrictTo constrains consumers of API bindings to get verbs on Secrets.
A
Obviously, what you see in here: if we just leave it as is, we have a potential problem — we lock the access to all secrets for consumers, in all of the namespaces inside the consumer's workspace. So here comes another twist in the proposal; we need a little bit more. Namely — yeah, this is the example.
A
If the user tries to access another secret, bar, instead of the foo secret, then due to the verb list here in the permission claim, access will be forbidden. So we need a little bit more, and therefore the proposal is to add one more field to the permission claims: namely, the resource name that you're trying to restrict access to.
A
In this case, obviously, if the user tries to update the other secret, bar: since the claim only refers to a resource name foo of Secrets — with the canonical syntax that we know from Kubernetes, namespace slash resource name — the consumer of the API can still edit other secrets in other namespaces.
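The namespace/name restriction just described can be sketched as follows. This is a hypothetical illustration: the behavior of an empty name list (claim covers the whole resource) is an assumption based on the all-or-nothing claims described earlier in the talk.

```python
# Sketch of the "namespace/name" resource-name restriction: a claim that
# names only "default/foo" covers that one secret, leaving secrets with
# other names or in other namespaces outside the claim.

def claim_applies(claim_names: list[str], namespace: str, name: str) -> bool:
    """True if the permission claim covers this concrete object."""
    if not claim_names:
        return True  # no names listed: the claim covers the whole resource
    return f"{namespace}/{name}" in claim_names

names = ["default/foo"]
assert claim_applies(names, "default", "foo")
assert not claim_applies(names, "default", "bar")  # other secret: not covered
assert not claim_applies(names, "prod", "foo")     # other namespace: not covered
```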
A
C
A
Yeah, totally, I mean that's a good proposal. I just, you know, shot for the resource name being enough, potentially having redundant permission claim entries. But that sounds like a good idea, yeah.
D
So we are also pretty free in how we implement this. Everything is controller- and admission-based — those are the mechanisms we have today — so everything we can map per permission claim we can implement: label selectors, as you say, JSONPath-based things, relations to other objects. So be creative: you have to find use cases, and we implement solutions for them.
A
Steve, or — I believe Mike's question was first, I don't remember. Mike, you go first; Steve, you go next then, yeah.
E
Actually, I don't quite understand the problem. Back on slide four you said something about locking. I assume what you mean here is that these permission claims get added to the RBAC — and RBAC is additive — so all we're saying is that you have failed to give permission to the consumer to read other secrets, which is exactly your intent.
A
I will come to the deadlock in a minute. The thing is, I think what you're speaking about is the RBAC within workspaces, right? The regular Kubernetes RBAC.
E
Yeah — I assume. You didn't actually say it, but maybe you should just check my assumption: I was assuming that these permission claims get added, either explicitly or implicitly, to the regular RBAC in the consumer workspace. Yes?
A
The thing is, the reason why we are doing it on the APIExport level is that, you know, we would have to update potentially millions of workspaces with RBAC definitions to reflect what we declared in the APIExport, right? That would be an extremely expensive operation when it comes to the implementation. What will rather happen is that there will be a concrete authorizer — literally an authorizer inside the existing authorizer chain in kcp — that will, you know, assert:
A
you know, the user is accessing a workspace — does it access a resource which has an APIBinding? Does the APIBinding have any permission claim restrictions set? If so, you know, check if the request…
E
D
E
D
No, no, yeah — as a normal authorizer, like the existing ones. But this is different: it's a conjunction between both. Yes, it's not.
A
E
Okay, yeah, maybe I'm just grossly confused, because in regular Kubernetes there is a chain of authorizers that can say allow, deny, or no decision, and the convention is that each one actually says either allow or no decision, and so in practice it becomes a union.
A
E
D
We already have similar cases — I think it's in the docs, we don't have to go into details here. There are already a couple of authorizers which are chained, but in the conjunctive way — access to the workspace is one of them — and this is already the case. It could even be done in Kube; it's just exposed in code, you can do that in Kube as well.
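The distinction being drawn here — a union-style chain where the first opinionated authorizer wins, versus a conjunctive chain where every authorizer must allow — can be sketched roughly as below. This is a simplified model for illustration, not the actual Kubernetes or kcp authorizer interfaces.

```python
# Sketch of two authorizer-chaining styles: union (regular Kubernetes: first
# allow/deny wins, "no opinion" falls through) vs. conjunctive (kcp-style
# for permission claims: every authorizer in the chain must allow).

ALLOW, DENY, NO_OPINION = "allow", "deny", "no-opinion"

def union_chain(authorizers, request):
    for authz in authorizers:
        decision = authz(request)
        if decision in (ALLOW, DENY):
            return decision  # first opinionated authorizer wins
    return DENY  # nobody had an opinion

def conjunctive_chain(authorizers, request):
    for authz in authorizers:
        if authz(request) != ALLOW:
            return DENY  # a single non-allow vetoes the request
    return ALLOW

rbac = lambda r: ALLOW  # workspace RBAC would allow this user everything
claims = lambda r: ALLOW if r["verb"] == "get" else NO_OPINION  # claim check

read, write = {"verb": "get"}, {"verb": "create"}
assert union_chain([rbac, claims], write) == ALLOW        # union: RBAC wins
assert conjunctive_chain([rbac, claims], write) == DENY   # conjunction: vetoed
assert conjunctive_chain([rbac, claims], read) == ALLOW
```

This is why a permission-claim restriction cannot be expressed as plain additive RBAC: it has to be able to veto a request that workspace RBAC would otherwise allow.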
B
And critically, too: you know, this also fundamentally can't be implemented through delegated RBAC objects, because if you have admin on the workspace, you can do whatever you want, and that is expressly not what this is about. I guess, Serge — I just linked to an issue in the chat. We have one open for different types of selector behaviors. I think if we just, you know, leave it open for other things in the future, that'd be great, but resource name is a great place to start.
A
E
C
A
All right, any more questions? Right, if not, I would continue. The next set of things is on the service provider side. So obviously, service providers access things using the so-called virtual APIExport API server — dear maintainers, we need some shortcut for this thing, because I think it needs some abbreviation anyway. What you usually do here: there is this concept of virtual API servers, and you access the service via — oh God, yeah — apiserver, VW, yeah.
A
That's also fine — with the /services/apiexport prefix on requests. And this is the canonical way for service providers to access,
A
you know, concrete resources in the workspaces that are being exported. And this is exactly the place where the claimed set of verbs starts to function, right? In this case, if the service provider accesses the very same secret via the virtual API server — the virtual APIExport API server — it has permissions to execute all verbs, so access will be permitted.
A
We have a problem, though. Imagine that in the very same consumer workspace we have another API binding, cowboys — also a term taken from the e2e tests, sorry about that, I'm not very creative here. And that cowboys APIBinding also has a permission claim towards Secrets, and it restricts, for instance, to get operations. So this needs to be considered also in this case, even if the service provider claimed — oh yeah, thanks Stefan — claimed all permissions on the sheriffs APIExport, right.
A
Obviously, we have sort of a deadlock situation here, and the solution, again, is resource names — using selectors or whatnot, right — to restrict in the APIExport exactly the set of resources that you want to have permission claims for. And this is also sort of the realization that we are having when it comes to permission claims.
A
So we have another case that we have to consider, and that's actually kcp syncing. I took this slide from Stefan's presentation. That is sort of a new mechanism that we want to introduce in order to be able to synchronize resources inside workspaces to literally native Kubernetes clusters — like your favorite Kubernetes distribution, maybe a kind, minikube or OpenShift cluster. And we have another problem here — in this dimension, one that we don't have in kcp APIExports and APIBindings — and that is:
A
we have synced resources on the native Kube cluster, and in this case, you know, this Foo resource in the example is a claimed resource in the APIExport. In this case the APIExport is called mongodb — I should have called it sheriffs here as well, and Secrets, to be consistent with the previous slides. And the question here is: what is the canonical source of truth for writing to Foo? Right, using the things that I explained previously,
A
you can perfectly say: I claim star on Foo, but also, you know, restrictTo star for consumers of the API on Foo. And then it becomes a problem for the syncing mechanism, because, boy, if you write an APIExport like this — I think I gave, yeah, I think I gave this example here — if you write an APIExport like this, you know, you have a conflict, because the syncer then doesn't know: what do I do?
A
It needs to do some sort of reconciliation, and it doesn't know what to do when somebody on the left-hand side — the red Foo — changed that resource, or the service provider changed that resource. Again, a pretty simple solution that we're envisioning here: we want to have an opinionated set of verbs that we know are mutating resources, and those can either be only claimed or only restricted to — mutually exclusive, right? So you cannot say, you know, update in both the claimed and the restrictTo set of verbs.
A
These lists must be mutually exclusive. Obviously, read operations can be declared on both sides of things, and this invariant — at least that's sort of the idea — will be checked in the APIExport admission, such that you cannot even create an APIExport if writing verbs are conflicting. So: 15 slides, 20 minutes — I'm good, also time-wise. Any more questions towards this proposal?
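The admission-time invariant just described — mutating verbs may appear in claimed or in restrictTo, never in both, while reads may overlap — can be sketched like this. The concrete set of mutating verbs below is an assumption (the talk only says "an opinionated set of verbs that we know are mutating").

```python
# Sketch of the APIExport admission check: reject a permission claim whose
# mutating verbs appear in both "claimed" and "restrictTo". Read verbs may
# be declared on both sides. The MUTATING_VERBS set is an assumed example.

MUTATING_VERBS = {"create", "update", "patch", "delete", "deletecollection"}

def conflicting_verbs(claimed, restrict_to):
    """Return the set of conflicting mutating verbs (empty set means valid)."""
    return set(claimed) & set(restrict_to) & MUTATING_VERBS

assert conflicting_verbs(["update", "get"], ["get"]) == set()    # valid
assert conflicting_verbs(["get"], ["get", "list"]) == set()      # reads may overlap
assert conflicting_verbs(["update"], ["update"]) == {"update"}   # admission rejects
```

An admission webhook or built-in admission plugin would deny creation of the APIExport whenever this returns a non-empty set, so the syncer never has to arbitrate between two writers.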
A
All right, so going once, twice. There is an existing PR for this proposal, so I will update it — it's not 100% up to date yet with what I just presented here, but I will update it — and, yeah, I welcome the community to discuss further details over there. Obviously, Mike, inviting you as well there, and then we can continue the discussion. That's it from my side. Thank you.
B
Thanks, Serge. And I want to call out as well that object-selection mechanism that we talked about: we would like to hear from folks about how they want to select objects. We had some, you know, strawman ideas — all objects that I've ever created,
B
you know, all secrets referenced in a pod spec volume mount. It'd be interesting to hear what kinds of selectors are useful, so some feedback — that would be awesome.
F
Maybe just one thing, one question: JSONPath was mentioned at some point, you know, among the various options in this area, and I was wondering:
F
did we envision the case where we want a service provider, or some sort of controller — I'm thinking mainly of the upcoming coordination controllers — to be able to change only some things in a given object? For example, to be able to modify only some labels that are forbidden for other types of controllers, which are less privileged in some sense — and the contrary also. So, I mean, is this something that is more or less covered by the proposal, or something that should be added?
A
Yeah — Andy, I'm curious what your opinion on this one is; I've been briefly discussing this with Stefan as well. There are two things. First, there is a proposal upstream to allow fine-grained permissions on field levels. However, this is not in Kubernetes yet, right, so I'm not sure if it's a good idea to rely on it. The second thing — or second strawman proposal — is to have dedicated verbs, dedicated mutation verbs like annotate or label, or something like this, that reflect sort of this fine-grained set of permissions. Andy?
C
I do think that we are going to need to at least have annotations or labels that are protected. So there's an issue — I don't have the number in front of me — for finding some way to restrict… I mean, it's kind of like reverse permission claims, where we — I guess, yeah, I guess it's all tied up into this. Like, there's, say, a resource quota that the API provider needs to be able to create and maintain.
C
So that's the full reverse permission claim bit, but I think there's probably other examples where there are labels or annotations that a service provider wants to own and not allow users to change, but they don't care if the user — they're happy to let the user change other contents. Yeah, this one, thanks.
C
A
C
As to whether or not something like Kubernetes API machinery already supports this, and whether we should avoid it because it doesn't: I think that we should look at what's been discussed previously, and if there are significant concerns in terms of performance or whatnot, we should definitely take those into consideration. If it just hasn't been implemented yet, I don't think there's any reason we couldn't work to both try and get this upstream and also do it in kcp, simultaneously.
D
I think this fits very well into this model that Serge showed. Serge also showed this example where there's mutual access — like, you cannot have two writers to an object — and in that two-writer example, I think this is pretty much what you are saying: you want write access to certain labels and, at the same time, forbid write access to other people, like to the users. So I think it fits perfectly well; it's just a question of how to express that in a nice API.
F
And maybe a first step would be — since there is already some protection for, you know, internal annotations, for example, or privileged labels — maybe at least a first step where you have a verb to be able to modify privileged labels that, you know, normal clients and end users cannot modify. Maybe.
D
F
Or maybe it could be a distinct category of annotations, but at least some other…
F
B
Okay, David, let's make sure that what you're thinking about is captured in that issue — I linked it in the chat.
B
Awesome. Unless we have something else — Paolo, looks like you're here to talk about edge workload distribution.
G
Yes, thank you. So first, a brief introduction: I'm Paolo Dettori from IBM Research. I've been working on several Kubernetes projects at IBM, and I also work a little bit with the Crossplane community — I'm happy to see Daniel here, by the way.
G
So, today — we had a discussion a few days ago with David Festal and Stefan about some of these edge use cases. So I wanted to take a little time just to outline some of these use cases and some of the differences that we see between the current model within TMC and what we may need to actually deal with edge deployment. The first use case I want to outline is a retail use case.
G
So in this case, we have potentially hundreds of stores for a retail chain, and we have an app that needs to be deployed from a central location to all these stores. And we also need to customize each retail store location with particular parameters — named properties and values that can be specific to the location. Potentially, the store manager is simultaneously defining properties there.
G
You may also have low bandwidth, and the idea is that the application at the store has to continue to operate even if you lose connectivity to the center. Another use case — again, I'm going to go very high level, since the idea here is to sort of talk about where we see the differences with TMC — is the industrial edge use case, where essentially it's an AI/ML kind of use case: we have ML training in the center and then inferencing that happens at
G
the manufacturing plant. And the goal here is to deliver an application that does inferencing to the plants — and there could be also, of course, a model: there is an application, and a model, that need to be delivered in this case.
G
The scale typically is about tens of manufacturing plants. In each plant, I may have sensors in the production line that are connected to some line server, and from there they can be connected, potentially, to a cluster. It could be a Kubernetes cluster — in some cases we see this trend emerging of small clusters, like k3s, potentially MicroShift, etc., coming up — and running the inference there. So we essentially need to deliver this inferencing app from the center to all these different locations.
G
There is also, in this case, a plant manager that could make decisions — maybe somehow customize the app. This can be done from the center; there could be some local decision; and then, of course, there are finally defined roles, but I'm not going to get too much into detail there. So the requirements are similar, probably, to retail, but there is also the need to manage some of the machine-learning model lifecycle. And the last use case I want to quickly outline is a telco use case.
G
The idea is that telcos today are using SDN to manage 5G networks, using virtualized Radio Access Network appliances. In this case, there are different components connected to the 5G antennas, and in some of these components — for example, the distributed units and the centralized units — some telcos are actually starting to experiment with Kubernetes, maybe single-node kinds of Kubernetes deployments, and to deploy some of these network functions, with customizations, there at the edge, in this kind of location.
G
So, even in this case, we have a need to customize, sometimes per location. We also have this natural hierarchy, because we have all this hierarchy of different units, eventually going back to the center, where we need to somehow control and distribute network functions. The main difference with the other two cases is the scale: it is much larger — we're talking about potentially one thousand towers, for example, per region, and there are multiple regions, etc. So this is probably the main difference.
G
Now, given that brief introduction to these cases: we have actually been playing a little bit with kcp. We think it's actually a very nice model for delivering workloads, even to the edge. But, of course, when we started experimenting, we saw that there are differences in the way TMC actually works versus the kind of features that will be required to deal with these kinds of edge multi-cluster scenarios.
G
So the first similarity with TMC, we think, is that in the TMC model a user can create a workspace.
G
You can create, for example, a placement policy to bind namespaces to locations, and then apply your workload to the namespace, and the syncer will deliver this workload to the target. So from this perspective, we think this model — this user experience — is very attractive, because in a way it captures the idea that you have some virtual cluster from where you can deploy your workload to different locations, and we'd like to have a similar model, in a way, for the edge multi-cluster case.
G
But there are, of course, differences that we'd like to compare and contrast, right? So in the case of TMC, what happens is that one application is delivered to exactly one target cluster.
G
Yeah, per placement, yes. So we can actually achieve that behavior today in TMC by defining a placement, and defining a location, for each of the SyncTargets — and that is actually possible today.
G
But, of course, you have to start creating all these different resources, and there are also other differences that I'm going to underline. For example, in the edge case, ideally we'd like to have one single policy — one single way to define a predicate — to identify many clusters and get the application delivered to all of them. So it's more of a one-to-N kind of delivery model than TMC. The other difference: as we saw in some of these use cases, we could have network partitions, right, and in the current TMC model there is health checking.
G
So if, for example, a cluster goes down, the TMC scheduler will reschedule the workload to a different cluster, you see. On the other hand, we probably need a different approach to checking: we need to be tolerant, somehow, to loss of connectivity for an extended period of time.
G
Another difference, we think, is in the current model with the syncer: when a deployment pod is actually delivered to a target, the pod is somehow injected with some environment variables and certificates that point back to kcp, right? So that behavior probably needs to be somehow customizable for these scenarios, because we don't want to have the pod depend again on kcp, on the center — those locations need to operate in autonomy.
G
And finally, the last part is more about scale: we could potentially have a large number of SyncTargets, and today, I think, TMC is looking more at maybe a handful of clusters, from what we understand. But of course we have to look at how we can extend that to maybe potentially hundreds, or maybe, in the future, even
G
thousands. And the last point is about status, right: once we start going into this 1-to-N model, we have to figure out a way to somehow get the status for all the different copies from the targets, and potentially aggregate and summarize the status in one place, so that it is somehow easy to see what is going on — and then, from there, maybe be able to drill down and see if there is any problem in an individual status.
G
So anyway, this was just a brief intro to explain, you know, the use cases that we saw and some of the differences that we see. I think Mike also has some more points on, you know, the initial thinking that we have in this space. But first: is there any thought about this — anything that anyone would like to bring up here?
G
E
Yeah, so I've been working on a proposal for the interface — what users of the edge scenarios might be doing — and then I was hoping to engage some discussion about implementation, since it does have overlap with the TMC use case. You know, the thinking is we can probably share some implementation technology.
E
Where do I find that — oh geez, there it is, okay. I'm just going to show you the whole screen. Yeah, sure. And it's not liking it now. Oh, now it's liking it. Okay.
B
The functionality that's provided by the syncer — it's not strictly required for kcp; kcp can be a control plane without it. But right now…
B
E
So, I've hypothesized an interface — let me start. One of the problems that I think is important in edge — and it actually, I think, needs to be emphasized a little bit more than what Paolo said — is that it's not just devops; it's a much more fine-grained division of roles and responsibilities. One of the things the people involved need to be able to do is bundle things, and create and reuse abstractions, and my proposal is: let's not — we don't need to invent anything.
E
That's already been solved a few times. One solution is Helm, and GitOps also provides bundling — not quite so great on abstraction — and so I propose that we should go back to creating an API for Helm. But that's purely non-edge, non-TMC; that's purely a Helm development.
E
Getting on to the edge part: I proposed an alternate placement that has a spec similar to TMC's — it's got location selectors and namespace selectors — but, as Paolo said, the semantic is: don't pick one of the matching locations, pick all of them. This is saying: I want the contents of these namespaces delivered to all of the matching locations. And then, once we get to multiple destinations, that raises the question of coordinating the rollout of changes.
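The pick-all-matching semantics can be sketched with a plain label-selector match. This is a hypothetical illustration — the location names and labels are invented, and the selector model is simplified to exact key/value matching.

```python
# Sketch of edge-placement semantics: instead of choosing one matching
# location (TMC behavior), select every location whose labels match the
# placement's selector, so one placement fans out to N destinations.

def matches(selector: dict, labels: dict) -> bool:
    """Exact-match label selector: every selector key/value must be present."""
    return all(labels.get(k) == v for k, v in selector.items())

def select_all(selector: dict, locations: list[dict]) -> list[str]:
    return [loc["name"] for loc in locations if matches(selector, loc["labels"])]

stores = [
    {"name": "store-1", "labels": {"region": "us-east", "tier": "retail"}},
    {"name": "store-2", "labels": {"region": "us-west", "tier": "retail"}},
    {"name": "plant-1", "labels": {"region": "us-east", "tier": "factory"}},
]

# One placement, many destinations (1-to-N), rather than one winner.
assert select_all({"tier": "retail"}, stores) == ["store-1", "store-2"]
```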
E
You know, the simplest thing you can imagine is that every change gets rolled out as quickly as possible, but that's not actually what you always want, right? In fact, concepts like canary testing or blue-green testing are exactly about controlling the rollout. Also, when you get into, for example, telcos, where they have overlapping coverage by different antennas, you have kind of domain-specific control that you want to exert over a rollout there as well.
E
So I have hypothesized that in this kind of edge placement there is an option to specify some control over rollout. I have hypothesized a simple way of doing that, which is in terms of putting a lower bound on the number of destinations that have got the latest copy; that lower bound could be specified either in absolute terms or as a percentage.
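That lower-bound rollout control can be sketched as follows. This mirrors the speaker's own hedging — it is a hypothesized mechanism, and the absolute-or-percentage encoding below is an assumption about its form.

```python
# Sketch of the hypothesized rollout bound: given a fleet of destinations
# and a lower bound (absolute count or "NN%") on how many must stay on a
# working copy, compute how many may be updated concurrently.

import math

def max_concurrently_updating(total: int, min_available) -> int:
    """How many destinations may be taken off the current copy at once."""
    if isinstance(min_available, str) and min_available.endswith("%"):
        required = math.ceil(total * int(min_available[:-1]) / 100)
    else:
        required = int(min_available)
    return max(total - required, 0)

assert max_concurrently_updating(100, "90%") == 10  # roll out 10 stores at a time
assert max_concurrently_updating(10, 8) == 2        # absolute bound
assert max_concurrently_updating(5, 10) == 0        # bound exceeds fleet: wait
```

A canary is then just a very high bound (update one destination, watch it), and blue-green a bound of roughly half the fleet.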
E
One of the things in the TMC placement is that as soon as there is some binding, it becomes immutable, and I think for edge we don't want placement to shift into immutability. There's an inherent dynamicity in the set of matching locations, so it doesn't make sense to think of placement as immutable.
E
The status needs to reflect a spec generation, and then I've hypothesized, again, some simple things here: a count of the matching locations, and then a status from the rollout. And for raw status — again, I started with a really simple status, which is a count of the number that are completely current, and the number that are not completely current, and stale.
E
As Paolo mentioned, disconnection — lack of connectivity — is normal, but we do want to have some kind of concept of "this thing has been disconnected for so long that we're going to consider the information from it stale". Not necessarily a fatal error, but maybe something for a little bit of attention.
E
So that's placement. And then something that goes together with that is customization. I've hypothesized a few different ways of doing customization. One is — and let me maybe get to some examples here, yeah, let me actually try showing some examples.
E
B
E
No, it's not that one! Good grief, did I lose the example? Okay, okay, no, I've got it here somewhere… where… oh geez, just a moment, I'm sorry.
E
Okay, I'll just go up here and actually look at it. So here is an example of a Helm interface object that is doing customization in the simplest possible way. It's enabled by an annotation that says to do what I call parameter expansion, which is: when you see the syntax percent and then, in parentheses, a parameter name, that means this is a reference to an edge property, and in each location, obviously, that parameter reference gets replaced by the parameter value.
E
G
Yes, one point — I don't know if we need to go too much into detail. I think what I laid out here was also to start, you know, presenting the use cases and some of the initial thoughts, and also potentially opening the floor to see what is the best way to somehow proceed, if we want to.
G
Maybe these are some initial ideas, but maybe there is some way to somehow bring this — maybe into an issue, or some discussion within the community — so that we can start small and see what are the things that make sense to start tackling, without getting, you know, into this level of detail yet. So I'd like to hear any thoughts: what would be the best way to bring some of these thoughts — maybe as an issue, maybe some Google Docs, or what would be the best way?
E
All right, I'll just briefly point out that I also have a proposal for status summarization. But yes, let's see what people think about how to proceed.
D
Yeah, so it's pretty exciting to see that. We talked about that before, that maybe a different API is really the way to go for this, and I really like to see that; I'm very interested. So one bigger question for kcp as a project, and especially the core of kcp: our task and challenge is to make this work possible on top of kcp, without your service being in any way privileged.
D
We had similar discussions. David and I discussed that TMC started as part of core, and we always tried to keep it not too much in the core parts, but it still is at some points. We also want to move TMC out of kcp to make it its own self-standing subproject of kcp. We have the same challenge here, and I think there's a real overlap: our challenge as a community is to enable kcp to host such a thing as this edge-based work.
D
E
I think I see two challenges. One is, as you said, I think kcp today has a factoring problem: TMC needs to be separated out. That's...
F
So that was my point, the point I wanted to bring up just after, yeah. Yes, the fact that for TMC, when we started on that, along the way we met the need to build some primitives that are, you know, the basis for the concrete TMC work.
F
One of those primitives, for example, is obviously the syncer. But if we go a bit more into the detail, there's the syncer virtual workspace, which is still being completed, with interesting features that could be of interest to you. So there are two components here: the syncer, which is an agent in the physical cluster.
F
But then you have the syncer virtual workspace, which is mainly a primitive on the kcp side, to present all the resources in which the syncer, and finally the physical cluster, will be interested: all the resources that have to be synced. This is typically the type of primitive that, it seems to me, could be, at least if we designed them correctly, shared between various variants of multi-cluster, you know, the transparent one, the edge one. Typically these types of things: give me all the objects which are...
F
These types of things could be common. And then, it seems to me, together with the new use cases that come up, the new types of multi-cluster, we have to think together about how we draw the lines between what would be different and sort of pluggable, like the syncer (even today you can just replace the syncer with another image), and the parts which would be common primitives that we have to design with this approach in mind.
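One way to read the "pluggable syncer versus common primitives" split is as an interface boundary. The sketch below is purely hypothetical (the `Syncer` interface and `Object` type are invented for illustration; kcp's real syncer contract is richer), but it shows the shape of the idea: common kcp-side code talks to an interface, and TMC, edge, or other variants supply their own implementation behind it.

```go
package main

import "fmt"

// Object is a stand-in for a resource to be synced to a physical cluster.
type Object struct{ Name string }

// Syncer is a hypothetical pluggable boundary: common primitives would
// call it, while each multi-cluster variant provides its own agent.
type Syncer interface {
	// Sync pushes the desired objects down to the physical cluster.
	Sync(objs []Object) error
}

// countingSyncer is a trivial implementation used only for illustration;
// a real one would talk to a cluster, an edge agent, etc.
type countingSyncer struct{ synced int }

func (s *countingSyncer) Sync(objs []Object) error {
	s.synced += len(objs)
	return nil
}

func main() {
	// The common code only sees the interface, so the implementation
	// can be swapped, much like replacing the syncer image today.
	var s Syncer = &countingSyncer{}
	_ = s.Sync([]Object{{Name: "deploy-a"}, {Name: "deploy-b"}})
	fmt.Println(s.(*countingSyncer).synced) // 2
}
```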
F
Does that make sense, related to your question, Mike?
E
Yes, right. I think, yes, it's a matter of designing the interface, right, and identifying what makes sense to be shared and not. So I think there's a lot of discussion to be had here; we've just kind of started to open the topic, and the meeting's almost over now. So I think, you know, my question really is: how do we want to proceed to make progress on these topics?
C
E
I mean, sorry, Andy. What we've got so far is: I have proposals for the interface. I really wanted discussion, and I wanted to hear your ideas about what makes sense as a common implementation. So I'm just going to stop and say, you know, it seems like we have similar scheduling and syncer problems, but not identical ones. So I wanted to solicit ideas about what makes sense there.
C
Yeah, so to finish my statement: I think that if you can put together proposals for where you see commonality between edge and TMC and where you see divergence, then we can take a look at that, provide feedback, and continue to discuss where it makes sense for kcp/TMC to have shared functionality, and where the edge stuff could be something that you and other folks who are interested in edge can continue to explore standalone.
E
C
I would be fine with anything. It could be a drawing that just, you know, has TMC on the left, edge on the right, and common in the middle, and you just list out what you think fits in each bucket. It could be a Google doc; anything makes sense, I would say. And I'm not looking for 20 pages of prose or anything, just high-level topics.
C
E
Yeah, my problem is it sounds like you're asking for something that's a little more worked out. You know, as I've been saying, what I'm trying to work out is some interface that makes sense from a user-experience point of view, and I'm...
E
The implementation is really kind of an open space in my mind. I think the current interface between the syncer and the scheduler is not going to scale. So, just briefly, we need to talk about a scalable interface between a scheduler and a syncer. There's some commonality again, and, I'm repeating myself, that's about as far as I've thought about implementation.
B
Yeah, Mike, I guess maybe what Andy was trying to say is: this sounds good, it sounds like a good direction, it sounds like there's a lot of overlap. I think, let's take time, maybe David can find time to chat, and like...
C
B
E
To share: I would like to share what I've done so far on proposing an interface, because I think that will flesh out a little bit better the thoughts about where we need to go for implementation.
F
Today, and, sorry Steve, yeah, I was just about to suggest that, in addition to such a Google doc, which would remain for the high-level parts where we are still looking for similarities...
F
I think that's quite an important one, but maybe there are some precise things that we already know have to be investigated. For example, the network independence: that's clearly something we have to think about. To me it's something where we can already open a GitHub issue: how do we manage cases of existing targets that we know are going to be disconnected regularly? Because that's even something we have to think about more generally.
F
Another point like that is, you know, the fact that in some cases we don't want the workloads to point back to kcp, for example. That's also related to the same thing. So for these things, which are really underlying infrastructure differences that break the current assumptions of kcp, it seems to me that we can already create issues and start discussing on those, and then keep the Google document for, you know, trying to find additional convergence areas. Does that make sense?
B
Totally. David, do you want to find a time next week and schedule something for the community to chat about this?
F
Yeah, sure, thank you. In a dedicated meeting, you mean? Yeah, yeah, sure.
B
Awesome. We have one minute, and I have a very quick topic that I just want to blurt out here before we go. So we, and by "we" I mean Andy and I, are currently in the process of moving around some of our testing in the kcp repos. One of the things we've noticed is that generally we have some tests that are using fake clientsets.
B
We want to move away from that to using sort of the mocking-function-based stuff that we have everywhere else. There's a small list here. I think we've found that the little functions are a little bit more robust; they make for shorter tests that are a little bit easier to write, and they don't have some of the same setup properties where your test has to, like, create indexers correctly and all that sort of stuff. So just keep an eye on this.
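A minimal sketch of the mocking-function style described here, using local stand-in types rather than the real client-go or kcp types: instead of wiring up a fake clientset and informer factory, the unit under test accepts small function dependencies that the test can stub directly. The `Widget` type and `reconciler` hooks are invented for illustration.

```go
package main

import (
	"errors"
	"fmt"
)

// Widget is a stand-in for a real API object.
type Widget struct {
	Name  string
	Ready bool
}

// reconciler depends only on small function hooks rather than a whole
// clientset, so tests can stub exactly the calls they care about.
type reconciler struct {
	getWidget    func(name string) (*Widget, error)
	updateStatus func(w *Widget) error
}

// reconcile looks up the named widget and marks it ready.
func (r *reconciler) reconcile(name string) error {
	w, err := r.getWidget(name)
	if err != nil {
		return err
	}
	w.Ready = true
	return r.updateStatus(w)
}

func main() {
	// In a test, the hooks are trivial closures: no informer factories,
	// no cache syncing, no indexers to configure.
	stored := &Widget{Name: "a"}
	r := &reconciler{
		getWidget: func(name string) (*Widget, error) {
			if name != stored.Name {
				return nil, errors.New("not found")
			}
			return stored, nil
		},
		updateStatus: func(w *Widget) error { return nil },
	}
	if err := r.reconcile("a"); err != nil {
		panic(err)
	}
	fmt.Println(stored.Ready) // true
}
```

The test still exercises behavior (did the object end up ready?) rather than asserting on which client methods were called, which fits the behavior-testing preference raised next.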
B
E
I don't know any details of what you're saying, but, you know, from my work upstream in Kubernetes, I'm going to complain that there are a lot of tests that really just check the implementation: for every function in the implementation, you write a test function that says, is this implementation function doing what I thought it does? And then there's a different style of test, which is, you know, a behavior test.
E
Does this thing accomplish what it should, without regard to the implementation? Mocking sounds more like the former than the latter, and fake clients sound more like the latter than the former.
B
Totally, and maybe I'm being imprecise in my language; I think we are looking at the latter sort of test. I think there's a bunch of setting up the fake clients, making sure that your informer factory is set up the same way that the informer factory would otherwise have been set up when starting the controller, waiting for the caches to sync in the same way; that sort of stuff ends up making those tests a little fragile, and that might be a kcp-specific thing.
B
But it's definitely something we've noticed. So the intent is not to do the implementation-checking sort of thing like you were saying.
B
Awesome, we are out of time. Thank you, everyone, for all the discussions, and let's keep an eye out for that follow-up meeting with David about the edge stuff. Thanks all, have a good one.