From YouTube: KCP-Edge Community Meeting, December 1, 2022
A
Hi everybody, and welcome to the Sig kcp-edge community meeting for December 1st, 2022. I'd like to remind everybody on the call that we have a contributor code of conduct as contributors and maintainers in the CNCF community. In the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. Basically, just be nice to each other. Thanks. Okay.
A
So today we have queued up two items: one on edge placement from June, and Yuji Watanabe would like to talk to us a little bit about policy management. So without further ado, we'll get started. Yuji, you're first in the queue, so if you'd like to take center stage, share what you would like to.
B
Can you see my screen? Yes? Okay, thank you. I'm Yuji Watanabe, from IBM Research. Today I'd like to present our very early-stage idea about policy control for kcp-edge.
B
This is joint work with my colleague Takumi, and also with the Red Hat folks, Paolo, Hamid, and Constantine. Before going into the details, let me explain about this talk: this is still a very early-stage idea, and we are very new to kcp-edge, so please give us any comments and feedback. All the materials, that is, the design document, a demo recording, and some prototype implementation, are available here. So please interrupt me.
B
You can do so even during the presentation, or maybe you can give comments on the design doc directly. Okay, so first, let me explain what policy control is. I am trying to explain this on this page. In this talk, basically, we verify resources by policy; it's like admission control, or sometimes a background scan of the resources.
B
A typical implementation is Kyverno; actually, we are doing some prototyping using Kyverno. It verifies, by validating or mutating a generic resource according to the policy. Examples are OPA Gatekeeper or Kyverno. We are trying to address generic policy control, but we are doing the prototype using Kyverno. So the question is what happens when we try to do similar policy control in the kcp edge scenario.
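As an illustration of the kind of admission policy being discussed, a minimal Kyverno ClusterPolicy might look like the following. The policy name, rule, label, and message are hypothetical, not taken from the prototype:

```yaml
# Hypothetical Kyverno ClusterPolicy: reject ConfigMaps that lack an
# "owner" label. validationFailureAction: Enforce blocks resources at
# admission; Audit would instead record violations in a PolicyReport.
# background: true enables the background scan mentioned above.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-owner-label
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: check-owner-label
      match:
        any:
          - resources:
              kinds:
                - ConfigMap
      validate:
        message: "ConfigMaps must carry an 'owner' label."
        pattern:
          metadata:
            labels:
              owner: "?*"
```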
B
This is a very high-level idea of what we should do. We have the kcp side and the cluster side, and if a resource comes into the workspace, it syncs to the cluster. So the resource checks should be done at the workspace level and also on the cluster side. That's the reason we need policy control.
B
Also
in
the
crosstalk
side,
it
press
the
HD
scenario,
the
cluster,
this
cluster,
maybe
disconnect
connected
so
the
if,
during
the
disconnected
phase
cluster
may
be
accessed
directly,
so
this
control
also
require
so
the
basically
the
requirements
like
we
put
the
policy
control
here
and
if
user
defined
policy
policy
policy
control
is
become
effective
according
to
the
policy
and
the
policy
is
can
be
also
enabled
on
the
crosstalk
site
and
all
the
report
is
correct.
Reported
report
is
generated
here,
so
it's
also
available
to
the
user.
B
That is a very high-level idea of what we are trying to achieve with this policy control. Okay, so for enabling this we have a proposed design. Basically, the policy control is connected to the management workspace. If a user defines a policy, then the policy control operator we are proposing automates enabling the backend.
B
Api
at
EPA
then
enable
it
as
an
admission
workbook
here
so
resource
up
to
in
the
workspace
is
automatically
checked
before
admission
or
after
admission.
The
it's
it's
scanned
by
the
this
about
plane
by
the.
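The webhook enablement described here corresponds to a standard Kubernetes ValidatingWebhookConfiguration pointing at the policy engine's service. This is a sketch with placeholder names, not the operator's actual output:

```yaml
# Sketch: route admission of workspace resources to a policy engine.
# All names, namespaces, and the path are placeholders.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: policy-control-webhook        # placeholder
webhooks:
  - name: validate.policy.example.com # placeholder
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail               # block admission if the engine is down
    clientConfig:
      service:
        name: policy-engine           # placeholder service
        namespace: policy-system      # placeholder namespace
        path: /validate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["configmaps"]
        scope: Namespaced
```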
B
So
it's
so
this
mechanism
back,
so
that
for
enabling
this
admission
over
backward
must
be
if
it
must
be
effective,
so
the
disposuction
control
operator
it
automates
the
enabling
enablement
of
this
background
keyboard.
So
this
is
the
workspace
sign
protection.
The
cluster
side
protection
is
the
also
the
activator,
so
so
the
goal
is
to
enable
the
key
value
on
the
cluster
side.
So
well,
that's
the
we.
B
The
position
control
operator
creates
the
resource
then
automatically
the
sync
to
the
crosstalk
side.
Then
keyboard.
Is
it
automatically
installed?
So
after
this,
after
starting
the
key
building,
the
policy
is
synced
to
here.
So
the
then
the
policy
control
box
so
even
automatically
generates
the
report,
but
after
by
the
backgrounds,
so
report
is
created
in
the
workspace
here
at
the
cluster
side.
Protection
generates
the
report
here,
so
we
need
to
have
the
mechanism
to
deliver
this
report
to
the
workspace
side.
So
this
is
about
up
up
foreign.
B
We
need
to
use
some
upsync
mechanism,
maybe
it's
in
the
it's
made
it
to
be
Implement
enabled
in
the
design,
but
by
using
that
we
can
send
the
send
back
the
report
to
the
workspace.
Then
policy
Ctrl
bit
automate
automatically
summarize
the
report
to
the
and
the
report
back
to
the
original
position
control
gives.
So
this
is
a
proposed
to,
but
so
both
okay.
So
this
is
Highway
design.
So
we
did
some
prototype
implementation
of
this
by
using
the
kiberino
and
one
kcp.
One.
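For the Kyverno-based prototype, the reports being generated and upsynced would be PolicyReport objects from the wg-policy reports API. A minimal example of the shape of such a report; all names and counts are illustrative:

```yaml
# Illustrative PolicyReport as Kyverno would produce per namespace.
# summary counts the rule results; each result entry points back to
# the policy, rule, and offending resource.
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  name: polr-ns-default           # illustrative
  namespace: default
summary:
  pass: 3
  fail: 1
  warn: 0
  error: 0
  skip: 0
results:
  - policy: require-owner-label   # illustrative policy name
    rule: check-owner-label
    result: fail
    message: "ConfigMaps must carry an 'owner' label."
    resources:
      - apiVersion: v1
        kind: ConfigMap
        name: demo-cm
        namespace: default
```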
B
So
the
by
using
this
position,
control
beta
the
admission
report
on
the
kcp
side
automatically
enabled
so
that
if
you,
if
you
create
some
config
map
on
the
sum
workspace,
it's
blocked
by
the
workspace
side,
automation,
control
or
if,
if
this
is
synced
to
the
raster
side,
and
then
it's
some
file,
vibration
found
it's
reported
at
the
crosstalk
site
and
it's
in
it's
sent
back
to
the
think
target
status.
I
did,
but
this
is
not.
B
We
are
just
doing
give
some
short
small
code
changes
on
the
sync
Target
status
code.
But
maybe
this
part
is
just
our
experience,
so
we
can.
We
should
use
the
more
official
way
to
up
for
the
upsync.
So
maybe
some
comment
here,
yeah
appsync
is
now
available,
so
we
we
should
use
the
opposite.
So
it's
it's.
B
Okay,
thank
you
so
much
for
during
this,
the
prototyping
that
we
have
several
issues.
B
Things
need
to
be
resolved,
so
the
policy
report
aggregation
for
the
policy
control
position,
report,
aggregation
out,
conflict,
the
crosstalk
policies
at
the
crosstalk
site
and
and
how
we
can
to
make
the
cabinet
to
the
workspace
aware
so
now
that
we
are
handling
the
what
we
are
standing
up,
the
kiberino
power
workspace,
but
we
may
need
some
some
change
in
the
cabinet
size
or
building
with
the
workspace
and
for
single
side
the
connect
with
we
are
syncing
the
namespace
scope
policy,
but
for
cluster
scope
policy
we
need
some
sync
mechanisms.
B
Must
be
handled
in
the
center
and
and
some
much
much
Pro
cluster
from
the
one
workspace
or
one
resource
to
the
multiple
Peak
cluster
was
absync.
So
there's
there
is
some
items
we
need
to
find
the
good
way
for
the
in
the
sync
sync
mechanism,
but
anyway.
So
this
is
a
high
level
idea.
So
the
a
please
give
us
any
comment
and
questions
regarding
to
this
proposal
or
the
approach
to
this.
The
policy
control
problem.
So,
okay,
so
thank
you.
C
Maybe
it
just
sorry,
maybe
I
just
have
a
question
about
the
problems
or
the
challenges
on
the
Sinker
side,
the
previous
side,
yeah
normally
I
mean
the
the
thinking
of
clusterscope.
Resources
is
in
fact
wired
inside
the
Sinker
already,
but
quite
limited,
because,
obviously
it's
something
that
we
cannot.
You
know
for
which
we
cannot
open
the
door.
Putting
cluster-wide
resources
inside
a
physical
cluster
is
very
impactful.
D
For example, in my proposal for edge placement, the EdgePlacement object type, or resource, has a field that holds a list of GVRs or GVKs, I forget which, of cluster-scoped resources to sync.
D
I'll also note that this connects to the difference between the TMC model and the edge model, where in the edge model we want these clusters to operate independently, and while disconnected, and in that kind of model you tend to want more of the cluster-scoped resources to go there.
D
You have some questions here. What is this item here about conflict in multiple cluster-scoped policies?
B
Okay,
so
the
thank
yous,
for
example,
that
if
the
policy
is
synced
to
here,
so
maybe
we
I
I'm,
assuming
that
also
if
the
post
cluster
scope
policy
comes
into
here,
so
crosstalk
scope
policy
of
the
keyboard.
In
the
case
it
it
affects
all
the
other
namespace.
B
So
if
you,
for
example,
if
you
have
the
multiple
namespace
to
the
sync
to
the
to
like
this
kcp
site
in
one
cluster,
one
cluster
scope
comes
into
the
one
namespace.
So
the
third
cluster
scope
namespace
affects
to
the
under
namespace,
which
is
about
which
is
synced
to
other
other
workspace.
Maybe
that
is
a
kind
of
the
conflict,
so
if
so
that,
so
that
is
the
one
things
we
need
to
think
about.
So
from
the
policy
side,
I'm.
D
Not
sure
I
followed
it,
so
certainly
policies
are
additive
yeah.
This
problem
is
not
unique
to
Edge,
so
I'm
a
little
confused
about
I.
B
Know
yeah:
this
is
not
this.
This
is
very
genetic
problem
for
the
cluster
scope.
So
if
the
crosstalk
scope
deployed
to
the
Sun
or
the
crosstalk,
it
affects
all
the
namespace,
so
yeah.
B
So
the
pull
the
my
eye
sort
is
if
the
cluster
scope
resource
is
deployed
here,
so
it
affects
the
other.
So
the
correct
policy,
so
by
use
case,
is
a
user
wants
to
deploy
the
a
cluster
school
policy
in
just
some
namespace
by
this
syncing
mechanism,
but
clusters,
a
scope
policy.
B
So
that
so
that
is
not
the
issue
for
the
in
the
cabinet,
but
in
the
kcp
workspace
and
name
spacing
mechanism.
So
my
original
assumption
was
namespace.
B
All
each
namespace
should
be
isolated
so
that
my
original
I,
so
I
saw
that
with
the
reason
or
why
a
name
name,
space
scope
is
is
only
can
be
synced
to
the
target
post,
but
maybe
I'm
wrong,
yeah
I'm,
just
that
was
my
original
butter.
C
Yeah
I
think
that
the
type
of,
if
I,
can
answer,
maybe
the
type
of
of
conflicts
you're
mentioning.
We
also
very
basically
face
that
in
the
currently
you
know,
simple,
cluster-wide,
thinking
that
we
implemented,
if
you
think
of
PVS,
for
example-
and
there
is
PVS-
are
really
cluster
wide
and
we
don't
when
we
think
cluster
wide
resources,
we're
seeing
them
as
cluster-wide
objects
on
the
on
the
physical
cluster.
C
Obviously,
and
so
simply,
if
you
find
an
object,
that
already
exists
with
the
same
name
but
a
different
origin
that
doesn't
come
from
the
same
kcp
workspace
for
the
same
scene.
Target,
then
just
an
error
is
produced.
So
for
now
it's
it's
mainly
the
user.
You
know
on
the
kcp
side,
the
user
responsibility
to
put
forth
schedule
for
thinking
objects
that
will
not
have
the
same
name
as
an
already
as
the
name
of
an
already
existing
cluster-wide
object
in
the
physical
cluster.
B
Yeah,
maybe
it's
it's
kind
of
the
user's
responsibility
to
manage
the
bullish
clusters,
cluster-wide
policy
to
the
upright
to
the
crust.
So
that
is
not
the
issue.
A
friendly
between
type,
the
user
is
a
single
tenant,
so
not
not
much
different
case,
the
single
tenant
case.
It's
not.
A
Yuji, two questions for you, more community-based outreach. Have you spoken to Joshua Packer, who I see is actually on the call, about what he's working on with ACM policy with kcp? And have you already looked at what Kyverno and OCM, or Kyverno and ACM, are doing today? Are you familiar with that integration?
B
Yeah
I
I
I'm
of
family,
always
HDM,
but
and
but
I.
Don't
I.
I
need
to
talk
with
Josh
on
the
how
the
kcp
keyboard
issue
so
that
I
need
to
talk.
I
need
to
sing
talk
with
Joshua.
B
Yeah, sorry, this is in very generic terms. If we define a policy, we have the policy report, so we can detect the policy violation by checking the report. From that, maybe some action, maybe remediation or a notification, should be triggered. So that's a very generic statement.
B
No,
no,
it's
a
policy
control,
maybe
it's!
This
is
one
of
the
function
we
can
address
in
the
policy
control
beta
foreign.
D
I
see
so
the
point
is
Making
Connections
out
from
Key
Verno
to
have
some
effect.
Okay,
thank
you.
Okay,.
A
On results: there's a Kubernetes working group out there, and I had the link and I don't have it right now, but one of its main objectives is to be able to surface the results of policy scanning for any number of security-related projects that are out there. And you have this upsync from p-cluster to workspace: so when Kyverno generates a report, it has to push that report up to be used somewhere, right?
D
I have another question here, in the syncer section: what do you mean by "sync one resource to all p-clusters"?
B
Yeah
this
is,
we
have
I,
have
very
limited
idea,
but
maybe
it's
very
related
to
the
discussion
the
last
week.
The
power
of
the
the
Marchi
yes
schedule
so,
but
would
be
what
it
means.
When
I
put
the
one
resource
to
the
workspace,
then
one
space
resource,
maybe
the
policy
policy
resource.
Maybe
it's
it's
deployed
to
the
multiple
Peak
clusters
bound
to
the
that's
workspace.
One
block
space
to
multiple
Peak
rust
so
can
be.
The
policy
should
be
deployed
to
the
all
the
peak
cluster.
D
There is a difference between location and cluster, so there is still something of an issue there. But okay, I understand what you're saying, yeah.
B
Basically,
this
is
people
plus
this
meaning
is
a
location,
was
sync
targets.
C
I
have
just
a
question
about
what
is
the
difference,
maybe
in
those
questions
in
the
same
page,
between
multi-clusters
in
a
workspace
and
sync
one
resource
to
all
P
clusters,
I
mean
I'm,
not
sure
I,
understand
the
the
difference,
I
mean
the
problems
or
the
question.
It
relates
to.
C
Currently,
at
least
with
the
TNC
placements,
that
probably
would
be
also
you
know,
generated
by
your
Edge
placements.
C
It's
already
possible,
you
have
a
namespace
selector
on
the
placement
and
then
the
the
type,
the
location
that
is
assigned
to
a
namespace
depends
from
the
selector.
In
fact,.
A
Yeah. June is up next, also giving a discussion about edge placement, so you might benefit from that discussion. Yeah.
A
Okay, next up, June. Would you like to show us a bit about edge placement? It's really timely, based on the comments and questions.
E
Scheduling controllers... I also have some questions where I'm trying to seek comments and help from the community, so let me get to that. My initial trial was trying to implement the proposals around the design of the edge schedulers. The proposal proposes several options; I'm starting with the most basic one, option one, which is actually a simplified version of option two. I'm starting with the kcp-dev controller-runtime example, because it looks like it is made by the community and it is very good for a quick start. In this initial trial, I introduced an EdgePlacement API.
E
Was
able
to
I
was
able
to
realize
a
multicast
pattern
for
workloads,
meaning
that
I
can
distribute
a
deployment
in
a
user's
workspace,
which
is
here
in
your
list
box
is
the
workload
management
works
is
two
multiple
mailbox
workspaces
I
actually
have
a
recording
for
that.
I
can
show
that
later
and
also
during
my
trial,
I
hit
some
issues
on
the
major
issue
I
hit
is
about
watching
Excel,
especially
watching
multiple
tbrs
or
watching
kcp.
E
Here
let
me
play
the
recording.
I
have
four
terminals
here
on
the
left
in
the
bottom,
I
have
the
kcp
server
running
and
then
on
the
left
in
the
top
window,
I'm
going
to
use
Google
to
talk
to
the
kcb
server
on
there,
I
have
a
third
window
which
is
on
the
right
hand.
Side
in
the
bottom,
faster
window
shares
implementation
of
the
of
my
Edge
schedule
and
finally,
I
have
the
fourth
window
shoot,
which
shows
the
the
peak
cluster
I'm
going
to
use
in
this
demo.
E
Let me go ahead and continue. Let me show the exact API object of the EdgePlacement I'm going to use in this demo. This EdgePlacement object has a list of location workspaces; in this list we have two. Also, in the spec, we have a location selector, which specifies that in each of the location workspaces I want to select the locations matching this label: environment must be production.
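Based on that description, the EdgePlacement spec carries a list of location workspaces plus a location selector. A sketch of what such an object could look like; the API group and field names are guesses for illustration, and only the environment=production label comes from the demo:

```yaml
# Sketch of the EdgePlacement object described above. The group,
# version, and field names (locationWorkspaces, locationSelector)
# are hypothetical; the talk only states the spec holds a list of
# location workspaces and a label selector over locations.
apiVersion: edge.example.io/v1alpha1   # hypothetical group/version
kind: EdgePlacement
metadata:
  name: nginx-ep
spec:
  locationWorkspaces:                  # hypothetical field name
    - root:location-ws-1               # illustrative workspace paths
    - root:location-ws-2
  locationSelector:                    # hypothetical field name
    matchLabels:
      environment: production          # as stated in the demo
```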
E
Before
I
create
that
edge
placement,
I
want
to
check
what's
inside
the
location,
workspaces
I
have
a
tiny
script
to
show
that
in
the
first
locational
workspace,
I
have
three
locations.
Three
instant
package:
there
is
a
One
camera
mapping
between
them
and
also
in
the
second
location.
Workspace
I
have
similar
thing
three
locations,
three
thing
targets:
that's
why
I
have
six
ERS
on
right
hand,
side
in
P,
cluster
and
finally,.
E
...in the user's workspace, I have a workload, which is a deployment of an nginx named nginx-ep, where "ep" stands for edge placement. Then let me check to make sure the syncers are running correctly: in the p-cluster, we can see there are six syncer pods.
E
So, using this command, I created the EdgePlacement. After that, let me check what's inside one location workspace again. We can see we have two freshly created TMC placements, and those TMC placements correspond to the two locations that match the label, environment should be production, in the spec of the EdgePlacement.
E
In
addition
to
that,
we
can
see
that
the
workload
which
is
the
workload
which
is
the
nginx
EP,
is
copied
from
my
user
workspace
to
this
location.
Workspace
things
are
very.
E
In another location workspace, I have similarly created TMC placements, corresponding to the two matching locations, as well as the copied workload from the user workspace. In the workspace, we also have this freshly created placement of ours, which has a different API group from the TMC placement.
E
Finally,
let
me
check,
what's
running
inside
my
P
cluster
I'm,
using
the
same
command,
I
think,
but
this
time
we
can
see
there
are
four
pause
origin
next
been
running
there.
Why
do
I
have
four,
because
I
have
two
from
the
first
location
workspace,
responding
to
the
two
selected
selected
allocations
and
another
two
from
the
second
location
workspace.
So
this
summarize
the
demo
which
I
showed
how
to
dispatch
the
workload
from
one
single
workspace
to
multiple
workspaces.
E
So
this
is
what
I
have
for
the
demo
and
next
I
want
to
talk
about
the
issues
I
hit
when
I
was
doing
the
implementations
first
issue
or
before
that,
let
me
I
I,
take
I,
took
a
look
at
the
kcp
code.
I
found
that
it
is
really.
E
It
is
very
straightforward
for
the
TMZ
controllers
to
set
up
watches
using
the
cook,
sharing
informal,
Factory
and
kcp
sharing
form
of
factory,
so
it
just
leaves
the
three
lines
of
code
or
it
set
up,
watches
for
the
namespaces
locations
and
placements
and
when
I
was
trying
to
do
similar
thing
by
Enlightenment
implementation.
E
So it looks like this: if I just do a query against the virtual workspace, which is from my APIExport, I can get all the EdgePlacement objects across all the workspaces. But using this pattern, I didn't find that there is clearly a way to set up multiple watches like this, where it's just a one-liner to set up a watch. I didn't find an easy way to do that. That's the first issue. I have a second issue.
E
The second issue I have is: how do you watch kcp APIs? Because in my demo, I'm watching my custom API, which is EdgePlacement.
E
I have a feeling that this controller-runtime example is more for users who are going to be using the assets offered by kcp, and that it's not that suitable for contributors to kcp. And I have a feeling that I have to switch to lower-level libraries, such as the client-go that is customized in kcp-dev, and also some other components in this package. But since I'm still ramping up, I'm not familiar with the entire big picture.
F
The way that you do this is that in your APIExport, you add permission claims for every single resource that you want to work with, so that would include the, what is it, placements and location resources. And so you would only go through the virtual workspace for your APIExport; you would not switch to the root scheduling APIExport. And by doing the permission claims, you'll be able to just go to one place and use your virtual workspace server to do wildcard watches on everything that you need.
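A sketch of an APIExport along these lines, exporting a hypothetical edgeplacements resource and claiming kcp's Placement and Location APIs plus core Secrets. Names are illustrative; note that claims on APIs that are themselves exported (non-core) also reference the identity hash of the providing APIExport:

```yaml
# Sketch: a service provider's APIExport with permission claims.
# Export name, schema name, and the identity hash are placeholders.
apiVersion: apis.kcp.dev/v1alpha1
kind: APIExport
metadata:
  name: edge.example.io          # hypothetical export name
spec:
  latestResourceSchemas:
    - v1alpha1.edgeplacements.edge.example.io   # hypothetical schema
  permissionClaims:
    - group: scheduling.kcp.dev
      resource: placements
      all: true
      identityHash: "<identity hash of the scheduling APIExport>"
    - group: scheduling.kcp.dev
      resource: locations
      all: true
      identityHash: "<identity hash of the scheduling APIExport>"
    - group: ""                  # core group, e.g. Secrets, ConfigMaps
      resource: secrets
      all: true
```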
F
You own certain APIs, like your EdgePlacement resources, and you want to manipulate, whether it's read-only or read-write, resources that you don't own and that you're not exporting. Those could be things like ConfigMaps and Secrets; it can be kcp Locations and Placements; it could be cert-manager Certificates, whatever. And as a service provider, you have one and only one URL that you should be using to interact with kcp: it is the URL of your virtual workspace for your APIExport. And as a service provider...
F
But say you as a service provider want to manipulate Secrets; you could swap Secrets for kcp Locations and Placements, but Secrets is a good example.
F
So if you as a service provider are providing EdgePlacements, and you want to get access to some of my Secrets, where I'm a user, you have to ask for permission to access those Secrets, and I, as a user, have to grant you those permissions. It's just like any type of OAuth flow, like in GitHub, for example, where you connect a GitHub application to your account: it will show a screen that says application...
F
X
is
would
like
to
have
permission
for
your
private
repos
or
your
public
repos
or
your
user
information,
and
you
can
grant
it
or
not,
and
and
so
that,
that's
how
this
permission
claim
mechanism
works,
and
the
idea
is
that
you
only
have
permission
to
what
the
user
lets
you
work
with.
E
If my understanding is correct, the permission claims are in the spec of the APIExport? (Yes.) Yeah, and also there's something which can accept that in the APIBinding? (Yes.) Okay, cool, I think I understand what you're talking about. Thank you. Thank you, Randy.
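On the consumer side, the acceptance lives in the APIBinding: each claim is echoed back with a state. A sketch with illustrative names:

```yaml
# Sketch: the user's APIBinding that binds the provider's export and
# explicitly accepts its permission claims. The workspace path and
# export name are placeholders.
apiVersion: apis.kcp.dev/v1alpha1
kind: APIBinding
metadata:
  name: edge.example.io
spec:
  reference:
    workspace:
      path: root:edge-provider     # hypothetical provider workspace
      exportName: edge.example.io  # hypothetical export name
  permissionClaims:
    - group: ""
      resource: secrets
      all: true
      state: Accepted              # the user's explicit grant
```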
A
I just dropped the link in the chat. There was a good talk that Stefan gave at KubeCon, which was recorded and is now posted on YouTube, about the APIExport facility, and also APIBinding, and how permission claims work, and what Andy just mentioned about agreeing to allow people to have access to your service, et cetera. So if you haven't watched it already, it's a good thing to watch.
E
Okay,
I
think
I
have
got
my
answers
back
to
you.
Randy.
A
Hey
well
actually,
June
I
had
a
further
question
for
you.
Did
you
get
your
answer
when
it
comes
to
using
the
lower
level
controller.
E
I think this is kind of more of an open question that we saw about why there is this scope: in the future, when we contribute, when we build the kcp-edge controllers, I'm still not quite sure which way we should go. Should we follow the controller-runtime example, or should we follow the pattern currently existing in kcp, like those scheduling controllers? Like this one, I didn't...
C
Yes, sure. I had a question about your demo: you mentioned that when creating the TMC placements based on the edge placement, you also copied the workloads to the location workspace, and I had the question, I mean, did you have anything that...
C
Okay, so yeah, that was my question, because the model now is really to, you know, generally distinguish the location workspaces and not put any workload in the workspaces that contain the locations and the sync targets. But if it's only just for the demo, you know, that's fair.
A
Yeah
is
there
anything
else
you
needed
explanation
for?
Are
you
good.
A
Folks,
Andy,
you
still
have
your
hand
up,
but
I
assume
you've
already
answered.
Okay,
all
right
team.
We
have
about
10
minutes,
left
I,
don't
have
anything
else
to
really
talk
about
in
the
queue
at
the
moment,
there'll
be
more
follow-on.
Now
that
our
we
had
an
internal
proof
of
concept
that
we
were
working
pretty
hard
at
so
you'll
start
to
see
some
more
activity
in
the
community
I
think
as
a
result
open
the
floor
for
any
comments
or
questions
at
this
time.
If
not,
we
can
close
a
little
bit
early.
A
Okay,
folks,
thanks
so
much
for
your
participation
in
today's
Sig
Community
call
Sig,
kcb,
Edge,
Community,
call
and
look
forward
to
seeing
you
again.
We
will
have
another
meeting
coming
up
on.
Let's
see,
14th
December
15th
there'll
be
an
agenda
posted
shortly.
I
will
post
a
poll
as
I've
seen.
Other
sigs
do
as
to
whether
or
not
we
should
have
one
at
the
end
of
December.
A
When everybody's, you know, probably going to be out. So I'll post a poll in the GitHub, and you can take a look at it and let us know if you're going to attend or not, and we'll talk about it on the next call; we'll talk about what the results of that poll yielded. Thank you, sir. Thank you, everybody.