From YouTube: Kubernetes SIG Multicluster 2020 September 8
C: Okay, there we go. Sorry, I couldn't get my mic working this whole time. All right, let's get started. Herman, I think we had you first on the agenda.
D: Yes. Can you all hear me? I'm all the way from the Netherlands, in Europe, so it's evening at my place. I didn't know about this SIG, or rather, I did know about it, but I didn't know what you were working on before KubeCon. So this is a great way to get to know people.
D: What I scheduled was MC-robot, which is the way we do synchronization between clusters. I'm specifically working for Q42, but that's not the point. The point is that we are working for a client, which is Philips Hue: the IoT cloud platform that has the bridges and lights in your home, which you can control via Google and Alexa.
D: That is actually built using Kubernetes. I'm the tech lead of that project, so I created MC-robot to synchronize between clusters.
D: Specifically, you have a bridge in your home and a mobile phone which needs to connect to this bridge. But it can also be that you just went out of the house and you want to turn on the lights while away from home, or that you do that via Alexa. But then it can be that Alexa is in a different cluster.
D: I can even share it, maybe. Here?
C: Oh yeah, please do. Let me know if it lets you.
D
Well,
I
need
to
somehow
okay
well
then
share
the
whole
screen.
Oh
no,
perfectly
of
dammits
well,
never
mind.
I
think
I
will
do
it
out
because
otherwise
that
will
take
a
while.
So
if
you
go
to
the
repository,
you
see
a
nice
picture
of
a
robot
which
is
the
the
the
mce
and
it
mca
is
multi-cluster.
D: Of course. What it does is it uses pub/sub, Google Cloud Pub/Sub, but it can be any means of pub/sub, so it can also connect to SQS or something. We use the Golang project gocloud.dev, which can connect to pretty much any broker, and it will publish all the nodes in the cluster together with a list of NodePort services, basically. I can also share in the chat a bit of how that works. So at this location you see that we do a publish of the local state (I need to open it myself, too), and that state is filled based on just reading the Kubernetes API.
D: In the watch operation there is also a subscription to the remote cluster states from other clusters, which is happening just above.
D: No, unfortunately not. If I would be able to... can I drag something into the chat? Yes, I can. No, it copies the link. Okay, too bad. Basically, we have two clusters and they all have the same services.
D: So it's basically the destination of the other HTTP requests, and there is a component sitting in front of that, which is NGINX.
D: If it returns a 401, then the request is rejected. But if you return a 200, then you have the possibility to also add headers to the request. So we use that to determine the upstream, and there are multiple upstreams configured in NGINX, one for each target cluster.
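That subrequest-based routing pattern might look roughly like the NGINX fragment below; the talk doesn't show the actual config, so the routing component address, the `X-Target-Cluster` header, and the upstream names and addresses are all illustrative.

```nginx
# Sketch of the described pattern: an auth_request subrequest either
# rejects the call (401) or returns 200 with a header naming the
# target cluster, which then selects one of several upstreams.
upstream cluster-a { server 10.0.1.10:30080; }
upstream cluster-b { server 10.0.2.10:30080; }

server {
    listen 80;

    location = /_route {
        internal;
        proxy_pass http://127.0.0.1:9000;   # the routing/auth component
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }

    location / {
        auth_request /_route;
        # Copy a header from the 200 subrequest response into a variable...
        auth_request_set $target $upstream_http_x_target_cluster;
        # ...and use it to pick the upstream (e.g. "cluster-a").
        proxy_pass http://$target;
    }
}
```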
D: No, we create one service per cluster, so the name of the cluster actually is... well, maybe this helps. I have a gist on GitHub which I prepared for this meeting. It contains some private information, so... well, it's not really too private, it's a testing environment, but don't post it on Twitter or something. This is ServiceSync, the custom resource that we define in MC-robot.
C: We had a question from Tim in the chat: is this an alternative to the multi-cluster service model, or designed to work with it? From my understanding, the multi-cluster service model we've been talking about is new to you as of KubeCon, so this is an alternative that you've been pursuing.
D: Yeah. So I'm just explaining what we built, and probably you will be able to provide some input on how we could replace this, or how we could learn from each other. Of course, this is a very custom-built solution; your solution will be way better and more generic, and I would very much like to migrate to an open-source, community-built approach. But maybe it looks a little bit the same.
D: So let's take a look. If you go to the gist and you go to line 33, you see a status, which is of course not in the CRD resource in the MC-robot repository, but it shows what is actually saved. The way it works is that it first updates this resource based on the publishes of all the other clusters, building a list of clusters and a list of services in those clusters, with endpoints and port definitions. Once it has done that, the state is stored in that object.
D: It can then ensure that the services are created according to this object. So it's a two-step approach, to ensure that if the pub/sub goes down, that is no problem. And for each of those clusters, services are created, which you can see below, where I got one of these services from, in the second file.
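A minimal sketch of that two-step flow (all names are hypothetical; the real MC-robot is written in Go): incoming publishes only update a stored status object, and a separate step derives the desired per-cluster Services from that status, so a Pub/Sub outage leaves the last known state intact.

```python
# Hypothetical sketch of the two-step sync described above.
# Step 1 records remote cluster state; step 2 derives per-cluster
# Service objects from the recorded state, never from the live feed.

def record_publish(status, message):
    """Step 1: fold a pub/sub message into the stored status."""
    status["clusters"][message["cluster"]] = {
        "nodes": message["nodes"],
        "services": message["services"],  # NodePort services and their ports
    }
    return status

def desired_services(status):
    """Step 2: one ClusterIP Service (plus endpoints) per remote cluster."""
    out = {}
    for cluster, state in status["clusters"].items():
        for svc in state["services"]:
            name = f'{svc["name"]}-{cluster}'
            out[name] = {
                "endpoints": [(node, svc["nodePort"]) for node in state["nodes"]],
            }
    return out

status = {"clusters": {}}
record_publish(status, {
    "cluster": "cluster-b",
    "nodes": ["10.0.2.10", "10.0.2.11"],
    "services": [{"name": "api", "nodePort": 30080}],
})
print(desired_services(status))
```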
D: There is a service, which is a ClusterIP service, targeting endpoints which are below, and those endpoints are also outputted in the list.
D: Those reference the hosts of the other cluster directly via the NodePort, because they are in the same VPC network. That was the constraint we were working with: that they are indeed in the same VPC network.
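The Service-plus-Endpoints pair being described might look roughly like this (names and addresses are made up for illustration): a selector-less ClusterIP Service whose manually managed Endpoints point at the remote cluster's node IPs on the NodePort.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-cluster-b          # one Service per remote cluster
spec:
  ports:
  - port: 80
    targetPort: 30080          # the remote cluster's NodePort
  # no selector: the endpoints are managed by the sync controller
---
apiVersion: v1
kind: Endpoints
metadata:
  name: api-cluster-b          # must match the Service name
subsets:
- addresses:
  - ip: 10.0.2.10              # node IPs of the remote cluster
  - ip: 10.0.2.11
  ports:
  - port: 30080
```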
We had a short period with Philips Hue where we migrated from a non-VPC-network cluster to a VPC-network cluster, and there we actually needed to span the public internet, because of legacy networking on Google.
D: You cannot peer that with VPC networking in any way, so we did that over the public internet. But that was a different solution, before this one, and it was really not ideal; we really needed to get away from legacy networking. So that's why we did that as a one-off. Yeah, that's it. And then we have a different API, or API reader.
D: It does, yeah, but the services are similar between each cluster. We can target specific services, though, because we need to reach a specific back end. You could look at the Hue bridge as being connected to one of the back ends, and we need to point to the correct one.
C: Okay, and a back end in this case, is that just a per-cluster service, or is it like a specific pod?
D: We didn't know about the pod IPs, that you could actually use pod IPs from different clusters. You need to align the CIDR ranges and make sure that it all works correctly, and we didn't know that, so we didn't intend on targeting specific pods from other clusters yet. But it would be possible, and I was actually already investigating whether we could build this whole thing with Envoy, to have one Envoy cluster per pod, actually, which you can target via load-balancing subsets.
D: And then do the load balancing based on metadata, which we assign in an HTTP filter with some WebAssembly.
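For context, Envoy's subset load balancing keys endpoint selection on per-endpoint metadata; a fragment like the one below is one way that could look. The cluster name, label key, and addresses are invented for illustration, not taken from the project.

```yaml
# Illustrative Envoy cluster using load-balancing subsets: each
# endpoint carries a "pod" metadata label, and routes can then pick
# the subset matching a specific pod.
clusters:
- name: bridges
  connect_timeout: 1s
  type: STATIC
  lb_subset_config:
    fallback_policy: NO_FALLBACK
    subset_selectors:
    - keys: ["pod"]
  load_assignment:
    cluster_name: bridges
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: 10.0.2.21, port_value: 8080 }
        metadata:
          filter_metadata:
            envoy.lb: { pod: bridge-backend-0 }
```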
C: So I guess another approach is, with multi-cluster services... I don't know if you've had a chance to take a look at the KEP yet and see how it works for you.
D: Yeah, the... here.
C: I'll link it in chat, but it sounds like what you're really looking for is kind of like a multi-cluster headless service, where you actually get a name for each pod. Is that true? Or am I...?
D: Kind of. There are about 500 pods of these back-end services; that would be quite a long list for DNS. Well, still possible, of course.
C: So one option that comes out of that KEP is that an implementation would basically create a headless service for you that would just let you address each pod directly from each cluster. But I'm curious if you could take a look, since that's a direction we've been thinking on heavily, and let us know how it fits your model and, just as importantly, how it doesn't: what it's missing and what may or may not work for you.
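For reference, a single-cluster headless Service (one with `clusterIP: None`) is what gives you DNS records for the individual pods rather than a single virtual IP; the KEP discussion here is about extending that idea across clusters. A minimal example, with made-up names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bridges
spec:
  clusterIP: None        # headless: DNS resolves to the pod IPs directly
  selector:
    app: bridge-backend
  ports:
  - port: 8080
```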
D: What would this service... so it's a headless service; what would it offer other than that you have a whole list of endpoints? Because I think with a headless service you still need to do the load balancing yourself, of course.
D: Currently it's load balanced, because it's targeting a service which itself determines the pod of a different service, basically. So there's a layer in between within the cluster, which is maybe not ideal and which you would want to remove. Part of this complexity that we want to remove is Redis: we have a pod which receives an HTTP request, and it stores it in a queue.
D: It sends a publish to a topic, because it knows which pod is connected to that bridge, or, the other way around, which bridge is connected to the pod.
C: Got it, okay. So the API proposed in this KEP could help you basically create that first link, to that routing service.
D: Yeah. The whole point of the complexity in our system is that you have a dedicated, specific WebSocket, a persistent WebSocket connection, to the bridge. You need to reach that connection somehow, and there are too many clients and too many pods to manage that from every cluster. Basically, it would be much leaner to first send it to the correct cluster, also because those clusters would not be neighbors; they would be in different regions.
D: So it's mostly to reduce the latency in the general case. But then you have the cases where partners of yours are in a specific zone: if they are always in us-east-1, then you always need to proxy to a different cluster. That's a disadvantage, but we cannot prevent it. Ideally, the cloud function of a partner is close to the partner, the bridge itself.
C: Okay, cool, yeah. I would really love your input on that API proposed there, on whether it can basically make your life easier, and especially where it hurts: what's not working, what could be added. Yep, I'll...
A: It's interesting to see from my perspective, because, like you, Herman, I also built sort of a bespoke solution, also called ServiceSync for that matter, and it's interesting to see another take on it, for sure.
B: All right, well, I'll look forward to hearing, Herman, what your and your team's thoughts are about the KEP, and how you might view the robot that you've built as relating to it.
B: If we're talked through on that one, I think the next one on the agenda... you had the Work API KEP on the agenda? Do you want to talk about that?
F: Oh, nice. So yeah, for some background: this idea was originally generated by Valerie in a blog post, and we did some prototyping on that.
F: Based on that idea, we have a doc to define what the API should be like, and the use cases. In this doc we basically have some terminology. In this scenario you would have a hub cluster, where you create the Work APIs, and then the manifests defined in the API will be applied in the multiple clusters. There's another term, managed cluster; we call it a managed cluster in this doc, but there are also some alternative terms, like spoke cluster.
F: There are some discussions about whether a push or a pull model should be used in a multi-cluster context. In a push model there are some limitations, including security: there will be more exposure of the API servers of the managed clusters. To remedy that problem, there is also the option to use a pull model, where we have an agent on the managed clusters that watches workloads on the hub, then fetches the APIs and applies them on the managed cluster.
F: But the Work API itself doesn't really constrain the push or pull model: the controller could reside in the managed cluster as an agent, or reside in the hub cluster and push the manifests to the managed cluster.
F: So this is the API that we have been developing in the prototype repo. It is quite simple: in the spec we only have a list that defines the manifests that are going to be applied onto the managed cluster, and then there is the status.
F: So this is a manifest, and it is just a raw extension, which can hold any type of resource. In the status we have two types of conditions: one is the conditions of the Work, and the other is the condition for each manifest, which is called the manifest condition. In the manifest condition there is a resource identifier, to identify which resource has that condition.
F: The resource identifier includes the group, version, kind and resource, and the namespace and name of the resource, and it also has the conditions for each manifest.
F: So in the current thinking, for each manifest we record the conditions of the resources, for each resource defined in the Work API. This is a simple example: you could have a Work defined in the hub where, in the manifest, we define that we want to apply a ConfigMap onto the managed cluster, and after the controller has applied the manifests, there will be conditions for each manifest, as in this example.
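A Work object of the shape described here could look something like the sketch below. The field names follow the structure discussed in the meeting (a manifests list in the spec, per-Work and per-manifest conditions with a resource identifier in the status), but treat the exact group/version and schema as illustrative; the authoritative definition lives in the doc and the prototype.

```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1   # illustrative group/version
kind: Work
metadata:
  name: example-work
  namespace: cluster-a          # namespace watched by that cluster's agent
spec:
  workload:
    manifests:                  # raw embedded resources to apply
    - apiVersion: v1
      kind: ConfigMap
      metadata:
        name: example-config
        namespace: default
      data:
        key: value
status:
  conditions:                   # conditions for the Work as a whole
  - type: Applied
    status: "True"
  manifestConditions:           # one entry per manifest
  - identifier:
      group: ""
      version: v1
      kind: ConfigMap
      resource: configmaps
      namespace: default
      name: example-config
    conditions:
    - type: Applied
      status: "True"
```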
F: And based on the discussion previously with Valerie, we have some edge cases, and I list some known edge cases in this doc as well. For example, it is possible that a resource you want to apply on the managed cluster has some immutable field, or a field that has been set by the API server or by some defaulting function, and the Work controller...
F: But in that case you may not want the controller to change the replicas back. So these are some edge cases that we are thinking of, and there are some possible solutions. For example, you can use server-side apply to check whether a certain field in the resource has been changed by kubectl or by an automation controller. Anyway, so this is fairly simple.
B: I saw a question, I think it was from Andrew, in the chat, about how this differs from the SIG Apps Application CRD. Trojan, do you want to talk about that?
B: That's similar to my own impression, which is that, the last time I looked at it, the Application CRD was really about expressing a grouping or relationship, but not about carrying information about the manifest content, or anything about whether any manifest had been applied and what state it was in. So I guess that's my own version of that. Trojan, maybe...
A: Oh no, it's Richard.
B: Oh, I'm sorry.
A: No worries. I guess... again, I might be misunderstanding this, but what I understand is that the manifests, you know, the Deployment and the Services and ConfigMaps, will be wrapped up into one big CRD, one big resource definition.
A: I guess, what is the difference in working model when all the manifests are packed up into one, versus if they're decomposed and just semantically linked via something like the Application CRD?
F: Yeah, okay. So I think the Work, what you define, is like a list of the content that you don't apply on this cluster, but on another, remote cluster.
A: I see.
A: Right. Because, to maybe add some color to my question: we are also looking at these manifests as artifacts, right, deployable artifacts. Right now we primarily capitalize on the fact that a Helm chart is just, you know, a tar, and so we push and pull these tars into our repositories during our build processes and our deployment processes. And I guess this would replace that deployable artifact when we're talking about defining which cluster this workload would belong in, or multiple, for that matter. Is that kind of true?
F: Could you just repeat the last part? I'm not sure I followed it.
A: Right. So I guess, with our current model, it's the Helm chart being pushed and pulled as a deployable artifact, and in this case it would be the Work resource definition that would be traded to or from the managed clusters, if you will. Yeah.
B: So I think, if I'm hearing you correctly, you're contextualizing the Work API as a unit of distribution of a collection of things that are supposed to go together, and I think it's not dissimilar in concept to a Helm chart, in the sense that a Helm chart is a collection of things, right?
B: What I think we should be clear about is that none of the people who have worked together in this group around the Work API so far have intended to replace Helm charts or necessarily compete with them. I personally view this as: you could distribute any level of thing this way. When I say any level of thing, I mean at any level of detail: you could absolutely pack WordPress into a Work API.
B: If you wanted, you could also distribute a resource that just says "I want there to be a WordPress": say there's a WordPress operator running somewhere and it's sensitive to a WordPress CRD; you could distribute a WordPress operator-type CR in a Work API. So it's not intended to compete at any particular level of packaging.
B: In my own mind, at least, it's just about distributing collections of things and accounting for their status, as far as whether they've been applied or not. How about you, Trojan?
C: Yeah, we had a good question, I think from Hector, and I don't know if it's really been addressed here yet, but: how would you prevent an agent running on a specific cluster from watching resources that do not belong to it? So is there a concept of targeting specific clusters, or how is it actually disseminated?
F: Currently the Work itself is a namespace-scoped resource, and the agent running on a specific cluster needs certain permissions to watch the Work APIs in that namespace, to get the Work.
F: So we haven't put the cluster concept in the Work, because I think maybe we could have them separately.
C: Okay, I think that makes sense. I like scoping at namespace; I think that's pretty consistent with other things we've talked about, like: if one cluster has a namespace and another cluster has the same namespace, they should both have access to the same things for that namespace, right? That's kind of where we've been going. But that goes to another question I had, which is about the manifests, then: is there an expectation that every manifest, or every bit of work in a manifest, should all belong to the same namespace? Is that how you're thinking about this?
B: Yeah, I think there might be a mismatch. So I understood, Trojan, that you were referring to the namespace within the hub as a container of Works that were scoped to a cluster.
F: Oh yeah, yes.
B: Jeremy, did you have the same impression?
C: I'm sorry, I'm trying to map my mental model here. So a namespace, in the Work API, then maps to a cluster, but not to a namespace? Is that what I'm hearing? Like, each cluster would have its own Work namespace?
F: I think we could, right. So I think, if multiple clusters want the same Work manifests to be applied, they could all watch the same namespace; it's just that all the agents on those clusters need to have the same RBAC on the hub.
B: Well, the status that's in there currently is about the status within a single cluster, and this is sort of one area where I think Valerie and Trojan have tried to work backwards from the interface between this API and a single cluster, but without trying to characterize something like cluster registration, or having to address a particular cluster in the API surface.
B: So a next step that I personally anticipate here is working further backward to...
C: That does, as a follow-up, seem worth thinking about, and maybe it just needs more digging into, but I'm very curious what that Work namespace's relationship is to the namespaces of the manifests themselves. Like, is the expectation that a cluster has permission to watch namespaces foo and bar, but the Work contained in those namespaces is entirely unrelated to them? How does that map in an intuitive way? And, like, what...
C
If
what
if
a
cluster
doesn't
actually
have
namespace
bar
and
you
can't
run
anything
in
bar,
but
it
watches
work
sent
to
bar
and
it
deploys
that
work
in
foo?
Like
you
know,
it
creates
this
kind
of
second
layer
of
namespace
matching
that
I
think
you'd
need
some
kind
of
mapping
between.
B: Yeah, I think there are probably a lot of different scenarios of that type to talk about. So, for example, it's possible for me to imagine that you have multiple agents watching particular namespaces within a cluster that's under management by the Work API, and maybe those individual agents have permissions that are partitioned by namespace, in the most extreme case where maybe they can only write into the namespace that they're deployed in; but maybe they have different degrees of permission. Versus...
B
Maybe
you
only
want
only
run
one
agent
per
cluster.
It
seems
like
the
kind
of
thing
that
there
might
be
differences
in
how
different
operator
like
human
operators
might
want
to
characterize
their
like
deployment.
Topologies
right,
like
push
versus
pull
single
agent,
pulling
multiple
agents
pulling,
etc,
etc.
C
Yeah-
and
I
don't
know
that
these
things
necessarily
need
to
be
prescribed
by
an
api
either,
but
I
think
it
would
be
helpful
to
kind
of
add
to
this
doc
like
basically
what
that
namespace
mapping
permission
kind
of
looks
like
at
a
high
level.
G
Okay,
yeah.
I
have
another
question:
how,
if
you
are
a
handling
role
manifest?
G
F
F: So I think the question is: if there are some other actors that change the resources that have been applied by the Work, how would the Work controller revert it? Is that the correct question?
G: No, no, no. It's more or less similar to what you have now with KubeFed, where you have overrides: you can define overrides of property values for a specific cluster. So you share the same resource, and you just say, I want to change the image tag of the deployment, or something like that, but only for the cluster named A, in this case. So I'm just wondering, because right now in this model you're having raw manifests.
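The KubeFed override mechanism being referenced looks roughly like this: a shared template plus a per-cluster patch. The resource names, clusters, and image tags below are invented for illustration.

```yaml
# Illustrative KubeFed-style override: one shared Deployment template,
# with the image field overridden for a single member cluster.
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: example
spec:
  template:                       # the shared Deployment spec
    spec:
      replicas: 3
      template:
        spec:
          containers:
          - name: app
            image: example/app:v1
  placement:
    clusters:
    - name: cluster-a
    - name: cluster-b
  overrides:
  - clusterName: cluster-a        # only cluster-a gets the new tag
    clusterOverrides:
    - path: "/spec/template/spec/containers/0/image"
      value: example/app:v2
```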
F: Okay, so the question is about overrides. So I think that in KubeFed there are APIs where you can define that in one cluster you override some field in the deployment, and in another cluster you override it to another value.
F: But I think it is possible that we could build another layer on top that has the overrides and uses the Work to finally distribute the workload.
B: Thank you. Yeah, I'm plus one on making substitution and parameter-type stuff something that happens at a higher layer, and focusing, at the layer described here, on just the most essential parts: what got applied, and what status does it have. But agreed: overriding parts of this seems like it naturally sits a layer above, when it comes to scheduling.
B: So, Trojan, for next steps it might be worth having a discussion in the SIG about what use cases matter to people around scheduling, and what use cases matter around substitution, too. And then also...
I think it would be interesting to dive into some of the edge cases and what kind of mitigations we see for them.
B
Okay,
it
sounds
like
we
might
be
talked
through
for
now,
unless
someone
was
getting
ready
to
say
something.
If
so,
go
ahead,.
B
All
right:
well
thanks
everybody
for
for
joining
us
today,
thanks
to
to
herman
and
trojan
for
presenting
appreciate
it
we'll
see
you
next
week.