From YouTube: Kubernetes SIG Multicluster 26 July 2022
B
Yeah, but it does feel pretty hot up here, and there's a real lack of air conditioning in this part of the world. It doesn't get below 70 at night, and I think it's going to get worse these next couple of days. So all the windows are open and I am definitely doing the shorts life until then.
D
Yeah, I don't know if you saw the announcement, but at least in our Seattle Google office they're going to preemptively run the AC to try to get ahead of it. I don't know why, but that's the thing that's made me go, whoa, we have to take proactive measures.
D
This is not good. And I remember a long time ago I was living in a condo and they put up notices saying that buildings in Seattle are not built to go over 88 degrees. So even if your place was comfortable below 88, beware when it crosses 90. And it's true: the AC just didn't work anymore or something. It was yikes.
A
All right, well, we've got a few things on the agenda, so why don't we get started? I will say that in our last meeting Jeremy and I took an action item to articulate a decision tree around KubeFed. We spent some time talking about it, but we're not ready to give the decision tree yet, so we'll shoot for the next meeting in two weeks. Okay, I think the first person on the agenda is Peter, around StatefulSet migrations. Go ahead, Peter. Do you need screen share?
E
Okay, yeah. So today I want to talk about this concept of StatefulSet cluster migration, how it relates to SIG Multicluster, and a KEP that I've proposed for enabling this use case. Next slide, please.
E
So what is a StatefulSet migration and why do we care about it? The goal of this idea is to allow the user to move a StatefulSet across clusters. Because of data gravity, it's often challenging to move a StatefulSet. Stateless workloads are fairly easy to move: you can use a partial migration or direct a percentage of traffic to a different cluster. But for a StatefulSet, if your database or stateful application relies on having a quorum available and data available, it's difficult to migrate seamlessly.
E
Some of the motivations here: you could reach scalability limits in a cluster. Say you have a multi-tenant cluster and you're reaching a particular intrinsic limit of Kubernetes, and you need to move particular applications out in order to decrease the load on that cluster.
E
Maybe one of the applications that should be moved is a StatefulSet. Another motivation is tenant application isolation: say a customer or user on a shared cluster wants to move an application out to an isolated cluster.
E
Sometimes new features are only available in a new cluster. It really depends on how you're initializing and running your clusters, but sometimes it's just not feasible to make a new feature available to an existing cluster. And the final motivation is providing more granular control over upgrades, rather than performing in-place upgrades where you upgrade the nodes in the node pool and upgrade the control plane nodes.
E
So that's a solution that could be used there. The other aspect of migration is persistent volume and persistent volume claim migration: the underlying data really needs to be moved. This could be accomplished either by accessing the same underlying persistent storage across clusters, if it's available to both clusters, or, in the case where clusters exist on different continents or in different data centers, the data could be replicated.
E
So the data could be copied, transferred over the network, and then rehydrated as a persistent volume in cluster B. The final piece is pod and replica orchestration: pods need to be migrated across clusters in a safe, available way. You can think of this as pods being moved over with respect to pod disruption budgets, or pods being moved one at a time, similar to how StatefulSets get updated in a rolling update.
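
As a rough illustration (not from the KEP or the demo), this is the kind of disruption constraint such an orchestrator could honor while draining replicas from the source cluster; the names and numbers are made up:

```yaml
# Hypothetical PodDisruptionBudget for a six-replica quorum-based app:
# an orchestrator respecting it would never take more than two replicas
# down at once while migrating pod by pod.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: redis-pdb
  namespace: default
spec:
  minAvailable: 4          # keep at least 4 of 6 replicas running
  selector:
    matchLabels:
      app: redis
```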
E
So this KEP that I proposed, KEP-3335 (the link is there), focuses on the mechanism for the third problem: this idea of being able to take a StatefulSet and control a subset of its replicas, to enable moving those replicas to a new cluster. Next slide, please. There are some caveats here in terms of what the requirements are, and the dependencies that a stateful workload needs to fulfill before it can be migrated.
E
So
there's
a
few
properties
that
I've
scoped
out
here
so
dynamic
membership.
You
would
need
a
way
for
applications
to
dynamically,
add
or
remove
peers.
Some
applications,
don't
natively
support
this
and
require
a
lot
of
manual
configuration
or
maybe
some
wrapper
scripts
around
them
to
add
and
remove
peers.
E
So it's not natively supported by the application; it requires an external controller or operator to modify configuration. Number two, flexibility in network identity. The way that multi-cluster services work for headless services is that the hostnames of particular pods have a cluster membership as part of the domain name.
E
For example, this could be done through multi-cluster services DNS that spans clusters.
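
For context, a sketch of what that looks like under the MCS API's DNS specification (KEP-1645); the cluster and service names here are illustrative:

```yaml
# A pod backing an exported headless service resolves at a name that
# embeds its cluster membership:
#   <pod-hostname>.<cluster-id>.<service>.<namespace>.svc.clusterset.local
# For example, pod redis-0 in a cluster with ID "red" would resolve at:
#   redis-0.red.redis.default.svc.clusterset.local
```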
Okay, next slide, please. Just going back to the core problem here: why isn't this solvable today, and what primitives prevent us from doing it? The issue is that if you have two clusters, you have two control planes. On the left there's one StatefulSet controller and on the right there's another, and StatefulSet controllers, given a set of replicas, attempt to create those replicas from 0 to n.
E
This goes over some of the proposed changes; there's a link in the slides to the KEP if anybody's interested. The basic premise is adding a field to StatefulSet that enables a start and an end ordinal to be defined, so you can take a subset of a StatefulSet's replicas and run them on another cluster. And because you have persistent volumes, which can be migrated across clusters, and networking that spans clusters, the application keeps running.
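
A minimal sketch of slicing one logical StatefulSet across two clusters this way. The field names below follow the spec.ordinals form the KEP ultimately took (alpha in Kubernetes v1.26 behind the StatefulSetStartOrdinal feature gate); the exact shape presented at this meeting may have differed:

```yaml
# Cluster "red" keeps ordinals 0-2.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 3
  ordinals:
    start: 0             # pods redis-0, redis-1, redis-2
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7
---
# Cluster "black" takes over ordinals 3-5 of the same logical application.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 3
  ordinals:
    start: 3             # pods redis-3, redis-4, redis-5
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7
```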
E
Eventually this set of replicas across the different clusters would reach a stable state. You maintain availability by making sure there's sufficient quorum, excluding the replicas that you've moved, and then once the application is stable you can proceed to move other replicas.
E
So in this case, say for example you're running a StatefulSet that includes Redis and you want to migrate it across clusters. You can imagine that in the cluster you're migrating from, you install this cluster migration custom resource and specify the clusters you're migrating to and the StatefulSet you want to migrate.
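
That custom resource was a proof of concept built for the demo; reconstructed from the description here, it might look something like this (the API group, kind, and field names are all hypothetical):

```yaml
# Hypothetical proof-of-concept resource: tells a migration controller
# which StatefulSet to move and between which clusters.
apiVersion: migration.example.com/v1alpha1
kind: ClusterMigration
metadata:
  name: redis-migration
spec:
  statefulSetRef:
    name: redis
    namespace: default
  sourceCluster: red
  targetCluster: black
```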
E
A controller with authentication to both clusters could then physically migrate the StatefulSet using the KEP's specification, as well as move the underlying resources or dependencies it has: moving persistent volumes in orchestration with moving the StatefulSet replicas, or moving dependencies of the StatefulSet like ConfigMaps or Secrets. And as long as there's networking spanning both clusters, the StatefulSet can continue to operate as one logical application.
E
Okay, next slide. Here I'm just reiterating the core problem, showing what one stage of moving one replica would look like. On the left side you have a StatefulSet with three replicas. Next slide, please. The logical flow would be removing a replica, so scaling down the StatefulSet in the cluster on the left. Next slide. Then bringing that replica back up, using this new StatefulSet specification, in the right cluster. Next slide, please.
E
Okay, so I'll just explain the premise of what's happening here; there are a couple of boxes. Let's start with the top left. Here we have a Redis cluster running six replicas as a StatefulSet. Moving one pane to the right, we have the black cluster, and we're calling this "red-black."
E
Probably a lot of you are familiar with the concept of blue-green, but the connotation there is that it typically involves spinning up two instances of an application and then cutting over traffic. In this case we're migrating piecemeal, pod by pod, so we're referring to it by a different name, red-black, to distinguish the concept.
E
One pane over, we see the red cluster PVCs. These are persistent volume claims that the StatefulSet is using in the red cluster. One more pane over are the black cluster PVCs.
E
These
have
not
been
populated
yet
so
these
will
be
those
these
will
be
migrated
over
as
part
of
this
orchestration.
E
If we go one pane down, we can see the IP addresses associated with the headless service that this cluster has. We can see one, two, three, four, five. There are only five; maybe one is just not showing up. But these are 10.104 IP addresses associated with the red cluster, and we'll see them change to 10.8, I think, which is the CIDR range for the black cluster.
E
Going one pane down, this is just a memtier benchmark making requests against the headless service. We're just trying to showcase the fact that availability is maintained; there's a bit of a caveat I'll explain later. And then finally, the bottom pane is the custom resource, the ClusterMigration object. It's the proof-of-concept specification I showed earlier, and it contains details about the StatefulSet to migrate, the cluster to migrate from, and the cluster to migrate to. Okay.
E
So
we
can
play
the
video
here
and
just
kind
of
I'll
talk
to
the
results.
So
we
see
the
stable
sets
running
in
the
red
cluster.
We're
starting
mem
tier
here
we're
just
creating
that
cluster
migration
object.
That
specifies
you
know,
let's
move
pods
over
or
move
the
staple
set
over.
So
can
you
pause
it
for
a
second,
the
right
top
right,
most
pain?
We
just
saw
these
this
block
cluster
pvcs
get
created.
E
This
was
the
underlying
data
moving
over,
so
this
was
created
very
quickly,
but
the
underlying
data
or
the
disks
that
was
backing
the
stable
set,
those
weren't
actually
duplicated.
What
was
moved
over
was
the
references,
so
the
persistent
volume
of
this
persistent
volume
claim
those
resources
that
actually
reference
the
same
underlying
disk.
Those
were
just
copied
over,
so
the
current
status
is
lost
because
those
objects
were
just
created,
but
we're
going
to
see
the
precision
volume
or
the
yeah,
the
volume,
controller
and
kubernetes
start
to
reconcile
those
objects
and
mark
the
status
as
bound
okay.
E
And
is
he
kind
in
them?
Second,
bottomless
pain,
there's
a
connection
error,
so
this
does
happen
when
there
is
connectivity
lost
to
a
particular
replica,
but
typically
it
just
it's
a
retriable
error.
A
client
can
just
communicate
with
another
peer
that
it
has.
E
Okay
and
yeah,
we
probably
don't
need
to
watch
the
entire
thing
or
you
can
probably
skip
to
like
maybe
70
through.
Can
you
just
catch
the
tail
end
here?
E
Okay.
So
now
we
started
to
see
that
multi-cluster
service
been
updated,
so
we
we
see
10.8
ip
addresses
being
reflected
there
and
I'm
just
restarting
memptier,
there's
kind
of
a
it's
kind
of
a
deficiency
of
the
mensure
client.
It
only
caches
ips
when
it
initializes,
rather
than
fetching
ips
dynamically.
E
But
yeah
that's
kind
of
the
demo
there,
so
eventually
migration
is
complete.
Now
we
have
the
application
running
on
the
new
cluster
and
it's
just
up
to
the
admin,
the
kubernetes
administrator,
the
application
administrator
to
kind
of
clean
up
those
resources
and
now
maintain
that
application
in
the
black
cluster.
E
Okay,
so
yeah,
I
think
that's
my
last
slide
here,
but
just
kind
of
want
to
showcase
this
use
case.
You
know
we
think
we
we
think
it's
beneficial
for
having
an
alternative
to
an
in-place
upgrade
or
having
to
you
know,
maybe
potentially
have
control
playing
downtime
or
you
know,
take
the
risk
of
rolling
forward
a
control
plane
with
having
less
control
about
rolling
it
back.
E
So
having
this
option
to
kind
of
migrate,
your
workload
workload
by
workload
with
a
staple
set,
you
know
enables
an
administrator
with
more
flexibility
to
move
workloads
around
across
their
set
of
clusters.
A
[question not captured in the transcript]

E
Yeah, that's a really good question. It definitely is affected by connectivity issues, or potentially resource constraints in the new cluster you're migrating to. The orchestration in the demo only progresses to moving more replicas if it deems that the system and the application are stable. That means the new replicas have been brought up in the new cluster, and they're running and they're ready, using those liveness and readiness probes.
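
As an illustration of the kind of checks the orchestrator would gate on (the probe commands and values here are examples, not taken from the demo):

```yaml
# A replica is only treated as successfully migrated once its probes pass.
apiVersion: v1
kind: Pod
metadata:
  name: redis-probe-example
spec:
  containers:
  - name: redis
    image: redis:7
    livenessProbe:            # is the process alive at all?
      tcpSocket:
        port: 6379
      periodSeconds: 10
    readinessProbe:           # is it actually able to serve?
      exec:
        command: ["redis-cli", "ping"]
      periodSeconds: 5
```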
E
So
you
know
in
say
in
the
case
that
there's
complete
disconnectivity
and
all
of
a
sudden
you're
left
with
a
subset
of
or
less
than
majority
quorum
in
the
old
cluster.
You
know,
we'd
just
have
a
network
petition
and
the
orchestrator
would
would
pause
orchestration.
A
What about if you run an upgrade and the ingress of the receiving cluster is disrupted, like one of the hosts running the ingress?
E
That would be handled through health checking of the application. If that dependency causes the application to all of a sudden become not ready, that would be surfaced up to the controller that is moving replicas over. But yeah, I think there is this conflict: if you're performing a migration while an upgrade is happening, you can get into these kinds of situations.
A
All right, well, I don't want to eat up all the time. Thanks, very cool demo. I see Jeremy's got his hand up.
D
Yeah, kind of following on that set of questions: I think there are a lot of different forms of readiness, and questions about what readiness actually means in Kubernetes, right? Pod readiness doesn't necessarily mean the application is functioning properly, and Paul, that's probably what you were touching on with the ingress. If ingress is down, the pods can all be ready, everything's good, you can still talk to the cluster, but it doesn't mean anyone can talk to the new pods.
D
I
guess
what
would
it
take
to
plug
into
more
of
those
more
of
the
facilities.
Kubernetes
has
like
pod
disruption,
budgets
to
kind
of
have
have
a
little
bit
more
awareness
at
that
controller
level
about
what's
going
on
and.
D
I
mean
awesome
demo,
you
know
I'm
really
excited
about
this,
and
I
think
this
is
definitely
the
kind
of
problem
that
this
sig
needs
to
be
solving.
You
know
expanding
on
what
we've
done,
with
multi-cluster
services
figuring
out
how
to
expand
that
to
stateful
workloads.
But
what
do
we
need
to
do
to
make
this
a
little
bit
more
generally
applicable.
E
Yeah,
that's
a
good
point,
so
I
kind
of
think
there's
there's
kind
of
two
ways
like
either
bottoms
up
through
pod
readiness
gates.
You
know
tacking
on
to
like
existing
written
skates,
that
pods
have
to
kind
of
ensure
the
health
of
a
pod
holistically
rather
than
just
the
application.
Is
it
like
the
binary
itself?
The
container
itself
is
running
and
then
there's
kind
of
this
like
overall
health
of
an
orchestrator.
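
For reference, pod readiness gates are an existing Kubernetes mechanism: a pod is only marked Ready once an external controller sets a named condition on it. A minimal sketch, with a made-up condition type:

```yaml
# The pod stays NotReady until some controller patches the
# "migration.example.com/peer-healthy" condition to True in its status,
# letting cluster-level health feed into pod readiness.
apiVersion: v1
kind: Pod
metadata:
  name: redis-0
spec:
  readinessGates:
  - conditionType: "migration.example.com/peer-healthy"
  containers:
  - name: redis
    image: redis:7
```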
D
Cool, yeah. I also don't want to eat up all the time, and I know we have more on the agenda today, but let's keep working on it.
B
Cool. All right, I had two things I wanted to mention. One was that last meeting we had talked about id.k8s.io and our need to rename it, and there had been some mailing list call-outs for brainstorms and then a survey. So I wanted to go over that a little bit.
B
I have the survey results here, which are full of divisiveness, or, you know, no clear winner, though this problem is very nuanced, so I think we knew that might be the case. The way I organized the survey, just as a quick overview, was to ask people for their top preference, which is extremely split, out of the 12 options that were provided from the brainstorming sheet, and then also to ask for individual feedback, which was optional.
B
So not everybody provided individual feedback, but people could give comments and also score each individual item: whether they love it, like it, hate it, or are undecided.
B
Personally, when it comes to API naming, I trend very much toward avoiding hatred, so I'll highlight that hatred is orange, and just how orange things can get, as a quick visual orange check. But in my reading of this (and I'll publicize all the results too, so people can look at it some more), in the category of things that are least hated we have cluster.clusterset.k8s.io.
B
Yeah, there's not a lot of love here. Well, here's a blue one.
You love it? Yeah, okay. So anyway, I kind of wanted to highlight, and I'm also pulling a little bit from all the comments here, because it's a lot to go over all at once, that there's some interest in this idea of property.scope, a general suffix, as the trend.
B
So there was something to recommend the cases where the more specific property details come before the scoping mechanism in the naming convention, for example clusterid.clusterset.k8s.io. There's also some interest in keeping it simple, which I agree with in principle, but I know we'd have a hard time with that here. And then one other thing I want to highlight, skipping over cluster.clusterset.k8s.io: the other least-hated option, clusterid.k8s.io. One thing I'm sort of worried about is that I don't think it gets across what we wanted.
B
The thing we wanted was that this ID is relevant to this cluster while it's in a cluster set, or at least that it has specific restrictions while in a cluster set, and I don't think clusterid.k8s.io really improves over id.k8s.io in that sense. So while it was widely liked, I don't know that it achieves our goals, so I'm prone to dropping it. So I'm leaning, based on
absorbing all of this into my brain, toward cluster.clusterset.k8s.io and clusterid.clusterset.k8s.io. They're not scored perfectly near each other, but up here the hyphenated stuff was less beloved; I don't know if that's a hatred of hyphens or anything else. So I wanted to give kind of this overview.
B
I know it's kind of complex. I think I can publish this data, and then to some degree it's a bit up to API review; Tim Hockin is my API reviewer for this one. But I would like to open the floor to any more observations, now that I've given a bit of a preview of what the raw data looks like, and to anything else on people's minds here.
D
I definitely agree with you about clusterid.k8s.io. I think that's just a more verbose version of id.
D
Just as confusing. Although I think the fact that people like it suggests that people still want id.k8s.io, or that we do need to solve the ID situation. Yeah.
D
I don't think we can use id.k8s.io, because we're not trying to solve general-purpose ID yet, but sure, yeah. I just see this as more of a suggestion that we need to. Personally, I lean the way you lean; I think it works, and I don't think I'm super opinionated. But we were going to have clusterset.k8s.io be the cluster set ID, so this seems in line with that, right?
D
If clusterset.k8s.io is the cluster set's ID, then cluster.clusterset.k8s.io would be the cluster ID within it. Those are my two cents.
A
It seems like cluster.clusterset.k8s.io is the least maligned.
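
For reference, as a cluster property that option would look something like the sketch below, assuming the ClusterProperty kind from the SIG's about-api project (the API version and the property's value are illustrative; the name was still provisional at this point):

```yaml
# Each cluster in a cluster set would carry its identity as a property;
# the property's name is the API name being voted on here.
apiVersion: about.k8s.io/v1alpha1
kind: ClusterProperty
metadata:
  name: cluster.clusterset.k8s.io
spec:
  value: red-cluster     # illustrative cluster ID
```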
B
Yeah, I didn't pull it all the way; I started doing some VLOOKUPs in the background that had some null averages.
B
They didn't resolve in time. But yeah, in terms of massive orange amounts, clusterset-id: we had talked a little bit about that maybe being too specific, and then again, the hyphenated clusterset-dash-id received a lot more hatred in terms of orange share, it seems.
D
Yeah, I think this would agree: nobody hates it. People either don't care or they like it. That's fine.
B
Cool. Well, as mentioned, my personal opinion for API ergonomics when we're discussing naming is least hatred, so I trend that way in my personal rubric. But I will publish all of this on the mailing list, so people can take a look some more and say something there, and I'll provisionally suggest it so that Tim and I can talk about the next round of API review in this context.
B
Yeah, I know, I didn't prepare a deck. I was so nervous, and then I got to share Peter's deck anyway, so I'm happy. What a relief, thank god. Okay, the other thing I want to mention is the Gateway API subproject meeting. I'm not sure if it's the project meeting, or whether there's been some talk of spinning up a new meeting or not, but anyway.
B
The important part I wanted to say is that Gateway API has a doc about using the Gateway API for east-west traffic, and they're actually discussing it today at 3 p.m., which is on the SIG Network calendar. This is an exploration doc. There's a lot of stuff in here, and in particular it's an interaction between the Gateway API from SIG Network and the larger service mesh community.
B
There are a lot of requirements in here; the list of requirements was in here somewhere... oh, here we go: some use cases that have a lot more detail about the type of information service meshes want to share for east-west traffic, to enable their more sophisticated features.
B
But I just thought it was probably important to bring up to this group, and the MCS API is not mentioned in here at all, so I wanted to put this on the radar. I'm going to stop by this meeting later if I can and take a look at this some more, but I'd encourage anybody else to take a look too, if you didn't see the Slack chat or otherwise hear the rumblings about it.
C
And I'm one of the co-authors. Oh hi! Excited to see this on the agenda and to hear that you plan on coming. I would very much love for it to include perspective on potential applicability to ServiceImport and ServiceExport type use cases as well (see the sketch below), so that's exciting. And it is planned to be a working-group-duration type initiative, a subproject of Gateway API, so it'll be a recurring series of meetings.
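
For anyone newer to the MCS API objects mentioned above, a minimal sketch (the API group and kind are from the MCS API, KEP-1645; the service name is illustrative):

```yaml
# Creating a ServiceExport alongside an existing Service marks it for
# export to the cluster set; the MCS implementation then surfaces a
# corresponding ServiceImport in the other clusters.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: redis        # must match the name of the Service being exported
  namespace: default
```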
C
I linked the meeting notes doc in the notes here. It's on Tuesdays, at alternating times to hopefully accommodate a wider range of time zones. So, looking forward to seeing folks, anyone who's interested; check whether the times work for you.
B
All right, so that's what I wanted to talk about.