From YouTube: Gateway API GAMMA Meeting for 20230221
A
Hello, everybody. Welcome to the February 21st instance of the Gateway API GAMMA meeting, and that totally just stopped sharing the page I wanted to share. So.
A
Love it. So, yes, this is the February 21st instance of the Gateway API GAMMA meeting. As a reminder, this meeting is governed by the code of conduct, so please treat each other with respect and be kind. This meeting also has an open agenda; Flynn was so kind as to put the link to the agenda in the chat, so please feel free to open that up, and if you've got something you want to talk about, please add it to the agenda. Also, while you're there, add your name to the attendees list. That just helps us keep a good record of who is interested in these kinds of things. All right.
A
So to start off with, as is now our custom, we'll discuss a recap of our previous meeting. The first item on the recap is that the Gateway API meeting that occurs on Mondays will, starting with the March 6th meeting, be trying out a 9 a.m. Pacific time slot, to try to accommodate folks in different time zones. So if you're somebody who typically can't make the Gateway API meeting on Monday evenings, this would be a good time to try to attend.
C
Yeah, on that I'll just jump in real quick. I added a very informal poll on the Gateway API Slack; there's a total of six responses right now. So if you're going to be at KubeCon and haven't had a chance to respond, it's really just asking which days of the week you think you're most likely to be around and able to meet up. So far the winners are Wednesday and Thursday, and the very clear loser is Friday, where almost nobody seems to want to be around. So extra participation is very much appreciated. That goes back to a week ago today.
A
Great, thanks for that update, Rob. So the next item on the recap is making GAMMA more prominent on the Gateway API website. I don't remember if we had somebody assigned to that action item or not last meeting. Does anybody remember off the top of their head?
A
Okay, I can tackle that, so I'll make a PR to...
A
Yeah, no worries, so I'll take this and make a note to go ahead and try to get this changed on the Gateway API website, so folks actually know about us. Should be great. All right.
A
Next, the last topic on the recap: we had a quick discussion about custom health checks for backends and how to model that in Gateway API. The obvious conversation came up of: okay, are the Kubernetes health and readiness probes sufficient for most folks' use cases, or is the failover that's lacking in the spec something that's really desirable behavior? The mesh context especially was one place where that really came up. I don't think there was any action as of yet, but it's certainly something that we are continuing to evaluate and may discuss at a later time. Any questions or comments on our recap?
A
I'm remembering that discussion now, yes. All right, we'll follow up on that in the next meeting then. All right, any last questions or comments?
A
All right, awesome. So, moving into our main agenda now: at this point I'm very excited for this next topic. Flynn is going to share with us some Linkerd points of friction that they've had implementing Gateway API. I don't know if this is Gateway API in general or GAMMA-specific, but either way I'm...
B
GAMMA stuff specifically. You might be, I don't know, you might be raising your expectations a little high to be very excited for this, but you know.
B
Linkerd also has a bit of an ulterior motive here, in that we have historically gotten away with having very, very few Linkerd-specific CRDs, and we want to continue that trend. So, to the extent that we can use Gateway API to talk about things within Linkerd, we can legitimately tell people that we still have very, very few Linkerd-specific CRDs and blame the Gateway API. I mean, we'd like a lot of the Gateway API to be used for interoperability.
B
So we have run into a couple of points of friction as we've sort of gone down this road, both in 2.12 and with upcoming work in 2.13.
B
Just in case you haven't run across this yet: Linkerd is in fact not an ingress controller, it's a service mesh, and so trying to talk about being conformant with Gateway API (this actually predates the GAMMA initiative entirely) was clearly not going to happen. And so, in fact, 2.12 copied HTTPRoute into the policy.linkerd.io API group solely so we could sidestep any questions about Gateway API conformance.
B
It's a minor point of friction, but it was interesting to realize that the moment we started looking at this sideways, we knew we were going to get questions about it, and we had to come up with a way of dealing with that for 2.13. Oh, and sorry: when I say copied, we copied it and then we deleted backendRefs entirely, because HTTPRoutes in 2.12 are not about routing to workloads. They are about giving us a place to hang authentication policy. Now, in 2.13...
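As a rough sketch of what that copied resource ends up looking like (the names, namespace, and exact API version here are made up for illustration), a Linkerd 2.12-style route is an HTTPRoute in the policy.linkerd.io group, attached to a Linkerd Server, with no backendRefs at all:

```yaml
# Sketch only: a Linkerd 2.12-style HTTPRoute used purely as a place to
# hang policy. Note there are no backendRefs, because these routes are
# not about routing traffic to workloads.
apiVersion: policy.linkerd.io/v1beta1   # version illustrative
kind: HTTPRoute
metadata:
  name: authors-get-route
  namespace: booksapp
spec:
  parentRefs:
    - group: policy.linkerd.io
      kind: Server            # Linkerd-specific parent, not a Gateway
      name: authors-server
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /authors
          method: GET
      # no backendRefs: authorization policy attaches to this route instead
```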
B
Linkerd will be happier in some ways with static installation, because that will allow people to go through and do more easily controlled configuration of everything. I personally don't think that this is a big deal in terms of friction anymore, after having gone through and verified that we can in fact use annotations: when we're using a Gateway API ingress controller that dynamically creates everything, we can actually annotate the namespace in which it will create all of the deployments, and that gives us a way to control the configuration of those deployments with respect to the mesh. It feels a little bit ugly; it kind of makes me want to go and take a shower after I suggest to people that they should go and annotate somebody else's namespace in order to get this stuff to happen. But it's at least a functional thing that we can use to dodge this issue. I will link the static-versus-dynamic thing in the notes if nobody else beats me to it, but I'll give somebody else the opportunity to beat me to it anyway. That's the fundamental issue there.
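As a hedged sketch of that workaround (the namespace name is made up; the annotation is Linkerd's standard proxy-injection annotation): annotate the namespace the ingress controller deploys into, so everything it creates there gets meshed:

```yaml
# Sketch: annotating the namespace that a dynamic Gateway API ingress
# controller creates its deployments in, so Linkerd injects its proxy
# into every workload created there.
apiVersion: v1
kind: Namespace
metadata:
  name: my-gateway-infra        # made-up name for the example
  annotations:
    linkerd.io/inject: enabled  # Linkerd's proxy-injection annotation
```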
B
So that's one sort of initial point of friction that we ran across. A second thing that we ran across gets back to a discussion that we've had a lot in GAMMA recently, and over time, as we start talking about what exactly the parentRef of an HTTPRoute in GAMMA should be allowed to be. There has been lots and lots of discussion on that, and eventually what we settled on was: hey, for right now, let's go ahead and allow the parentRef to be a Service, and that will at least give us a place to stand.
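That place to stand looks roughly like this (a sketch; the names are made up): a plain Gateway API HTTPRoute whose parentRef is a Service instead of a Gateway:

```yaml
# Sketch: a GAMMA-style mesh route. The parentRef is a Service, so the
# route applies to traffic addressed to that Service rather than to
# traffic arriving through a Gateway.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: smiley-route
  namespace: faces      # same namespace as the Service: a provider-side route
spec:
  parentRefs:
    - group: ""         # core API group, i.e. a Service
      kind: Service
      name: smiley
      port: 80
  rules:
    - backendRefs:
        - name: smiley
          port: 80
```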
B
Okay, so Linkerd 2.13 has in fact stood on that place to stand (and oh, there's the link; thank you, Shane). What we have found is that using Service as a parentRef works okay at first and causes a lot of interesting problems when you start talking about consumer routing and not just provider routing. So this is also a thing that has come up in discussions in GAMMA. I think it's pretty widely accepted; I think everybody thought that binding to a Service would be okay as long as you're talking about only provider stuff, and that binding to a Service would be less okay when we start talking about consumer stuff. We have confirmed that it is in fact a large problem when you start talking about consumer stuff.
B
One of the things that we know for certain Linkerd users want to be able to do is consumer policy overriding the provider's policy, and I'll pick on the perennial timeout example here as a specific example of where you'd want to do this. It makes a lot of sense for consumers to be able to say things like: well, the provider is willing to give me five seconds, but I want to constrain that to one, because I need that extra agency for my client.
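To make that consumer-override shape concrete, here is a purely hypothetical sketch (names are made up, and the `timeouts` stanza did not exist in HTTPRoute at the time of this discussion): a provider route in the workload's namespace, plus a consumer route in the client's namespace that binds to the same Service and tightens the timeout:

```yaml
# Provider route: lives in the provider's namespace and allows 5s.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: smiley-provider
  namespace: faces              # provider's namespace
spec:
  parentRefs:
    - kind: Service
      name: smiley
  rules:
    - timeouts:                 # hypothetical field, for illustration only
        request: 5s
      backendRefs:
        - name: smiley
          port: 80
---
# Consumer route: lives in the client's namespace, binds to the same
# Service, and constrains the timeout to 1s for this consumer only.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: smiley-consumer
  namespace: face-client        # consumer's namespace
spec:
  parentRefs:
    - kind: Service
      name: smiley
      namespace: faces          # cross-namespace parentRef
  rules:
    - timeouts:                 # hypothetical field, for illustration only
        request: 1s
      backendRefs:
        - name: smiley
          port: 80
```

The open question in this part of the discussion is exactly how, or whether, the second route should be allowed to override the first.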
D
But it doesn't have to have the same solution. I mean, it is possible to have, for example, producer policies attached to the Service, and for the client overrides to use, like, a different syntax or bind to something else.
D
That's reasonable, and it's also the common use case that we discussed in the past about having egress support, right? Egress is effectively a case of client override, because you're talking with the internet and you are effectively in exactly the same situation: you set timeouts and whatever policies you want on the egress gateway. So I think both the case of the egress gateway and the case of client overrides can probably be addressed in the same way, and, you know, it's important.
B
We do need an internally consistent solution there, even if it doesn't look exactly the same. Thank you. So I'm bringing this up not in the sense of "the world is going to come to an end tomorrow", more along the lines of: yeah, we need to talk about this and figure out something that can be consistent. Yeah, Nick, you were going to say? Okay.
E
Yeah, I was gonna say: with the way that you handle it at the moment, how does the client override target things? You know, if you have multiple conflicting client overrides in a namespace, how do you target which bits in the namespace are doing that?
B
Right now we tend to hang things on the Linkerd-specific Server definition, which, yeah, allows a bunch of things that are really nice. Like I said, we don't think it's an ideal solution; it's just a thing that we did in the past that permits a bunch of things that we can't do otherwise. Go ahead.
B
Yeah, yeah. And we also find ourselves looking at more advanced routing policies: things like traffic splits, for example, which we currently do with the SMI extension, and it would be nice to not have to use the SMI extension for that. But also things like traffic shadowing, where there are several different workloads here and we want this one to get a copy of all the traffic, which is different from traffic splitting and different from load balancing.
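For the traffic-split case specifically, plain Gateway API weighted backendRefs can already express what SMI's TrafficSplit does; a sketch with made-up names:

```yaml
# Sketch: a canary-style traffic split expressed with Gateway API
# weighted backendRefs instead of an SMI TrafficSplit.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: smiley-split
  namespace: faces
spec:
  parentRefs:
    - kind: Service
      name: smiley
  rules:
    - backendRefs:
        - name: smiley-v1
          port: 80
          weight: 90    # ~90% of requests
        - name: smiley-v2
          port: 80
          weight: 10    # ~10% canary
```

Traffic shadowing, by contrast, has no equivalent in weighted backendRefs, which is part of the friction being described.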
B
That is a lot of stuff, I know, sorry. Yeah, so there's been a long discussion on Slack about attaching policy to Deployments and Pods, and yeah.
B
You know, there are definitely cases where Linkerd would like to be able to attach policy to Deployments, but being able to attach policy to a single Pod really does not seem like something that, well, let's just say I haven't been able to come up with a use case where that would make sense to me. Deployments, on the other hand: being able to identify a workload and say "this workload has these policies attached to it" could be an extremely powerful thing in the mesh.
B
I want to emphasize again that as I talk about this stuff, I'm talking about places where we've found friction, which is not the same thing as saying places where we've necessarily found solutions. What we believe is going to happen in 2.13 is: we will support binding to Services; we will allow the backendRef to be a Service as well; we will accept that the human user probably knows what they mean when they're using Service in those two very different contexts; and we will allow Linkerd to figure it out.
B
We do not expect that the backendRef will be limited to Service for terribly long; we expect we're going to have to allow more than that (see previous conversations about routing topology and all that). One of the other things that's a little bit interesting is the question of whether we need to support multiple backendRefs, or whether it would actually suffice to have a single backendRef that always points to something that talks about more advanced topologies.
E
I just had a small snarky comment to make, I guess: if only there were some sort of profile that you could use to choose what sort of conformance you were addressing.
B
I guess so, but yeah, I mean, it is a good point, and one of the questions is going to be whether we can scrape loose the bandwidth for stuff like that. I'm sure it will shock nobody that we're a little constrained in terms of person-power here. Costin, were you gonna say?
B
That's the pre-Linkerd-2.12 mechanism; in Linkerd 2.12 it is supported but basically deprecated, and you can also associate authentication policy and such with HTTPRoute. As you've probably heard me say, Linkerd would prefer not to have a lot of Linkerd-specific CRDs, and this is one of the things that we've never been delighted by with Server and authentication policy and ServerAuthorizations.
D
Yeah, I mean, first, before anything else, you agree on the common CRD to represent the authentication policy. But yes, the second step will be to figure out how to attach it.
B
In both, yeah. I should probably also point out, since Nick is on the call (sorry, Nick), that one of the other things that Linkerd runs across a lot is, in fact, the four-person-startup scenario, and that, as you may have seen in the notes from the other meeting, starts raising a lot of questions about:
B
Okay: is policy attachment the right answer for some of these things? Because, yeah, that can get really complicated if you're coming into this as a beginner. Again, not something where I'm going to say we have a solution; just a place where we've all been looking at it and going: wow, this is very, very powerful, but oh, it'll be tough to explain to people first coming into this world.
B
To play around with it. But yeah, so Linkerd's release model, for people who are not aware, is that we do edge releases; the intention is to do edge releases weekly. "Edge" in this context is a bit of an unfortunate bit of terminology: this is edge in the sense of bleeding edge, as opposed to stable, not in the sense of the edge of the cluster. So, those of you who would like to try these things as they come off the presses, I would encourage you to try out the edge releases of Linkerd. We tweet about them and announce them on the Linkerd Slack and all that. And oh yeah, you should 100% expect a lot of stuff to be broken as the edges come out, or at least not yet fully functional; maybe that's a better way to put it.
B
Anything else? Going once, going twice. Okay, so yeah, there we go: there's the Linkerd feedback so far, and we will happily bring back more of this as we have it.
A
It sounds like a lot of the feedback is under the theme that we've kind of all been feeling: HTTPRoute binding to a Service works well enough for the base use cases, but losing consumer routing and overrides is something that a lot of users (potentially; in Linkerd's case, definitely) are going to want to do.
A
Great. All right, good enough for me. Yeah, I think it ties in between one of these agenda items, the policy attachments update that's further on the agenda, and this feedback. I think we've got some work to do as far as taking another crack at HTTPRoute binding for mesh and seeing what else we can do, once, of course, we get everything else out the door, right? All right.
H
Okay. If I cut out and go robot, if it gets really bad, just wave your hands and tell me to stop. I didn't necessarily understand the sarcasm, but just in case it's helpful to anybody who wasn't really catching on: there is a conformance profiles project underway, and if you're interested in checking it out, subscribe to the link that I put in chat; and if you're interested in helping out, feel free to say so. And one reminder that I just wanted to throw out:
H
k8s.gcr.io is going to be deprecated, and we're replacing it with registry.k8s.io. So please do, if you have that anywhere in your CI (things like, you know, CoreDNS are in there, anything you might be using in your CNI, in your projects, everywhere), switch them over to the new registry. It is equivalent, but the old one is being deprecated and it is expensive. So, just a general call-out.
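In manifest terms the change is a one-line image prefix swap; for example (the image and tag here are illustrative):

```yaml
# Before: image pulled from the deprecated registry
image: k8s.gcr.io/coredns/coredns:v1.9.3
---
# After: the same image from the new community registry
image: registry.k8s.io/coredns/coredns:v1.9.3
```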
H
Okay, yeah, I just threw this out there in case it's unclear. Also change it in your downstreams; but yes, that issue is tracking everything upstream.
A
Yeah, thanks for bringing that up, Shane; this is a big one. A lot of my partner teams are talking about it internally. So, Shane, could you remind everybody of kind of what the effects are in the near future? Isn't GCR being shut down soon?
A
Yeah, so you use the threat of resources going away to convince your people to make the migration. Yeah, outages help get people committed to projects.
A
Fair enough, good, good. Thanks for the clarification; accurate. All right, moving right along here: Sanjeev, you've got an item on a multi-cluster service networking model and framework.
I
Yes. So I've spoken to a couple of people on this in the past, in December, and, you know, let me put some slides up to help provide some context.
A
Sure, I will stop sharing; the screen is yours.
I
Okay, can you all see this? Yep? Okay. So it's a slightly broad-ish topic, but hopefully we'd all kind of agree on this: the scope of this is pretty broad, and how do we break it down, top down onwards?
So what we're dealing with here is multi-cluster service networking, as opposed to cluster management, and obviously there's really good work going on in various groups, including SIG Multicluster, with things like the MCS API.
I
And of course this team right here has, to varying degrees of priority, multi-cluster-related pieces. In some of these projects there's been work going on; in GAMMA maybe not so much just yet, but I would contend that it's good to get it going. I think this will straddle different working groups, so hopefully I can provide some context here and help us collectively put things together.
I
So one of the proposals (jumping right down to the bottom bullet here to start with) is that the K8s community in general should assemble and document a K8s-native multi-cluster service networking set of solutions and APIs, starting with defining an overall model and then building out each subcategory within it. I know there's quite a bit of work that's already been done, but I feel like a little bit of a revisit is helpful, because there's a lot of piecemeal work that has happened.
I
Yes, I will add it to the agenda. The only thing I've done publicly: as of this morning, I briefly started to talk about this in SIG Multicluster. So yeah, I can. Okay.
C
Great. And then the other thing: it looks like this is a great presentation, but we may need to time-box it a little bit, because there are a couple of items after this as well.
A
One thing, so again, thanks for the work here, Sanjeev: one thing that I have found to be helpful with slide decks is to potentially give folks time to review and provide feedback, and discuss in, like, a follow-up.
A
So if you could maybe spend, you know, anywhere in that five-to-ten-minute range giving us the overview, and then share the document in the agenda, as Rob mentioned. I'm very interested to be able to dig in on my own time, add feedback, and discuss in the next GAMMA meeting that you're able to attend. Yeah.
I
Okay, so I'll just take five to ten minutes today, just to kind of touch on a couple of key points. So the first thing is about the overall model: to me, that constitutes defining the reference topology, and there could be one or more reference topologies, but we should agree on what those are.
So as an example, there are all these terms that are used across these projects, like a network, or a gateway, or a cluster set, a mesh, and eventually even a multi-mesh, and (a) it's not always clear, and (b) that need not be the only thing, but we should agree if that is the case, and we should precisely define it. In the case of the reference topology, that in turn could have multiple quote-unquote networks, where a network is a contiguously routed space, with no NAT needed for any endpoint within the network to talk to any other endpoint.
So a single cluster's pod networking space is a network, or a bunch of clusters in the same VPC could be a network, as long as they can talk to each other without NAT.
I
So some of these multi-cluster features, like policy, do depend on that. If you want to have a policy at the multi-cluster-set level, but you are allowing for multiple networks within a multi-cluster set, then you have IP duplication and you can't do certain kinds of policies. I mean, I've tried to initiate proposals in the past for things like multi-cluster network policy, and there was always ambiguity on: well, what is the exact model we are designing towards?
I
And then, how does that change with Gateway and all these things? So it comes back again and again that we need to have a kind of consistent reference topology across a number of these projects, and then build out the subcategories. And then, definitely, there are implications of namespace sameness within a cluster set, but clearly no namespace sameness across cluster sets or meshes, and it is expected that an enterprise would have this.
I
This could be one kind of generic topology that we would need to design all these projects around, but it's not clearly stated, so we would need to agree and see whether this is the case. But there are a few other topology options which have come up while we've discussed things like multi-cluster policy. For example, one thing that came up was: no, we do not want multiple networks within a cluster set.
I
In fact, a mesh or cluster set should have a single contiguous network, and if you cannot have a contiguous network, then you must have another mesh. That simplifies policy and load balancing, because it's all one contiguous IP routing domain; the endpoints can be uniquely identified and scattered anywhere.
I
But that's not generic enough, and may not be realistic enough, given real-world topology requirements: you know, private networks and separate domains. And also, when you look at things like the Istio project, they've got all these deployment models.
I
You know, single network, multi-network. If we are trying to canonicalize that through a GAMMA multi-cluster reference topology, that may not be enough. So the core point here is that there are a few things we can go into: defining the precise reference topology that we want to target consistently across all these projects, and consistently defining all these terms and their relations to each other, so that we can then map policies, load balancing, and service discovery onto that. Okay.
I
So you can see it's kind of a top-down approach, but I think there's a lot of stuff here. So, okay, just let me take two more minutes here, because I think I'll run out of time.
So I'll just mention one thing here. Obviously there are more details about reference topology; I mean, these slides are fairly busy, but I'll just give you one or two examples, like: is the MCS API adequate?
I
Do we need to make multi-cluster Services more first-class, the way Istio does automatic service discovery across all clusters, or do we need more of an explicit opt-in model like the MCS API? Going back to those multi-network versus single-network decisions: there's been ambiguity in all these projects on whether they actually work multi-network, or whether they implicitly assume a single network.
I
So we need to kind of agree on that. You know, I have a certain perspective, but I'm sure we all have our own perspectives, and we should agree on that and define these terms unambiguously. I've got some more things we could go into, and potential topologies, but I'll skip that for now.
Two quick notes, one on load balancing and one on policy, for now. Obviously there's been work done in Gateway on multi-cluster, you know, ingress kinds of things, where maybe you can sort of have a ServiceImport as a backendRef, but it's not properly, formally defined in the Gateway API docs as such, and I feel like even there more work could be done.
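The pattern being referred to looks roughly like this (a sketch; the names are made up, and these semantics were not formally specified in the Gateway API docs at the time): an HTTPRoute whose backendRef is an MCS API ServiceImport rather than a Service:

```yaml
# Sketch: routing to a multi-cluster ServiceImport (MCS API) instead of
# a plain Service. The group/kind are the MCS API's; everything else is
# made up for the example.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store-route
  namespace: shop
spec:
  parentRefs:
    - name: my-gateway          # hypothetical Gateway
  rules:
    - backendRefs:
        - group: multicluster.x-k8s.io
          kind: ServiceImport
          name: store           # backed by endpoints from other clusters
          port: 8080
```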
I
There are actually multiple kinds of multi-cluster ingress models, even, that we would need to sort of confirm and cover, and I'm just rushing through the story. So, you know, Rob, a couple of people have their hands raised, so I'll just finish up here.
Similarly with policy: I'm just talking about network policy here, but really I look at it as any policy related to networking.
I
So, for example, there is both network policy and auth policy, and I see potential for convergence of the L4 policy for auth with network policy.
I
So the net takeaway here is, okay, let me just jump to the takeaways slide here, next steps. What I was hoping is that we can decide on a working model for developing this work further across GAMMA, Gateway API, SIG Network, and SIG Multicluster.
I
If we wanted to break this up into KEPs and GEPs, it would feel like maybe one GEP on the overall model, which describes the reference topology models and covers these kinds of topics, and then separate follow-up GEPs on service discovery, policy, and load balancing in the context of that overall model; and whether this group is the potential home for this work, or it's in collaboration with Gateway API, SIG Multicluster, and so on. So anyway, that's kind of a quick rush through; there was a bunch of things here.
C
Thank you. I know we're running out of time here, so I'll be real quick. I raised my hand when you were talking about the lack of documentation right now on how Gateway API interacts with MCS. That's something I've had on my personal to-do list for a while; I'm going to create an issue to track that formally in OSS and assign it to myself.
C
I do want to get that out in the next couple of weeks, because, yeah, I agree that that's a huge gaping hole that's relatively easy to fill. I would say this presentation as a whole seems very broad in scope, which is great, but I think we'll probably have to, as you're suggesting, tackle this in small little chunks as we can, starting with the most well-known or easiest-to-define areas.
C
First, before we go too far down some of these things: I don't think we've had enough time to think it through, but I definitely think that between this group, the Gateway API group, and the SIG Multicluster group, which you've already talked to, it's the right group of people. We probably need to at some point try and get all of them together in one place and talk about it, but that's really hard with time zones, so we'll see what we can do. But at a minimum:
C
Thank you for starting this discussion and getting this presentation out; it's great. Thank you.
I
I didn't know whether to put too much detail or too little detail, so on some topics there's some detail, but there's obviously a lot more. This is a broad scope, but the one key point here is that there have been lots of piecemeal efforts, and I would propose (and welcome your thoughts) that we need to more formally define a reference model that works across all these projects, and then fill in the gaps.
I
Some of these projects already have solutions, and in some cases they just need documentation, but in some cases there's additional work needed. So yeah, we'll have to kind of evolve this across the projects, and I'm happy to contribute to working with you all to make this happen.
G
I definitely see the need for more cohesive cross-SIG collaboration on understanding how we want to model the subject area. Yeah, I'm excited about this; thanks for bringing this to this group. I think the one other thing I would note is adding the multi-network working group, which meets on Wednesday mornings, I believe, as another stakeholder, particularly for the network level.
I
So that's another slightly unfortunate overloading of terminology. The term "multi-network" as used in that group is, I believe, not quite the multi-network referred to here and in projects like Istio. So, without belaboring that point: in its current form, this is not quite what that multi-network group is doing, but let's just, it'll...
I
Yeah, I'm skipping over that. I mean, I've done work in the past on multi-cluster network policy and ran into some of these issues about differences in the interpretation of the models. So I have some context here, working both on policy as well as on the MCS API and things like that.
A
Yeah, thanks again. I really appreciate the effort put into that presentation, and I'm looking forward to getting hold of the doc and adding comments. We do have one last topic on the agenda here.
A
We had some shuffling of items, but we're at the last item: an update on the policy attachment discussion. So, a couple of weeks ago (I suspect three weeks ago, actually), in a GAMMA meeting, and then subsequently in the next Gateway API meeting, we had a discussion about policy attachment, and shared some of the feedback that we had in this group with Gateway API: that it feels a bit heavy and complex.
A
It's
concerned
about
security
proliferation
and
we
had
a
what
I
thought
was
a
very
fruitful
conversation
in
GitHub
discussion
that
linked
to
the
the
GitHub
discussion
in
the
agenda
notes
to
try
to
summarize
the
current
path
forward.
Here
from
my
understanding.
A
Is
that
we're
going
to
take
a
stab
at
the
timeout
question,
because
that
was
one
of
the
more
prevalent
you
know
examples
of
things
that
that
honestly,
a
bit
silly
to
have
these
users
configure
with
the
full
policy
attachment
hierarchy
or
at
least
having
at
a
to
use
another
term.
There
had
some
good
examples
here.
Some
prior
art
Nick
had
a
fantastic
kind
of
breakdown
of
different.
You
know
what
are
the
options?
A
Where
did
we
put
these
things
and
the
there
are
some
good
comments
here
about
kind
of
using
you
know,
potentially
having
a
field
on
HTTP
route.
That
was
vague
enough
to
not
lean
itself
to
a
particular
implementation,
but
not
too
big
to
where
it's
not
useful,
as
as
belief
in
clarified
here
so
I
think
the
open
question
in
my
head,
based
on
this
thread
on
what
rob
you
and
Nick,
are
here
to
kind
of
weigh
in
on
this.
It's
going
to.
Where
does
this
conversation
happen?
E
Sorry, one note: open the deploy preview, not the...
A
Yeah, I was just about to say: I read the raw preview and quickly stopped because of the diagrams. The diagrams are much prettier when you're in the deploy preview, so you can see Nick's hard work in plotting all this out.
A
Here, Nick can provide kind of a quick overview. Yeah.
E
Yeah. So what I wanted to do here: my timeouts diagram is roughly what I had to do when I was first working with timeouts, to understand all of the different options available. The fact that I drew a diagram like that, and then looked at the timeouts for other implementations and realized those are all different, is one of the reasons I was like: wow, that's too hard.
E
So the idea here is to make it so that you've got a similar diagram that you can look at to see what the timeouts cover, and so that we can look a bit more to find the common bits and talk about, yeah, well, if we're going to do a request timeout:
E
What does that mean, and how would data planes implement it? So the next step for me is to update this with some of the suggestions that Arco made in the issue that you already popped open. I think those are all excellent suggestions. It seems to me that the most likely candidates are something like a request
E
timeout, and some sort of idle timeout. Exactly where that idle timeout stops and starts is tricky, and I think the point that was made in the discussion was, hey, maybe if we're a little bit vague, it's okay.
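As a purely illustrative sketch of the kind of route-level timeout field being discussed (nothing here has been agreed upon; the `timeouts`, `request`, and `idle` field names are hypothetical), it might look something like this on an HTTPRoute rule:

```yaml
# Hypothetical sketch only: the timeouts block and its field names are
# illustrative, not an agreed-upon part of the Gateway API spec.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    # A vague-but-useful per-rule timeout block, in the spirit of the
    # "vague enough not to lend itself to one implementation" comment:
    timeouts:
      request: 10s   # overall budget for a single request/response
      idle: 30s      # how long the stream may sit with no activity
    backendRefs:
    - name: example-svc
      port: 8080
```

The point of keeping it this small is that both a "request timeout" and an "idle timeout" can map onto some native timeout in each major data plane, even if the exact boundaries differ slightly.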
E
You know, that's probably correct. The part of me that has been writing specs for Gateway API for multiple years now screams in terror at the thought of having a vague spec at all.
E
So, you know, I think if we are going to be vague, we need to be vague in a clearly scoped way. We need to talk at least about what sorts of vagueness we will permit and what that means. That's why I wanted to draw these diagrams first. Yeah, Flynn.
B
The other option would be, instead of, or possibly in addition to, being simply vague, we could be opinionated and say: if you are going to be a properly behaving Gateway API implementation, you should support timeouts that do this, and recognize that not everybody is going to do that out of the gate. Actually, as I've been thinking about some of this, I wonder if that could end up being a more productive tack in some of these cases.
E
B
A
Okay, I was just going to say that the opinionated bit worries me a bit. Yes, that's kind of the realist perspective, and it's probably a more attainable situation. But I guess, and this hasn't proven itself in this group, which I'm thankful for, what I don't want to see is: oh hey, there are like seven ongoing implementations and we all think that Envoy has the correct semantics, so we're going to be opinionated about Envoy.
A
Thank you for that time emoji, that's perfect. We are right at time. Nick, I want to give you a chance to finish.
E
Yeah, yeah, so that exact thing is why I had the summary of who uses what data plane, the summary table there. I agree; I think it's reasonable for us to lean towards having some sort of, yeah, opinion about what the things should be, and that's where we can get away with having vagueness, because we say a request timeout means this.
E
One of the reasons I wanted to do this sort of review is that that way we can make sure our opinions are at least not completely impossible for all of the major data planes we have in place at the moment. That's what I'm aiming for here: ideally, whatever timeouts we do should be not only not totally impossible, but reasonably straightforward to implement in each of the data planes.
E
You shouldn't need to randomly add together two timeouts and then arbitrarily bucket the timeout setting you're given into some proportion or something like that. It should fit pretty cleanly onto a timeout in each of the data planes, and that's why I think a request timeout is reasonably likely, and also some form of idle timeout is reasonably likely, because everybody supports something like that.
E
So yeah, that's the point here. You know, I'll be honest: I got tired of no one really believing me when I said it was really hard.
G
E
B
Yeah, there needs to be a way that we can explain to new users how this stuff works without exploding their heads immediately. Which, like I said, I totally believe you that it's hard. I was actually going to ask: can one of y'all link to the diagram specifically? Because as I was clicking around on the GEP, I did not get to that diagram.
E
There's a deploy preview. The diagrams are actually done in the markdown using Mermaid JS. Thank you, Rob, sorry, yeah. I thought it was better than trying to put PNGs in there that we couldn't update. I've actually updated our make docs install so that it supports Mermaid, I think. Oh, it does. It works in the GitHub UI if you switch the markdown into preview mode, as opposed to raw. Gotcha, yeah, GitHub supports Mermaid as well. So, nice.
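For anyone unfamiliar with the technique being described, a Mermaid diagram can be embedded directly in markdown as a fenced code block, which GitHub and MkDocs (with the right plugin) render inline. The diagram below is an invented illustration of a timeout flow, not one of the actual diagrams from the GEP preview:

```mermaid
sequenceDiagram
    participant C as Client
    participant G as Gateway
    participant B as Backend
    C->>G: HTTP request
    Note over G: request timeout starts
    G->>B: proxied request
    B-->>G: response
    G-->>C: response
    Note over G: request timeout ends
```

Because the diagram source lives in the markdown itself, it can be reviewed and updated in pull requests like any other text, which is the advantage over committed PNGs mentioned above.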
B
A
Gonna hop in here and try to wrap this up, just in respect of time; we're right at three minutes over. But yeah, I'm excited. I think there's a little bit more to talk about as far as policy attachment goes, because I think there's room to go down as well as across, and that's really what that means. Hopefully next time, but for now, again, thank you.
A
Everybody who's attending, including some of our newer attendees. Next time you're in, we'll have some introductions, but for now I'll call this meeting done and adjourned. Thanks, everybody, take care, see you next time. Thanks.