From YouTube: Gateway API GAMMA Bi-Weekly Meeting for 20230307
A: Good afternoon, morning, or evening, everybody — this is the March 7th occurrence of the Gateway API GAMMA meeting. As a reminder, as we always do: this meeting is governed by the Kubernetes code of conduct, so make sure you're being respectful and nice to one another. We do have an open agenda here; a link to that agenda is now posted in the chat of the meeting, so please go ahead and add yourself to that, or add an item to that document.
A: If you have something you'd like to discuss. Also, we ask that you add your name to the attendees list here on the agenda — I'm going to lead by example, so I'll put my name and org in, and I can't spell when my screen is shared. Put your name and organization in the attendees list, just so we can have a record of who attends — and make sure you're adding it to the March 7th instance, not the March 14th instance.
A: So I want to do a quick recap of our previous meeting to get everybody up to speed on what was discussed last time, because with two meeting occurrences at different times, you have different people attending different meetings, so we try to recap here. Michael Beaumont shared the GAMMA conformance acceptance plan to get some feedback. This is part of our push to get GAMMA into the next Gateway API release.
A: So I think there was a single detail: Rob had a comment about adding where the tests are going to run — some of the "how" for the conformance tests — so I think we're awaiting some changes around that. But other than that, if you've got other feedback, then please add it to this document. I also want to highlight KEP-1880 as something that would be potentially relevant and helpful for defining gateway-to-mesh routing interactions.
Writing
everybody
into
service
in
our
draft
spec,
but
the
this
cap
in
particular
creates
IP
address
resource
that
might
be
useful
for
us
to
use
I
think
it
might.
It
would
be
a
really
good
idea
and
in
fact,
I
plan
to
run
through
that
document
at
a
couple
of
our
of
our
future
gamma
meetings.
Just
to
make
sure
everybody's
on
the
same
page,
that
probably
happened
after
070,
though,
and
maybe
even
after
kubecon,
because
things
are
getting
busy,
they
we've
got
a
couple
of
open
PRS.
A: One is seeking feedback — I think this is John's PR around codifying standard gateway-to-mesh interactions, for in-cluster specifically, I think. We've got a lot of comments there; I saw John made some changes fairly recently, so I'm going to continue to keep track of that. Someone also opened a discussion on GitHub on failover, and we had a really good chat about what was being done. I don't believe there is an actual action item from that.
A: Let me just clarify some of what the spec means here. And then the last Gateway API meeting — the main meeting, which was the one yesterday — was at 9am Pacific, testing out an alternating meeting time similar to GAMMA. Question for Nick, Shane, and Rob: does that mean that the meeting after next will also be in the morning, or is that not decided?
B: Not yet — I haven't decided exactly what we're doing yet. The turnout was decent. We're talking about potentially trying to move away from Monday mornings for people, so we'll come back around to it. All right, yeah.
C: It's going to be very relevant to me in about four weeks, because I'm going to be in the EU time zone for three months, starting in about four weeks. So yeah, having those earlier meetings would be great for me, but I'll figure it out regardless.
D: No — for the record, we had maybe 60 or 70 percent of our normal afternoon (Pacific Time) attendance in our earlier time slot, so I think one of the ideas we should at least consider is maybe doing it once a month instead of every couple of weeks.
D: I don't know — I think what Shane was saying is kind of our next step: let's try something that isn't a Monday morning. Maybe people just hate Monday mornings in general. So if we try that and we get a bit more attendance, maybe it'll be more regular — and especially with Nick switching time zones in a bit, who knows.
A: Yeah, this segues somewhat decently into some other conversations I've been having, with GAMMA and Gateway API both having a meeting each week.
A: It kind of allows folks to be a part of Gateway API multiple times throughout the week and throughout the month, and so maybe a once-a-month morning meeting would work — because unless you have a very ingress-specific use case, you could probably get pretty far coming into the GAMMA meetings that are every other week in the morning time slot. So yeah, looking forward to seeing how that continues to evolve.
A: I missed one: Rob shared the contributor ladder for the Gateway API subproject and asked for some explicit feedback from the community. Was there a time box? I don't think there's a time box for that — Rob? No.
D: Just, you know, we encourage people to add comments to the doc with their questions. So far we have, I think, three people who have volunteered to start off as reviewers, specifically under conformance tests. So that's really on me — on us — to first take that doc and turn it into a formalized doc in the gateway-api repo somewhere, and then actually add those contributors.
D: As, you know, official reviewers somewhere as well. So I think those are the next steps, but certainly if this is something you want to get involved with, that's the whole point: we're trying to make it easier to get involved in this project. We have tons of opportunities. I know it's been hard to discover how you might fit in; this is trying to make that a little bit easier.
B: Same thing — I was just going to say, yes, and if you're in there and you're thinking "none of this really fits me, but I would like to have a role," we'd like to hear about that too, even if you don't want to comment on the specific stuff in the doc. We want to be inclusive, so if you're feeling like "yeah, I'd like to have an official role, but nothing in here really suits what I'm thinking," let us know, 100 percent.
A: Yeah, we want to. There are so many people who've contributed to make Gateway API and GAMMA what they are today, and we want to make sure that those folks are able to be recognized for the work they're doing.
A: Okay, so that's our recap. Do we have any questions or comments about the topics we just discussed?
A: All right, awesome — and I kept it within 15 minutes; feeling good about myself today. All right, let's get into our main agenda for today. Nick, you wanted to talk about simplification of the policy attachment proposals.
C: Yeah, look, I should preface this: it's been a little while since you raised the case that policy attachment is complex, but I think it's a thousand percent fair feedback. I think probably the biggest problem with policy attachment as it stands right now is that it's a bit underspecified. Well — not a bit, totally underspecified.
C: It's the skeleton for an API that lets you define more APIs, and so it needs more guidance. So I've opened a PR — that's #1565 — to sort of clarify the different types of policy attachment.
C: The biggest change we're talking about is adding a direct policy attachment, which doesn't have the hierarchical notions that the original one did, which I think will be far more useful in mesh use cases as well as others. And then the other thing that I did as part of this — I keep saying timeouts are harder than everyone thinks, and people keep saying…
C: …"surely it's not that bad." So I wanted to make some diagrams that would give people headaches and, you know, maybe help them understand where I was coming from. So I opened that timeout GEP to start collating that info. The good thing about that is that we got the best outcome there.
C: Yeah — most people agreed that I'm right, that timeouts are harder than you think, but I also noticed that there is more commonality than we originally thought. So it looks like there is a path forward there. I'm kind of waiting on that one; I want to tidy up and finish.
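As a rough illustration of why timeouts are "harder than you think" (this is a hypothetical sketch, not the shape of any Gateway API field): an overall request timeout interacts with a per-attempt timeout and a retry count, so no single knob tells you the worst-case latency a client can see.

```python
# Hypothetical sketch: why "a timeout" is really several interacting knobs.
# None of these names come from Gateway API; they only illustrate the problem
# space the timeout GEP is trying to map out.

def worst_case_latency(request_timeout: float,
                       per_try_timeout: float,
                       max_retries: int) -> float:
    """Worst-case time a client can wait before a definitive failure.

    Each attempt may burn up to per_try_timeout seconds; the overall
    request_timeout caps the total regardless of how many retries remain.
    """
    attempts = 1 + max_retries
    uncapped = attempts * per_try_timeout
    return min(uncapped, request_timeout)

# With a 5s overall timeout, a 3s per-try timeout, and 2 retries, the three
# attempts could take 9s uncapped -- but the overall timeout fires first.
print(worst_case_latency(5.0, 3.0, 2))
```

The subtlety is that which knob "wins" depends on the values, and different data planes disagree on defaults and interactions — which is the commonality-finding exercise described above.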
C: The policy attachment one needs to finish first, because I don't want to be adding a timeout policy if we haven't clarified the language about policy attachment. So the timeout policy one is kind of on hold until the policy attachment update is merged. But yeah, to summarize the policy attachment update: that comment I put at the bottom is basically the summary. There is now a hierarchical policy attachment that has the defaults and overrides like it originally did, and the other one is called a direct one.
C: That one will attach to a single object — or conceivably we might extend it to a small set.
C: Like a single kind in a single namespace, or something like that — but to start out with, it will be to a single object. So that one would cover things like setting the TLS parameters for a backend service, perhaps. That one is relevant for some other issues, and that's one of the reasons why I haven't pushed harder on some of the other TLS stuff: I want to get this done, and then that should unblock a few things.
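To make the two styles concrete, here is a hedged sketch — Python dicts standing in for manifests, with kind names and fields that are purely illustrative, not the final API. A hierarchical policy targets a point in the Gateway hierarchy and carries defaults/overrides that flow down, while a direct policy just targets one object.

```python
# Illustrative sketch only: these kind names and fields are hypothetical,
# standing in for the two styles of policy attachment discussed above.

# Hierarchical attachment: targets a level of the hierarchy (here a Gateway)
# and carries "default" and "override" blocks that flow down to routes.
hierarchical_policy = {
    "kind": "ExampleHierarchicalPolicy",
    "spec": {
        "targetRef": {"group": "gateway.networking.k8s.io",
                      "kind": "Gateway", "name": "prod-gateway"},
        "default": {"sampleField": 100},    # inherited unless overridden lower
        "override": {"otherField": True},   # wins over anything lower down
    },
}

# Direct attachment: targets exactly one object with no inheritance
# semantics -- e.g. TLS parameters for a single backend Service.
direct_policy = {
    "kind": "ExampleTLSPolicy",
    "spec": {
        "targetRef": {"group": "", "kind": "Service", "name": "payments"},
        "tls": {"minVersion": "1.2"},
    },
}

# The direct form has no defaults/overrides to merge: what you set is what
# that one object gets.
assert "default" not in direct_policy["spec"]
```

The design point being made in the discussion is exactly this contrast: the direct form drops the defaults/overrides machinery, which is why it is simpler to reason about for mesh use cases.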
C: So the current state of it is: currently the original form is called "hierarchical," which — as Shane and I were discussing earlier, he said — is technically correct, which is the best kind of correct, but it is super hard to type. I personally have made many, many, many typos attempting to type "hierarchical," and so I think we need a different name.
C: I had a couple of thoughts there; I think, Rob, you gave me a couple of other thoughts out of band that I kind of liked as well. I'll edit the thing. What I'm going to do for now, though, is leave it as "hierarchical" and put in a big TODO and a note saying hierarchical is not the final name.
C: This will not be the final way to talk about this, and we're going to come back and do another PR to change that name. But for now the concept is going to be the same; it's just the name that's going to be different. So I think that's what I'm going to do next on this one, but there are still a couple of things I need to add for the side docs. This is mostly reviewable now.
A: Thanks for that update. Yeah, this feels like a really good direction to me. I'm really excited to be able to add simple policies directly to resources without all the other stuff, if that fits the use case. I feel like this is going to give a lot of implementations the opportunity to kind of start from the beginning: now that we have five or six years more experience, how would we go back and set up our policies and our implementation from the beginning? Which is going to be very powerful. Great — so I've got a note added to my to-do list to make sure I review this. All right, and the timeout doc — yeah, looking forward to iterating on that more once the policy attachment gets merged. Did you want to talk about this Envoy Gateway implementation item — it mentions implementing the rate limiting policy. Is this you or somebody else?
C: I think Arko raised that — and I see he's here. Arko, do you want to have a chat about this one? Or — did you raise this one, or…?
C: Yeah, so this is just a point that Envoy Gateway has — we talked about using policy attachment there, but ultimately, you know, I was not available enough for this one, but I think I agree that ultimately global rate limiting there has been configured using a filter rather than a policy attachment.
C: One of the other things that I'm changing in that policy attachment change: originally, policy attachment explicitly said no, you can't use a policy to configure another extension, like a custom filter.
C: I have relaxed that a bit to be like, hey, you can do it, but just be sure you know what you're doing. It's going to be really complicated, and in the absence of us solving the status problem — which is the biggest outstanding problem with policy attachment — it's going to be really, really hard to troubleshoot if anything goes wrong. So I think Envoy Gateway is handling some of this stuff —
C: — some of those concerns in a slightly different way. But yeah, I think it is relevant for people to know that this is the way Envoy Gateway has handled this one. Costin, looks like you're unmuted now.
F: Yeah, so I want to make sure that it is implementable with both sidecars and client-side stuff, and how it's implemented with the new model — because that is what istio tries to do, and probably others — because some of those client-side overrides that are particularly easy to implement in one model are a bit more difficult to implement in another. So we need to make sure that if the timeout model is used, we can support it and so forth, but we need to make sure that implementations don't end up with —
F: — too many extensions within this that are not implementable by most vendors.
C: Yeah — again, I think that is a very, very fair point, and I think for policy attachment the intended flow that I sort of had in mind was that it's going to be implementations that make policies to start with, and it is expected and normal that an implementation might make, you know, a Contour policy or an Envoy Gateway policy or an istio policy that's only theirs.
C: The idea here is to get people trying those things out, and then to have a couple of examples where we can say, hey, let's look at the commonalities here — and maybe we make, you know, Envoy-specific ones for all of the people that use Envoy as the data plane, or nginx ones for people who use nginx's data plane, or maybe there's enough commonality here that we can make —
C: — you know, extended policies that are included in Gateway API itself. But we're never going to be able to do that until people have actually started making these things, trying them out, and seeing what works and what doesn't. I've tried really hard here to make this thing implementable, but until we actually have people implement it, it's all —
C: — just a structure that exists mainly in my head, rather than a thing that people have actually built and used, which is not where we want this sort of stuff to be.
A: All right — do we have any last questions or comments on the general policy attachment topic?
A: All right, in that case, Rob, I'll let you take over screen sharing and hosting.
D: Cool, thanks. Hopefully this is what's up next: this is Sanjeev's presentation, for anybody who was at the main Gateway API meeting — what was it, a week ago? Somebody probably remembers better than I do when this was presented there.
E: No — we spent just a few minutes on it in the GAMMA call two weeks ago, and a little bit more in SIG Multicluster this morning.
D: Okay, yeah, perfect. So maybe — what do we want to focus on today?
D: Ah — I think Keith is the only one who can make you a co-host right now.
A: Okay, so Sanjeev, I'm gonna make you co-host, and then Rob, I'm gonna make you host — that was my mistake. Oh.
A: All right, everything should be good to go now. Perfect.
D: About how much time were you looking to spend on this?
E: Okay, so yeah. This was very briefly discussed last time, but the takeaway was to provide the link and make the document public — at that time it wasn't public; I've made it public since then. I don't know if you got a chance to look at it, but I also then shared it in SIG Multicluster this morning. This is a somewhat broad topic, and it essentially includes a few subtopics within it.
E: So one of the things that I would welcome community input on is how to — I mean, I have some thoughts on how to carve it up and how to take it forward, but I'm definitely open to your guidance and suggestions. So I'll summarize the asks at the end.
E: So what this is doing is saying: okay, let's just revisit what model we have for multi-cluster service networking. A lot of the projects have sort of implied models. There are some explicit models, like the Kubernetes Multi-Cluster Services API, which have their own pros and cons and may need further extensions.
E: But then individual service meshes like istio and Linkerd and others have had their own sorts of multi-cluster models — single-network, multi-network, primary-remote, and all these things — and these tend to be not always consistent. So that's the motivation here, and then I'll talk about the next set of tasks and which working groups we should use to move those tasks forward. Okay, so yeah.
E: So, as of course the members of this group know very closely, there's a general movement towards providing common Kubernetes-native APIs for any kind of multi-cluster networking function, as opposed to the several individual service mesh APIs — and things like project GAMMA, the adoption of Gateway API for service meshes, and the MCS API are all efforts in that direction. But, as a consequence, we've had some gaps.
E: You know, implied multi-cluster service discovery: a service named foo in one cluster and one mesh is implicitly bound to a service of the same name, in the same namespace, in a different cluster of the service mesh. And in service meshes — like istio, for example —
E: — it is supported with single-network and multi-network models. But when we've tried to do multi-cluster extensions to native Kubernetes — for example, we tried to take Kubernetes NetworkPolicy and see what needs to be done to make it multi-cluster.
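For contrast with the implicit per-mesh binding described above, the MCS API makes the relationship explicit: exporting a Service in one cluster produces a derived ServiceImport with the same name and namespace in the other clusters of the cluster set. A minimal sketch, with dicts standing in for the real multicluster.x-k8s.io manifests (the service name is made up):

```python
# Sketch of the explicit MCS API flow, using dicts in place of YAML manifests.
# ServiceExport and ServiceImport are real MCS API kinds (group
# multicluster.x-k8s.io); the "foo" service and "demo" namespace are made up.

# Cluster A: opt the Service into the cluster set by creating a ServiceExport
# with the same name/namespace as the Service.
service_export = {
    "apiVersion": "multicluster.x-k8s.io/v1alpha1",
    "kind": "ServiceExport",
    "metadata": {"name": "foo", "namespace": "demo"},
}

# Clusters B, C, ...: the MCS controller derives a ServiceImport with the
# same name and namespace ("namespace sameness"), which consumers resolve
# via foo.demo.svc.clusterset.local.
service_import = {
    "apiVersion": "multicluster.x-k8s.io/v1alpha1",
    "kind": "ServiceImport",
    "metadata": {"name": "foo", "namespace": "demo"},
    "spec": {"type": "ClusterSetIP"},
}

# The explicit contract: import identity mirrors export identity.
assert service_import["metadata"] == service_export["metadata"]
```

The point of the comparison is that meshes get this binding implicitly today, while the MCS API asks for an explicit opt-in — one of the modeling differences the reference model would need to reconcile.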
E: One of the pieces of feedback from the SIG Network community was: okay, that should focus only on single-network models. But in that case, NetworkPolicy wouldn't cover the needs of network policy for Kubernetes-native service meshes.
E: So you see that there is a disconnect in the modeling and assumptions of the multi-cluster network between these different projects, and so one of the goals of this presentation is to highlight this point and kick off the definition of a reference multi-cluster model that all the projects can design towards — or design against — so that we don't get into these kinds of inconsistencies. Whether that's an extension of the Kubernetes MCS API or something else remains to be seen. So I'll skip over these slides here, but I'll just illustrate, for example — okay.
E: So what are we proposing here? The proposal is that we define an overall reference model for multi-cluster service networking, building on what's already been done, and once we have this reference model, go back to the individual sub-features — things like multi-cluster policy and multi-cluster load balancing — and see what additionally remains to be done to work within that model we define.
E: Now, on the specific question of the model, the questions to be answered include things like: should a cluster set, which is an MCS API concept, be identical to — exactly congruent with — a service mesh? It would appear that should probably not be the case, but nowhere has this been explicitly stated, and some people have sort of implied that it should be, because both of them require namespace sameness.
E: That suggests that a cluster set should be identical to a service mesh. But a cluster set is really about cluster controls — you typically want to give that control to a cluster admin — whereas with a service mesh, you typically want to give control to an application admin that knows the application topology across meshes, and the application topology need not be tied to the cluster topology.
E: So that's an example of a disconnect in the modeling assumptions. Similarly: should the model support a single network within each cluster set or mesh, or should it support multi-network? Policy discussions so far have suggested we should support single network, but service meshes like istio support multi-network, and so we have to agree on it, so that we don't end up defining two versions of policy — one just for Kubernetes, and one for Kubernetes with service-mesh-native APIs.
D: But one thing I want to highlight, at least from my perspective on Gateway API — and anyone can feel free to correct me — is that I feel Gateway API is not trying to solve all the world's problems here. We can specify what it looks like for, say, single network, or for the current state of the world with the Multi-Cluster Services API, without necessarily preventing multi-network from being a future addition. I feel like just by defining how one thing works, we're not precluding a more advanced concept from working in the future.
E: Yeah, agreed — and in some cases that is possible, but in some cases we have seen a disconnect. For example, when we were trying to put together a proposal for policy: you can't just start with a single-network policy and then extend it to multi-network, because you have to know your model from the beginning. So for some of these features it could be possible to work with the current MCS API, and that does not preclude any future enhancements if we need a model broader than the current MCS API.
E: But that's not true for all these features. And secondly, even the MCS API itself has had pretty limited adoption, so it is well worthwhile to look at what has hampered its adoption — and maybe the crux of the effort would be to either enhance the MCS API or have something equivalent. So you're right: this is not saying that we cannot proceed with the models that we have currently, but it would appear that they might go off in different directions that are hard to reconcile —
E: — if we don't get more explicit in the model. This is just a thought. Some people in SIG Multicluster seem to agree with it, but this is open for discussion, and if we feel that is not the case, then we don't need to redefine a model. This is just a proposal at this point.
E: So I'll skip through the rest of it. I've got some topology pictures on what we mean by different kinds of models. Here's why —
G: Oh, sorry — we'll see if I can.
G: Okay, so in istio we're having a similar discussion about the semantics of multi-cluster, but I think one important thing is to start with requirements. For example, a requirement that cluster-local semantics do not break when a mesh is adopted — which I think is very important. And then just decide what the common things are that we want to do: the minimal viable things that we need to express, like the fact that services span multiple clusters. Policy should obviously have the requirement that it works.
E: We don't want to do everything all at once, or spread too broad, but it does seem worthwhile to look at the aspects of modeling that should be done up front, so that different projects aren't ending up inconsistent — and we've seen that before, right? For example, this single-network versus multi-network topic, or cluster sets versus meshes. So it seems useful to have at least one relatively…
E: This is also related to the fact that the MCS API has seen pretty limited adoption, so there have been some challenges, right? And those have to do with the way it requires namespace sameness and couples cluster-scoped admin controls with application scope. So it does appear — we're open to discussion, but it does appear — that some amount of modeling revision would be helpful, and as you said, similar discussions are happening within istio. But I'm open to feedback.
E: Sure — okay, I'll just mention this one slide and then go to the conclusion slide. So please look at it, and feel free — this is supposed to be open work, so whoever is interested, please contact us; we can all work together on this. A possible model that we might conclude — based on, let's say, a first pass at combining the istio multi-cluster model with the Multi-Cluster Services API model — could be that a cluster set and a mesh are identical.
E: Of course, there are good reasons for that not to be the case, but for now let's assume it. But this is not stated anywhere, so we would need to explicitly state it: if GAMMA is trying to provide a service-mesh-like service for the same kinds of deployment models that istio is targeting, but wants to do that in the context of the MCS API, then we would have to have some relation between cluster sets and meshes. One relation could be that they are identical.
E: The other one could be that they're non-overlapping, because of having different admin scopes. But then, in the istio model, you can have multiple networks within a mesh — or possibly a cluster set, if we were to make that extension.
E: But the efforts to define network policy were initially scoped within a single network, so if we had gone down that path without trying to have a common model, we would not have had a usable policy for this model, right? So that's an example of the disconnect — an example of where you want to agree on the model before you kick off different projects for features like policy versus load balancing and so on. So the next one —
E: — I think Rob is going to cover this in the very next topic, so I'm going to skip over that. And this is the policy topic, which I just discussed. So let's just go to the conclusions and next steps and what the ask is here.
E: So the proposal here is that we document a reference model for multi-cluster service networking, and that is of interest to this working group as well as Gateway API, SIG Network, and SIG Multicluster. Which group we do it in, or whether we bring it across multiple groups, is open. It would appear that a lot of the service mesh leads tend to attend this working group, so we need your input here.
E: The way I look at it is that this is defined by the multi-cluster needs of the various service networking projects — whether they be istio or Linkerd or Cilium or Consul — and then have a KEP or a GEP which defines a model that meets all the requirements, or as many as possible, and then decide whether we need additional enhancements to policy or load balancing or service discovery in the context of that model, and those would be separate GEPs or KEPs.
D: This is such a massive area — it's hard to find the right process and procedure for going forward. I know we've struggled in Gateway API even with moderately sized GEPs, getting them pushed forward; policy attachment, which Nick was presenting on earlier, feels smaller in scope than this, but has also been sufficiently large that it's been hard to push forward.
D: Basically, I'm trying to say that I don't know that we have a good answer — I don't think we've solved this, that's what I'll say. I am very open to ideas on improving this. At a high level, we have had the best success when we've been able to split ideas out into smaller concepts. So instead of trying to tackle everything, if we could say — not to pick on just one thing, but — what does it look like for, say, the cluster set and service mesh overlap?
D: Can we just work on defining whether a cluster set and a service mesh are either identical or different? If we can take these bits apart and work on them in relatively small chunks, I feel like we might be able to push things through a little bit faster. But I appreciate, first of all, that you're bringing this up and asking really good questions.
D: I don't have great guidance other than, you know, maybe we can split this up into smaller pieces. I think all the groups you mentioned here — the SIGs and subprojects — are relevant, and so, barring another meeting, we'll probably just kind of have to circle back and forth. From my perspective, the ingress side is relatively straightforward.
G: Why do you feel that ingress is different from GAMMA in that sense? I mean, I think we are in agreement that if HTTPRoute allows a ServiceImport to be used as a backend, there is no difference in behavior; and if ingress defines the semantics of cluster-local to be local, I don't see any reason for GAMMA to be different. Those two things could be pretty quickly agreed on and moved forward.
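The pattern being described — an HTTPRoute whose backend is a ServiceImport rather than a plain Service — looks roughly like this, with a dict standing in for the manifest (the route and service names are made up):

```python
# Sketch: HTTPRoute referencing a multi-cluster ServiceImport as a backend.
# HTTPRoute (gateway.networking.k8s.io) and ServiceImport
# (multicluster.x-k8s.io) are real kinds; the names here are illustrative.
http_route = {
    "apiVersion": "gateway.networking.k8s.io/v1beta1",
    "kind": "HTTPRoute",
    "metadata": {"name": "foo-route", "namespace": "demo"},
    "spec": {
        "rules": [{
            "backendRefs": [{
                # Pointing at the MCS import instead of a local Service
                # means traffic can reach endpoints across the cluster set.
                "group": "multicluster.x-k8s.io",
                "kind": "ServiceImport",
                "name": "foo",
                "port": 8080,
            }],
        }],
    },
}

backend = http_route["spec"]["rules"][0]["backendRefs"][0]
assert backend["kind"] == "ServiceImport"
```

Because `backendRefs` already carries a group and kind, swapping `Service` for `ServiceImport` is exactly the kind of small, agreeable change the speaker is pointing at.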
D: I generally agree with you. I would say there are two things that make it a little bit more complicated. One, this idea that right now it's possible for a mesh — the group of connected clusters under a mesh — to be different from a cluster set; that adds some level of complexity. And two, with ingress we don't have a whole lot of prior art when it comes to multi-cluster networking, and what we do is largely consistent with the Kubernetes API, whereas with mesh —
G: I understand your question. So again, I think it's something we need to do either way — make sure that any mesh is consistent with Kubernetes. I mean, istio being slightly off — I think the incompatibility has created a lot of problems for both istio users and Kubernetes users. So I think we are — at least, I don't know, I speak for myself, but I think we're pretty committed to alignment and to being consistent with the Kubernetes semantics.
G: So the mesh works the same way as other meshes and as Kubernetes, and that's a pretty good start — fixing the semantics of cluster-local and clusterset-local, and aligning all the things that are probably common across every mesh. I mean, Linkerd, for example — I just looked at the implementation, and it's also kind of similar, except it uses an annotation. So I don't think we will have trouble moving fast on at least the most common features.
E: Rob, one quick thing — does anybody else have any comments? I'll just add one quick thing here about breaking up the tasks. So one proposal here is sort of a KEP on the model, and then KEPs on individual features like discovery or policy or whatever — and that's also the way the MCS API was introduced, right? The MCS API was one KEP which had a certain model of cluster services.
E: So at that granularity, it might be okay to have the first chunk of work be what's highlighted here. But I think you might be proposing an even finer granularity — although that's not how the MCS API was defined; it was "here's a possible model," and then — yeah.
D: Yeah, and I think that comes from the bias of how Gateway API has done things. I know Gateway API and MCS have acted a little bit differently: MCS has evolved and worked through a single KEP, and in Gateway API we have a series of smaller GEPs that make up the state of the API. So our bias so far has been to work on smaller pieces and kind of form them together into something larger. But I think I cut off Nick, so maybe I'll let you keep going.
C
Yeah, I think what's being said is fair, though; our bias is about what our experience has been. Rob's and mine in particular has been working on Gateway API for like four and a half years now, right? And what we found is that when we've tried to do bigger things like this, so many people have so many opinions that correlating all of those opinions together, like getting everybody to agreement, can be really difficult.
C
You absolutely can do it the way you're doing it here, sort of top-down, building a model first. But what we have found is that the amount of cognitive bandwidth required for somebody to sit down and process an entire long doc with a really big, very comprehensive model is so large that it's very easy for it to fall to the bottom of people's priority tree, and it can take a really long time to build consensus.
C
Whereas if you're doing the sort of thing that Costin is saying, taking the chunks that you are pretty sure everyone is close to agreement on, then you can build those first. And then, in the process of building those,
C
You
find
the
bits
that
people
don't
agree
on
and
you
defer
the
bits
you
can
you
decide
on
the
bits
you
can't
you
can
and
the
further
bit
you
can't
and
then
that's
how
you
sort
of
agglutinative
build
an
API
sort
of
more
from
the
bottom
up,
rather
than
building
a
really
big
API
from
the
top
down
which,
in
the
case
where
you've
got
we've
got
pre-existing.
Implementations
like
you
do
for
mesh
that
have
like
most
of
the
different
meshes,
as
you
say,
have
a
way
of
doing.
C
some sort of multi-cluster already. Cilium, for example, has its own Cluster Mesh that works in a slightly different way from what everyone else does, but it does work, right? And that's a good example of where starting from the common things means that everybody can agree: yes, that's what we want to do. Then building up the model from the smaller building blocks that everyone, or mostly everyone, has agreed on means that it's easier to build consensus for something much larger.
E
Yeah, totally agree, and I do think that what we would document first would be fairly consumable. It would focus on really these two questions, which I'm highlighting here, right? So forget about policy and load balancing and all of that other stuff for now; those are follow-on pieces. The two very basic questions are: is a ClusterSet congruent with a mesh? And are we dealing with single networks or multiple networks within a ClusterSet or mesh?
E
That itself is a pretty key model definition that we need to agree on. Then subsequent KEPs can talk about policy or load balancing or whatever. So I know I combined a number of things here, but if you net it down, the first KEP we're talking about is not that big: it is really addressing primarily these two questions. But anyway, I think we've gone on quite a...
F
E
D
D
You know, I think the best starting point is a GEP that ties back to the Multi-Cluster Services KEP, but we can work through it. The key thing is that we have a doc somewhere that everyone can review; where it lives is less important.
D
This feels very relevant to GAMMA, so I feel like starting it as a GEP under there, maybe.
F
D
E
F
D
Well said. All right, so the very last thing... oh, not quite the last thing, sorry, I missed that we had a couple of things on here, so I'll be awfully quick. Many of the people on this call have probably seen this doc already. This is a doc that I presented at the Gateway API meeting yesterday, as well as the multi-cluster meeting earlier today.
D
So
I
won't
focus
too
much
on
it,
but
what
I
would
say
is
the
the
high
level
idea,
Here
If
This
Were,
to
turn
into
a
gap
which
is
the
goal
is
anywhere
where
you
can
use
service
within
Gateway
API.
Today
you
can
use
service
import
and
the
distinction
is
a
service
refers
to
Cluster
local
endpoints
and
a
service
import
refers
to
endpoints
within
the
cluster
Set.
That's
that's
the
tldr
of
the
entire
thing.
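Concretely, the idea just described might look roughly like this: an HTTPRoute whose backendRef points at a ServiceImport instead of a Service, so traffic targets ClusterSet-wide endpoints rather than only cluster-local ones. This is a hedged sketch; the route, gateway, and backend names are made up, and the exact group/kind spelling would be settled in the GEP itself:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store-route
  namespace: shop
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - backendRefs:
    # A plain Service here (group "", kind Service) would mean
    # cluster-local endpoints only. Referencing a ServiceImport from
    # the MCS API instead targets endpoints anywhere in the ClusterSet.
    - group: multicluster.x-k8s.io
      kind: ServiceImport
      name: store
      port: 8080
```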
D
It's just saying: hey, anywhere you can use Service, you can use ServiceImport, and it would have Extended conformance. And yeah, this is exactly what I'm proposing. I think the mesh side of things may be a little bit more interesting, and certainly more interesting for the scope of this call. I know that many meshes today don't make any kind of distinction between cluster-local and ClusterSet endpoints.
D
I know ClusterSet wouldn't necessarily be the right term here, but whether the endpoints are within the same cluster or just somewhere within the mesh is not a distinction that many meshes make.
D
It is, though, something like what this GEP proposes. I would like some feedback on this idea, but I think it kind of goes hand in hand with the ideas that Sanjeev was already talking about, and I think what I'm proposing here is just one of a series of GEPs related to multi-cluster and Gateway API.
D
This
feels
like
a
reasonable
starting
point,
but
this
this
certainly
does
not
answer
every
question
as
Sanjeev
is
pointing
out
so
yeah
feedback.
Welcome.
I
won't
spend
too
much
time
on
this,
because
there
is
one
more
item
from
Shane,
but
just
be
aware
of
this,
please
add
comments.
I
am
intending
to
turn
this
into
a
get
PR
sometime
soon,
but
and
we
can
continue
the
discussion
there-
yeah,
that's
all
I
got
in
this
one,
all
right,
any
any
quick
comments
before
we
move
on.
B
I'll be really super quick. There is a provisional GEP for conformance profiles, which includes conformance levels, which is what we used to call this, and it includes conformance reporting. It does not currently have a whole lot of mesh stuff in it: we refer to the fact that we will do conformance profiles for mesh, but we don't have mesh tests currently. So I just wanted to make sure this is on people's radar. That's it. Take a look at it, please do feel free.
B
I would very much encourage you to propose changes, like PRs and stuff, for things that you want to see changed. If you wanted to add some context about things that you want to see on the mesh side, it'd be fair to add it here. But that's it, just kind of putting it on the radar, especially with Beaumont's doc recently. Just keep in mind that we're kind of doing this right now for the core Gateway stuff.
D
Yeah, and thank you, Shane, for that. I just want to highlight again that a key goal with all of this conformance work is that we're trying to get to a point where we have some kind of global visibility into who's conformant with what features. That can help drive decisions about which features are ready to graduate to the next level, and it can also help us understand the health of the ecosystem.
D
What kind of delay is there between us introducing a feature to the API and implementations being conformant with it? I think this is going to be huge. I really appreciate everyone who's been working to push this forward, and yeah, please take a look.
D
Right, yeah.
D
That's it, and we even have a couple of minutes left. So thank you, everyone. With that, it sounds like we've got lots of follow-ups, so we can continue discussions in Slack, on GitHub, wherever. But yeah, thanks for a great discussion, and have a great rest of your week. Cheers, everybody.