From YouTube: Gateway API GAMMA Meeting for 20230228
A: Hello, everybody. Welcome to the February 28, 2023 occurrence of the Gateway API GAMMA meeting. As a reminder, this meeting is governed by the Kubernetes code of conduct, so please be respectful and nice to everybody, not just in this meeting but in general, because being nice to people is a good thing to do.

A: As a reminder, we have an open agenda. I'll post the agenda and meeting-notes link here in the chat, and then I'm going to share my screen so that we can all look at said agenda as we go through the meeting.

A: Please, if you're here, make sure to add your name and organization to the attendees list. This helps us keep track and evaluate whether these split meeting times are working well for folks, or whether we need to make any adjustments to our schedule to make sure we're accommodating people.

A: Also, if you have a topic you'd like to discuss, the agenda is open, so just pop a topic down in the meeting-notes section. To start off, we're going to go through a recap, as has become our custom. During our last meeting we spent a good bit of time listening to and discussing feedback from Linkerd's experience adopting Gateway API, I believe with both the ingress and the mesh use cases.
B: So yeah, that was pretty much the mesh use case. Wow, I can't say that today. Yeah, just for the record.

A: So thanks for the clarification; that was the mesh use cases that we discussed last time. All things considered, I think the takeaway there was that there's going to be some work necessary to make sure routes are feasible for Linkerd, but all in all it's looking positive as far as Linkerd's adoption goes, with the draft spec that we hope to release in 0.7.0.
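For context on the draft mesh spec mentioned above: the GAMMA pattern under discussion attaches an HTTPRoute to a Service instead of a Gateway. A minimal sketch follows; the names are hypothetical, and the exact shape was still in draft for the 0.7.0 release at the time of this meeting.

```yaml
# Hypothetical GAMMA-style mesh route: the parentRef points at a
# Service rather than a Gateway, so the mesh applies this route to
# in-cluster traffic addressed to that Service.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-mesh-route   # hypothetical name
  namespace: demo
spec:
  parentRefs:
  - kind: Service            # Service parentRef is the mesh/GAMMA signal
    group: ""
    name: example-svc
    port: 8080
  rules:
  - backendRefs:
    - name: example-svc-v2   # hypothetical backend
      port: 8080
```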
A: We also had a great discussion, kicked off by Sanjeev, about multi-cluster service networking: not issues exactly, but the problem space for mesh within multi-cluster, and a desire for a somewhat standard approach to multi-cluster service networking in Kubernetes. He linked to a PowerPoint.

A: If nobody else does, I'll go through afterward and add the link to that slide deck back here in the recap; it had great content and diagrams digging into the approach and the ask. One of my takeaways: I had a chance to talk to Sanjeev asynchronously in Slack, along with Rob, and again a lot of great points were raised.

A: I think the takeaway for me is that there's still a lot of work to do: potentially a lot of use cases in SIG Multicluster and the MCS API that aren't as smooth for users as we'd like them to be.

A: I think for me the best place to start those conversations will be in SIG Multicluster. And yes, while that is yet another community meeting to add to everybody's agenda, I'm just trying to start attending those more frequently and share some feedback I've gotten from some of my users and customers about how the MCS API and other things from SIG Multicluster are working. Then maybe we can get SIG Network, SIG Multicluster, and GAMMA/Gateway API together at some point and address some of these things in an iterative approach. So thanks to whoever linked that presentation; it's there in the recap, and that might be a good thing to discuss asynchronously on Slack or in GitHub Discussions to capture thoughts.

A: No, it's actually, yeah, I think it's actually the same, or nearly the same, time slot: bi-weekly on Tuesdays.

A: All right, last call for questions or comments on our recap.

A: All right, fantastic. Then let's get into our agenda. First we've got Mike Beaumont seeking feedback on the GAMMA conformance testing plan that was put together. I'm really excited about this, because it helps get us closer to our 0.7.0 milestone. Mike, are you here and able to speak to this?
G: Yeah. So there's a draft of the testing plan, and I guess there's not much to say about it here, but I would love any feedback on what's missing, perhaps. I'm not even sure what a testing plan is in terms of what we need for the release, so any feedback on that would be great too. I mean, we know what it has to have in it. So yeah, please.

F: Yeah, I left one comment on there.

F: I think it's a really useful plan to have out there. The thing I'm most interested in is what we're going to do differently from our existing conformance tests, because it seems like we'll have at least some modifications: for example, the requests will likely have to originate from within the cluster, from different namespaces. How will we do that, roughly? But otherwise I really like what you already have there.

B: I was going to ask a question, and I'm not really sure how to meaningfully phrase this one; I need to go look at how the conformance tests run in more detail. But does this turn out to be a different mode in which we run the conformance tests? Is it a different set of tests entirely within the Gateway API repo?
A: Yeah, I'll chime in here a bit, because I was starting some work on conformance tests before getting sidetracked for a little bit. Basically, I can't remember exactly when we decided this (it was a couple of months ago), but we decided that for the actual "how" of doing requests within the cluster, we're going to try to leverage some of Istio's conformance test framework, and I started a spike on getting those two integrated. Istio's conformance test framework is, what's the best way to phrase this, extremely robust: it can do a lot of things, and it has a lot of abstractions for doing them. Those abstractions meant it wasn't a super easy lift-and-shift of the testing framework, and after copy-pasting about ten packages I said, okay, let's take a step back, because I really don't want to bloat the Gateway API repo's test framework. So I'm taking a second pass at it: trying to create the appropriate abstractions on the Gateway API side, get a basic kubectl-exec implementation in, and then try to use the Istio echo server on the second go-around.
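As a rough illustration of the kubectl-exec direction described above (all names here are hypothetical, and the actual conformance harness was still being designed at the time), an in-cluster request for a mesh conformance test might look like:

```shell
# Hypothetical sketch: run curl from a client pod in one namespace
# against an echo server in another, so the request originates from
# inside the cluster rather than from the test runner's machine.
kubectl exec -n conformance-client deploy/echo-client -- \
  curl -s -o /dev/null -w '%{http_code}\n' \
  http://echo-v1.conformance-backend.svc.cluster.local:8080/
```

This is cluster-dependent and only shows the shape of the approach, not the real harness.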
A: All right, sounds like the action item here, then, is for folks to take a look at Mike's conformance testing doc. I think once we get that into good shape, we'll be able to close the associated issue and move forward in our milestone planning. That reminds me: I'll add a topic for our next meeting to check on our milestones. It's been a couple of weeks, and I'm going to try to get those closed.

A: All right, next topic. Flynn, want to give us a status update on working through Gateway and mesh interactions with Mike Morris?
B: I agree; I'm hopeful that we will have a draft that we can meaningfully share next week, and I don't think that's going to be too heavy a lift. I think one of the big things that we started running into immediately is:

B: Maybe we could summarize it as thoughts about the least surprising thing to happen from the perspective of an operator of an ingress controller, an operator of a service mesh, an end user, etc., because as we talked about it, it quickly started feeling like those people may or may not have the same expectations about what's going to happen, in particular if you're running, say, an Istio gateway over an Istio mesh, or a Linkerd mesh with some random third-party gateway.

B: You know, the level of integration between those two is going to be relevant, and I think that's the big thing we're trying to figure out: making sure we're communicating clearly in that document.

C: I think a big part of what I personally want to do next for this doc is to make sure that we have explicit user stories. So instead of just having a proposal, let's make sure we fully understand the perspective of each of the different relevant actors in these different scenarios and what their expectations might be.

B: Hopefully that's not unfair. I think we're also looking at some of this and realizing it might end up having some ramifications with respect to policy attachment, or changing routes, or whatever, and so we're trying to understand some of that so we can call it out for people to think about too.
A: So, you may already... sorry, go ahead. I'm curious: have we thought about, and granted this would be a couple of Kubernetes releases before it goes into alpha, but I believe the multiple Service CIDRs KEP has been merged, or is very close to merging. I need to take a look at that, but it does introduce an IP address primitive within Kubernetes that could potentially be really useful here.

F: It's been a little while since I've talked to Antonio about that, but I know it's very much a high priority. If it didn't get into this cycle, I'm sure it will next; I thought it got into this cycle, but it's been a minute, so I'll have to confirm and get back to you. And I agree, it is very relevant for Gateway API; I think he even added some callouts to Gateway API in the KEP itself.

A: Yeah, I saw some code get merged. Beyond just the YAML in the KEP saying "here's what you want to do," I saw a code PR get merged, and I believe one of the SIG Network reviewers actually reviewed it. I can't remember the specific title, but I think there was a review and a code PR.

A: I don't know if the code got merged yet, but it's further along than the last time we talked about it, for sure. And we've got what we've termed the draft spec for 0.7.0, but again, because this is a KEP, it'll be alpha initially and will probably take some time to graduate through the feature gates. It really might be worthwhile for us to spend some time looking at this and at what the ideal use cases for the IP address primitive would be on the mesh side and the gateway side.

A: Yeah, no problem. All right, so I think these are probably two separate work streams: I think we should get some clarification from Flynn and Mike on Gateway with the current draft spec, and then separately let's start looking forward and considering how this could eventually fit within the framework of Gateway API, and GAMMA specifically, once it gets out of alpha. All right, any last questions or comments here?

A: Right, awesome. Then, John, you're up next to talk about your GEP on, I believe it's called, cluster-local gateways, or something like that.
I: There you go. Okay, can we open this? I assume not everyone was at the meeting yesterday; otherwise, what I'm saying is going to be entirely redundant.

I: Okay, I'll be fairly quick, because it would take a while to go through all the details here, but the general idea (this is kind of vaguely GAMMA-related, which is why I want to at least bring attention to it) is that in the ingress world we have these cloud implementations and these in-cluster implementations, and among the in-cluster ones everyone's doing different things, trying to solve all these different problems, but doing it in different ways.

I: I forgot; there was a reason I brought this to the GAMMA meeting and I'm forgetting why it seemed so relevant. I mean, there is discussion about customizing things, which maybe could be relevant to a GAMMA implementation.

I: Yeah, so I don't know if we need to go more in depth here; this could just be it, unless there are any questions. But I'd encourage folks to at least take a pass through this and see if there's anything in it that's interesting to you.

C: Oh, you also need to update the table of contents to add it there, or something like that, because that's a manual list now instead of being generated.

B: Okay, so it's not just me. I feel much better now.

A: And I just linked that rendered preview in the notes here, so excellent; hopefully it'll be easy for folks to find next time. All right, trudging on.

A: Yep, okay. I don't know whose topic this is, but who wants to talk about the simpler policy attachment proposal?
C: I can touch on it briefly, at least to introduce it. It's mostly Keith's topic, I think, and he's unable to attend on this alternating week due to the time difference. Keith started an issue a couple of weeks back to get feedback on policy attachment and how it feels extraordinarily heavy and complex for many use cases, particularly mesh use cases, where hierarchical targeting is less desirable or needed in some scenarios, and it feels like it's pushing meshes to implement a parallel CRD proliferation.

C: So there are kind of two different tracks here, and I haven't had a chance to catch the recording from two weeks ago, where Nick went into more detail on the timeouts GEP draft, which I believe is one attempt at proposing something more native for some of this functionality.

C: ...and we can have a more in-depth conversation next week, perhaps. I just wanted to raise that work is happening to address this use case, and if it's relevant to you or something you care about, please take a look at these and feel free to weigh in with feedback.

F: We still don't have that, because the concern is that there are simply too many resources that would need to change. For example, the Service resource: if you want to attach policy to a Service, or to a Secret, or to any resource like that, we can't add arbitrary references from all these different resources. Unfortunately, we haven't found a scalable way to do that, so instead what's happening is still the inverse.

F: What this really proposes is that we get rid of two of the other complicating factors. One of them is that it's a direct reference: there's no hierarchy involved, so there are no defaults and no overrides; it's just "this is the policy that applies to this thing." So it is a step towards simplification, but I agree: if we could find a way to reverse the direction of those references, that would be amazing. We have not found a great way.
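As a sketch of the direct-reference shape being discussed (the policy group, kind, and fields here are hypothetical; real policies are implementation-defined CRDs following the Gateway API policy-attachment pattern):

```yaml
# Hypothetical direct-attach policy: one targetRef, no hierarchy,
# so there are no defaults or overrides to compute; the policy
# simply applies to the referenced Service.
apiVersion: example.io/v1alpha1     # hypothetical group
kind: RetryPolicy                   # hypothetical kind
metadata:
  name: demo-retries
  namespace: demo
spec:
  targetRef:
    group: ""
    kind: Service
    name: demo-svc
  retries: 3                        # hypothetical policy fields
  perTryTimeout: 250ms
```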
B: One of the things I realized not too long ago, which I hadn't been able to meaningfully articulate before, is that a major concern of mine with respect to policy attachment is that it will be impossible for most users to actually sit down at their terminal and use kubectl to answer: what are the policies that are meaningful for this particular resource?

F: No, I absolutely, completely agree with you. One of the goals in the original GEP was that by defining the structure, the standardized format, and so on, you could build the kind of UX that would show you all the policies that apply to your resource. It would be some kind of extension, at least initially, but the main goal is: if we can get everyone to agree on a standard, we can build that UX.

F: But yeah, as everyone has said here, feedback is very, very welcome. We may be missing something very obvious that could simplify this further.

F: We're open to any alternative. I think the biggest goal we have is that we want to provide standard attachment mechanisms that will be familiar, consistent, and eventually easy for users to understand, even if that does require some kind of extension. But if there are better ways to get there, I think everyone's on board with that.

A: Yeah, I agree. I think direct policy attachment is, if for no other reason, good for proving out that approach and seeing whether there are any issues with it, because I think that's probably the base case, so I'm glad it's potentially going to be represented in the spec. I'll carve out some time to go through and look through this with the changes. So yeah, appreciate that. Any last comments here?

A: Let's keep moving. Flynn, you're back on failover and HTTPRoute.
B: This was an action item of mine from last time, when we were discussing the fact that, as written, there are places where you can have a route, add a backendRef to it, typo something, and suddenly half of your end users' traffic gets 500s. I don't remember the context this originally came up in, but the action item was: dude, you should start a GitHub discussion about this. So I started a GitHub discussion about it.

B: The specific thing I'm looking at here is that I 100% agree with the idea that we should try to quickly surface errors when they are made, and I also 100% disagree that we should surface them by having things blow up for the end users, so I'm hunting for a better way there. I also think, for the record, that this is likely to be a little more important on the mesh side of the world than on the ingress side, but I'm not a hundred percent convinced of that yet.

B: I feel much more uneasy about an unhealthy backend being routed to, or sorry, like an...

I: An invalid one. Like, I can see the logic for both of them, really, but I'm closer to agreeing on the invalid one than the unhealthy one. If you see my comment down there, this is the kind of case I'm worried about, where we say: I really want only a small amount of traffic sent to this one service. Maybe that's because it can't handle the load, or it's highly experimental, or whatever the case, right?

B: For the record, I am a hundred percent not trying to argue that an implementation in your case would be required to start sending 100% of the traffic to super-fragile if there's a momentary blip on handle-heavy-load. To me, the question is more along the lines of: let's suppose you're doing a 50/50 split and one of them becomes unhealthy.
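The behavioral question here can be made concrete with a small sketch. This is not how any particular mesh implements backend selection; it just illustrates the two interpretations under debate: given a 50/50 split with one unhealthy backend, does the picker fail over to the healthy one, or keep honoring the configured split?

```python
import random

def pick_backend(backends, exclude_unhealthy=True):
    """Pick a backend by weight.

    backends: list of (name, weight, healthy) tuples.
    With exclude_unhealthy=True, traffic fails over to the remaining
    healthy backends; with False, the configured split is honored
    even toward unhealthy backends.
    """
    pool = [b for b in backends if b[2]] if exclude_unhealthy else backends
    if not pool:
        raise RuntimeError("no routable backends")
    total = sum(w for _, w, _ in pool)
    r = random.uniform(0, total)
    for name, w, _ in pool:
        r -= w
        if r <= 0:
            return name
    return pool[-1][0]

# 50/50 split where "canary" has become unhealthy.
split = [("stable", 50, True), ("canary", 50, False)]
# Failover interpretation: everything goes to the healthy backend.
assert pick_backend(split) == "stable"
```

The spec question in the discussion is exactly which of these two behaviors (if either) an implementation is required or allowed to exhibit.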
B: Okay, are we allowed to just take everything back to the one that's not unhealthy, or does the spec require us to continue routing to the one that we already know is unhealthy? And I did lump the two cases together because they're easy to talk about from the same starting point, but obviously we can split them and deal with them one at a time. Rob, you were first, I think.

F: Yeah, I think one of the concerns we've had is just that of silent failure, right? I can imagine a scenario where you don't even realize that all your traffic is going to your canary endpoint, for example, because things still appear to be working, and it takes you days or weeks to figure out: oh, wait a second, everything's going to experimental, or whatever it is. And one of the early guidelines in this API was:

F: We wanted to make it very clear when you configured something wrong, to basically favor immediate feedback that you can fix and repair, as opposed to this kind of silent, ongoing failure: why is my app responding a bit slower? Why is there a higher error rate? My config says I'm splitting, but actually I'm not.

F: There may be better ways to handle that, but I just want to be very clear: that is at least the inspiration behind this kind of guidance in the spec. I think there's probably some wiggle room, like John was saying; for invalid backends, I admit there are some cases where we probably want to handle things a little more gracefully. We just want to make sure we're not going into that silent-failure scenario.

B: There's a talk I did with Daniel Bryant at KubeCon in Detroit about resilience using the combination of an API gateway and a mesh. It's a demo with a deliberately horrible application, and then you go through and use various features: oh, we can enable retries, we can do this, we can route the traffic elsewhere, things like that.

B: One of the questions that often turns up when we demonstrate "here's a way you can take a terrible, terrible application and make it show up with a vaguely okay user experience" is: yeah, but from an operational point of view, isn't it terrible that you've now made it look like it's okay, even though it's horrible? And the answer to that question is: that is not a great situation, but guess what, that's...
A: So this is coming from some feedback from one of my teammates internally, looking at the GAMMA APIs, and I want to start by thinking just about the mesh use case, specifically the mesh use case.

A: Correct me if I'm wrong, but where I wanted to go with this is that in a lot of implementations there isn't this distinction between traffic splitting and traffic policy, with failover policy and things of that nature. So what I'm wondering, again thinking specifically of the mesh use case, is:

A: We've also got this traffic splitting in HTTPRoute, and so you've got this "okay, so policy overrides HTTPRoute" situation. Sure, I guess that kind of makes sense, but you've now got two places to look at. So I guess I'm just wanting to ask the crazy question: does it make more sense to have traffic splitting live in one policy resource and, instead of blowing up HTTPRoute with failover configuration and things like that, actually abstract that out? I don't know; I'm posing the question.

H: Yeah, I was just going to say that would kind of answer the previous point about not wanting to look in ten different places to see all the policies that apply to a single resource; that would solve that problem. We're actually thinking about the same thing over here at NGINX; we started looking...

B: Even if you do defer some of this failover stuff to an attached policy, you still end up with questions about: okay, what does the spec allow if you didn't attach a failover policy? Is the mesh required to continue going ahead and routing to the bad service? Yeah, John, I think you're... sorry, Keith, where was that?

A: For the moment, I think I mostly want that question to continue sitting a bit; I'll bookmark that part of it. Let me jump in by saying this: in a lot of the meshes I've seen, I think one of the reasons this gets kind of funky for the mesh is that a lot of mesh implementations have a resource that's not explicitly bound to a Service and that's a lot thicker than HTTPRoute is. So, in Istio world you've got VirtualService; with SMI you'd...

A: ...it didn't cover failover as much, but you have HTTPRouteGroup, which is more similar here. Even at OSM, implementing SMI, we had a custom resource, UpstreamTrafficSetting, where these other traffic-y things were defined. I don't remember the Linkerd or Consul equivalents here, but the rigidity of HTTPRoute as a beta API kind of precludes us from adding advanced load-balancing configuration to that resource.
I: Yeah, I was going to say, I think there is a difference between a backend and an endpoint. The spec obviously allows, and I don't see how you could not allow, load balancing between endpoints within a backend. But what we're suggesting here is not just a simple weighted split between backends but more sophisticated load balancing across them, which I think can be really tricky, because those backends can have different configs, for example.

I: So one example that would probably work is, okay, different retries on handle-heavy-load and super-fragile. That's probably fine: when you pick the endpoint, you apply the different retry policy. Okay, I don't think you could do that in Envoy, for example, but I can imagine it working. But...

I: ...a balancing policy? What does it mean to have a round-robin load balancer on handle-heavy-load and a random load balancer on super-fragile, and maybe least-request on some third backend? You can't mix load-balancing concepts like that, as far as I know, and I'm sure there are other policies the same applies to. To me it almost feels like what you're after is not two backends with different weights; you're after one backend...

I: ...that represents a composition of the two. I'm not saying that's necessarily something you want to do, but it's something you could do today, right? You could make a custom type that's like a "service split" or something, and then, within that, it's one backend from the HTTPRoute perspective: you send all traffic there and then you implement your custom semantics there.
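The "custom type" escape hatch described here relies on HTTPRoute's extensible backendRefs, which may reference kinds other than Service. A hypothetical sketch (the `ServiceSplit` kind does not exist; it stands in for an implementation-specific CRD that would compose the two services internally):

```yaml
# Hypothetical: route 100% of traffic to one custom backend that
# internally composes two services with its own load-balancing and
# failover semantics, keeping HTTPRoute itself simple.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: composed-route
  namespace: demo
spec:
  parentRefs:
  - name: demo-gateway       # hypothetical Gateway
  rules:
  - backendRefs:
    - group: example.io      # hypothetical group
      kind: ServiceSplit     # hypothetical CRD
      name: fragile-plus-heavy
```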
B: If you configure a 50/50 split... but if you have something that's already working, and then you typo a new thing being added, and half of it stops working, that feels like it would surprise almost everybody using these things right now. That's the invalid case I was talking about beforehand.

F: I think one of the concerns, just to go back to that a little bit, is what happens if we have some kind of auto-fix capability, right? We knew the previous state was good, we stick with that, and then for some reason our controller restarts, loses state, something happens, and then it breaks a week after the config change happened, over the weekend or something. Then you've created a worse problem than "it broke the second I changed this, so I need to roll back this specific change."

B: There are things in Emissary that... you know, Keith, let's not run ChatGPT as a Kubernetes operator; that's a terrifying idea. Yeah. There are things in Emissary where, if we notice that you've put in an invalid resource, the invalid resource will be rejected, just completely rejected, and the way Emissary's API language works, that has the effect...

B: ...that if you do in fact try the equivalent of an invalid backend, there are a lot of situations where Emissary will simply ignore your invalid backend and use the other ones. Generally speaking, we've found that to be a reasonable thing, based on customer feedback.

B: There is also the situation in Emissary where it is possible, in some cases, for it to generate an invalid Envoy config, figure out it's invalid, not use it, and then a week later something restarts and it doesn't have the good one anymore. We've seen that case too, and that is a much worse situation that I would love to avoid. But...
I: The point I would also like to make is: I don't know if we, as a community, have decided what this API is, right? Is this the common language that all implementers use, the building blocks for higher-level APIs, or is this the high-level API? Kubernetes, to some extent, has decided that it's the building blocks, right? You can...

A: Yeah, I really like John's distinction earlier about service health versus endpoint health. I understand the use case of trying to help the user. I think, again, with the distinction between service health and endpoint health, my gut tells me to pivot back to that Gateway API...

A: ...that Gateway API idea of failing fast and failing loudly and not doing things in secret. If an endpoint is no longer healthy, you probably want to transparently route around it, and basically that's how Kubernetes does it: if a pod isn't ready, Kubernetes won't send traffic to it.

A: I kind of feel like the same principle applies here: if an endpoint in your service is not healthy, then sure, your mesh or gateway implementation can route around it, and Kubernetes will actually help you not route there for obvious failures, based on your readiness probes and health checks. And if you've got other mesh-specific health checks you want to do, then yeah, you can do that as well. If your service is unhealthy, though, that means all endpoints are down, and if all endpoints are down, that points to: okay, you've got a cloud provider outage of some sort in your AZ, or you sent some bad config somewhere. I'm trying to think: what are the scenarios that would cause every endpoint for a service to go down, and is there a scenario where we'd actually want to protect the user from that by transparently sending traffic somewhere else? I struggle to think of one. Maybe you have one, but...
B: My main thing on the unhealthy question is: I think it would be a good thing to write down what the spec, what Gateway API, requires and what the expectations are, because I didn't actually see anything about that in the spec when I was looking around at this stuff. The invalid one is a little bit more...

A: Yeah, I think that makes sense. Rob, correct me if I'm wrong, but I think there is a section in the spec about the expected behavior for...

B: In that case, we should just write down that, clearly, the right answer here is: to understand this, you have to go ask Rob, and then that'll be great. Perfect.

B: Yeah, but the funny thing, of course, is that the real reason I put this on the agenda was just that I had the action item and figured I would say "hey, this is done now." So we should probably move on to other things and continue this asynchronously, but thanks for the very interesting discussion, everybody; I appreciate it.

D: Good points from everyone. I think we don't have any specific action items to follow up on, but rather: join the discussion and let this sink in for a while. Yep.

A: Good. All right, Rob, you've got the last two topics and about seven minutes to go through them. Do you want to start with the contributor ladder?
F: Yeah, I think there's plenty of time for both of these. The contributor ladder is a doc that I talked about yesterday as well at the main meeting; I just wanted to highlight it here because the two groups don't completely overlap. We really appreciate everyone who's already been actively helping to review and commit lots of new conformance tests, with lots of great reviews out there, and we want to recognize that with additional roles in the community.

F: So if you've been involved and you're looking to formalize that role in some way or fashion, we have tons of opportunities. We're about to have three different projects under Gateway API: obviously Gateway API itself, then ingress2gateway, and Blixt once that transition completes.

F: We want to be very clear that everyone is welcome to get involved; you don't have to have a formal role, but you're very welcome to work towards one, and there are some guidelines there. The contributor ladder I've proposed below is very similar to what exists in upstream Kubernetes. There's the organization member; we've already helped many people get to organization member, and thanks for all the great contributions that helped get to that point. Then reviewer is really kind of that...

F: ...first line, the first people reviewing all the PRs. In all these cases you can specialize in a few different areas, whether that's conformance, documentation, or GEPs; there are other areas within our code base too. You could work on Blixt or ingress2gateway or something else entirely, but we're really trying to find ways for people to get involved as much as possible, and to formalize the contributions that have already been made. There are people who I think already qualify for many of these roles.

F: So if this is something that interests you, just reach out to any one of the maintainers or leads on this project and we'll try to help get you set up. This is kind of a combined thank-you to everyone who's already been doing so much, and also, if you want to be recognized with a formal role, we'd love to have you. And on that note, don't hesitate to add comments, questions, whatever; this is just a proposal, and we can change things.
F: Thanks, Rob. Cool, and the next one is really short. Because this is the morning meeting, at least in Pacific time, we are trying out a main Gateway API meeting also around this time. So if you happen to be in Europe or on the east coast and want to go to the main Gateway API meeting, that's Monday at 9:00 a.m. Pacific.

F: We're testing out this time because we've heard that there are some people who would love to come but haven't been able to make it because it's too late in the day. So please, this is kind of our test run to see if people will actually show up. We've heard that there's interest, so if there is, please try to make it, so we can clearly understand that the interest is there and whether it's worth rotating back and forth, or whatever it ends up being.

F: I actually missed an important detail in all of this. During this meeting we were talking about Antonio's KEP, and Antonio is based in the EU, so I asked if he could show up to the next meeting to talk about his KEP. He will be there. So if you're interested in learning more about all that stuff, he'll be there to answer any questions and give a high-level overview.

A: I was already going to be there, but now I'm certainly going to make sure I've got time for that. I'm really excited for that; that's awesome.

A: Cool, all right, fantastic. Thank you, everybody, again for showing up and for having these great discussions. A reminder: the next Gateway API meeting, like Rob just said, will be at 9 a.m. Pacific on Monday, and the next GAMMA meeting will be at 3 p.m. Pacific the following Tuesday. Take care, everybody, and I'll talk to you all later.