From YouTube: SIG Network Gateway API Bi-Weekly Meeting for 20220822
A: All right, welcome to the Gateway API sync for August 22nd. Just as a reminder, this meeting is covered by the Kubernetes Code of Conduct, which pretty much boils down to being nice to each other, and the agenda is open for anyone to put anything they want on it. If you are looking to add something to the agenda, just make a comment in chat; if you don't have a copy of the doc, we can send you a link.
A: All right, sorry, August 22nd, 2022. So let's go ahead and get started. I had a couple of topics that I wanted to talk about, so I'll start with this: I was thinking about it from the perspective of the implementations that I maintain, and it would be nice if there were scaling semantics within the Gateway resource itself, or within Gateway somehow.
B: Yeah, one of my concerns would be just that it's hard to decide where the limits are on what you pass through. I know there are a lot of reasonable requests in this realm, and one of the other ones I've seen was basically how, on a Gateway, you could communicate what kind of Service-type-LoadBalancer configuration you wanted. A lot of implementations are provisioning a Service of that type that sits in front of their gateway, and so trying to pass config through that way was also...
B: It seems like a very similar concept here of gateway-specific configuration. And I think you mentioned what Google would theoretically do: if we ever made this configurable, I think it would probably be a different GatewayClass, or a different GatewayClass config. So that overlaps a bit with what it sounds like Consul is already doing.
B: I don't want to say this doesn't make sense; we just have to be careful with anything we add to a Gateway, because we have to be very calculated about what the scope of a Gateway is and whether we're just templating other resources. But I'll hand it off. I think, Keith, you were next.
E: Yeah, I share some of the concerns that Rob brought up. It feels to me like the Gateway API, up to this point, has been very much describing actions that the data plane should take when handling traffic (traffic routing, or policy attachment). It's describing actions the data plane should take, not management of the data plane itself.
E: Maybe GAMMA has some interesting use cases here when it comes to sidecarless mesh or other mesh configuration, but that's a can of worms that, outside GAMMA, my gut tells me we likely want to kick down the road until we've got something more concrete.
E: Even if the use case is clear, I guess what I'll say is that if this is decided to be a part of the Gateway API, it should be a completely separate thing, because I can already see the collisions between the existing routing APIs and management, a crossing of concerns, and it doesn't look fun from where I'm sitting.
A: I mean, that would be cool. One of the things I put up in the notes here (and again, I didn't even make this a discussion item, because I'd already realized some of the problems with it and more or less just thought I'd like to talk it out) is that I did think it would be really cool if we could have Gateways be the target of autoscalers. So, for instance, the HPA could actually target Gateways in addition to... that might be wrong too, but it was a neat idea.
A: I don't recognize voices very well, but the person from Consul said they have a gateway configuration that's used through parameters referenced in GatewayClass, and that's how you do it today. But in our operator, you would deploy a Gateway and then, theoretically, you would have, let's say, traffic... I mean, you could have some of the basic stuff you get with the non-custom metrics for an autoscaler, or something like that. But you could definitely have traffic-based custom metrics.
A: That would lead you to need additional instances of the underlying pods for that gateway, right? And having that be something the spec accounts for seemed neat to me. Is there anybody for whom that just would not work at all in their setup? I'd be curious to hear a "nope, that would just break everything" or "that is not interesting."
F: I think the thing is, it scares me for some reason. And I think it scares me because it feels like there's going to be a lot of definition required: what do you mean by "loaded"? How do you define what traffic is important? What metrics are you going to use, and stuff like that? We've had some stuff open for a while about how you produce standard metrics for the sort of thing we're talking about, and I think...
F: We talked in SMI about that. It kind of feels like... I agree that this would be cool to have, but I really...
B: Yeah, I was just going to say, as far as things that just don't work: this is something that is not meaningful to the implementation at GKE, right? I don't want to get in the way of something that is clearly useful for a large number of implementations, but it's something that would not be meaningful to us.
A: That is actually the biggest attraction from my perspective. On one hand, I was like, it makes some sense because, at least in our implementation, a Gateway ultimately becomes a Deployment, right? And you can scale a ReplicaSet directly if you felt like doing that, versus the Deployment, which is a higher-level API, and it would seem to make sense that, as you go higher, there are still some things you can configure. On the other hand, not everybody is modeling gateways...
A: ...that automatically become Deployments; that's not how everybody is set up.
B: For things that are not currently in our set of implementations, I don't want to back ourselves too far into a corner and say that in the vast majority of cases Gateways are going to be represented by a set of pods, because I think there will be other mechanisms, beyond just cloud providers, to deploy a Gateway. At least, that's my perspective; I could be wrong on this.
D: By the way, this would be an amazing KubeCon talk, so there's your ticket. I think the struggle I have with this is how useful it is to generalize it as an API construct, something the community is required to implement, versus "hey, here are some really good best practices; we found that this works and this doesn't, and we shouldn't be reinventing the wheel among the different providers."
D: Also, it probably needs some experience to inform the design. I don't know how far people have gotten.
A: We've gotten as far as me wanting to have this conversation. Although I'm confused by that; maybe that's not what you meant. It sounded like you were asking how far this thought has gone.
D: I think it's definitely useful to understand how people have been doing it, but whether to mandate it as a requirement is less clear.
F: That's really what the heart of it is for me: it feels like something that would be difficult to generalize correctly, and at the same time it would require a lot of specifics...
F: ...to be able to generalize it correctly, because I certainly don't have a good intuition about how I'd write something for that. So I guess, from my point of view: I don't know. I'd like to see some more about this, but it does also feel like...
C: Yeah, a question on that: when you talk about generalizing, to me it seems like either an implementation supports it or it doesn't, because the scale API is extremely specific. To make a resource scalable, you need a /scale subresource that has, I think, a selector to identify the thing you're scaling, and, I think, a field saying how many replicas you currently have, something like that. So when we say "generalizable," are we asking how generally applicable it is, or is there something we could do that would make this API more or less general?
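(Editor's sketch, for readers following along: a minimal model of the shape being described here. The field layout mirrors the `autoscaling/v1` `Scale` object, a desired replica count in `spec` plus the observed count and a label selector string in `status`, but this is a standalone illustrative type, not the real client-go API, and the selector label is made up.)

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Scale mirrors the shape of the autoscaling/v1 Scale subresource:
// the desired replica count lives in spec, while the observed count
// and a label selector string live in status.
type Scale struct {
	Spec struct {
		Replicas int32 `json:"replicas"`
	} `json:"spec"`
	Status struct {
		Replicas int32  `json:"replicas"`
		Selector string `json:"selector"`
	} `json:"status"`
}

// needsScaleUp reports whether the observed replica count is still
// below the desired count.
func needsScaleUp(s Scale) bool {
	return s.Status.Replicas < s.Spec.Replicas
}

func main() {
	var s Scale
	s.Spec.Replicas = 3
	s.Status.Replicas = 2
	s.Status.Selector = "example.io/gateway-name=demo" // hypothetical label
	b, _ := json.Marshal(s)
	fmt.Println(string(b), needsScaleUp(s))
}
```

Anything exposing this subresource, whatever backs it, becomes a valid `scaleTargetRef` for the HPA, which is exactly the "supports it or not" distinction C is drawing.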
B: You know, since you're mentioning that, I think one of my questions about this would be how meaningful it is. The scaling API is something that is well known, and I believe there are some efforts to make traffic-based autoscaling a thing, or some concept of that, for Deployments. So if you have a Deployment that's backing a Gateway, that Deployment can support autoscaling.
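(Editor's sketch: the core calculation the Horizontal Pod Autoscaler applies to any scalable target is the ratio formula from the Kubernetes HPA documentation. The requests-per-second numbers below are illustrative, and `desiredReplicas` is a hypothetical helper, not Gateway API code.)

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the standard HPA formula:
//
//	desired = ceil(currentReplicas * currentMetric / targetMetric)
//
// e.g. scaling a gateway's proxy pods on an average
// requests-per-second metric.
func desiredReplicas(currentReplicas int32, currentMetric, targetMetric float64) int32 {
	if currentReplicas <= 0 || targetMetric <= 0 {
		return currentReplicas
	}
	return int32(math.Ceil(float64(currentReplicas) * currentMetric / targetMetric))
}

func main() {
	// Two proxy pods averaging 200 rps against a 100 rps target: four pods.
	fmt.Println(desiredReplicas(2, 200, 100)) // prints 4
}
```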
B: Is that significantly different from a Gateway that maps directly to pods? I'm not sure. I think somebody needs to sketch this out in a little more detail, and I don't want to go too far down this rabbit hole quite yet, but I'm a little curious whether we can model the same thing a little bit differently.
G: That sounds right. One thing I would just note (and I don't want to scope-creep things too much) is that I do feel like there is a very large general problem of customizing in-cluster proxies, right? It's more than just autoscaling: oftentimes there are resource requirements, and basically the whole Deployment and Service spec, potentially.
G: A long while back there was some discussion of classifying types of deployments, but we never really standardized in any way how this would work.
A: That's a good thought. Yeah, the in-cluster proxy thing is my life; I don't work on anything like GKE. So I have found that configuration for the gateways has been... I mean, right now, one of the ways that we do it is...
A: We have a gateway configuration CRD that you attach via parametersRef. We toyed with the notion of having other things, but that's just what we have for now. It's not great, but it is the way you would, theoretically, do some of the scaling stuff right now. So it's a good thought: maybe solving, more generally, how you do the configuration of a gateway might be something that would inform this more, but also just solve a general problem we're all solving in different ways, maybe.
A: I can't see everyone, so if somebody else has their hand raised, just start interrupting me.
A: I think the action item for this, then (and thank you all for the feedback on the thought), is that I'm going to think about it, maybe start a discussion with something a little bit meatier, or even reach out to individuals and see how everybody's doing this, and come back around with another discussion with more meat and potatoes to it. So.
A: All right, so the next thing I wanted to talk about: I've had two people recently who are in European time zones tell me that they want to come to this meeting, but it's too late for them. I know we've talked about it before, and I just wanted to... what's that?
A: Is there anybody on this call who would like an earlier slot, to do the alternating thing, kind of like what we're doing for GAMMA? Anybody opposed to trying it?
F: Oh, switching this to alternating means I'll only make every second one, yeah. Making it earlier puts it at 1 a.m. my time, and when...
B: For a little bit of historical context: for some period of time we did rotate between times that were friendly to Europe and times that were friendly to APAC.
B: What ended up happening (and this was at least a year ago now) is that the people attending the Europe-friendly meeting were a subset of the people attending the APAC-friendly one, and we didn't miss anyone by switching to the APAC-friendly time. I'm not saying that would happen today.
B: That's just what happened historically. It was challenging in the sense that there were meetings that were significantly larger than others; so if you had a big proposal, you'd try to save it for the bigger APAC meeting, as an example, because more people would be there.
B: That would be asking people like myself, in Pacific Time, to come to a meeting at 8 a.m. on a Monday morning, which may not be fun for me. Though, at least, that is a tiny, tiny complaint compared to what everyone else is dealing with as far as time zones go. My personal preference here would be to wait and see a little bit longer how GAMMA does with this, since they're really pioneering the idea again, and I think they have had decent turnout for both. But maybe someone who's more involved... Keith, I think you're on this call, and John: do you have any thoughts on your experience so far with alternating time zones?
E: Yeah, this is actually part of the status update that I've got on the agenda, but we've seen pretty good attendance for both meetings. At a cursory glance, I don't necessarily see folks that we consistently gain with the morning meeting, but there are some times when more people show up to the morning meeting. So I guess it remains to be seen.
E: We've still got around 12 to 15 people at each occurrence, so that tells me that something is going well. And the nice thing for us is that we've got leads across the entire continental United States when it comes to time zones, so we're swapping hosting duties, which makes it a bit more sustainable. So I guess my takeaway is to keep that going until we see a reason not to. I think John had to step away; Mike, do you have any thoughts?
C: Yeah, I think we don't have a lot of evidence in terms of folks who are only able to make the EMEA-friendly one, but it certainly...
C: ...every time, yeah. But as far as rotating hosts goes, I'm working on hosting those ones, starting off with the doc.
A: Okay, actually, yeah: if you just put your thoughts in there, then next time we come back to this we can actually... I'm trying to take notes today, because I realize there are some things we come back to every once in a while, and having notes could be helpful. I'm okay with waiting; I think the idea of just waiting a little bit longer to see what's going on with GAMMA makes sense. GAMMA looks promising, right? The meetings are working, but it's mostly promising in the sense that attendance is actually happening, not necessarily that we are gaining anyone. So maybe we just take a little bit more time and keep an eye on that. It sounds like Keith is already specifically trying to keep an eye on it and use it to inform a decision. In the meantime, for the couple of people I know who want to come but find it prohibitively late, I'll talk to them a little bit more and tell them that.
E: You also gave me an idea, Rob. I'll put this as an agenda item in our next GAMMA EMEA meeting, just to ask folks whether they're enjoying the flexibility of the other time, to gauge it a bit more directly. So I should have an update in a couple of weeks.
A: Sounds good, all right. And on that note, Keith, do you want to do the rest of your GAMMA status update?
E: Sure. I made a slide deck for folks who might not be able to be here today, as well as anybody who wants to reference back to it. How do you want to do this? Do you just want to continue sharing your screen? No? I'll stop, then, and we'll do the screen-sharing dance: you've got to allow me, as host, to screen share. You got it.
E: So, a quick status update on GAMMA and where we are. When we started the GAMMA initiative, it was suggested that we do these kinds of recurring status updates, so that folks who might not have the time to attend GAMMA meetings, or people who are otherwise interested in GAMMA, can get an update every so often on what's going on. This is a high-level overview slide of how things are going: we've had four meetings.
E: It's been about a month since our original kickoff meeting on July 26th. As mentioned, we're alternating time slots for different time zones, with solid attendance in both slots, about 12 or 13 people. The first meeting had around 32 attendees, I think, so it was really cool to see that kind of interest in the work we're doing for service mesh in Gateway API.
E: We started keeping track of organizations across the different GAMMA meetings, and I went through and did a count of all the different organizations represented: there have been 14, and I think 15 with Nick's new org change, once he starts coming to a GAMMA meeting. So we've got pretty wide interest across the community as well, which is very exciting.
E: We've got our first GEP merged: as of last week, GEP-1324 was merged into Gateway API. Yeah, get the claps going. It's a high-level account of the goals, the non-goals, and the places where we're explicitly cutting scope and deferring certain goals, and we'll get into more detail about that; then there's just a quick "up next" section that we'll also cover in more detail.
E: So, where are we now? (Sorry, did someone say something? Go ahead. Okay.) Where are we now: the what and the why have been clearly established. We were trying to be very intentional about establishing what we're trying to accomplish, and why we're solving those problems, before digging too deep into implementation detail, and GEP-1324 covers that. I'll call out the glossary as deserving special attention.
E: We went through this on the last GAMMA call, but we've got this really cool glossary that goes through a lot of terms that are going to be important for having mesh conversations, and some of them might even be broadly applicable to Gateway API. There's also a deferred-goals section: things we're explicitly not excluding, but that we're going to revisit.
E: Some might be disappointed to find their favorite use case deferred there, but we're hoping to iterate quickly, and cutting scope is going to help us get something out there a lot sooner. At this point we're ready to set our sights on the "how": we've got two GEPs, around mesh representation and then route binding, that we're going to start really digging into the meat of...
E: ...in our next meetings. We'll probably start with mesh representation first; then, once we decide how to represent a mesh within Gateway API, we'll figure out how to start binding routes between services and back to the overall mesh construct. In general, we're hoping to bias towards action in these early days. There have been a lot of conversations about striking that balance, making sure we're not going to regret some of the precedents and actions that we take.
E: But the short answer is that we hope to keep scope small and iterate very quickly. We've got about two months from now until KubeCon, and we hope to have these two GEPs knocked out by then, and to have something that implementations can start implementing; at least, generally, opportunities for alignment.
E: I think it's really important, because of the nature of these two work streams (ingress from the main Gateway API meeting, and service mesh with GAMMA), that we call out opportunities to align and work together, so I'm calling a few of those out here.
E: We're currently evaluating the need for a GEP, or some other clarification, around DNS. DNS has popped up a lot in some of our recent conversations, and the greater Gateway API, the ingress use case, has kind of been able to sidestep it, it seems, because binding happens directly to Service resources, or to other Kubernetes resources that exist within etcd.
E: When you start dealing with hostnames, though, referencing and intercepting traffic for destinations like you do with a mesh, you might not be able to get around some sort of clarification for DNS. But whether...
E: ...a GEP is the right place for that, I'm not sure yet; we're going to be exploring that as well in some of our GAMMA meetings. The next one (and I think we called this out in last week's Gateway API meeting) is that we're going to need a GEP around egress. There have been a lot of questions and conversation around egress for Gateway API: the egress-gateway use case, ExternalName Services, and things of that nature. And GAMMA...
E: ...as the service mesh use case group, we naturally have some concerns, or some questions, around egress as well, and we're seeking a champion. I think that was the consensus at the Gateway API ingress use case meeting, and I think it's even been commented on a discussion or an issue somewhere.
E: If you happen to be really passionate about the egress use case, if it's a pain point for you and your implementation, then please: we would love for somebody to come take this all the way and teach me some things, because it's a complicated problem to do well. The third one is GEP-1282, which Nick is pioneering, on backend properties. The provisional PR has been submitted, and it will likely impact our service-attachment GEP.
E: So perhaps we end up binding to this backend-properties resource instead of an actual Service; that's going to depend on implementation details, on the "how" that hasn't yet been decided for GEP-1282. So, again, I would very much like to be a part of that conversation and would love to try to work together on this. And then one thing I'll call out here, specifically a part of the GAMMA glossary for this GEP...
E: ...is the duality of a Kubernetes Service being both a frontend and a backend. I call this out because, Shane, on the GEP-1324 PR you noted this as something that's been talked about before within Gateway API, and that it might make sense to start with the backend-properties GEP already in progress; maybe it makes sense to bring this over to the ingress use case within Gateway API.
E: I don't care about the name, but there's this concept that the Kubernetes Service resource serves two functions: one as a frontend that can be addressed and referenced, and one as a backend representing a set of endpoints, or addressable hosts, depending on your proxy terminology.
E: So that's something I'll call out for folks who are deeply embedded in the ingress use case of Gateway API, to see if it might be useful in some upstream conversations. That's the end of my status update for everybody. I'll go ahead and keep sharing my screen in case there are any questions or clarifications, or anything folks want to see in the next status update.
B: Yeah, I'll echo the same thing. I really appreciate how you all have taken GAMMA on and just run with it, and this is great progress for just being a month in. Thanks for the update.
E: Appreciate it. Yeah, like I said, this slide deck is in the meeting notes for this meeting, so if you're watching this recording and want to see it or add comments, it should be commentable by everybody; feel free. One thing I want to ask: do we want to figure out when the next status update should be, establish a cadence?
E: Perhaps. I'm fine with it being ad hoc, but I don't know if anybody has a preference for more structure or not.
B: Sorry, Rob... no? Sorry. Either way, but I would say every month seems like a fine cadence. Also, if there's something significant that you want to bring and discuss in more detail, we could do that too. But at least in these early days, making it once a month or so, so we're not drifting too far in different directions, seems reasonable.
E: Yep, sounds good to me. Even if the update's not big, I do want to try to make sure that whoever's making these calls is calling out the opportunities for alignment here. I think that's one of the big benefits of having this kind of status update: how can we bring these workstreams together to make sure that we're moving in the same direction? All right; I appreciate the opportunity to share. I'll turn it back over to you, Shane.
B: I don't think we've documented that anywhere right now. I have a rough proposal of what I would write in this issue, and I've assigned it to myself, so I'll work on a PR that does this. But the high-level thing here is maybe worth discussing a little bit more, and it's this: talking to some people in API Machinery about how they would recommend we do this...
B: ...the overwhelming reaction I got was to avoid supporting alpha any longer than you have to. I'm fine with supporting v1beta1 for a long time, maybe forever, but v1alpha2? There doesn't seem to be any reason we want users to keep on using v1alpha2 anywhere; our docs have all said v1beta1 for a month-plus now. So, with that...
B: ...the recommendation would be to set the new version, v1beta1, as storage in v0.6.0, and to stop serving the old API version, which is v1alpha2. So the old API version would still exist in the CRD.
B: So old resources still work; controllers that are reading v1beta1 still understand v1alpha2 resources, and it's all good. It's just that, starting in v0.6.0, if you use those CRDs and you try to write or update a resource as v1alpha2, it's not going to work. That's the proposal. For some context, this is something that we are planning to get ahead of with GKE: we have no plans to install alpha API versions in our managed product.
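(Editor's sketch of the served/storage mechanics being proposed. The `Version` struct is a hypothetical stand-in for the entries in a CRD's `spec.versions` list, not the real apiextensions type, but it shows the invariant at play: several versions may be served, while exactly one is used for storage.)

```go
package main

import (
	"errors"
	"fmt"
)

// Version models one entry in a CRD's spec.versions list: a version
// can be served (readable/writable via the API) and/or be the single
// version persisted to etcd.
type Version struct {
	Name    string
	Served  bool
	Storage bool
}

// storageVersion returns the one version flagged as storage;
// the API server rejects CRDs that don't have exactly one.
func storageVersion(versions []Version) (string, error) {
	name := ""
	for _, v := range versions {
		if v.Storage {
			if name != "" {
				return "", errors.New("more than one storage version")
			}
			name = v.Name
		}
	}
	if name == "" {
		return "", errors.New("no storage version")
	}
	return name, nil
}

func main() {
	// The proposed v0.6.0 layout: v1alpha2 stays in the CRD (objects
	// written as v1alpha2 remain accessible, converted through v1beta1)
	// but is no longer served; v1beta1 is served and is storage.
	proposed := []Version{
		{Name: "v1alpha2", Served: false, Storage: false},
		{Name: "v1beta1", Served: true, Storage: true},
	}
	fmt.Println(storageVersion(proposed))
}
```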
B: So if you're using Gateway API, you're going to see this experience a little bit sooner on GKE: you only get beta as the exposed API. There's also a reference to the idea of kube-storage-version-migrator; we're going to need to work on something that makes this a little easier for users to understand.
B: But if we ever remove v1alpha2 from this CRD completely, instead of just not serving it, also not including it at all, we'll need to make sure that users who created something with v1alpha2 have done something to update their resources while v1beta1 was a storage version. Does the storage-version migrator do that automatically? Just bump a resource, say "hey, you're not using the right storage version," and bump you up automatically?
B: Yeah. You would hope that's a low-risk thing, but I think we probably want a more targeted feature, something that just targets Gateway API resources. That's something I'm intending to work on, along with the related PRs. I guess we can go through those now; they're really the same thing. Well, actually, maybe let's go to the next PR first, because that lines up more closely with the documentation.
B: This one makes that change. My goal is to get it in before v0.6.0; I think it's ready to go now, and I don't see any reason to hold off on it, but it ended up being a cascading change. It started with "I'm going to change three lines in the types files," and then I realized: hold on a second...
B: ...I need to change the examples, because in many cases we don't need v1alpha2 examples anymore, and then I need to update the docs to make sure they're including the related things. I've tried to cover the very specific changes I'm making in this PR.
B: That's the big change that implements what I was suggesting we document. Before I go any further: any questions or concerns about this kind of change?
B: Okay, cool, great. So then the next one: we'll go to the supported-versions PR. Thank you. This is more of a... if you are launching (and I think many or most of us on this call are planning on launching or building a product around this API), it would be very helpful to have some kind of guarantee that the API will be stable, so this tries to make some commitments in that vein. Right now our docs just say things like "we support up to 1.18, backwards-compatible to 1.16 if you're not using that protocol," a variety of things like that, but we don't have any commitment that that will continue to be the case.
B: So the first step is to say we're going to support at least the most recent five Kubernetes minor versions. I think the thing most likely to be frustrating about this is that there's a new thing called CEL coming out for CRDs, which allows us to represent, in the CRD itself, a lot of the validation we have in a webhook; and by saying this, we're saying we're going to wait basically five Kubernetes minor versions before we make that transition to CEL instead of webhook validation. Yes?
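(Editor's sketch of the support-window arithmetic under discussion. The five-version window is the proposal as stated in the meeting; the function name and the specific minor-version numbers are illustrative.)

```go
package main

import "fmt"

// withinSupportWindow reports whether a cluster's Kubernetes minor
// version falls inside a support window covering the most recent
// `window` minor releases.
func withinSupportWindow(latestMinor, clusterMinor, window int) bool {
	return clusterMinor <= latestMinor && latestMinor-clusterMinor < window
}

func main() {
	// With 1.25 as the latest release, a five-version window covers
	// 1.21 through 1.25; a three-version window only 1.23 through 1.25.
	fmt.Println(withinSupportWindow(25, 21, 5)) // prints true
	fmt.Println(withinSupportWindow(25, 21, 3)) // prints false
}
```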
B: Yeah, it's a long time: five releases times four months each, at least.
B: I don't think so. I think CEL is just getting to beta in 1.25, so there's not really any usage of it yet. But a year from now we may be regretting it and saying, "oh man, I wish we could use CEL today."
F: Personally, I would bias us towards not having five. I think the maximum we should do is the same as the upstream Kubernetes support, i.e. three, one year. Just because...
F: ...I know how long it took us to get to that in the working group. But I think that's about the best we can do, because otherwise we're going to be stuck. It's hard enough to make changes already, and we're going to need to push changes to some stuff that takes a long time to change; if we also can't drop the old stuff until later, it's going to take us years and years and years to be able to make similar changes, I think.
F: It's going to make the webhook a lot easier, yeah. Yes, that's it; Makaya has put a link in the chat, yeah.
F: Being able to use that will be really helpful. We'll be able to drop a lot of the stuff that's in the admission webhook as soon as we start using it. I mean, there's some stuff to be checked, like: if you put those rules into the CRD, you just see them in the CRD spec; what happens if you apply it to an older cluster, and stuff like that?
B: I have a bias, a personal bias, towards providing longer support than three, for the reason that users are coming to CRDs with the expectation that they can install these CRDs in any Kubernetes cluster.
B: And if you look at what the majority of Kubernetes clusters running today are, they are likely more than three versions old. Say 1.25 is out now, or we're about to be: 1.25, 1.24, 1.23. I don't think the majority of Kubernetes users are on 1.23 today, and I hate to artificially limit where this can be deployed just because we want to use the latest thing. I know that there has to be a balance here, and I don't want to push too far in the other direction.
B: I just want to be clear that, right now, we have an API where we say: if you're running a Kubernetes cluster, our API works. What we would be saying with "the latest three Kubernetes versions" is that maybe half of Kubernetes clusters are supported by our API, which is a change, and may not be what people expect from a CRD.
E: Yeah, I guess, putting this another way...
F: ...the fact that we're being delivered as CRDs feels like an implementation detail to...
F
I know that you're 100% right, Rob, that CRDs do have a slightly different support expectation. But equally, you know, I think we just need to manage the expectation, like in our release notes, and talk about it in these meetings, talk about it in calls, and make sure that it's clearly communicated that: hey, these might work in older clusters, but we'd like you to try, we're encouraging you...
F
...to try and move a bit faster than that, so that we can use this newer stuff that we need to be able to provide you with a better experience at the end of the day. Not having to run an admission webhook, I think, is a very big deal and would be really nice.
B
Yeah, I get that. Maybe a question for the other maintainers on the call, including Eunic, would be: for your implementation, what Kubernetes version range do you support with the latest release? Is it trailing three, or is it a wider range than that?
F
For Contour, I'm not a maintainer there anymore, but Contour supports three, and has for a long time, because of me. It's not three only, in the sense that it'll work back down to like 1.16 or something like that, but the explicit support is only for three, because of this sort of thing.
F
You don't want to have to wait. In the case of Ingress, v1 is the classic one here, right? The transition from v1beta1 to v1 took forever, and then it still screwed people, because people weren't looking for it and didn't know that sort of thing could happen.
F
You know, the removal of v1beta1 really surprised a lot of people; people were shocked. I think the way to stop that from happening is to ensure that you're telling people: you can't just keep your 1.12 Kubernetes cluster around forever, right? You have to keep upgrading it, just like any other software. And I know, Rob, that one of the reasons is that all the big cloud providers don't usually stick within a supported window.
F
Cloud providers don't keep their managed Kubernetes clusters within that supported window, and so, yeah, that kind of sucks too. But wearing my upstream hat, it's better for upstream for us to keep it to the three. I acknowledge the practical problems of managed platforms needing to qualify new versions of Kubernetes.
F
You know, it takes a very long time, and so it's very difficult for the managed platforms to keep their managed versions within the supported Kubernetes versions. But, and I'm going to be facetious about it, that seems like a "you" problem, as far as wearing my upstream...
F
...I was wearing my upstream hat, right? You know, it's like: hey, yeah, that's how it is.
F
Everyone else needs to sort themselves out, right? And I don't want to change the upstream stuff just for that; I don't want to acknowledge that too much or anything. I mean, it's like: hey, I know that there's a lot of work there, I know there's practical reasons, but I think it's better for the official support position to be three, same as the rest of it, and for us to manage the expectation issues separately. Personally. Sorry, I've talked a lot.
B
No, that's very helpful, and I just want to share, and I think you alluded to it earlier with Ingress, right: I am very familiar with the pain we caused, I caused, with Ingress, and I know we had multiple implementers come to us and say this really hurts. By the way, sorry, there were three supported versions that had both v1 and v1beta1, and if you wanted to support anything outside of that range you needed to either...
B
...you know, provide parallel support, or you needed to have different versions that you released. And big implementations, ingress-nginx and several others that I can't remember the names of right now, had issues with this, where they supported a bigger range of versions than that three, and it was a really rough transition for the community as a whole as a result. There's no perfect number here, but we do have prior experience that three can be painful.
A
I mean, the action seems pretty obvious: have the conversation in the issue, or rather in the PR. I'm leaning that way, to be honest with you. To me it seems like we'll always try to support far back; like we at Kong, we can get you running on some pretty old versions of Kubernetes.
A
We don't want to be beholden to that, and that's kind of the key here, right? At the end of the day we'll put in some effort, but we don't want to be beholden to an inordinate amount of effort to try to keep you on an old version. At some point something has to give, and as the end user you've got to be like: I just need to update my cluster.
A
It's five versions behind, you know. So I kind of lean that way, but I'll comment on the thing, because honestly I kind of thumbs-upped it at first, and then now, after hearing this, I'm like: yeah, no, I kind of agree with a shorter span, because five is a lot.
F
I think that, as was said, "best effort" is a magical phrase. I think putting some stuff in there to say: hey, three versions being the supported versions doesn't mean that we're always going to hard-drop things; we will make every effort to make sure that longer support...
F
The amount of effort in maintaining the extra webhook as well is probably not that much. What I'm worried about is: if there's no easy way for your CRDs to have CEL enabled or not have it enabled, then you're kind of screwed, right? If you can't have the CEL stuff in the CRDs and still have them apply to older clusters, then that means we then have to write, like...
B
I'll follow up on the issue. I'm not convinced yet, but I understand, and I also understand I'm outnumbered right now, so I'll try to make my best case, but we'll defer to the community, for sure.
A
All right, well, we don't have any time for anything else, but all we had left on the agenda was triage, so please just make sure to check out the recent PRs and issues and stuff like that if you have time, if there's anything that piques your interest, and give some feedback on them. But we'll check in again on them next Monday. Thanks, everybody, for coming, and have a good one. Thanks.