From YouTube: Gateway API GAMMA Meeting for 20230919
A
Hello, everybody. Welcome to the September 19th instance of the Gateway API GAMMA meeting. As a reminder, this meeting is governed by the Kubernetes code of conduct, so please be respectful to everybody. We have an open agenda and we've got a couple of topics today, but if there's anything you'd like to talk about, please feel free to add it to the agenda. I'll go ahead and share my screen so we can all be watching the agenda.
A
Oh yes, and please make sure to add your name to the list, so we can keep track of who's attending. I'll just go ahead and add myself there, and add my employer while I'm at it. The first item on our agenda is from Nick. I don't think he's here — does anybody want to discuss the Mesh resource?
D
If and when an implementation detects that the version of the CRDs is not something it recognizes, there may be features configured that it doesn't understand. Basically, it'd be really nice to have a place to put that kind of status for mesh implementations somewhere, because I expect there'll come a day where some mesh implementations don't support some things in Gateway API, GAMMA, whatever, and it'd be nice to be able to communicate that.
C
The thing that I find really interesting about that is that your last point, Shane, would actually imply that it should be a cluster-scoped resource, but...
E
Yeah, I don't know how I feel about that, because the problem is we have these people who will not touch them. We have customers at Kong that are mad about GatewayClass, because they're like: nope, we will never do that, we will not use cluster-scoped resources, please F off.
E
I think it's mostly a religious objection, and it stems from this: we don't want to give RBAC permissions on cluster-scoped resources to anyone if we don't absolutely need to. And it's not every conversation I've actually had with customers about this — it's more of a "no, we're just not going to start including cluster-scoped resources for an ingress controller."
E
They've been trying our Gateway implementation and have stopped complaining, but it's not just that — I wouldn't even be talking about this if it wasn't for the fact that I've heard it from other implementations. If it was just Kong, I'd say, well, our customers have a different use case or something; we have some more traditional, infrastructure-type use cases and stuff.
E
Maybe it just stems from that, but I've heard it from — well, Nick has been saying that's been a thing going on at least with Contour, if not others as well.
C
I have no direct experience with that, because in my time with Emissary we didn't have any cluster-scoped resources. Keith has been patiently waiting, so Keith, why don't you interrupt.
A
This is interesting to me. On the initial use case, I'm a big plus one — I don't see any big reasons not to do this, and I think it's useful. I have been looking into policy attachment on the Istio side to do some work there recently.
A
So this set my mind off in a couple of different directions, because at the moment, when you look at Gateway API's mesh support and inherited policy, there's not really anything above Service — you can only go from Service down to Route automatically.
A
Until we have a more compelling need or a customer ask, that feels like it might be where it stays. Now, the conformance stuff looks like a better, more compelling use case, and if that use case is more compelling, I'm interested in exploring what it looks like, because in Istio we kind of have this whole namespace convention: if you install a resource in the istio-system namespace for the Istio installation, then that's your mesh default.
A
That gives another way of doing this pattern, but I think the idea of handling two meshes installed is compelling. It does get weird with versioning. Before we go too far, something you'd have to figure out — and I think this is very interesting — is: do two versions of the same mesh share the same cluster-scoped mesh resource? Do defaults propagate back and forth? Those are open questions, but I don't know that we have to aim for that now.
A
For the point being brought up — conformance status, features, and version compatibility — I think that a mesh resource makes sense.
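No such resource exists in the spec today; purely as an illustration of the idea being discussed, a cluster-scoped mesh resource carrying conformance and version status might look something like this (the kind, group, and every field name below are hypothetical):

```yaml
# Hypothetical sketch only — neither this kind nor these fields exist in
# Gateway API. The idea is a single place for a mesh implementation to
# report which versions and features it supports.
apiVersion: gateway.networking.k8s.io/v1alpha1   # hypothetical group/version
kind: Mesh                                        # hypothetical kind
metadata:
  name: example-mesh
spec:
  controllerName: example.com/mesh-controller
status:
  conditions:
    - type: Accepted
      status: "True"
      reason: Accepted
  supportedVersions:
    - v0.8.0
  supportedFeatures:
    - HTTPRoute
```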
C
It's also pretty difficult for me to think that it's really that much harder to just go and say "give me all the mesh resources in the cluster" as opposed to "give me a single cluster-scoped mesh resource." So for Linkerd, the idea that you would toss a namespace-scoped mesh resource into the Linkerd namespace — sure, fine. That seems like a reasonable way to do it.
C
I think it's important to lay my biases on the table here. Linkerd is typically simple enough to upgrade and downgrade that, rather than trying to run two versions in the same cluster, people tend to just try the new version on staging or on a developer cluster and then go ahead and upgrade production.
C
Clusters are cheap, right? And Linkerd, as of 2.14, can do flat-network multi-cluster rather simply. So, yeah — it's a little weird to me to think of... I have to figure out the right way to phrase this one.
C
Okay, yeah, so I need to go check with some of the more salesy folks over at Buoyant about what we've seen from customers, but the customer conversations that I've been part of have tended more towards treating clusters as units of isolation, rather than anything else.
C
Linkerd is a security component, right, and so people tend to believe that Linkerd can do a good job of isolating chunks of your cluster without going "oh my God, we have to isolate the different Linkerds" — that has been my experience, anyway.
F
It's been the same way for us; after the introduction of cluster peering, it enabled a pattern of having different versions in different clusters.
F
I don't see any use case for trying to actively run multiple versions in a single cluster — I don't think that would make sense. The upgrade case, though, I think is relevant, just not...
F
The thing is, there was definitely a lot of hesitation on the part of large users around upgrading a major infrastructure component, and I...
F
I can reach out to some of the folks that are still on the Consul side to ask whether there's any consideration for upgrade strategies. I think we had looked at some of what Istio was doing with multiple versions and simultaneous upgrades, as a "should we maybe try that" kind of thing.
C
That's kind of related to my flip comment about making the upgrade case less scary, and it'd be very interesting to hear John's commentary on this, of course, but a lot of that stuff...
C
It tended to feel to me, when we ran across users of, say, Emissary who were interacting with Istio as well, that they would describe a lot of that upgrade stuff as really a lot of added complexity. They tended to describe it as a necessary evil to mitigate the scariness of upgrading, rather than really a feature. But like I said, I'm really curious to hear what John and Rob and people think.
C
Yeah, we have seen people who will just YOLO Linkerd into production, and we don't necessarily understand them. We try to make it work, but we do not necessarily understand those people. We think that's a very bad idea — let's be clear about that. Try it someplace else first, please.
C
So let me throw in that I am worried a little bit about the slippery-slope aspect of adding a mesh resource, but I'm not worried enough to think that this is a bad idea. I think it's a good idea to have a resource someplace that we can put status on, and I definitely think having a place we can put status is a really good idea. So overall, I think this is probably a good idea.
A
Yeah, I was going to say, I don't hear any objections to the original use case, which is a pretty good sign, I think. What we probably do is move forward with that original use case, and then if somebody comes with an idea to expand it — for full transparency, that might be me in a couple of weeks — we just hold that to high scrutiny, really interrogate the use cases, and figure out as a spec how we want to proceed.
C
And for the record, Keith, the idea that you would attach policy to the mesh as a way of setting defaults also makes sense to me. I think it's likely that I end up bringing something to this group about defaults versus overrides.
C
But that's going to be a different discussion, and I don't think it impacts the validity of attaching defaults to a mesh resource.
C
Well, I was just going to follow that up by saying there's no chance that I'm going to have time to lead a GEP on this for the rest of September. There's a small chance I could get to it in October, if nobody beats me to it, but...
E
Like I said, I'm a satellite to a mesh team — I don't work on it directly — but I might be able to. I'll ask Beaumont and see if he's interested; not to sign him up for anything, but I think he might be interested. I'm sure that will be a post-KubeCon kind of thing, for sure.
C
I think we should just assign it to Bruce Wayne, and then we...
A
So, what I've been working on on the Istio side is bringing some other resources to use targetRef, aligned with the Gateway API policy attachment part of the spec, and a thought occurred to me.
A
I think Shane brought this up at some point in the past, but there's currently no way in the GAMMA spec to target a set of workloads. Everything is kind of Service-based or Gateway-based — the policy targets proxies — but at the moment there's no way to bring a policy that targets a set of workloads.
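For context, Gateway API policy attachment hangs policy off a targetRef, and today that reference points at something like a Service or a Gateway — there is no kind that names a set of pods. A rough sketch (the policy kind and group below are hypothetical; only the targetRef shape follows the spec's pattern):

```yaml
# Hypothetical policy kind, shown only to illustrate targetRef-style
# attachment: the target is a Service, not a set of workloads.
apiVersion: example.com/v1alpha1   # hypothetical group
kind: ExamplePolicy                # hypothetical kind
metadata:
  name: reviews-policy
  namespace: default
spec:
  targetRef:
    group: ""        # core API group
    kind: Service    # a Service or Gateway — no "set of workloads" option
    name: reviews
  # ...policy settings would go here
```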
A
Obviously, every mesh pretty much has a pattern for this already. So how do we feel — and I'm curious to get Shane's and Rob's thoughts on this — well, I guess before we get to the part about how we feel: do we feel like there is a...
A
Is there a compelling use case on the ingress side for workload-targeted policy? If this is mesh-only, we can have that conversation, but I'm curious to see whether there's value there too. On the ingress side, the case Mike brought up might be a good one, on the Gateway side.
C
So in Emissary, for example, if you're talking about requiring authentication, then you are implicitly requiring authentication of a workload, not of a service, if that makes any sense. And it's a little weird, because the thing that Emissary is going to consider an upstream, in Emissary-speak, might be canarying between multiple services and things like that. But there is always this implicit assumption that if you are doing that, then the workloads behind it must necessarily be considered equivalent.
C
Right, yeah — and this was actually a thing that was very strange for me, coming over to the mesh side of the world and starting to understand that, oh no, wait, we actually have to care a lot about the distinction between the service and the workload and stuff like that. So Linkerd has a resource called Server that is there to be able to talk about the workload as distinct from the service.
C
John is raising his hand, perhaps to mock me for saying what I just said. No?
G
First we were like: oh, we have a policy that attaches to workloads, we'd better implement that in the waypoint proxy. And so it's like, well, now I may have 100 different workloads, so now I need 100 different copies of all the policies, and we're like: this is not going to work out. And besides, no proxy — Envoy included — can actually do this: you don't apply policy after you load balance. Load balancing is usually one of the last steps.
C
Linkerd can actually do that, yeah — as I said, the normal proxy just...
C
It's a little bit of an artifact of the implementation — oh crap, I do have to run — it's a little bit of an artifact of the implementation, but in Linkerd the traffic will leave the origination proxy, where load balancing happens, and then enter a destination proxy which can itself implement policy at that point as well. So Linkerd has that interesting capability, which could be a bug or could be a feature. We think it's a feature, anyway.
G
Yeah, anyhow, one other thing we've been considering in the waypoint case. First, we changed it so that you just apply policies to the waypoint, which is basically how ingress works, more or less, instead of to workloads. But one thing I've also been looking into — which is only my opinion at this point, not the project's opinion — is that maybe we should just apply them to the Service. Even so, applying authorization policy to a Service raises the question: what if requests just don't go through the Service?
A
It got weird. It got very weird because of the...
A
...the fact that multiple workloads can belong to multiple services, and that is a pretty common use case — doing a v1/v2 canary, that sort of thing. You have the same set of workloads with labels, but one is v1 and one is v2, and you've got one of those and you want to be able to...
A
So you have one service that has both v1 and v2, and one that would be just v1 or just v2, and so that was a bit complex, and also it's kind of difficult — to John's point...
A
The point John made: oh, you came through the one service, but the policy is on the v2 service, and so the policy no longer applies because you came through a different cluster domain. Typically — at least from what I've seen — when customers are writing policy against their workloads, it's because there's some sensitive information on the workload, or the workload can handle...
A
...certain paths — the policy is meant to protect the workload itself. And we found limitations on that with OSM, and in the current Istio sidecar-based architecture, where you just apply policy to a set of label selectors. Maybe John has more horror stories, but I typically only see folks apply policy to a set of labels that tends to match one workload — look at the app-type label.
A
It's logically the same container that's running. That's just the context I picked up from the OSM and SMI side. But there are some hidden surprises when you start adding policy to a Service.
G
There's a policy that matches that label — I don't like it very much, but I have seen it before. The label-selection stuff I'm also not a huge fan of. Once you end up with a lot of different objects, you have Service matching pods, and then Gateway matching pods, and then authorization policy matching pods, and then HPA matching whatever it matches — I don't know, all these things. If they're all matching the same label...
G
...fine, it kind of makes sense. But once they're all slightly different labels, and you have this N-to-M-to-X-to-Y-to-Z-to-Q-to-W thing, you run out of alphabet letters at some point and you have no idea what's going on. It's like this absurd Venn diagram. So the targetRef stuff is quite nice for that — but not for matching a workload; that's very hard, and I don't have another solution there.
A
One thing I looked into, when I was trying to write workload-targeted policy, was the Linkerd Server resource — I hate that Flynn had to go — but I can't decide how I feel about that resource, because on the one hand, from what I understand, it's basically a resource to describe the network properties of a workload.
A
So: what port it listens on — that's the main one — as well as some other information. And if you apply a policy to that, then that's a good enough representation of a workload that you end up ditching some of the complexities, unless someone's doing way too many things. But it's a decent representation of what it is you're wanting to bind to.
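For reference, a Linkerd Server resource selects a set of pods and names the port they listen on, roughly like this (the names and labels here are made up for illustration):

```yaml
# A Server describes the network properties of a workload: which pods,
# which port, and optionally what protocol the proxy should expect.
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: reviews-http        # example name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: reviews          # example label selecting the workload's pods
  port: http                # the port (name or number) the workload listens on
  proxyProtocol: HTTP/1     # optional protocol hint
```

Linkerd authorization policy can then target the Server, rather than a Service or a raw label selector.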
A
The problem with that, though, is that where you should just have your Deployment and your Service, etc., the user now has to go and make a Server resource — or whatever we want to call it, some sort of resource to project their Deployment or their StatefulSet or their DaemonSet or whatever it is into the networking space of the cluster — and that can be tedious. But there do seem to be roughly three options for workload policy for us.
A
It feels like there's at least some consensus that on the mesh side we need a way to represent workloads: you either use labels, on the policy or on the Service, or you create some third thing that is a projection of the workload into the networking domain of your cluster, your mesh, whatever. That's kind of how I rationalize it, at least.
D
I don't know, and I may have missed this in the discussion, but it sounds like at least some meshes have an intermediate resource that describes a group of workloads — not, say, a Deployment or something, but basically something that selects a group of pods. Is that a fair assessment, or is selecting a group of pods often just part of the policy attachment mechanism directly — part of the policy?
A
They have these routes, they have these ports, etc., and then they attach policy to that. Then you've got the Istio case, where the policy itself just selects.
A
That's kind of the dichotomy that I've seen. And then OSM, like Istio 1.0, applied the policy to the service. So there are examples, at least, of all the different options.
D
This is me not knowing the full use case here, but when I think of groups of pods, the most obvious one is Service, which I know has its own connotations — you could say that Kubernetes Services already are largely a group of pods. Another thing you could say is that Deployments or DaemonSets or any of those are also groups of pods that are usually connected to each other, share a purpose, etc. Those are decidedly different from the label-selection-based world that I think meshes are coming from today. But basically, where I'm going with this is:
D
Do we need to create a new workload resource? I'm trying to avoid changing policy attachment to add yet another mechanism — you know, "you can attach via object reference or by label selector" — it just feels confusing to me. But I also hate the idea of "you can attach via object reference to another thing that then attaches via label selection," you know, like a new workload resource. I don't know, I mean, sure.
A
Yeah, I don't love that. I think it's generally better to move away from those N-by-M-by-X-by-Y situations. If Service were just a bag of endpoints, we could just tap into that backend role of Service, and I think the problem would be pretty much solved.
A
If Service is just a pod grouping, then that's fine and it's an appropriate target, but I don't know if that matches how customers use Service. So maybe what we're circling around is that we need to train users to use Service differently: if you're going to do splitting, create a service that contains two other services and make that a pattern, and then you attach policy to the grouping service. And yes, that's going to be...
G
Just saying, one thing that does exist — I'm not sure this is great, but in Istio you have this idea of a canonical service, which I think is a bad name; it should probably be canonical workload or application or something. But basically, each pod has one workload name, and it's kind of a higher-order name derived from a variety of different labels with some precedence logic, one of them being the app label, which is kind of the most direct counterpart.
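The precedence idea described here can be sketched as a few lines of code. The label order below mirrors what Istio documents for canonical service names (`service.istio.io/canonical-name`, then `app.kubernetes.io/name`, then `app`); treat it as an illustration of the approach, not the exact implementation.

```python
# Derive one "canonical" workload name for a pod from its labels,
# falling back to the owning workload's name when no label matches.
CANONICAL_NAME_LABELS = [
    "service.istio.io/canonical-name",
    "app.kubernetes.io/name",
    "app",
]

def canonical_name(pod_labels: dict, workload_name: str) -> str:
    """Return the first matching label value in precedence order."""
    for key in CANONICAL_NAME_LABELS:
        value = pod_labels.get(key)
        if value:
            return value
    return workload_name
```

The appeal is exactly what's said above: instead of arbitrary selectors, every object agrees on one single derived name per pod.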
G
Yeah, here's the idea — it's very opinionated, I would say: instead of having a selector, you basically focus on one single label and you match that, more or less. There's probably been other work — I haven't followed along, but I know there's SIG Apps, or whatever that whole application group is; I'm sure there are other things going on in Kubernetes land that are similar that I'm not familiar with.
D
Yeah, I can see the idea of that, but then, of course, surely there are some pods that want to belong to multiple workload definitions. I don't know, maybe not — maybe workload is meant to be unique per pod, yeah. In any case, with Service you could get a little bit closer to removing some of that extra functionality if you used a headless Service; that makes some changes, but it's still not great.
D
DNS gets wonky then, and you just don't want DNS at that point. Yeah, I don't know what the solution is there. But I know that the long-term goal, or whatever, is that Gateway API becomes the front end and Service becomes a collection of endpoints, moving away from the front-end concepts as much as we can — I mean, we're going to be stuck with them.
D
But again, the idea is that there's a front end in front of Service, and Service is primarily just for collecting endpoints. If you look into the future and believe we eventually achieve something like that, then Service as a way to represent workloads feels maybe not that far-fetched — but it requires some level of belief in future success. So, yeah.
A
I don't know — because of the timeline, I don't know if we're at the stage to codify the pattern in the spec. I would like to, but there are so many different directions. That says to me there's not a clear, rational argument for doing it one way versus the other, because there are so many different options and the future is so far into the future.
A
That
I
don't
know
that
codifying
it
now,
let's
go
into
and
making
conform
assess
against
it.
You
know
like
we
want
to
do
for
something
in
respect.
I,
don't
know
if
that's
going
to
be
putting
strain
on
the
ecosystem,
because
the
pieces
aren't
all
the
way
there
to
make
that
reality.
Work.
D
Agreeing that these are the use cases we want to represent could get us part of the way to figuring out whether the solutions we're thinking about even solve the use cases we're going for. I feel like that's maybe the best next step.
A
Yeah, that makes sense. I don't know which group specifically John was mentioning, but this would be a very interesting conversation to have with them — maybe it's TAG App Delivery, say.
A
I will check all the different things, but I feel like there's maybe somebody out there that has opinions on this — on the abstraction that we're trying to make. Essentially it almost feels like: okay, these containers share the same thing, and you want to be able to reference them and write policy based on the code that's running in the container. Maybe there's something out there that has established patterns for that, or is working to establish one. Maybe it's more of a KubeCon discussion.
A
If
we
can
figure
out
and
find
the
right
people
just
a
thought,
yeah
I've
got
to
do
some
more
thinking.
That's
I
just
want
to
bring
it
up,
though,
and
see
if
you
know
see
if
anybody
smarter
than
me
and
came
up
with
a
it's
a
perfect
solution.
A
Thank you.
D
Many people on this call are planning on being at KubeCon, but if you haven't already communicated your plans to Shane or Nick or myself, that'd be helpful. We're trying to organize one or two things — who knows what'll actually work out — but it would be great to see people if you're there in person. So just let us know if you're planning on being around.
A
I know I'll be there, so I'm excited to see folks in person again.
D
Yeah, it's almost always the case that the CFPs are closing a week after your flight back from KubeCon — you're just done with KubeCon and you've got a week or less to finish up your proposal for the next one. Anyway, this year there's KubeCon India too, which I am really interested in, but three KubeCons a year is a lot.
A
All
right,
there's
nothing
else.
I'll
give
everybody
seven
minutes
back.
It
was
right
to
start
working
ball
and
we'll
talk
again
in
two
weeks.