From YouTube: Service APIs Office Hours 20200513
A: All right, we're recording. This is Service APIs office hours for May 13, 2020. Thanks to Harry, who added something to the agenda to start off, and I think that's a good starting point: there have been lots of new PRs and changes and issues that have come up in the past little bit.
B: Yeah, let me take a look here... yeah. I think if you click on the issue number 58 there, it gives us some background, and I'll talk through it. So this, I believe, came from the Knative community requirements, which I actually see in support of Knative Serving 6593 in the comment section below.
B: So we see that Knative has a handful of capabilities that they would like Service APIs to support so they no longer need to maintain their ingress CR; they ultimately want to adopt Service APIs. In order to do so, this is one of the post-forwarding-action traffic manipulations, right, because the current flow that we have right now is: we match, optionally we can do some type of filtering, and then we take an action on the request.
B: We need to be able to perform some type of manipulation, plus other capabilities, such as being able to not just forward the request to a backend, but to actually start applying weights to certain backends, or maybe mirroring the traffic to some other service.
B: And this is the issue where I tried to capture a lot of those requirements. Going back to your question, Rob, the PR addresses, let's see, the request retry: the ability to retry. So we go ahead and we forward, we attempt to create a connection to that backend, and we need some kind of timeout, or we want to be able to retry making that connection to the backend. That's what this PR is attempting to do.
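As a rough sketch of the retry and timeout behavior being discussed (field names here are illustrative, inferred from the conversation about the draft PR, not the final API):

```yaml
# Hypothetical sketch of per-backend timeout/retry fields on a route's
# forward-to target, as discussed above. Names and shapes are illustrative;
# the actual PR was still under review at the time of this meeting.
kind: HTTPRoute
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: example-route
spec:
  rules:
  - match:
      pathType: Prefix
      path: /app
    action:
      forwardTo:
      - targetRef:
          name: my-service
        timeoutPolicy:
          requestTimeout: 10s   # how long to wait on the backend connection
        retryPolicy:
          numRetries: 3         # retry the connection to the backend
          interval: 1s          # later noted as hard to support in NGINX
```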
A: Great, thanks. And Harry, you had a couple of follow-up questions on this.
C: What I was wondering was the reason why, you know, Knative ingress... now we are trying to add these just because there is nowhere in the Service resource to actually do anything with them. So should these... if you scroll up, the timeout policy and retry policy are included in the forward-to target, I think.
D: I hope my audio is okay. Interestingly enough, for the way we have set it up for ingress on Google Cloud, these are more on a per-service basis; it seems to have just shaken out like that in our schema.
B: Yeah, I haven't checked every different implementation. I think I looked at HAProxy and I looked at Contour. Did I look at anything else? I think those are probably the two that I looked at.
D: Yeah, if you have done the research, it would be good to record it; I don't know where we would record it, because the cloud providers will probably have slightly different behavior, although we know everyone is eventually going to converge. We just need to be careful that we don't go too far in terms of requiring it. If anything, maybe it could be extended.
D: So that is in the concepts doc. Core means that it is 100% portable; so, for example, the current Ingress v1 is supposed to be 100% portable. Extended means that if you support it, then it will behave portably, given that you say you are supporting it, you know, modulo...
D: See the concepts doc: core, how to contribute?
B: Add it, file an issue: we're missing details.
B: Yeah, and I want to say, here recently, Bowei, I looked at one of the presentations you did where it showed kind of the gravitational pull toward core. It would be nice to have, I don't know, maybe that diagram, and to talk a little bit about what you had in the slide in the concepts doc.
D: Okay, all right, yeah. Harry, that's a good point. Given that we're sort of refactoring the surface around it, there's no very nice place for it to live.
D: I do want to preserve that thought, just because, from our experience, the weird thing about the Kubernetes Service is that it's not a service in how most people think of one, like a deployment or application, right? And yet it sort of is.
D: It just would be bad for everyone, because then for every service people would have to have a different set of configuration. It kind of explodes the number of things that you have to handle.
D: Yeah, it's a little bit weird how Kubernetes has decoupled the two. A lot of people will talk about a service, or they'll talk about an application, which comes with a service; and when we talk about the Kubernetes API, the only thing we can really point to is the Service, yeah. And then these attributes are sort of attributes of a given application or deployment, right.
E: You could say anything that refers to me should look at this timeout; or you could put it on the route, as you have here, which says: for this proxy talking to this backend, this is the timeout that I should use.
C: And that's how most people get around rules. So that's the root of the problem, and the reason I commented, the reason I'm bringing this up, is that you have to create different upstream pools. What is one Service, a proxy sees as five services if there are five routes pointing to that one Service, right, which is probably not operationally correct.
C: If you do have a problem and you start digging into that... I think most implementations, even of Ingress, won't do that, because... so each Service is not one service inside, let's say, NGINX ingress: it's like four services if you have four routes, so four upstream pools, which sort of defeats the point of having that abstraction.
D: Commonality... I also remember HAProxy, yeah. So it seems like a common issue, and we should definitely file an issue and talk about how to represent it, because if everyone ends up having to do this multiplicative blowout to all the different combinations, that seems less great. Also, it has an issue with respect to metrics: if you're doing metrics per pool, then they get all strange, because they're all aliases of each other and you'd have to aggregate them again.
D: I think, given that, we should put this on hold for now. The different attributes, like should I have a timeout, should I have a retry, are things that we can just discuss and try to evaluate, because that's sort of independent of where it gets attached. I think the real big question is: does it get attached here, specifically under a match rule, or does it get attached sort of in aggregate across a destination service?
C: And while we are at this PR: I think one thing that at least NGINX doesn't support is probably the interval field inside the retry policy struct; that's probably inside the file view, yeah. So I don't think there is a way.
A: All right, well, thank you, yeah. It sounds like we have good follow-ups on this one. Let's move on, because I know there's been lots of other discussion here on different pull requests and issues.
A: I know there's a big pull request from James that I'd like to talk about at some point today. Are there any smaller ones that we should cover first? I know there's a few LGTMs that are ready to go and just need an approve.
A: Yeah, and beyond that, are there any issues or PRs that... well, actually, I thought this was interesting, and it probably deserves some broader discussion. So maybe, James, if you want to just discuss the high level you're aiming towards here.
E: Sure, I can talk about this. If you go back to the conversation, there's a doc there where I draw a diagram of kind of what happens and how things look when you try to provision a TLS service. I was quite pleased with myself with this, because I thought: oh, this is a way to specify clearly what the problem is.
E: It's a bit of a problem that we've been struggling with for a while, and I think the root problem is that there is this thing people think about: that's my application, and I want to expose my application. So we have a cluster operator role that owns the Gateway and the secrets, all the gateway stuff, and that's decoupled from the application owners.
E: They're coupled through objects by, you know, the route selector, but this concept of an application crosses both roles. So you have one part of your application that exists kind of in the world, in people's internal models, and it's split across these two roles. The fact that you take one thing, smear it across both, and both try to manage the same thing is, I think, what creates the awkwardness around this model.
E: And I think it's the root cause of why we've struggled with TLS and the Gateway relationship for so long.
E: So, taking that as our problem definition, the obvious next step is to say: oh well, let's take these two parts, stick them together, and see what happens if we put them together, and what does that look like. So I had this concept; I said okay, this is like...
E: Basically, the concept here is a logical endpoint. Nick and I talked about it for a bit, and Nick was like: oh well, that's basically a listener. So it went from trying to have a separate endpoint concept to just saying: okay, let's fold this into Listener and say that Listener is now firmly on the side of being a logical endpoint. Logical endpoint means there is a process that can accept some sort of client request on it.
E: Thanks, Rob. So now, if you scroll down to Listener, let's just close the comments on that so we can see more on the screen.
E: Okay, so now what we have is: the address is a direct field in the Gateway spec, and all the other attributes are in the Listener. So the Listener has a hostname and a port, and if you keep going down, it has the protocol, it has the TLS config, and it has the routes.
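A minimal sketch of the restructure being described, assuming the draft shape under discussion (names and values here are illustrative, not the merged schema):

```yaml
# Illustrative sketch: addresses as a direct field on the Gateway spec,
# with hostname, port, protocol, TLS, and route binding all grouped per
# listener. Based on the PR discussion; not the final API.
kind: Gateway
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: example-gateway
spec:
  addresses:                # address lives directly on the Gateway spec
  - type: NamedAddress
    value: my-load-balancer
  listeners:
  - hostname: example.com   # everything needed to expose an app,
    port: 80                # grouped in one struct
    protocol: HTTP
    routes:
      routeSelector:
        matchLabels:
          app: example
```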
E: So all these things, which are logically required to expose an application through a gateway, are now in the same struct. These things are always logically together, so now they're together in the API structure as well. I think putting things that are used together, that are logically together, physically together makes the API easier to reason about and easier to describe.
E: So all the listeners are exposed on all the addresses on a Gateway. If you need to vary that relationship, then you would create multiple Gateways.
E: You say the simplest model; what are you thinking of there?
D: Since the route is indirected, you don't have that much duplication. One of the things... for example, the Istio API had a lot of duplication, because you need to kind of duplicate things between the two protocols; but since we can just reference the route, it won't be too bad here, I don't think.
E: So in this model, to expose an HTTP service over port 80, you would set a single listener, which would have a host name and port 80. Port is required now, because port is either implicit based on the protocol or it's required, so it's just simpler to say required.
E: So I think the end result of that YAML is pretty similar to what you had before, but I'd argue that the intent is clearer and that it's easier for implementers to explain the rules about how things work.
C: Yeah, overall this is definitely much better than what we had before. Can you explain a little bit how the TLS config would look in this case? I don't know if you thought about that.
E: So I haven't changed the TLS config here at all, but what I expect is that the way you'd expose an HTTPS service is: you would again set the host name, you would set the protocol type, and you would set the port to, you know...
E: ...the SNI name, HTTPS and 443, and then you would provide a TLS config. Because you have a single struct with all the same fields, it's easy to write what the validation rules are. One of the things that's a little bit weird is that the required fields here depend on the protocol.
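A sketch of the HTTPS case being described, again with illustrative names (the `tls` sub-fields in particular are assumptions, since the speaker says the TLS config itself was left unchanged):

```yaml
# Illustrative HTTPS listener in the proposed model: the hostname doubles
# as the SNI name, protocol and port are explicit, and a TLS config is
# attached. Which fields are required depends on the protocol.
listeners:
- hostname: secure.example.com   # also the SNI name
  port: 443
  protocol: HTTPS
  tls:
    certificateRef:              # required for HTTPS/TLS protocols,
      name: example-cert         # meaningless for plain HTTP
  routes:
    routeSelector:
      matchLabels:
        app: secure-example
```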
E: So you could write a table that says: well, if the protocol is TLS, you need these ones; if the protocol is HTTP, you need these different ones. That's a little uncomfortable, because it's basically... it's really a union, really like a crappy union type that you have here.
E: So if you scroll up, Rob: one of the things Rob pointed out when he commented on the review is that I'm pinning the set of protocols to a specific protocol type here. Previously, protocol was a string, with the implication that it can be anything you want. I'm kind of pulling that back; I'm saying, look, in practice it can't be anything you want.
E: It can be one of these things, and one of the benefits, I think, of nailing these things down is that it lets you explain what implementations should do with them. So you can see that right down at...
E: ...the bottom of the section Rob's showing here: we can now say explicitly, okay, the gateway can collapse multiple listeners into a single connection-accepting thing (I'm trying very hard not to reuse the word listener), and we can say what makes them compatible, the circumstances under which you can do that. So this is how you define how SNI works.
E: So if you have a listener which is HTTPS and has a host name, now you can do SNI; and if the port matches too, then okay, I can put all these listeners on the same port and discriminate using SNI. But as soon as you start trying to say how you can define multiple applications on the same port, at that point you're moving into a world that's protocol-specific.
E: So my argument in this PR is: okay, let's just accept that, go to a protocol-specific way, accept that it's going to be limited, and then take the benefit of being able to express things in a very concrete way within those limitations.
E: What I've sort of assumed here is that HTTPS is any old crap over HTTPS; there's no way in this model today, in this PR, to publish the alternate protocols in the TLS handshake, so that's kind of wacky. It also seems like, for gRPC specifically, you might end up with a different route type, yeah, something that goes to a different type, and I pointed that out in one of the comments that Rob made.
D: Yeah, so this looks very interesting. That is my example of an odd duck: we could see how we would get it in here somehow, whether it would fit, or whether we need to add some small stuff to make it work, or whether it would just not work.
E: I think you could just add a new gRPC protocol type, right? If you added a gRPC protocol type, that would fit in the model, and then you'd have to answer the question of: okay, what happens to the route? What kinds of routes should my route selector select? Do I need any extra fields in the listener to handle that?
D: Yeah, so with gRPC, it would use the same port as 443, if you want to be strange, and then...
E: So, would you mind scrolling up a bit, Rob, to just above the listener? Okay, yep, thank you, that's perfect. So we had this notion of collapsing listeners into some implementation-defined proxy construct, and because the listener is now this really explicit thing, we can kind of tease out the rules for that. So you can say that we will collapse...
E: We will collapse what we call compatible listeners, and listeners are compatible if, in this case, HTTPS or TLS, TLS is specified, the host name is specified, and the port field matches. So say you have two listeners, one HTTPS, one TLS; they both have different host names, both on the same port. Okay: you can compile that down to an SNI matcher.
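The compatibility rule just described can be sketched as two listeners an implementation could merge onto one socket (hostnames, cert names, and field shapes are illustrative):

```yaml
# Two "compatible" listeners under the rules above: TLS specified,
# distinct hostnames, same port. An implementation may collapse them
# into a single port-443 accept loop that discriminates by SNI.
listeners:
- hostname: a.example.com
  port: 443
  protocol: HTTPS          # terminate TLS, then route HTTP
  tls:
    certificateRef:
      name: a-cert
- hostname: b.example.com
  port: 443
  protocol: TLS            # SNI match on the raw TLS stream
  tls:
    certificateRef:
      name: b-cert
```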
E: Oh, I see what you mean, yeah. There are kind of two... I guess there are two kinds of passthrough, at least in Contour: there's passthrough, where you're just passing through the byte stream after an SNI match, and then there's termination as well. So yeah, there are actually two cases there.
C: We need a document to figure out, you know, what is compatible or not; and then, if we document that, I think this will be compatible with most proxy implementations out there.
E: I know we can do this in Envoy; I'm pretty sure we can't do it in Traffic Server, and I'm not sure about the other proxies, yeah.
D: Now, this is very interesting. It is very interesting, so I think everyone should just kind of look over this; it's pretty promising, actually.
E: So the route host is only on the routes. In terms of the diagram in the Google doc, everything that exposes an application through a gateway is now the responsibility of the cluster operator, and everything on the other side of that is now the responsibility of the application owner. So that's the boundary at which we decouple, and that's not necessarily a boundary...
E: ...that's going to work for all use cases, right? Because now you say: okay, well, the cluster operator is now responsible for more stuff than I really want. But given that our problem definition was that this thing is split across the two domains, then okay, we've chosen a domain: the cluster operator gets to own this thing.
D: Yeah, this is a change, so I'm wondering, if we go back to our use cases, which ones become less possible.
C: That, and plus, now let's say you have two listeners, one for bar.wordpress.com and another for foo.wordpress.com, and now you have to figure out the right labels to use to bind to the correct listener, right? Because if you scroll down, Rob, again, you have the route selector at the listener level, I think, yeah. So now you'll have to correctly label the routes; otherwise you will connect your route to the wrong domain, which seems a little clunky.
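The labeling concern raised here can be sketched roughly like this (the `site` label and hostnames are illustrative, not from any real config):

```yaml
# Per-listener route selectors: each listener binds routes by label, so an
# HTTPRoute must carry the matching label to attach to the right hostname.
# A mislabeled route silently binds to the wrong domain.
listeners:
- hostname: bar.wordpress.com
  port: 443
  protocol: HTTPS
  routes:
    routeSelector:
      matchLabels:
        site: bar      # routes labeled site=bar bind here
- hostname: foo.wordpress.com
  port: 443
  protocol: HTTPS
  routes:
    routeSelector:
      matchLabels:
        site: foo      # routes labeled site=foo bind here
```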
E: Yeah, so basically the concept here is that you would have a listener per domain, so you could end up with a lot of listeners, potentially, if you had a large number of host names. I honestly haven't really thought about wildcards at all, so the use case of taking a wildcard and then filtering those host names at the HTTPRoute layer...
D: Yeah, so this looks promising. I will definitely comment and go through some of the examples.
E: You could definitely use a wildcard certificate and share that, but it's hard to see a way around having to add a new listener for every host name that you want to expose.
D: But don't you need to... let's say your route... I see, so you're guaranteed that's only one host name.
E: If you put a wildcard in the host name, I suppose you could do that. You'd need to retain the host name in the HTTPRoute host; I'd always thought about the host field in the route as kind of an upward configuration.
D: Okay, I think, let's comment on this. This definitely is pretty interesting.
E: Aside from matching, yeah, at the listener level it's purely there to rationalize the TLS and SNI stuff, right.
E: Because you know what the host name is, and you know the certificate, you then know that this is the certificate bundle that belongs to the host name. So when you get an SNI request for a host name, you know exactly what certificate bundle to use without needing any additional rules.
C: I see, okay, so that's the part that is being used for the proxy config. So yeah.
E: Yeah, I'm not necessarily saying this is the way we should go, but it's kind of what falls out: if you agree with the problem statement, then this is one of the things that sort of falls out of trying to address it.
D: Yeah, I think the question of where the host name lives is an interesting one. The other one is, given some of the conversations, we've also been thinking about how L4 would work; and I know some of those folks are not on the call today, is that right?
D: Yeah, so that would also be an interesting exercise, to run L4 through this rearrangement. And then I think we should explicitly say that bypassing SNI sniffing should just be the TCP protocol, or UDP, I guess, if you use DTLS, because that resolves some of the ambiguities about where it lives.
A: I talked to him already today, because they're planning on presenting their plan for L4 tomorrow morning at the main meeting, and one of the things I talked to him specifically about, or that came up, was the idea that we probably want the route selector to be aware of protocol: so instead of selecting every kind of route that matched, it could select a specific kind of route.
D: Okay, yeah. Hey Rob, if you're gonna do the legwork, yeah, that would be good. Cool.
A: Not yet, not yet; this came out shortly... I don't know, but yeah, talking it through now, it seems like there's some good overlap here. So good news all around.
A: Yeah, thank you. We only have a few minutes left, but are there any quick PRs that could use a bit of attention here?
A: So that works. I think maybe we should call it then, because we've had great discussions all around, and...
E: An announcement... it's not really an announcement. I know Nick raised this on the Slack channel; we chatted, and we thought it might be an interesting exercise for everyone who has an ingress controller and is involved in the working group to walk the other members of the working group through how to set up an HTTPS site or microservice using their ingress controller.
B: In addition to the agenda item, it would be nice to actually capture that somewhere. So maybe, in addition to, or in replacement of, actually presenting it, it's just a simple document that shows: step one, this is what you do; step two; and voila.
A: I've been working on a doc, talking with some multi-cluster folks and others about how the two can interact, and I hope I can have this doc ready to share by tomorrow. But I think minimal to no changes are required to the existing traffic-splitting plan we have.
A: I think what Daneyon had already proposed should work well with multi-cluster use cases, but I'm just trying to get final confirmation on that.
B: This is it, and it actually links to the KEP. I'm still trying to rationalize the need for some of these new resources, like the service import and export, yeah, and I've been having some conversations with Jeremy Olmsted-Thompson.
A: So yeah, he's been involved here too. But it seems like, as you say, we may not even need to reference ServiceExport directly. As far as traffic splitting goes, even in a multi-cluster use case, we may be able to reference the Service object itself; that's less clear, but our model allows for referencing either, and so that's great.
A: But yeah, okay. Well, I think that's all we have for today. Any last thoughts or comments?
A: All right, well, thank you everyone; we'll see all of you tomorrow.