From YouTube: Ambient Mesh WG Meeting 2023 02 08
C
All right, looks like John, you're at the top of the agenda there.
A
Basically, for the past couple of weeks we've been talking about, okay, what are we going to put in the SNI, what's the format, and how we're going to use that to route in multi-network. I couldn't find a way to make all the pieces connect, so I kind of stepped back and thought through a different way to do things. The TL;DR (I forget if I put a picture in eventually... oh, I did, perfect) is that for traffic through the east-west gateway we will actually just do two layers of HBONE tunnel. The inner tunnel looks exactly like it would if we were requesting the destination directly, so we're going to use, you know, the application's identity there.

A
Only when we're going to the east-west gateway, we will put the destination that we want in the authority header, and for the client and server cert validation we'll actually use the ztunnel cert and the east-west gateway cert directly, instead of doing any weird spoofing type thing. That's both simpler, and it allows a lot of connection reuse, so we can probably have one connection between each ztunnel and the east-west gateway and multiplex everything on that.
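To make the layering concrete, here is a minimal sketch of the two nested CONNECT requests described above (illustrative only; the hostnames, the 15008 port, and the helper are assumptions, not the actual ztunnel implementation). The outer request targets the east-west gateway and names the real destination in its authority, while the inner request looks exactly like a direct HBONE request to the destination workload.

```go
// Illustrative sketch of the "double HBONE" layering described above.
// Names and addresses are made up; this is not the ztunnel implementation.
package main

import (
	"fmt"
	"net/http"
)

// connectReq builds an HTTP CONNECT request: one HBONE hop, with the
// tunnelled destination carried in the authority (Host) field.
func connectReq(proxy, authority string) *http.Request {
	req, _ := http.NewRequest(http.MethodConnect, "https://"+proxy, nil)
	req.Host = authority
	return req
}

func main() {
	// Outer tunnel: ztunnel -> east-west gateway, authenticated with the
	// ztunnel and gateway certificates; the authority names the final
	// destination on the remote network.
	outer := connectReq("east-west-gateway.example:15008", "httpbin.remote.svc:8080")

	// Inner tunnel: carried inside the outer stream, identical to what a
	// direct request to the destination would look like (workload mTLS).
	inner := connectReq("httpbin.remote.svc:8080", "httpbin.remote.svc:8080")

	fmt.Println("outer:", outer.Method, outer.URL.Host, "authority:", outer.Host)
	fmt.Println("inner:", inner.Method, inner.URL.Host, "authority:", inner.Host)
}
```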
A
This is actually a bit confusing in my diagram, because I put the TLS info next to the HBONE info, but that's actually not right: the connection is separate from HBONE, so there are multiple HBONE streams over one TLS connection. I should probably update the diagram. That's kind of the basic idea, so you'll notice there's no SNI used here. We don't have to worry about mingling the info we want into the SNI, or coming up with a new format, or doing all these kind of weird things; we just do normal routing on HBONE, and that's pretty much the summary. The doc goes into details on precisely how it will work. I forget the details at this point, but I can go through them, or people can ask questions, whatever we want to do.
A
The cool part about this proposal is that we don't actually have to do anything right now, because the protocol is completely unchanged; this is all just additive. So unless someone really cares about doing multi-network as soon as possible, I would just not even touch the doc for six months, and then once we ship alpha or beta, come back to it and start working on it. Whereas some of the other proposals were looking at rewriting the entire protocol and everything, which needed changes immediately.
A
Yep. So what we do today would be, I mean, obviously we don't do HBONE, but if we just sent a normal mTLS stream and then sent it to the east-west gateway, that mTLS stream has an SNI format that is kind of weird.
A
If you've seen it, it's the dot-underscore, dot-underscore thing, with the service name and the subset and the port, and then we unmingle that, convert it to a cluster, and forward it as-is. So there are a few issues with that. One is that the SNI format is totally bizarre, and it leaks beyond just multi-network; we need to understand it at all points.
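For readers who have not seen it, the format being referred to looks like `outbound_.<port>_.<subset>_.<hostname>` (the shape sidecar-based multi-network uses today; the exact example below is made up). This sketch shows the kind of unmingling the east-west gateway currently has to do before it can forward anything.

```go
// Sketch of parsing the legacy multi-network SNI, assuming the
// "outbound_.<port>_.<subset>_.<hostname>" shape discussed above.
// Not production code; it just illustrates why the format leaks Istio-isms.
package main

import (
	"fmt"
	"strings"
)

type legacyTarget struct {
	Port   string
	Subset string
	Host   string
}

func parseLegacySNI(sni string) (legacyTarget, error) {
	// e.g. "outbound_.8080_.v1_.helloworld.sample.svc.cluster.local"
	parts := strings.SplitN(sni, "_.", 4)
	if len(parts) != 4 || parts[0] != "outbound" {
		return legacyTarget{}, fmt.Errorf("not a legacy multi-network SNI: %q", sni)
	}
	return legacyTarget{Port: parts[1], Subset: parts[2], Host: parts[3]}, nil
}

func main() {
	t, err := parseLegacySNI("outbound_.8080_.v1_.helloworld.sample.svc.cluster.local")
	fmt.Println(t, err)
}
```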
A
You know, in ztunnel and in the Envoy sidecars. And we really want HBONE to not have these kinds of funky Istio-isms in there. Another issue is that we don't actually do any validation at all on the east-west gateway, so we're kind of opening up this endpoint on the, potentially public, internet and saying, okay, any request can come through, we're not going to validate it with TLS, and we're just going to pass it into our network.
A
So,
while
you
know,
Network
perimeter
is
kind
of
are
seen
as
kind
of
weak
in
zero
trust
and
something
shouldn't
rely
on
it's.
It
doesn't
mean
that
it
should
just
be
opened
up
entirely
to
the
public
internet
right.
A
We speculate, and Ethan said that he was going to test it, cool, so the speculation is that it will be fairly low. The handshake cost is going to be amortized because we'll share the connection between the ztunnel and the east-west gateway, so we'll have, you know, one or a handful of connections to amortize the CPU cost.
B
Right, Google Meet actually works for me today. So, a quick question. I know you mentioned, I think you said Ethan, is that right, is going to do some tests, okay, yeah. So if the performance test is not going to be very favorable, would it still be possible to allow the user to go back to using passthrough with this design?
A
I haven't seen their performance numbers, but you know, this is the people at Google and Apple and whatnot doing this, so I assume they've done it more thoroughly, and we'll get the real numbers as well and see. Yeah, one other note as well: in general the network is also really expensive anyway, so if we're adding a small overhead on top of that, it may be kind of minuscule compared to the cost of the full network traversal.
A
Yeah, so when the ztunnel opens up a connection to the east-west gateway, it needs to present some certificate, and in that case it will present one with its own identity. That's something we don't actually use today; I don't even know if we fetch a certificate for it, we only fetch them for the actual workloads, right? But we'll start fetching that certificate and then use that one.
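As a rough sketch of that outer handshake (file paths and names here are placeholders; the real ztunnel fetches certificates from the CA rather than from disk), the client side of the ztunnel-to-gateway connection would present the ztunnel's own identity and verify the gateway's certificate, independently of the workload certificates used on the inner tunnel.

```go
// Minimal sketch of the outer TLS config between ztunnel and the
// east-west gateway: ztunnel presents its own identity and verifies the
// gateway cert against the mesh root. File paths are placeholders.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"
)

func outerTLSConfig() (*tls.Config, error) {
	// ztunnel's own identity (not the workload's).
	ztunnelCert, err := tls.LoadX509KeyPair("ztunnel-cert.pem", "ztunnel-key.pem")
	if err != nil {
		return nil, err
	}
	rootPEM, err := os.ReadFile("mesh-root.pem")
	if err != nil {
		return nil, err
	}
	roots := x509.NewCertPool()
	roots.AppendCertsFromPEM(rootPEM)
	return &tls.Config{
		Certificates: []tls.Certificate{ztunnelCert}, // client cert = ztunnel identity
		RootCAs:      roots,                          // validates the east-west gateway cert
		MinVersion:   tls.VersionTLS13,
	}, nil
}

func main() {
	if _, err := outerTLSConfig(); err != nil {
		log.Println("expected to fail without the placeholder files:", err)
	}
}
```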
A
We don't need to say that this is, like, the httpbin client talking, because that's on the inner layer, right? And by using the ztunnel one we can multiplex more; we could use the same client one on both.
B
Yeah, that makes perfect sense, because you check whether the source and target have the same identity, yeah. So using ztunnel, at least all the pods co-located on the same node can presume they're using the same tunnel, right? Yep, yeah, that's a great question by the way.
C
Thanks. All right, I see, I think I understand what this looks like over the wire. What does it look like from an orchestration standpoint? Is the east-west gateway still an Envoy, similar to other gateways, or is it implemented 100% in the ztunnel? If so, is it a particular instance of the ztunnel, or can any node's ztunnel act as an east-west gateway for multi-cluster traffic?
A
Yeah, so I think it could be a ztunnel, but given that we are probably going to support backwards compatibility for the old way, with sidecars and whatnot, my proposal is that we do it in Envoy at first. Maybe down the road, if we don't care about that case anymore and we find that it's much simpler in ztunnel, then sure, we can go do it in ztunnel, but I think most of the benefits are kind of lost.
A
Like, you know, Envoy can pipe a TCP stream pretty quickly. Maybe the config isn't as efficient, but you're also only running a handful of these, right? It's not like you're running them on every pod. So I think we just start with Envoy, but it could certainly be ztunnel. Thanks.
B
Also, with this proposal, the ztunnel client, I guess, could potentially be an east-west gateway as well, right? So if the organization wants to control all the outbound traffic through the east-west gateway, that's possible. And I think we discussed it could potentially be a waypoint as well.
B
Yeah, which could be ztunnel, and which could also be an egress gateway, which could also be a sidecar, I assume.
B
Yeah, but I'm not sure if it could be a waypoint, given we got rid of the client-side waypoint proxy.
A
I forget, I think it's discussed below. I think it can be a waypoint, actually. If you have a service that has a local waypoint but only remote pods, you may hit the waypoint on your local network and then forward to the remote network. But there was an open discussion about what it means: today we say, okay, I'm connecting to workload X.
A
It has a waypoint, and the approach is to send to the waypoint, but we don't define what exactly it means to have a waypoint, right? Does that mean there's a waypoint in the mesh for that service account, in the cluster, in the network? That's something we haven't actually answered yet, so depending on the answer to that question, it would answer your question. Okay.
G
So I have one live question here: this pod IP here is the destination pod IP, right? So where is the load balancing happening for the destination pod IP? I was thinking it should be the waypoint on the destination side, right?
A
Yeah, so there are actually different cases. Sometimes we'll send the service IP, sometimes the pod IP, depending on the client and the destination and what capabilities they have. So if there's a waypoint on the other side, then we would likely send the service IP and the waypoint will handle that. If there are no waypoints on the path, we may just resolve it to a pod IP in the ztunnel.
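A hedged sketch of that selection (the types and names are invented for illustration, not the actual ztunnel or istiod code): if the destination service has a waypoint, forward to the service VIP and let the waypoint load balance; otherwise pick a backend pod IP in the ztunnel itself.

```go
// Illustrative sketch of choosing the target address described above.
// Types and names are invented; this is not the real ztunnel logic.
package main

import "fmt"

type service struct {
	VIP         string
	HasWaypoint bool
	PodIPs      []string
}

// targetAddress returns where to send the request: the service VIP when a
// waypoint will do the load balancing, or a concrete pod IP when there is
// no waypoint on the path and ztunnel picks a backend itself.
func targetAddress(svc service) string {
	if svc.HasWaypoint {
		return svc.VIP
	}
	return svc.PodIPs[0] // real code would load balance here
}

func main() {
	withWp := service{VIP: "10.96.0.10", HasWaypoint: true, PodIPs: []string{"10.0.1.5"}}
	noWp := service{VIP: "10.96.0.11", HasWaypoint: false, PodIPs: []string{"10.0.2.7"}}
	fmt.Println(targetAddress(withWp), targetAddress(noWp))
}
```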
A
It's not passthrough: it terminates the outer layer and then passes through the inner layer. So I mean, it is kind of passed through, but it's not the same.
A
Not really, unless people have concerns. I mean, the main thing as well is that we can discuss the minor details over the coming months, because I don't think we plan to actually implement this yet; the main thing is agreeing to not make protocol changes, basically. So we don't have to do anything now and we can defer this to later.
A
So
yeah,
thanks
for
for
sharing,
I'd,
encourage
people
to
take
a
closer
look
at
the
dock
as
well
and
think
through
it.
C
Thank you, okay. So looking back at our agenda, John, you had another doc. Are you wanting to discuss that as well?
C
A little, right. The same concern up front about this doc: it doesn't look like it's been put into our Istio drive, so that means that community members can't comment or make proposals. If we could get that moved, that would be ideal.
B
Yes, this is for the community, yeah, okay.
H
Yeah, so if you've got the screen share, then we can start with the background. The proposal here is a design for how we want to handle bringing back functionality that has kind of existed in Istio already via service entries, adding to the service registry, in particular the VIPs and the hostnames, and how we want to handle the DNS integration in Istio open source for ambient.
H
So, as a rehash or recap for the community: there's some functionality in Istio today, classic sidecar, where in the istio-proxy in the sidecar there runs a golang DNS server, and, you know, if you add a new service entry and you add a new virtual host and VIP, that gets resolved, or is resolvable, by the DNS server that's in the sidecar.
H
The result of this is that services in your mesh are aware of your service entries, and we want to bring this functionality back to ambient. In terms of the requirements, the first thing that's worth calling out is that the original behavior is on the sidecar. Additionally, there's that exportTo API; if you were to scroll up here, oops, but anyway, the exportTo lets you select a list of namespaces that this service entry applies to.
H
And again, the result of this is basically that the DNS server is kind of client-aware. In theory, it's possible for you to have, say, www.example.com resolve to different VIPs if the entries had different exportTo namespaces, because they'd be handled by the DNS servers in the sidecars. So it's kind of an interesting design implication that affects the scalability of, you know, if we wanted to bring this to a sidecarless Istio, how would that work?
H
And so the proposal here is to not make it client-aware. Pulling from use cases that we've seen in open source, there aren't a lot of good reasons we've seen to have the same virtual host resolve to different VIPs. So basically, the proposal here, to resolve the scalability concerns, is to have a single DNS server that lives kind of separate, or as a sidecar to ztunnel; we'll go over the options.
H
But the one thing that they all have in common is that they are not client-aware: a single virtual host will resolve to a single VIP, you know, across the mesh. As for the logistics of how we do that, I guess we can go to some of the pictures, if we scroll down, Lin. So option one here is to have istio-dns, basically that DNS server that's in the sidecar today.
H
There's that istio-dns server, so baked into the proposal here is that we're going to rip that out and either make it its own standalone thing, rebuild it, build it into ztunnel, or something else, or, you know, integrate with an external DNS and add our hosts to that DNS server.
H
But the idea here is that in this scenario, let's pretend that istio-dns is a DaemonSet, so the client locally can resolve any hosts against that DNS server and then only send the traffic to ztunnel. Ztunnel then just forwards things as usual, because istiod will tell it what that VIP resolves to as backing IPs. And another option is to instead go ahead and do this remotely, if we scroll down to option two.
H
One thing worth calling out for option one is that this is all node-local; there's nothing over the network to the istio-dns server here. We don't want to have to worry about DNSSEC or encryption or anything like that, so it's all node-local, communicating in plain text.
H
You know, I think John had pointed this out before, but the existing DNS functionality is sort of experimental, and so that kind of gives us some play here. Maybe it would be okay to ship ambient with it as a sidecar, and then, you know, later down the road build it into ztunnel itself, which would probably be ideal.
C
So I have a naive question about option one: if we're already essentially running adjacent to the ztunnel, and the ztunnel is client-aware, why can the DNS server not be client-aware, similar to what it is with the sidecar model today?
H
So technically we could make it client-aware, but if we did that, the memory config for istio-dns could potentially grow as big as it does in today's implementation; it wouldn't scale, because it's per node rather than per pod. You'd need a map of the namespace that you're in to all of the VIP mappings, the host resolutions. Before, that cost gets amortized because each sidecar gets one, you know, entry in that map, whereas now istio-dns would have to have a key for each namespace and then duplicated entries. So it would just be more memory, but we could do it; it just would mimic the memory profile of the sidecar more closely, which we kind of wanted to get away from. Well, technically we could do it, so the proposal was to not do that, and then if we got pushed to it, maybe we go that route, but it didn't seem like something users were actually taking advantage of.
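To illustrate the memory trade-off being described (made-up types, not the actual NDS or name-table structures): a non-client-aware server keeps one flat host-to-VIP table for the whole node, while honoring exportTo per client would mean keying that table by namespace and duplicating entries.

```go
// Sketch of the two name-table shapes discussed above. Types are
// illustrative only, not the real istiod NDS structures.
package main

import "fmt"

// Flat table (the proposal): one entry per host for the whole node.
type flatTable map[string][]string // host -> VIPs

// Client-aware table (what per-client exportTo would require): every
// namespace on the node carries its own copy of the hosts it may see.
type clientAwareTable map[string]map[string][]string // namespace -> host -> VIPs

func main() {
	flat := flatTable{
		"www.example.com": {"240.240.0.1"},
	}
	aware := clientAwareTable{
		"ns-1": {"www.example.com": {"240.240.0.1"}},
		"ns-2": {"www.example.com": {"240.240.0.2"}}, // same host, different VIP
	}
	fmt.Println(len(flat), "hosts in the flat table;", len(aware), "namespaces duplicating them")
}
```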
H
And so the second proposal is kind of, you know, very similar to the first one if we rewrite it as a sidecar. The original idea in the proposal was to have istio-dns potentially live separately, right? If it weren't client-aware, it would have a very low CPU and memory profile, and we could just hit it remotely and then cache DNS results.
H
And then you wouldn't need, like, a ton of sidecars; you could just have one remote pod. But if we did this, then we would probably need to do DNSSEC or something like that, and it would introduce, you know, a lot of network concerns. So it might just be worth it, for the simplicity, to build it onto ztunnel as a sidecar. I'm kind of in favor of this sidecar approach rather than this one, but I just want to call this out as an option: we could have just one standalone deployment or pod living remotely and then talk to it.
C
So
it
looks
like
to
me,
like
the
difference
between
these
two
diagrams
is
not
so
much
where
istio
DNS
runs,
whether
it's
a
sidecar
or
an
independent
pod,
but
rather
how
the
traffic
is
routed
from
the
client
to
istio
DNS.
Whether.
H
Similarly, there's a third option that is kind of like an integration or extension point. Really, we just need the DNS server to be aware of what the service entries have configured. There's no reason we couldn't take some, you know, third-party DNS server and then somehow integrate it with Istio, so that, you know, if you add a service entry, Istio knows what that service entry means and tells your external DNS, like CoreDNS or something else, how that resolves, and then clients could just talk to that.
H
It's something that I could see people doing, but in terms of just installing bare Istio and having it work on everyone's platform, it feels more like we'd want to build it into the existing, you know, open source project and then let people optimize and customize with some sort of extension point.
H
Yeah, and I think that covers the basic gist of the proposal here. Lin, if we were able to share this document publicly: there's certainly more text and thoughts here, but I don't want to bore everyone with the less relevant details. I do want people to review this, yeah.
B
So the doc is publicly available for anybody. The challenge I'm trying to work through is moving it to our Google Drive, so other people can comment on it; I'll do that today. And it looks like we have some questions: Leaving has a question and Ben also has a question, so we should get to them.
I
Yeah, I have a question about interacting with CoreDNS. You know, in the pod, when you do DNS it goes through a search path, right, whatever is in the resolv.conf. How does it get to the istio DNS? Does it go through CoreDNS first to resolve, and then go to the istio DNS?
H
So for the initial two proposals, it was more about how the client talks to the DNS server; that's kind of the crux of the question. If we take option two, where ztunnel kind of has it as a sidecar, we'd use the same kind of redirection logic that we have today: iptables or eBPF would have a rule set up that captures port 53, and then that traffic would go to ztunnel, which would either handle it via its sidecar or proxy it onward.
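As a rough illustration of that capture (the redirect port below mirrors how sidecar DNS capture works today and is an assumption here, not a decided part of the ambient design), the CNI or init layer would add NAT rules that redirect outbound port-53 traffic to the local DNS proxy.

```go
// Sketch of the port-53 capture rules described above. The redirect port
// (15053) follows today's sidecar DNS proxy and is only an assumption.
package main

import (
	"fmt"
	"strings"
)

func dnsRedirectRules(redirectPort int) []string {
	rules := [][]string{
		{"-t", "nat", "-A", "OUTPUT", "-p", "udp", "--dport", "53",
			"-j", "REDIRECT", "--to-port", fmt.Sprint(redirectPort)},
		{"-t", "nat", "-A", "OUTPUT", "-p", "tcp", "--dport", "53",
			"-j", "REDIRECT", "--to-port", fmt.Sprint(redirectPort)},
	}
	out := make([]string, 0, len(rules))
	for _, r := range rules {
		out = append(out, "iptables "+strings.Join(r, " "))
	}
	return out
}

func main() {
	for _, rule := range dnsRedirectRules(15053) {
		fmt.Println(rule)
	}
}
```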
H
Proxying it, you know, somewhere remote is something we might want to avoid for security issues, and I'm kind of pushing against it, but it's an option; that's how the traffic could get there. If it were its own DaemonSet, we could update the CNI such that it redirects traffic to that DaemonSet instead of ztunnel, but there's not a huge gain, and I think the sidecar makes a little more sense because that's similar to what classic Istio does today and what people are familiar with.
H
You just know the traffic's going to ztunnel. With respect to CoreDNS, if it's a remote integration, that would be more akin to, you know, whatever's in your resolv.conf or whatever; we wouldn't be doing anything fancy in Istio to capture that DNS traffic.
H
The idea of the integration would be that CoreDNS or your external DNS server would have to integrate with Istio, and that's kind of a hand-wavy, potential down-the-road option, but not really something I would push for in a completely open source, works-on-any-platform offering.
I
I would think that's probably the better solution, because otherwise you have to provision all the eBPF rules, all these things to redirect the traffic. Just my...
E
I mean, I just... I think Kevin was mentioning that we wouldn't talk about DNSSEC with option one, but we would if we had a client app that already expected to use DNSSEC, right? That's something that we don't necessarily have control over, correct? Like, we don't necessarily know if we're going to be getting DNSSEC traffic or DNS plain-text traffic; we would just have to handle both, correct, in all cases, depending on what the client might expect.
H
Yeah, so I imagine to support the full feature set we would have to do something. Frankly, I'm not a DNSSEC expert either, but I don't think we do anything with that today already, and I think it's about feature parity, or near it.
B
Okay, great. So we don't support DNSSEC for sidecar today, right? So that's not the short-term goal for ambient.
E
So I think that's an argument for keeping that stuff separate, because DNS servers are really complex beasts. So either we keep it all out of ztunnel, or we get very clear about what subset of DNS we will and will not support in ztunnel. I think that's the only thing that really sticks in my head as something we need to worry about.
C
Yeah. Particularly, you know, one of our goals for ztunnel from an orchestration perspective is that it be very safe to upgrade, and the way that we accomplish that is by doing very, very little in the ztunnel, because of course the blast radius is huge, right? It's the entire network for your entire cluster. So I would worry that adding DNS, even as a sidecar to the ztunnel, would create additional opportunities for regressions when the ztunnel is upgraded.
C
But then again, wherever you put a DNS server, it has to get upgraded, right? Whether it's CoreDNS and the customers have to upgrade it separately, or it's in the ztunnel.
E
Keeping ztunnel as a very dumb proxy for DNS, whether we do that with a sidecar or whatever, I think makes a lot of sense, versus implementing a full DNS stack in ztunnel. But I don't think anybody is necessarily proposing that yet, so that's maybe premature to talk about.
J
So, something I think I missed from the earlier discussion: if we do a sidecar, you know, implemented by whatever, CoreDNS or whatever's serving DNS, how do we implement the isolation by namespace, since, you know, the node could be running pods from different namespaces?
J
Yeah, exactly, the exportTo behavior. So if, yeah, www.myapp.com, whatever, only applies to namespace one, but we've got pods from namespace one and namespace two on the same node, would we have to have two instances of the DNS server and somehow route between them based on which client it's coming from, or something like that? How would that work?
H
So the proposal here is to not do that, because we did not see many use cases where it was desirable to have the same hostnames resolve to different VIPs. However, that doesn't preclude us from someday supporting that, you know, if really strong use cases came up. We would just update the istio-dns server config, rather than it just being a simple map of host to name-table results, like the NDS.
H
It could have the client namespace in the in-memory name table that istiod sends over, and then we'd have to figure out the mapping from the client, you know, from the client IP to the namespace. So I imagine there would be some kind of iptables or eBPF piece: we know who the source IP is, we can add some kind of packet mark, map that to a namespace, and then use that to do the right lookup. So, you know, technically we could do it; I just don't think we'll ever be asked to.
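A small sketch of what that lookup chain might look like if it were ever needed (all names invented; as said above, nobody is proposing to build this today): map the query's source IP to a namespace, then consult a per-namespace table like the one sketched earlier.

```go
// Illustrative only: resolving a host differently per client namespace.
// All data here is invented; this is not planned functionality.
package main

import "fmt"

// Learned from the CNI / workload discovery in this hypothetical design.
var ipToNamespace = map[string]string{
	"10.0.1.5": "ns-1",
	"10.0.2.7": "ns-2",
}

// namespace -> host -> VIP
var perNamespaceHosts = map[string]map[string]string{
	"ns-1": {"www.example.com": "240.240.0.1"},
	"ns-2": {"www.example.com": "240.240.0.2"},
}

func resolveFor(clientIP, host string) (string, bool) {
	ns, ok := ipToNamespace[clientIP]
	if !ok {
		return "", false
	}
	vip, ok := perNamespaceHosts[ns][host]
	return vip, ok
}

func main() {
	fmt.Println(resolveFor("10.0.1.5", "www.example.com"))
	fmt.Println(resolveFor("10.0.2.7", "www.example.com"))
}
```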
H
Perhaps we handle it explicitly. I think the default, I believe the default, is global if you omit exportTo altogether. So maybe in ambient we just only honor the service entry if exportTo is excluded; that way, if there is an exportTo field in ambient, we just warn, or, if there were statuses, we signal to the user somehow that it's not going to be picked up in ambient. And that also means, if we ever get pushed in the future to add this functionality, we can, without any API break.
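A minimal sketch of that behavior (illustrative types; the real check would live in istiod's analyzers or the webhook, and the message text is made up): accept the ServiceEntry but surface a warning when exportTo is set and the mesh is running in ambient mode, rather than rejecting it.

```go
// Sketch of warning, not rejecting, when exportTo is set on a
// ServiceEntry used in ambient. Types and message are illustrative.
package main

import "fmt"

type serviceEntry struct {
	Name     string
	ExportTo []string
}

// ambientExportToWarning returns a user-facing warning if the field would
// be ignored, and an empty string otherwise. Validation never fails here,
// so existing configs keep working across upgrades.
func ambientExportToWarning(se serviceEntry, ambientEnabled bool) string {
	if ambientEnabled && len(se.ExportTo) > 0 {
		return fmt.Sprintf("ServiceEntry %q: exportTo is not honored in ambient and will be ignored", se.Name)
	}
	return ""
}

func main() {
	se := serviceEntry{Name: "external-api", ExportTo: []string{"ns-1"}}
	fmt.Println(ambientExportToWarning(se, true))
}
```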
B
And I do think we're going to have exportTo changes, not just for this, right? Because if we look at the waypoint proxy, there is no Sidecar resource applied to the waypoint proxy. The DestinationRule and VirtualService today support exportTo, along with ServiceEntry, right? I'm not sure if the DestinationRule and ServiceEntry would support exportTo in ambient; I would argue no, because we don't need to, right, because that waypoint is on the producer side.
K
Oh, I was just going to raise that Ian Rudy in the chat made a very good point that we can use the validating webhook here, in the same way that we're doing for the L4 policies.
C
So we can't make validating webhook changes between minor versions of Istio; that will block upgrades. Basically, you'd have a completely valid version of the API in one version, you upgrade, and it becomes invalid.
C
So we can warn on it, and we can write analysis messages to the status field, but failing validation is not an option there.
B
Actually, in that scenario, if you have an exportTo on your service entry from, say, the sidecar world, and then you move to ambient, well, your exportTo is being ignored, but it would continue to work. The only thing is your service entry is potentially being seen by more pods than is actually necessary in the sidecar world, right? So it actually would work just fine, so I'm not sure if we would want to reject; I think warning probably makes sense in that scenario.
C
Lin, you mentioned VirtualService and DestinationRule exportTo functionality through the Sidecar resource. My understanding is we're not supporting those in ambient; we're requiring Gateway API. Is that right? Right.
B
So I'm just saying that it's consistent for the user, right? ExportTo is no longer being supported with ambient, from ServiceEntry to VirtualService and DestinationRule.
B
I don't believe so, at least not at the current design phase.
I
We have, you know, in our implementation for VPC Lattice using Gateway API, the Gateway can only be defined at the cluster level and the HTTPRoute is at the namespace level, and we do plan to have DNS for the HTTPRoute; for that we're going to use Route 53.
I
Not necessarily. So in our implementation you can see it, but whether you can access it is really defined by the IAM policy. So you can define an IAM policy so that, as another service owner, you can decide who can access you, and every client, you know, will have an IAM policy attached to it. So that's our implementation.
F
Okay, I had some other meeting, and I'm obviously very interested in DNS in general.
F
So I'm very excited, you know, to see more integration with DNS. I just want to make sure that whatever you are doing can also be implemented with other, you know, other DNS providers, and that it's consistent with what other people are doing, both in Kubernetes and outside Kubernetes.
I
Oh, you mean in our implementation today? Yes, the current implementation is: when you specify the hostname on that HTTPRoute, that hostname will basically be your entry in Route 53. So when the pod, you know, asks for the DNS resolution of that hostname, that DNS request will get resolved through Route 53 in the GAMMA implementation. I could imagine in the future that it could probably resolve through CoreDNS instead.
F
So with Route 53, I assume you do validations that, say, solo.io is owned by that particular owner; I mean, there are access controls and issues with ownership of the domain, and then authorizations that that particular route can create DNS entries, because that's...
I
Yes, that's correct, yeah. So right now it's out of band: the user has to create the Route 53 entries outside Kubernetes, you know, and during the creation we're going to check whether you own those things, yeah. Okay.
F
External DNS, and what other people are doing, that's fine.
I
And similar names, yeah, we're still looking into that. It's just not there yet; it needs more work. You know, that's one of the things I was looking at with option three mentioned in this document: is it possible we can have an open source community effort to build an enhancement on CoreDNS which, as I understand it, could then work for Istio?
F
Yes, I had the same hope. I'm not entirely sure that it's very realistic, given the time difference and everything else. Did you consider integration with, I think you also have, a private DNS? Because, you know, there is external-dns, I'm only familiar with the Kubernetes external-dns project, but they have the ability to synchronize private names to a private DNS zone, yeah.
I
That's the plan, yeah, that's the plan. We will be betting on that external-dns plugin; it understands the Gateway API and will automatically add the entries, the CNAME entries, to Route 53.
F
With the public DNS, I mean, there is a public DNS and there is a private one, the split horizon, you know, in DNS servers: you can have external ones that are visible from outside, and you have a VPC-only version of the DNS that is also available. That's the one I'm trying to find a solution for, and, you know, some common ground about how we structure the names.
F
I mean, I think on the external side everyone is pretty much doing the same things, so there's no issue there practically; but for the internal one, I think we should sync up, in particular on how we represent namespace and cluster, and how that differs between different clusters, yeah.
F
AWS, Google DNS: there are multiple vendors that will probably have DNS implementations that will expose this, yeah, yeah.
C
So, Kevin, thanks for the doc. For my part, I would like to see something that has a little bit more of a global perspective on DNS and ambient, not just for service entries, but also covering, you know, the waypoint gateways and how those will interact with DNS as well. And then that exportTo decision becomes something that we can make consistent across the entire initiative, rather than making the decision again and again for service entries and then gateways, etc.
F
Yeah, and we have a parallel discussion about naming the pod; I mean, the host, a proper hostname of a pod, and how to represent it in the SNI header. John had the document last time, and it would be perfect if it's aligned with the DNS. So if we decide on a DNS name for pods which is used as the SNI, then we should probably have a corresponding DNS entry, with the pod IP reflected in it automatically.
K
Sure. So essentially, the problem that we're aiming to solve here is kind of an extension of the problem we were just talking about with DNS, and the reason that it's an extension is that Istio with sidecars essentially allows you to attach policies, specifically L7 policies, to hostnames that don't exist within the cluster itself, so non-Kubernetes hostnames, and this is something that we've seen to be a very popular feature.
K
And so, you know, this becomes a little more difficult in ambient, as there is no client-side Envoy proxy anymore, and with the recent decision to not have any client-side waypoints, we have been thinking about how to apply L7 policy, meaning service entries in this case, basically service entries and destination rules.
K
That is, for hostnames that don't exist in the same cluster. So yeah, basically we're thinking through a generic way to decide where, the point at which, that policy is enforced, given that there is no client-side sidecar and no client-side waypoint.
K
So this doc kind of thinks through, you know, the multi-cluster scenario or the egress scenario, and basically: how do we accurately describe what the policy enforcement point is, if there is no natural one?