From YouTube: Ambient Mesh WG meeting 2023 08 23
A: All right, hello everyone, welcome to the August 23rd Wednesday Istio ambient contributors meeting. First up on the agenda is Kevin on service entry scope design. Kevin, go ahead and present.
B: You should all see my screen now; please let me know if you do not. (Yep.)
B: Fantastic, yeah. So I'd like to talk a little bit about the ServiceEntry API, kind of like tenancy models, and how this can impact users in perhaps undesirable ways. I guess, for background, right off the bat: this sprang up because there was some behavior in ztunnel that I noticed and thought was a little bizarre. I put up a PR to change it, and after a little discussion it became clear this was probably worth a design doc and a discussion.
B: So I'd like to walk through the thought process and the problem being solved, and then open it up for discussion on how we want to handle this. So yeah, to jump into it.
B: The real crux of the question that I'd like to ask is: what does it mean to be a ServiceEntry, and who is it for? One concrete example is: what happens if I apply two different ServiceEntries with the same host that conflict, in two different namespaces? This is a little bit contrived, but I do think it...
B: ...will end up mattering, and I'll show why, as a user, I might care. But just to ask some of the questions that frame what I'm thinking about: if I have these ServiceEntries, one, is this even valid, or how could we prevent it from being valid? And two, if it is valid and I make a query to conflictinghost.com, what do I get back? Do I get back both of these VIPs? Do I get a load-balanced answer, round robin, random?
B: Do I get one with a namespace preference? What do I expect my request to actually do? And so a different way of asking the question that I want to ask is: is ServiceEntry a namespaced resource or a global one, and what are the trade-offs?
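The conflicting-host scenario under discussion could look roughly like the following sketch. The namespaces, VIPs, and endpoint addresses are invented for illustration (only the hostname conflictinghost.com comes from the discussion):

```yaml
# Two ServiceEntries in different namespaces claiming the same host.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: conflicting
  namespace: ns1
spec:
  hosts:
  - conflictinghost.com
  addresses:
  - 240.240.0.1        # VIP picked by the team in ns1
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.0.0.1
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: conflicting
  namespace: ns2
spec:
  hosts:
  - conflictinghost.com  # same host, different VIP and backends
  addresses:
  - 240.240.0.2
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.0.0.2
```

Which VIP a client in a third namespace resolves for conflictinghost.com, and which endpoints its traffic actually reaches, is exactly the ambiguity being raised here.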
B
Stop
me
there.
If
that
doesn't
make
any
sense,
if
not
I
can
I
can
get
into
the
user-facing
implications.
All
of
this
part
of
the
dock
is
just
a
refresher
for,
like
the
actual
definition
of
what
we
call
a
service
entry
and
none
of
it
kind
of
hints
at
the
globality
or
namespace
nature
of
a
service
entry.
B
I
also
just
wanted
to
call
out
that
if
we
have
a
conflict
on
hosts
across
like
a
cube
service
in
a
service
entry,
we
merge
them.
But
right
now
today,
if
we
have
a
conflict
on
two
different
service
entries,
we
kind
of
bug
out
and
take
the
older
one.
So
there's
a
little
consistency
there.
We
also
have
the
export
2
field,
which
can
control
the
scope
of
a
service
entry,
but
I
still
think
that
this
kind
of
holistically
could
be
problematic
and
so
jumping
into
what
it.
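For reference, exportTo on a ServiceEntry looks like this today (host, namespace, and ports are illustrative):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: internal-api
  namespace: ns1
spec:
  hosts:
  - api.internal.example.com
  exportTo:
  - "."          # visible only within ns1; the default today is "*" (all namespaces)
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
```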
B: So, jumping into what I see as potentially an issue. You know, Istio has a little bit of a reputation for foot guns, or a lack of proper, nice UX, and in this case I think there's an unclear API owner: who is the ServiceEntry for? One example of how this could be problematic: let's say I'm a user and I have an application in the mesh that's going to google.com, and it's been going there for a long time.
B
I'm
in
my
namespace
one,
some
random
user
in
some
namespace
too,
adds
a
service
entry
for
google.com,
and
this
is
inherited
globally.
So
now
my
application,
which
used
to
go
to
google.com
successfully,
now
goes
to
this
like
test
VIP
and
hello
world.
This
is
true
in
classic
today,
as
well:
classic
istio
and
I'm,
bringing
this
discussion
forward,
because
I
believe
there's
some
appetite
for
change
to
API
and
ambient
and
I'd
like
to
make
it
better.
B
But
yes,
so,
basically,
like
a
user
in
another,
namespace
can
go
ahead
and
break
my
traffic
and
there's
nothing.
I
can
do
to
prevent
them
from
doing
that,
and
the
Crux
of
this
is
is
in
my
mind.
Thinking
about
this
is:
who
is
the
Persona
that
a
service
entry
is
for?
B
Is
it
meant
for
any
istio
user
to
like
Define
their
own
hosts?
And
thus,
if
it's
done
that
way,
is
it
scope
to
their
namespace?
Is
it
only
meant
for
admins
right?
Is
it
is
you
know,
should
we
even
be
allowing
people
to
do
this?
If,
if
you
know
they're,
not
a
mesh
administrator,
I,
don't
think
that's
very
realistic.
The
way
that
we
see
service
entries
use
today
or
some
combination
of
both
does
the
the
problem
here
that
I'm
calling
out
sort
of
makes
sense.
B
Okay,
I'm
going
to
move
forward
as
if
that's
a
yes,
so
just
going
through
some
of
the
possible
API
semantics.
This
is
the
way
that
service
entries
work
today
for
both
classic
and
ambient.
So
we
are
consistent
right
now.
Service
entry
hosts
are
Global,
so
that's
why
a
user
can
go
ahead
and
muck
around.
B: ...with my, you know, existing host to Google if there's a conflict. If I were to create duplicate hosts, for example, classic Istio will just NACK the config, and we'll get stuck until one of them is gone. In ambient, we have this behavior where we prefer the local namespace and otherwise randomly load balance. This was the original behavior; I looked at it and thought, this is weird.
B: Why are we doing this? So I was about to change it, but then I thought about it more, and that became this doc. The implementation we have today is kind of a weird middle ground for who owns this resource.
B: So, one way we could go about rethinking what a ServiceEntry is... and I'll get to my concrete recommendations and thoughts at the end of the doc, and I'd love to field thoughts at any point. Please stop me if you have questions or things you'd like to discuss in the middle; I'd love to keep things conversational. So the next way we could think about this is: ServiceEntry is global, with merge semantics.
B
So,
instead
of
having
this
conflict,
where
we
take
the
older
service
entry,
we
could
say
that
they're
Global,
a
service
entry
is
a
global
host
name,
selects
back-end
workloads.
B: If we wanted to only use specific backends, we could use the VIP rather than load balancing across all the services. This interpretation kind of assumes that ServiceEntries are really for admins, or that they're shared; they're cluster-global. And so, if you really wanted to use only your own service, then you'd need to use the VIP, because the hosts are shared. I'm not a big fan of this, but it does have a clear delineation of who owns the resource, and it doesn't have this awkward...
B
If
we
want
to
export
a
service
entry
to
as
a
behind
a
host
to
other
namespaces,
we
have
a
variety
of
options
and
I'd
love
to
detail
what
those
those
are
and
they're,
not
huge
changes,
but
there
are
changes,
but
the
default
would
be
that
they're
not
Global
right.
If,
if
someone
came
in
and
applied
this
resource,
it
would
only
affect
namespace
too,
it
would
not
affect
namespace
one
which
is
not
the
behavior
today.
B
I'd
compare
this
a
little
bit
to
kubernetes
Services
I,
just
also
call
out
like
kubernetes
services,
are
a
little
bit
better
off
because
they
can't
get
these
conflicts.
The
fqdn
has
the
namespace
inlines
and
the
cluster
VIPs
are
you
know
they
reject
them
in
the
API
server
or
a
liberty
that
we
can't
really
assume.
So
we
could
have
VIP
conflicts.
If
someone,
you
know,
takes
down
the
validity
what
book
or
what
have
you?
B
So
we
have
a
little
bit
more
challenge
here
and
so
kind
of
just
comparing
to
you
know
what
we
have
today
classic
service
entry
today,
they're
Global,
the
oldest,
wins,
there's
no
conflict
resolution
in
classic
sidecar
we're
inconsistent
about
how
we
handle
conflicts
between
Coop
services
and
service
entries,
there's
no
way
to
our
back
people
from
from
globally
breaking
traffic.
B
And
so
yeah
we
can
compare
this
to
a
kubernetes
service,
which
is
is
very
similar
in
that
their
VIPs
are
Global
and
their
hosts
are
Global,
but
we
can't
get
those
conflicts
by
nature
of
how
group
services
are
constructed,
so
we're
a
little
bit
better
off,
but
there
are
still
things
we
can
lead
into
here.
B
My
favorite
idea
is
to
to
make
service
entries
namespaced
and
so
the
question
the
natural
question
would
be:
okay,
if
it's
still
a
valid
use
case
to
access
them
across
namespaces.
How
do
we
do
that
in
ambient?
B
And
so
we
can
lean
into
some
prior
art
here
on
multi-cluster
services
and
kind
of
have
proper
import
export
semantics.
You
know
just
like
multi-cluster
Services,
you,
you
have
a
service
import
and
a
service
export.
This
is
just
a
refresher
for
anyone
who
hasn't
seen
multi-cluster
services
in
kubernetes,
but
we
could
do
pretty
much
the
same
thing
and
in
fact
we
already
have
the
export
to
field.
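As a refresher, the Kubernetes multi-cluster Services prior art (the MCS API from KEP-1645) uses paired export and import resources; in practice the ServiceImport is materialized by the MCS controller rather than written by hand:

```yaml
# Exporting cluster: opt a Service into the clusterset.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: my-svc
  namespace: ns1
---
# Importing cluster: the MCS controller creates the corresponding import.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: my-svc
  namespace: ns1
spec:
  type: ClusterSetIP
  ports:
  - port: 80
    protocol: TCP
```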
B
You
know
in
istio.
This
could
look
something
like
this.
We
bring
export
two
to
ambient,
but
we
change
the
default
Behavior.
It's
not
exported
without
a
requisite
import,
and
then
we
need
some
kind
of
new
import
from
field
that
allows
us
to
explicitly
try
to
take
things
across
namespaces.
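A sketch of what that could look like. The consumer side is entirely hypothetical (neither an importFrom field nor an Istio-side import resource exists today), so treat those names as invented placeholders:

```yaml
# Producer side: exportTo is a real field, but the proposal would change
# its default from "*" (everywhere) to "." (this namespace only).
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: shared-host
  namespace: ns2
spec:
  hosts:
  - shared.example.com
  exportTo:
  - ns1                # offer the host to ns1
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
---
# Consumer side (HYPOTHETICAL kind and field): ns1 must explicitly
# accept the offer before the host becomes visible there.
apiVersion: networking.istio.io/v1alpha1
kind: ServiceEntryImport
metadata:
  name: shared-host
  namespace: ns1
spec:
  importFrom:
  - ns2
```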
B: We could also explore massaging the host, but I think that's a bigger change; it's kind of inspired by multi-cluster host schemes, and the doc is available if anyone wants to jump into that. The last thing I'll call out, and I don't know if this is a valid use case, I'd love to hear: is there a use case for administrators wanting to set a default ServiceEntry cluster-wide?
B
If
so,
I've
detailed
some
options
we
have
here
as
well
for
the
namespace
model,
but
it's
it
remains
to
be
seen
to
me
if
this
is
even
something
we
need
to
support
or
people
are
doing
today.
I
would
love
to
hear,
but
let's
say,
if
I
had
a
service
entry
or
a
host
that
I
wanted
to.
B
As
an
administrator
like
force,
everyone
to
pick
up
the
way
I
see
we
could
do
that
is.
We
need
either
a
new
crd
to
export
and
the
reason
we
need
this
is
to
have
our
back.
B
That
explicitly
only
allows
admins
access
to
this
right,
because
service
entries
are
for
everybody,
so
only
admins
can
export
theirs,
and
if
we
do
that,
then
we
can
assume
on
conflict
that
ours
is
more
important
because
we're
at
administrators
there's
no
need
to
import
these,
so
either
a
new
crd,
which
is
my
favorite
option,
or
we
could
do
some
kind
of
special
casing
like
if
you're
an
admin
admin
only
has
R
back
to
istio
system.
B
Therefore,
this
export
to
is
global
and
we
have
to
fix
a
workload
selector
issue
and
select
everywhere.
The
nice
thing
about
this
is
that
there's
actually
no
new
API
changes.
We
just
kind
of
change
the
semantics,
but
the
shape
is
the
same,
but
I
still
don't
like
it,
because
there's
some
magic.
B
The
next
is
a
summary
I
kind
of
went
quickly.
But,
to
recap
the
the
issue
that
I'm
calling
out
is
that
service
entries
are
a
little
dangerous,
they're
kind
of
a
foot
gun
in
that
they're
they're
Global,
and
that
people
that
I
can't
stop
can
go
ahead
and
change
my
traffic
routing
and
break
my
traffic
and
given
that
as
I
understand,
there's
an
appetite
and
ambient
to
change
apis
I'd
like
to
change
it
and
fix
this
problem.
B
B
B
B
D: Yeah, thanks for the doc, Kevin; this is really compelling. It seems very much to me that ServiceEntry is broken as it stands today. I guess, from where I sit, processing this, it feels like at least some of the high-level decisions of how to move forward...
D
That's
decided
for
us
just
from
the
API
perspective
service
entry
is
a
namespace
kubernetes
resource
and
the
facts
that
a
native
space
kubernetes
resource
can
have
Global
impacts
on
the
control
plane
feels
much
like
a
like
a
broken
behavior,
and
so
that
I
think
I'm
with
Kevin,
where
I
feel,
like
that's
the
kind
of
the
most
obvious
Way
Forward,
the
kind
of
the
impact
of
doing
that.
D
It's
not
an
API
change,
but
it's
definitely
a
behavioral
change
that
will
break
clusters
and
so
I
think
we
have
maybe
a
couple
of
ways
to
to
handle
that,
but
yeah
I
I
think
for
both
classic
istio
and
ambient.
Something
needs
to
change
when
it
comes
to
a
service,
injury.
B: The example I gave was: I'm in namespace one, I'm going to google.com, that's great. Some tenant in namespace two that I don't control creates a ServiceEntry for google.com, and now my traffic goes to this random VIP. Yeah.
D: Again, not only that, but exportTo: if it's absent, I believe the current behavior is to export everywhere. Is that correct? (That is correct.) Right, so at that point we're modifying the default. It's a default change, essentially moving from export-to-everywhere to export-to-only-this-namespace, and that's what would be breaking from the API perspective, or behavioral perspective, rather.
B
Right
and
so
open
question
I
mean
I
do
think
it's
broken.
It's
a
flip
gun,
I'd
love
to
change
it
I,
don't
know
what
our
appetite
is
for
doing
that
inside
car,
our
deprecation
process,
things
of
that
sort,
I'd
love
to
hear
that
and
then
also
the
motivator
was
ambient
like,
since
that's
not
new
I
think,
there's
less
objection
to
doing
that
here,
but
I'd
love
to
just
get
like
Community
pulse
on
that
those
ideas,
I.
C
Guess
the
concern
would
be
if
you
use
a
transition
from
psychot
to
ambient.
Well,
there
may
be
a
period
of
the
psycha
and
the
ambient
coexists
and
it's
the
same
service
entry.
So
how
do
they
think
through
that?
Because
if
we
look
at
the
migration
from
cycle
to
ambient
I
do
expect,
there
are
transition
period
like
you
might
be
able
to
shift
the
10
of
your
traffic
to
ambient
pods
the
same
book
info
pause
and
also
like
90
to
psycha.
So
there
has
to
be
this
transition
period
And
if
they
are
using
the
service
entry.
D: Beta 1... that was going to be my suggestion as well, actually. I was just trying to Google and see what the current Kubernetes versioning guidelines are between different versions of the same maturity, so from alpha 1 to alpha 2, or from beta 1 to beta 2.
C
Yeah
that
makes
sense
I
feel
like
we
have
to
do
something
like
this.
So
it
knows
that
this
is
for
the
newer
API
for
ambient.
D: Right, either a conversion webhook or something like that, yeah.
B: I agree. I saw your original comment, Sanjeev. I think, to be fair, Kube Services are global: their VIPs are global and their hostnames, their FQDNs, are global. So perhaps the only difference is that they can't get conflicts; they have solutions that we don't have. Their FQDNs don't conflict because the namespace is inlined in the host, and their VIPs never conflict because the API server can reject them, and we can't depend on a validating webhook to prevent that.
D
Yeah,
the
the
DNS
record
and
this
one's
really
about
service
is
kind
of
annoying.
The
DNS
records
created
globally
for
the
entire
cluster,
but
the
like
Kevin's
at
the
endpoints
are
only
namespaced
are
namespaced.
No.
D: Assuming that the appetite for adding additional version handling in Istio is not there, what does it look like... do we need a new resource? In a hypothetical scenario, just so we're playing this all out: if we, as a community, take the position that ServiceEntry is broken and we don't want to wrangle the versioning issue, is there a path in our community processes to deprecate a beta resource and create something different?
D: Another piece of this is that ServiceEntry has been beta for, I don't know, it feels like at least a year or two, maybe longer. In theory, if we're doing... yeah, everything's beta forever... if we're doing a proper process like this, it should have been stable by now, but...
D: Yeah, it's a really tough call, because there are absolutely users who depend on this functionality who, as soon as they upgrade, are going to be broken. Now, one option, if we get agreement to change it, would be to put the new behavior behind a feature flag and just have it change... you know, and then turn that feature flag's default to on at some point, like what we did in 1.10 for pod-IP routing instead of localhost routing for sidecars. The problem with that is, you know, where's the mechanism, where's the incentive, for anyone to flip the switch before it's on by default? That's probably the most...
C: Yeah, I think we have done something similar, like the pod networking behavior that was changed in one of the releases. So, basically, we launched it dark first and wrote blogs about the upcoming change and how it was going to impact users, and then waited until the next release to enable it by default. We also have istioctl analyze to tell people which of their pods are going to be impacted by the change. Maybe we should do something like that too, if we decide this is the right change we want to enable.
I: More of a general question about people upgrading. Obviously, I think I agree we should add the analyzer, but do we emphasize that people should run it on upgrades regularly?
C
The
analyzer
Commander,
yes,
we
do
recommend
people
to
run
so
at
least
they
know
what
things
could
be
flagged.
Okay,.
I: Okay, good. I just wanted to make sure it wasn't something that's, you know, best practice but no one actually does, and then we end up breaking people; do we need to advertise it better? But if we feel like people usually use it, then that's good. Just curious. Anything else? Yeah.
C
And
we
also
recommend
the
user
to
use
the
cluster,
not
just
for
upgrade
time,
so
they
can
get
a
lot
of
saving
of
their
time
on
Resources,
with
the
messages
that
warnings
from
it,
they
analyze-
oh
yeah,
yeah.
It
was
from
T-Mobile.
So
thank
you.
D
Also
have
pre-chip,
which
I
don't
know
if
that's
distinct
from
analyze
or
not,
but
the
upgrade
doc
students
say
call
it
using
ifcrutl
X
pre-check.
C
Yeah,
that's
that's
specifically
for
migration.
I
think.
B
So
I
I
do
know
it's
good
to
get
this
initial
feedback
and
that
there's
support
I
do
think
that
taking
action
on
this
is
going
to
be
a
multi-week
lots
of
you
know.
It's
gonna
be
a
lot
of
work
and
we'll
be
talking
about
this
for
a
while.
If
we're
going
to
take
action
and
it
sounds
like
we're
going
to
I
know,
we
only
have
an
hour
and
two
more
topics.
C
Sounds
good,
though
yeah,
let's
make
sure
we
have
good
meeting
minutes
for
this
too.
A
Great
sounds
good,
so
next
up
is
Jackie.
Last
run,
reviews
for
the
Target
ref
PR.
G
Yes,
I
so
yeah.
This
PR
has
been
open,
I
think
for
about
two
weeks
now
so
I'm
just
looking
for,
and
it
has
evolved
quite
a
bit
since
its
initial
implementation.
So
I'm
curious
just
for
a
final
round
of
looks
on
the
pr
just
to
get
the
target
ref
Proto
added.
So
we
can
continue
work
on
the
rest
of
the
implementation.
G: ...due to the complexity and unknowns there. Yeah, I believe those are the main changes. And then also: if a targetRef is created in the root namespace, it is assumed that the resource, the waypoint, that it applies to is also in that root namespace; we're not restricting anything there. And then, differently from workload selectors, being in the root namespace does not mean it will apply to all waypoints.
G
So
those
are
the
main
changes
that
have
been
made
from
the
design
dock.
So
if
we
could
get
additional
reviews
on
this,
that
would
be
appreciated
yeah.
Thank
you.
A
Okay,
any
questions
from
the
community
on
this
particular
PR.
C
So
a
quick
question:
Jackie
and
kids
did
we
reach
consensus
with
the
workload?
Selector
remains
for
a
limit
period
of
release
and
Target
ref
is
preferred,
I
know
in
case
you
started
a
vote
on
that.
I.
Just
don't
recall
you
know.
What's
the
decision
on
that.
D
The
current
I
think
the
outcome
of
the
votes
in
my
also
my
personal
opinion
is
that
we
should
break
when
we
can,
since
we
make
such
a
point
of
not
doing
things
in
the
backwards
and
compatible
way
and
things
that
are
Beyond
Alpha,
because
ambient
is
Alpha,
we
should
you
know
kind
of
train
users
to
to
expect
breaking
changes
with
Alpha
software,
and
so
for
that
reason,
I
was
in
favor
of
of
just
letting
Target
F
be
the
only
supported
figure
for
waypoints
they're
for
operation
policies.
D
Specifically,
there
is
a
an
issue
with
doing
that
until
we
can
get
layer,
Target
operation
policy
yeah.
Let
me
talk
about
application
policy
proposal
through
just
because
we
need
to
know
whether
this
is
a
global
policy
or
a
waypoint
specific
policy,
but
yeah
that
was
that
was
the
the
outcome
of
the
last
conversation.
C
Okay
cool
so
now
here
you
correctly,
then
the
workload
selector,
it's
just
temporary,
because
it's
only
used
in
Alpha
for
authorization
policy.
D
Your
own
rest,
so
once
we
get
the
layer
targeting
work
in
for
authorization
policy,
then
Target
then
Target
up
would
be
the
only
valid
way
to
Target
a
waypoint
and
workload
selector
to
be
ignored,
or
specifically,
things
that
are
in
the
application
layer
in
ambient.
Just
to
be
completely
clear
on
that
yeah.
D
If
you
see
PRS,
I,
think
Whitney
and
Jeremy
are
working
a
couple
of
PR.
So
if
you
see
PRS
that
sell
logical
selector,
that's
because
we
don't
have
the
water
Target
operation
policy.
Stuff
done
yet
to
be
able
to
know.
Is
this
an
L7
policy
or
a
L4
policy?
So.
D: We would just... if the workload selector is specifically targeting a waypoint label, like the special magic-string waypoint label that we have in alpha, then that won't take effect, and we'll say, in English, use the targetRef. The more I think about it, though...
D: ...yeah, so the decision on the doc was for it to only apply to sidecars and, for example, ingress gateways, and that if you're targeting a waypoint specifically, you would need to use a targetRef. Can we get the API certified for what we want to do in beta? The behavior, if you do try to use a workload selector instead of a targetRef, would be that it wouldn't apply, and istioctl...
D
Yes,
looking
thinking
through
this
again
with
fresh
eyes
and.
D
When
you
start
looking
at
how
you
devouted
this
in
a
validating
weapon
configuration,
you
really
only
have
that
the
only
thing
that
we're
blocking
to
formulate
it
a
different
way,
the
only
way
to
to
Really
block
it,
is
to
look
for
that
special
magic
string
like
the
Waypoint
labels
that
we
that
we
have
now
and
to
and
to
block
that
at
the
replic
level,
which
is
fine
but
I,
guess
yeah.
What
we're
saying
yeah!
What
we're
saying
is
you
want
to
talk
to
a
white
point?
C
Right
and
also,
we
reserve
that
label
for
very
point,
and
nobody
else
should
be
using
it
right.
D: Yeah, exactly. Is that correct? I think that's the case. So, yeah, to kind of go back to where we stand: it feels like we have directional agreement on the API, and the questions lie on the... so, how are we going to validate which one the user is using and block the things we want to block?
D
That
is
that
we
need
to
be
handed
on
configuration
it's
that
if
that's
the
case,
if
that's
what
the
contention
is,
does
it
make
sense
to
make
the
API
change
and
then
discuss
the
validation,
specifics
and
the
validation
PR?
D
Just
so
we
can
know
what
the
open
questions
are.
There
is
one
open
question
around
defaults
that
I
think
is
important,
but
it
would
be
great
to
be
able
to
to
have
the
validation
discussion
about
hmpr.
In
my
opinion,.
F: I'm just wondering if we're also conflating the term validation. I know there's the validation webhook logic, but I also recently learned about analyzers, and also the PR that Whitney's working on is doing some blocking as well, right, when the policy is about to be applied. So it's really validation mechanisms in general that should be covered in the design doc, not just specifically the webhook validation logic.
D
There
is
some
level
values
we
can
go
at
the
Proto
level,
but
most
of
that
might
end
up
being
duplicated
in
the
web
at
the
web
level.
Anyway,
right,
yeah.
C
The
other
thing,
I
would
add
also
like
istiocado
analyze,
so
because
the
fact
is,
the
user
might
already
have
their
wagon
plug-in
resource
deployed
right,
so
it
might
already
be
on
their
system,
so
I'm
thinking
loud
here.
C
What
are
the
scenarios
they
might
be
using
cycle
today
with
once
I'm
plugging
resource,
for
example,
and
then,
if
they
move
to
ambient
when
they
have
pods
with
outside
car
right,
if
they
need
to
move
to
this
Waypoint
model,
you
know,
how
are
we
going
to
alert
them,
because,
essentially,
what
I'm
looking
at
is
every
single
resource
would
need
to
rework
right
all
the
resources
in
Europe
here
are
bottom
and
ask
the
policy
and
I
think
there
is
telemetry.
Also
that
means
whatever
they
have
today
for
psycha.
C
D: Right, I think that's all correct for beta scope. Or, yeah, it's not the coexistence, it's the migration path.
D: Yeah, this is not just a question about coexistence; it's about how a user gets from sidecar to ambient, and so we need to be cognizant of what changes are necessary there.
D
Yeah
I
was
talking
about
to
John
about
something
similar
where
with
wave.
This
is
also
a.
This
is
kind
of
a
fundamental
change
where,
even
though
routes
to
apply
to
Services
the
policies
are
now
applying
to
waypoints
as
a
Gateway.
So
it's
kind
of
like
a
Gateway
application
of
our
policies
in
istio.
Just
the
whole
point
of
Target
rep
right
is
to
align
more
with
the
Gateway
API.
D
One
of
the
consequences
of
that
is
a
ship
where
an
istio
policies
apply
toward
applied
to
specifically
or
policies
apply
to
the
workloads
of
the
services
themselves,
but
now,
with
Gateway
API
and
with
ambience
policies
apply
at
one
resource
at
a
waypoint,
and
so
because
of
that
fundamental
shift.
D
C: I just want to share one thing. I don't know if he reached out to you; he reached out to me a few days ago. He didn't really like the fact that we're removing workload selector support and changing to targetRef, and he said, I'm going to talk to John about it. That got me thinking about how this works with existing sidecar users, where people are already familiar with the workload selector and everything.
D
Okay
yeah,
he
hasn't
reach
out
to
me,
but
that's
good
feedback.
Yeah
I
did
see
how
to
comment
in
the
original
design
doc,
but
yeah
we're
happy
to
have
a
conversation
about
that
to
see
yeah
I,
I
guess.
I
I
was
under
the
impression
that
this
was
kind
of
a
direction
that
we
knew
we
wanted
to
move
community-wide
and
from
the
and
the
API.
D: Okay. In that case, and noting that Sanjay's got a topic here and I want to get to him in a second, I think it probably makes sense to have a discussion, kind of going back to first principles, about the targetRef move, and to evaluate the need. Now that we have spent the time moving towards implementation, we have a better idea of what the trade-offs are.
H
Sure
so,
I'm,
just
just
FYI,
so
I'm
new
to
the
community,
so
I'll
probably
have
a
few
basic
things.
So
pardon
me
if
something's,
very
Elementary
but
I'm
helping
out
with
some
documentation
and
I
want
to
run
by
a
few
things
with
a
group
here
and
those
include
some
embedded
questions
as
well.
So
the
first
thing
is
I
spoke
with
John
and
Lynn
and
we
agreed
on
the
following
plan
for
some
documentation.
H
Okay,
so
the
first
point
was
to
do
I
have
a
series
of
user
guides,
so
this
is
just
sort
of
the
work
in
progress
page
here,
so
ambient
documentation
currently
is
under
this
operations,
tab
of
istio.io,
so
the
thought
so
far
is
to
have
this
user
guides.
H: So, the first one I started on was ztunnel and L4 networking, and, you know, we could add more guides here on L7 networking, and maybe something specifically on other topics: authorization, multi-cluster, and so on. And then, within each, to structure the document as a kind of user journey, almost like, you know, setting it up, using it, monitoring it, and so on. So the first question is: are we okay with this? I just want to run it by the community here.
H
So
the
first
thing
is
just
to
be
clear
on
the
current
feature:
constraints
of
istio
Advent-
and
this
is
a
kind
of
initial
working
list
I
made.
Let
me
know
if
this
makes
sense
right.
So
if
we
document
the
current
lease
unsupported
features,
firstly,
istio
multi-cluster
is
not
supported
in
ambient
mode.
Neither
are
non-kubernetes
workloads
or
even
windows
and
non-lux
compute
nodes,
of
course,
and
then
the
the
cni
support
is
also
Limited.
H: So it seems to me, and correct me if I'm mistaken here, that as a general rule, ambient mesh only supports CNI plugins that are based on the standard Linux networking data path. So CNI plugins that are based on alternate data planes, like OVS or a completely separate eBPF data plane, are currently not supported, which includes things like Tanzu NSX and Red Hat OpenShift.
H: The second thing is that, for those CNI plugins that are supported, there are constraints on certain feature combinations. For example, Kubernetes NetworkPolicy and Kubernetes Service load balancing may not function as expected in the presence of ambient; however, functionally equivalent behavior may be achieved via the Istio APIs.
H
I,
don't
know
whether
this
is
true
or
not,
but
I'm
welcoming
your
inputs
and
comments.
Whether
this
is
a
reasonable
list
of
constraints
to
document
for
the
first
release
and
then
finally,
it
seems
like
we
want
to
explicitly
certify
which
combinations
of
ambient
and
which
cni
plugins
are
explicitly
certified,
and
is
it
a
supported
combination
or
what
are
the
feature
constraints
for
a
particular
comp?
You
know
ambient
and
scenery
is
any
comments
on
this.
C: The current list of certified combinations is not very big, so I'm not sure we want to elaborate in great detail rather than just mention it.
H: Sure, okay. So I'll be updating the PR, and I'll invite comments; you can provide your comments on the appropriate verbiage and how much detail we want to put in or not. But this is just some content that I want to put into the user guides, with the appropriate wording. The second thing is: I'm drawing some data plane diagrams, partly for documentation, partly for my own understanding. So I just want to run this by you to see if it's accurate.
H
So,
firstly,
in
terms
of
sort
of
the
current
IP
tables
redirect
data
path
right.
So
is
this
Fair
like
so
you
have
a
Pack
and
I'm
trying
to
capture
two
things
here.
One
is
what's
happening
in
the
Pod
Network
namespace
and
what's
happening
in
the
host
Network
namespace
and
then
also
what
is
in
the
user
context
and
what's
in
the
kernel
context,
so
when
a
poor
descending,
you
know
traffic
or
you
know,
making
a
socket
call.
H
Getting
happened
around
going
through
the
geneve
tunnel
to
into
this
eternal
part,
again
stripped
of
the
geneve
tunnel
going
through
each
bone
and
cap
coming
back
out,
istio
out
again
pieced
you
out
and
then
going
out
the
host
it's
doing
external
traffic.
Is
this
accurate.
I: I think that's going to be difficult, because HBONE is HTTP CONNECT over mTLS, and those are both fairly hard to do in the kernel.
H
Okay,
so
we
is
it
fair
to
have
a
diagram
like
this
in
in
the
in
the
user,
guide,
it'll
kind
of
give
a
visual
and
then
and
then
I.
I
J: I think what might be useful is something like what you've got here, but pared down: just a host network namespace and a pod network namespace, and then arrows showing where things go between those two, and you just drop the Geneve and istioout details. You're basically saying: look, the traffic starts in the pod network namespace...
J
It
goes
up
the
host
Network
namespace,
so
that
when
we
grab
it
send
it
in
the
Z
tunnel
see
tunnel
that
stuff
and
then
it
goes
back
out
into
the
hostnet
space
and
then
back
into
the
Pod
or
whatever
yeah
so
like
just
just
like
that.
Two
level
like
hostname
space,
pod
namespace.
Where
does
stuff
go
and
when
does
it
move
from
one
to
the
other?
That
might
be
good
enough.
C
...to debugging, really, because we'd rather they, you know, reach out to the community, or maybe to a vendor, because it's very complicated. And I agree with Justin; I think I also mentioned that in the chat. This is super complicated to illustrate for the user. If we want to provide some diagram like this, I think we want to abstract that out, like Ben mentioned.
H
Yeah, so just one thing going back to this: I'm open to helping make some of these OVS-based CNIs, or some of the other alternatives, work with ambient. So if somebody's already working on it, let me know; we can sync up and look at what it takes. Are there separate teams already working on making ambient work with different CNIs, in particular OpenShift and things like that?
J
No, I don't think anybody's working on OpenShift. John volunteered to do Cilium; I think there are some folks for that. But OpenShift, I don't think, is getting special attention right now. That's correct! Okay.
H
So maybe I can help with that. But that was also the purpose of these diagrams: to clearly understand what's happening right now, so that when we try to make ambient work with other CNIs, we know exactly where the packets are flowing. And then, finally, the last point I wanted to... again, sort of similar... well, I think Keith had...
D
Yeah, go ahead. Yeah, I just want to say, even though I hear the feedback that this is too much internals for a user-facing guide, I think this is exactly the kind of thing we want to encapsulate in architecture documents, so that it's referenceable, and you can change it in a PR over time and see how the implementation details evolve. So we've got an architecture directory in GitHub; it would be really great to have this.
D
This image, and other stuff here, in this directory for future reference.
H
Okay, yeah, happy to find the right place for it, so that it helps people who are, you know, making other CNIs work, and new people to the community can better understand the data path and make improvements. And by the way, I have some follow-up questions; can I reach out to any of you guys? Well, some of these I can put on the Slack channel.
C
Let me suggest: can we just do a thread in ambient-dev, because other people might be interested? Okay.
H
Sure, yeah. This is actually more of a topic for office hours; that's why I was looking forward to the office hours getting started. They haven't, and that's why I'm bringing it to this meeting, because this is the only other meeting, but really this is an office-hours kind of discussion anyway. The last point: I know we're at the end of the time, so I'll just very briefly mention it.
H
This is sort of on the same point as the previous picture, but looking at it a slightly different way: if you look at the CNI data paths, this sort of illustrates how we are skipping the CNI data path, because we're intercepting the packet right at the veth. Now, the actual behavior depends on the kind of CNI. If it's an OVS-based CNI, well, that's not even visible to Linux, right?
H
You can't even put an iptables redirect rule there, so we need to think about putting some flows in OVS to redirect it into this ztunnel path, whereas with an eBPF data path, okay, it is still sort of an extension of the Linux data path. So having this helps us plan for, as well as test, other CNIs. So the questions are: number one, are all the existing sidecar istio L4 routing and auth features also implemented in ambient mode? Can we state that in the docs?
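The "flows in OVS" idea mentioned here might look something like the following hedged sketch; the bridge name, port numbers, and match fields are entirely illustrative, since no such rules exist in istio-cni today.

```shell
# Hedged sketch: an OpenFlow rule steering an ambient pod's outbound
# traffic toward the OVS port that leads to ztunnel.
# br-int, in_port=12, and output:42 are made-up values.
ovs-ofctl add-flow br-int \
  "table=0, priority=100, in_port=12, ip, actions=output:42"

# Inspect what was installed
ovs-ofctl dump-flows br-int
```

This mirrors what the iptables mark-and-route rules do on a plain Linux data path, but expressed as OpenFlow, since the pod's traffic never traverses the Linux netfilter hooks on an OVS-based CNI.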
H
And then the other question is: is there a desire to implement kube services within ambient, by some combination of the CNI and the ztunnel data path, with all the kube service features like session affinity and traffic policy and endpoint slices? Or is that explicitly a non-goal: we really do not need to implement kube services, either by leveraging the CNI or by implementing them in ztunnel?
E
Today we implement Services; we don't implement 100% of the Service spec, but we implement most of it. Like, no one really uses session affinity; even though it's there, we don't implement it. We could add it, potentially. I think there's a possible future path where we rely on the CNI to do it, but that would be kind of a per-CNI implementation, so we'd need some more work there. Personally, I think these are likely long-term things, but not worth spending any time on right now, because they're very low priority and very high cost.
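For reference, the session affinity feature under discussion is the `sessionAffinity` field of the Kubernetes Service spec; a minimal example (names illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example           # illustrative
spec:
  selector:
    app: example
  ports:
  - port: 80
  sessionAffinity: ClientIP        # pin a client IP to one endpoint
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # default affinity window
```

kube-proxy honors this field, which is why a mesh data path that bypasses kube-proxy would have to reimplement it, either itself or via the CNI, as discussed above.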
H
So I think it would be fair to at least state in the documentation that Kubernetes network policy and Kubernetes Service features may not work in the presence of ambient, because people shouldn't be surprised. Yeah.
H
So I'll have some appropriate summary of this in there, and I'll continue to work with you guys on what's the appropriate information for the docs versus internal architecture. Okay, we are past the time, so I'll stop here. Thanks. Feel free to send me any comments you might have; I'll continue updating and then send out the PR for review in a week or so.