From YouTube: Ambient Mesh WG meeting 2023 07 19
A: Welcome to the July 19th, Wednesday, occurrence of the ambient working group meeting. So... oh, I need to present the agenda, one sec. All right. So, while we're waiting: we already took care of that top agenda item, so let's go on to the second one, review the RFC to allow per-pod DNS settings for ambient. Whose topic is this?
B: Real quick on that top one: is that like next week's ambient meeting, or how do people join in on that?
A: So, we did discuss this a few weeks ago, and the decision at the time was, instead of using this meeting for it (because not everyone is interested), the field leaders in this space would just go review it offline and present back the results, which is what we did last week.
C: I picked up issue 555 from ztunnel, which Nathan created. I thought it'd be a good opportunity for me to get some more hands-on work, like PRs and stuff, to get more familiar with Istio.
C: So that's why I picked this up. But as I was going through solutioning, trying to think of ideas for how I could address the task at hand, which is to support per-pod DNS settings, I had some questions that I shared with both Keith and Nathan, and that led me to actually creating this RFC so that we could talk about it over a document.
C: Mainly, the question I did have was around DNS policy and how that could be represented within Istio. From my talk with Nathan, it's pretty obvious that we want to be agnostic to the platform that is leveraging Istio; whether it's Kubernetes or not shouldn't really matter. And it seemed kind of weird to do a one-to-one translation of the DNS policy structure that Kubernetes has into Istio, so I want to get feedback on what people think about that.
C: We should keep that in mind when adding new data structures. This DNS policy structure, for example, would probably be added to something like the workload proto throughout the code, and then I think that's how you could leverage it. But I want to get feedback from other folks, and they can discuss it within the doc.
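For context, the Kubernetes per-pod DNS API under discussion is the pod-level dnsPolicy and dnsConfig fields. A minimal sketch of what a one-to-one translation into Istio would have to carry (standard Kubernetes fields only; nothing here is an Istio API):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
    - name: app
      image: nginx
  # dnsPolicy selects how resolv.conf is generated for the pod:
  # Default, ClusterFirst, ClusterFirstWithHostNet, or None.
  dnsPolicy: "None"
  # dnsConfig supplies explicit resolver settings; it is required when
  # dnsPolicy is None, and merged into the generated config otherwise.
  dnsConfig:
    nameservers:
      - 10.96.0.10
    searches:
      - my-ns.svc.cluster.local
    options:
      - name: ndots
        value: "2"
```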
D: I have some general feedback that's not actually about your design so much, which I think looks very nice, and I don't want this to dismiss your work. But I feel like we as a project are focusing on the wrong things as we're moving to beta. We have all these fundamental things that are broken with the core, and we keep piling on more and more stuff. Like, I deeply regret accepting the PRs about service entry and workload entry, and maybe even DNS.
D: At this point, I don't think we should go as far as to revert them, but I think we really should have focused on making the core stable before we added all this other stuff.
D: But yeah, I wanted to throw it out there. It doesn't necessarily mean that we need to stop everything; it's a death by a thousand cuts. So we could potentially have this done and then start freezing after that. I just wanted to bring that up.
E: I would pile on what John said and double down. I regret having the DNS stuff in Istio proper and having it stuck in limbo, kind of enabled on VMs and not enabled elsewhere, and putting all kinds of features that are not standard DNS into it, like telemetry and security. I think it's not our business as Istio to redefine how DNS works.
E: If there is a need for DNS to behave differently at the workload level, that's probably something that Kubernetes should address, in particular given the ambient goals of being transparent and compatible with the rest of DNS. The reason we did DNS in the first place was a security risk: if DNS is insecure, then the entire security model is undermined. But beyond that, you know, I'm really sorry we didn't have the strength to say no to adding other kinds of extra features and messing up how DNS works. That's my take.
D: My understanding, yeah, but the internal API, the XDS API, needs to be extended to support it as well.
B: Right, but XDS is not a user-facing API, so this would not be us creating a new commitment to users in terms of what APIs we support, the way that we did with workload entry and service entry. It would just be us respecting the API that is already there in Kubernetes, which, as I understand it, we sort of break: if you're using DNS policy in your cluster and then you install ambient, DNS begins behaving differently, right?
E: I don't think so. If we don't do the stub DNS, there is no change; I mean, DNS will keep working as it is. What stops working is the extensions and twists that we added to Kubernetes DNS, meaning that if you have a service in a remote cluster, with standard Istio you expect, because of how multi-cluster works, that it will be there with cluster.local. That is not a problem.
E: Everything works as expected in Kubernetes, but Istio DNS will return a different thing than Kubernetes DNS, in the sense that it will merge services that are exported from other clusters. And that goes back to the discussion we had about adopting the MCS Kubernetes standard for multi-cluster versus keeping the Istio model.
B: So today, if you have a cluster that has pods with DNS set to ClusterFirst, and you've got that working for you without Istio, and you install ambient: will the ClusterFirst DNS rules continue applying?
D: Part of this as well: I'm almost certain that NodeLocal DNSCache, the Kubernetes project that basically does what we're doing, doesn't respect this. I don't know how you could possibly respect it, given that it relies on reading the resolv.conf file in the pod, which, as Dana said, we don't really have access to.
F: I'll also add that Istio with sidecars only allows, I think, ClusterFirst. So if you don't have ClusterFirst DNS set, then it has a warning, and it says things might not work. So there are already some restrictions in sidecar mode. Okay.
G: Because, like, I agree that if this API is going to be a permanent Kubernetes thing, and it's stable enough in Kubernetes that we need to support it, then it makes sense to support it or respect it. Because I don't think we really do; like Keith and everyone else is saying, today we don't fully support it even in sidecar mode. So, you know, I don't see it as a priority for beta. But if, on the other hand, this is something, like John you're saying, because of what is happening in upstream Kubernetes...
D: I have a few thoughts. One is the DNS mode in ztunnel. Well, so, one: the goal of ztunnel is to be drop-in compatible with all of Kubernetes, right? Yes, but DNS can never meet that. So the DNS proxy is kind of always a permanent opt-in thing, because by definition it mutates the behavior. So 100% compatibility with all of Kubernetes is not actually a requirement of the DNS proxy, even though it is a requirement for ztunnel in general, I think.
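For reference, the opt-in that exists today on the sidecar side is the mesh-wide DNS capture flag shown below. This is the documented sidecar mechanism; whether ztunnel reuses the same knob is exactly the open question in this discussion:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Documented Istio setting: capture the workload's DNS queries
        # in the proxy instead of sending them straight to kube-dns.
        ISTIO_META_DNS_CAPTURE: "true"
```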
D: That doesn't mean that we can't add it, but it means that we don't have to, based on our philosophies. But then I would say: if we're going to add it, we ought to look at what the NodeLocal DNSCache project is doing for this, because it seems quite hard to actually implement, and I'd like to see what another project that's basically doing what we're doing does for it. If they don't implement it, that's a very strong signal against trying to implement it.
E: Some questions about the technical feasibility of doing what we claim to do.
E: Yeah, I think there is some objection regarding priority, and there is some objection regarding, you know, changing the behavior and our ability to be transparent and on by default. We cannot be on by default if we change behavior in DNS.
E: By the way, since we're on this topic: John, regarding service entry, workload entry and this stuff, since I'm one of the people who pushed for having workload entry support, I want to clarify that my concern was mostly related to supporting an XDS server sending stuff from, you know, federation and other sources, not necessarily the workload entry APIs that we have, which are arguably not critical for ambient, or something that we may need to support long term.
H: Just real quick, regarding your general comment about instability or unusability: I assume there's a burn-down list for that that folks are working on?
D: It should be in this 'drive ambient mesh to beta' list, so that's not a good one. I mean, that's probably also part of why it was done: it's very easy to say 'hey, we should go implement' (I don't want to pick on your feature, but) 'hey, we should go implement DNS'. That's a great, nice, fun project, it's well understood, we go implement it.
H: Just saying it's important that we, you know, are tracking everything in there.
J: By the way, I just want to make a quick point, though. John, I know you mentioned you kind of regret having the XDS stuff to support workload entry and service entry. But, on the other hand, having a stable API is also important for reaching beta, right? So that the XDS API can be stable, and people can look into moving between releases. I think that's also a beta requirement, even just for a single cluster. Would you agree with that?
D: No, no, we don't have any upgrade docs, for sure. I mean, we're in alpha still, so there are no upgrades at all. I absolutely agree that once we're in beta... technically we don't need it for beta; we just need it for beta 1 to beta 2, right?
D: That's a good point. We could start documenting the upgrade process that we intend to have, and have users doing that for alpha, even without committing to it working between alpha and beta, or between versions. Right, yeah: there's a difference between documenting the process and guaranteeing its support moving forward.
E: Fair enough. So, on this topic, I did quite a bit of research with the CNI upgrade and the ztunnel upgrade, and that's actually the last topic we want to discuss at this moment. I mean, with what John was saying about, you know, focusing on the most stable things that we can support, I don't see any scenario where we can do...
E: ...better than, you know, the steps that are normally done by Kubernetes when they upgrade node infrastructure, or CNI, or low levels of the kernel and stuff like that. Even if there were fancier ways, it's not clear that we need to support them, because you can have a viable and very safe product by just not coordinating the upgrade. Not sexy, not fancy, but it's the most stable, well-tested, bulletproof way to do upgrades, and we need to discuss whether we want to focus time on...
E: ...you know, fixing all the kinds of corner cases (ztunnel cannot be scheduled; CNI and ztunnel are mismatched; you know, removing the rules when ztunnel is upgraded) or whether we want to focus on the stable path.
F: This was a follow-up from some discussions that we've had, thanks, in the ambient API doc, on authorization policy in general. I took some feedback from the meeting that we had a couple weeks ago on the naming: this field was previously called 'protocol', which I think there's kind of consensus was too specific for what we were wanting to do, and so I've changed the name to 'layer'. I'm not married to that name.
F: If it's unspecified, then we will just use our existing behavior. You know, the specifics of the design are laid out here, along with quite a lot of alternatives considered.
F: I think that's pretty much the only risky thing here. This does require API changes, so we need some TOC approval. I'll also add that this design, for authorization policy, coexists with the more general design for adding targetRef to several different policy resources. So authorization policy would have this new layer field as well as a targetRef field. So that's the design; any questions?
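A rough sketch of what the doc proposes might look like in practice. The layer field is the proposal under discussion, not a shipped Istio API, and the targetRef attachment comes from the companion design mentioned above:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-l4-only
  namespace: default
spec:
  # Proposed field (previously named "protocol"): pinning the policy
  # to L4 lets validation reject L7-only attributes up front.
  layer: L4
  # Attachment via targetRef (companion design) instead of a selector.
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: my-waypoint
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/default/sa/sleep"]
```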
D: I like it. I'm biased, because it's fleshing out my proposal, so it's maybe unfair for me to say, but I think it's great; it solves a lot of problems. I don't love the names, but I don't care that much, and I'm happy to bikeshed them after we get consensus on the direction.
F: One thing to keep in mind here that I almost forgot is that this would also unblock, you know, a current bug where hairpinned authorization policy doesn't currently work, because we're currently skipping it since it's got a waypoint IP. This would allow us to actually apply L4 policy after hairpinning, for traffic that comes from outside of the mesh. Currently we don't really have a good way to do that, so I wanted to add that context.
F: Okay, well, if there's nothing else here: I think the ask on this doc is TOC approval, because there will be API changes. Then, in the interest of time, I'll try to aim for lazy consensus. If nobody adds comments on the doc, and any nitpicks are just with the approach, then you'll just, you know, start seeing PRs being submitted in, let's say, two weeks or so, with the new API.
B: What are we going to do about that? Is that a validation failure, an analyzer failure?
F: Validation failure, I believe; we call that out in the doc here. Yeah, that would actually prevent L7 attributes from being set with layer 4 configured. Okay.
E: Yeah, I will reserve all the cosmetic and bikeshedding comments for the discussion in the GAMMA and Gateway working groups, where other vendors are also involved, because I think eventually the authorization policy will need to be standardized, and it should be a priority for the Kubernetes community. Hopefully, at that point, we can remove some of the cruft that we inherited, keep a clear L4/L7 separation, and end up with a clean API. But meanwhile, whatever you do is as good as anything.
F: I'm glad you brought that up, because I've got an overdue action item to actually create a doc for GAMMA to try to tackle authorization policy, and I hope to handle that by the next GAMMA meeting. But I just wanted to get this one done first, because we've got this beta that we're aiming for.
E: I think in GAMMA the layers are clearly separated: you have HTTPRoute, you have TCPRoute, and so there is no more mixing, and I cannot imagine this, you know, attachment point and everything like that. So we don't have the mess that we have in Istio, where something may be HTTP, may be TCP; we don't know.
A: So, in general, do we have to bring this up in front of a TOC meeting, or do we just ping the TOC members separately? Because I think most, but not everybody, is here.
D: Yeah, the TOC members should be joining the meetings or following what's going on in the meetings. So, okay, but I think they're all here but Louie. So...
F: And one other follow-up, guys: I think any changes to the Istio API repo do require TOC approvers. So, I guess, as a general process question: if you actually just need TOC approval on a design doc, then is it just a matter of PR approval from the TOC? Is it a plurality? Is it a majority? I've heard APIs require TOC approval; are there any more details about the degree of approval, or the location where that approval should be gathered?
D: That's a good question. From a technical standpoint, just two approvals are required to merge. From a procedural standpoint, it usually varies based on the size of the scope of the API. So, for something targeting authorization policy, probably everyone at least having the chance to explicitly abstain is nice.
F: Yeah, sure. So this is another design doc, this one about replacing workload selector with targetRef, for ambient specifically. I presented this to the TOC and got general feedback; John, Lin, any of you who were there, call out if I get anything wrong, but I thought the feedback was generally: yes, we need this.
F: It's going to happen eventually, but the concern was about a little sentence near the bottom of the document, around waypoints ignoring workload selector even if they match. And so I put up three options for discussion here, and we have to make a choice. We can have waypoints not honor label selectors at all;
F: we can have waypoints honor them, if they match, in the absence of a targetRef; or we can do the second option for X number of releases, for kind of a migration period, and then move towards option one. My vote would be number three.
F: There's still going to be some inherent confusion with this direction regardless, and John called that out in the comments, just because there will be the usual, quote, 'selectors' for waypoints. To use targetRef with gateways, I think that's probably fine, especially with the validation that it's got to target some waypoint label. I think that probably makes sense, but yeah, I wanted to bring this doc back up.
F: This also involves API changes, so it also requires TOC approval, for just about every policy resource, and I called those out by name in the doc here. So, yeah, I wanted to get some feedback and see what folks were thinking when it comes to these migration options, for balancing between targetRef and workload selector.
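For illustration, the migration being weighed looks roughly like this. The first form is today's label-selector attachment; the second is the proposed explicit attachment to a waypoint (the targetRef shape follows the doc's direction and is not final):

```yaml
# Today: the policy binds to whatever pods match the labels, and a
# waypoint may or may not honor it.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: reviews-policy
spec:
  selector:
    matchLabels:
      app: reviews
  action: DENY
  rules:
    - from:
        - source:
            notPrincipals: ["cluster.local/ns/default/sa/sleep"]
---
# Proposed: the policy names the waypoint Gateway explicitly, so a
# waypoint never has to guess from labels (options one and three above).
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: reviews-policy-waypoint
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: reviews-waypoint
  action: DENY
  rules:
    - from:
        - source:
            notPrincipals: ["cluster.local/ns/default/sa/sleep"]
```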
D: I'm good with three, or one. And if we do three, I would just say do it for one release, so it's really more of a very temporary thing.
F: Having the waypoint just know which policy applied to it in the absence of a workload selector, kind of as a migration step: I thought about that, but maybe I misremembered what his suggestion was. But I feel like...
F: It feels safer to be as explicit as possible, and so that's kind of what I presented in the options that I displayed.
J: Yeah, I prefer option three. I do have a quick question: is it possible for this to be automatic? I'm just thinking out loud here: is it possible that the authorization policy can figure out which waypoint to map to based on the workload's labels, so that we don't have to ask a user to change it? I'm trying to think.
F: Yeah, anybody could apply, you know, an allow policy, and oh, now this applies to the super-sensitive workload that is in the same namespace. You can probably alleviate that with, like, service account waypoints, but yes, potentially pretty bad.
E: Kit, if I can put a requirement here (I was discussing it in the chat): if you are a user who has migrated, or you don't want to hear about the old issues and you just use ambient, I think it's very important to have some way to specify that only targetRef is used, so we don't have to worry about workload selectors. The other thing is that you have a clear path.
D: Can I address that? Like, any user adopting ambient, regardless of this proposal, needs to migrate their authorization policies. You're sharing other slides, by the way, Francis.
D: I mean, it's fine, but I just want to make sure you don't accidentally show something. So: the migration that option three is allowing is migrating from alpha to beta, which, quite frankly, is not relevant. So I actually changed my mind; I don't want to do three. Like, I was okay with it because it's probably trivial to do, but we've all already agreed that we're willing to make breaking changes in...
E: Is that what you thought? No, John, what I'm saying is: at the end of the day, yeah, they migrate somehow; they change their policies. But if you just start with ambient and don't have any Istio policy, or if you somehow migrated, that's what I care about: to have the clear path and no ambiguity. You know what I mean: you have targetRef, and nothing else is a concern. I mean, somehow, let's assume the nightmare is over; you migrated.
K: Okay. I think, in general, in Istio we lean more towards new and shiny things, but we often leave the current users behind, which hurts us, right? So, while I understand the UX for new ambient users shouldn't be bad, I think there's a long tail of sidecar users, right? I don't know if you can just remove or break the API for them. So, I may have missed some part of Costin's suggestions, but was he saying to remove the workload selector, or not honor it?
E: We are changing a lot of APIs anyway, and, you know, the workload selector will clearly not work in some cases. So there is no question that users need to migrate to a new API. Whether we keep them, I don't know; I mean, I'm happy either way. But at the end of the day, after you've migrated, you should have a clear API without legacy. That's... and that's...
F: Yeah, and just to clarify: workload selector is still being used for ztunnels. I should probably change the title to be more specific in the doc, but this is specifically about adding targetRef for gateways in the L7, workload-selector-eligible policies. Those will still be around, and for sidecars as well, and so users actually have more granularity if they want to make ambient-only policies.
F: The difference is that if they create an L7 policy in a namespace with no waypoint, then the policy just won't apply; that's the gotcha there. But for sidecar users, the weird thing is going to be that they've got a heterogeneous namespace with sidecars and ambient pods: their L7 namespace-wide policy with no targetRef will not apply to the ambient pods.
K: I see. Yeah, I mean, that's something that we can maybe live with, just because, like: is this heterogeneous deployment, in which one namespace has both sidecars and ambient, going to be a realistic scenario as people onboard to ambient?
F: And that's my viewpoint, right: not in beta, I don't think. Like we've brought up in the past, you know, maybe ambient beta is just greenfield as we try to scope the interop, right? I personally think that that's a valid kind of stance to take for now, and we can flesh out the interoperability story later. But yeah, a mixed namespace: I don't see that in the near future at all.
F: All right, the ask here is the same: get some TOC approval on this doc as well, because it does touch multiple different policy objects. So, appreciate the reviews in advance. Thanks, everyone.
A: All right, next up: Costin.
E: Yeah, I should be fast. We had some discussion; it's also related to upgrades. Right now, with ztunnel, when it upgrades, if you have a node that is full... So, basically, let's say you start with two CPUs and you are already allocating all of them, so basically all the CPU is reserved:
E: ztunnel will not start, so pretty much everything goes down in flames. And the solution is to have a priority which allows evicting some of the workloads to put a new ztunnel in place, and that's where we had a long discussion, you know, between me and John in particular. One way is to move into kube-system (you know, run in kube-system), and then we can use the node-critical priority.
F: My vote would be to create a specific priority for ztunnel itself, just because I thought that allows platform admins to look at any number of things that might be running in their cluster and do the priority calculus on their own. Piggybacking on kube-system's node-critical priority just couples things together that you might not want to be coupled. So that would be my vote.
E: Yeah, I think John's concern was that in some cases that doesn't work. Because historically we decided to go with kube-system, and apparently there is some prior art, or other reasons, why some cluster or some vendor or something will be broken if we take this approach, which is, yeah, clearly cleaner and so forth.
D: What I recall (and I was going to do some research and didn't get a chance to, so sorry) was that, for GKE, we wanted to set the priority for CNI to whatever the max is, and I think on GKE there's something like: you couldn't do that outside of kube-system. And so our solution was to put it in kube-system, which is also what a lot of the other CNIs do, like Calico and Cilium; they use the system-node-critical one, whatever the built-in one is. What I'm not sure of...
D: ...is if that was a solution, or the only solution and we picked it for some reason and didn't pick the other one, or if there's some other reason. You know, what I don't know is whether people had permission to create priority classes; or whether we're just happy running as super-root admin and it works in our testing, but in the real world it won't, or things of that nature; or whether GKE or other platforms will block high priorities, or something along those lines. It feels like they should; like, you shouldn't just allow anyone to say that they're the highest priority, probably. But I'm not sure.
E: To be clear, the priority is not the highest. I mean, the priority we are setting is just above zero; the normal default priority is zero, and we just need to kick out some pods running at zero. So you don't necessarily need to be as high-priority as node-critical; it can be slightly below.
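Concretely, the kind of priority class being described would look something like this; the name and exact value are illustrative, not something any chart ships today:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: ztunnel-priority   # hypothetical name
# Just above the default of 0: enough to preempt pods running at the
# default priority on a full node, without competing with the built-in
# system-node-critical class that CNIs in kube-system use.
value: 1000
globalDefault: false
preemptionPolicy: PreemptLowerPriority
description: "Lets ztunnel schedule onto full nodes by evicting default-priority pods."
```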
B: It also may be worth revisiting that question with the GKE team. I know that, I think, I did that research on the CNI priority two and a half years ago, so there's a good chance that some of that may have changed. There's also the question of how it works with GKE's new mode, what's it called, Autopilot.
E: I can't think beyond GKE; I mean, other vendors are the problem. I mean, some may or may not allow running ztunnel in kube-system. What John proposes, to move ztunnel into kube-system, is great; I mean, it's probably the simplest, cleanest solution. But if some other vendor decides that putting ztunnel there is not allowed (because, again, kube-system is restricted), then we mess up, and then we have to, you know, support both kube-system and non-kube-system, and it's a big mess.
G: A quick question; maybe I missed something, or I'm being dense. Why does it matter to any of our things what namespace ztunnel is in? Like, why is it not just a Helm thing: you could set it to whatever, deploy it wherever the heck you want? Like, it shouldn't matter, right? Or is that what you're actually arguing for here?
G: If we can override it, I don't think it matters what the default is; like, kube-system is fine. And if there's a corner case like you're saying, then, you know, people can just set that; they can override it easily. As long as we're maybe testing it (maybe we've got a test in, like, a non-standard namespace, just to catch stuff), that's fine, yeah.
E: Okay, I'll go ahead with what we concluded, which is basically just making it configurable, and it's one more...
D: ...behavior. But I mean, I'm fine with making it more customizable as well. Customizable, rather, yeah.
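In other words, the namespace (and, per the discussion above, possibly the priority class) stays an install-time choice. A sketch of what that override could look like, assuming Istio's published Helm charts; the priorityClassName knob is hypothetical and only illustrates the kind of value being made configurable:

```yaml
# values.yaml sketch for the ztunnel chart, installed with e.g.:
#   helm install ztunnel istio/ztunnel -n kube-system -f values.yaml
# The namespace comes from -n at install time, so deploying into
# kube-system (or anywhere else) is an operator choice, not baked in.
priorityClassName: system-node-critical   # hypothetical knob name
```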
J: Costin, are you done with this topic? Yep? Okay. If so, I'd like to quickly spend the next two minutes to discuss the service entry and workload entry points. John, you were trying to raise that you were pretty concerned. So, right now we already have a little bit of implementation for that in the code base, right? Would you... so, I think there are some bugs people are looking at fixing. Kevin, you probably know more about where the latest things are than me.
D: I mean, I would say yes; like, it's hard to say we're not going to accept a bug fix in that scenario if someone works on developing it.
D: But I would strongly encourage people to focus on the things that are beta blockers. To some extent, it's more of a suggestion on our future priorities. Like, in hindsight, maybe we shouldn't have actually even started working on those; but now that we have, the ship has sailed, to some extent, so I don't expect to revert them by any means, or just completely abandon them. But I do think we would be better off sort of trying to make sure that ambient can be...
D: I mean, we have a list of things that are beta blockers, but, at minimum, someone should be able to go deploy in a normal Kubernetes cluster and access Kubernetes services and pods and whatnot, safely. Like, you can't even deploy the bare-bones 'I want mTLS everywhere'. I mean, you can do it, and it works great for a demo, but it doesn't work for production, right?
D: It's all in the beta blockers list. Like, everything is a security hole; there's no security. At any given point, there's like a 50% chance that your traffic is actually going to have policy enforced, pretty much: when a pod starts up or shuts down, ztunnel restarts, istiod restarts. There are all these gaps around, like, every aspect of the edges of things' life cycles.
K: Yeah, I think it's a fair ask to make sure that, as a community, we prioritize and look at some of the beta blockers overall. But I still think, you know, we have to move beyond it also, so it shouldn't preclude contributions that are looking ahead. But yeah, we need to get into a stable state for single cluster; I'm with you, yeah.
A: Okay, we are at the top of the hour; in fact, a couple of minutes past it. Anything else that we need to discuss on this topic before wrapping up?