From YouTube: Technical Oversight Committee 2021/05/17
Description
Istio's Technical Oversight Committee for May 17th, 2021.
Topics:
- User Experience WG Roadmap
- Environments WG Roadmap
- Breaking change to unsupported .global feature: Decision to republish release notes warning that this is not supported.
B
Okay, and John, you said the other ones are not short, right?
F
That's probably short; it was just that we may want to start looking at it. And Iris or someone on her team is interested in volunteering. So.
G
Though Iris has office hours in the Beijing time zone, she's not usually at PoC.
G
For Intel, in town? Okay, I can work with her. I have a meeting scheduled with her. UX does an Asia-friendly meeting once a month, and that happens to be this week, so I can follow up with her at that time, if that would be worthwhile.
D
So we still need, I guess, two more, right? So this is our time to ask for anyone else who is interested.
D
I think it will be perfectly fine to have him or her as a shadow this release, if the intent is, you know, that that person will be continuing to work with this team.
I
So I haven't done a release for a couple of releases, and the branch cutting and other things were added in the meantime. I think they're pretty stable. That said, Eric, myself, and a couple of others are regularly involved in the process anyway, so if they need any help, we can help.
J
Yeah, I would say the branch cutting is relatively well automated, but Brian and Eric have knowledge of a lot of the steps that we kind of have to reach out to them for, and there's a lot of manual stuff.
G
Okay, so I think you'll find that our 1.11 roadmap is very much just executing on the 2021 vision that we talked about a few months ago. Items are pulled pretty much directly off of that, without too many exceptions.
G
So I'll just kind of go down the line. You'll see that we have a lot of P0s. First on the list is getting analyzer feedback into the status field. This is a carryover from 1.9; it's currently blocked on a refactor that Jason Wang is working on.
G
Next up, we want to rationalize istioctl commands by role. Right now, istioctl has a lot of commands: some can be run by application operators; for others you need to be a control plane operator. We want to organize them in a way that makes it obvious to our users who should be running which commands, because their permissions on the cluster vary quite a bit. That's a P0 for this release as well.
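As a sketch of what grouping commands by role could amount to, here is a minimal illustration; the command names and role assignments below are hypothetical, not the actual istioctl taxonomy:

```python
# Hypothetical grouping of istioctl-style commands by the role that can run
# them. The command names and role assignments are illustrative only; they
# are not the real istioctl command split.
COMMANDS_BY_ROLE = {
    "application-operator": {"analyze", "proxy-status", "describe"},
    "control-plane-operator": {"install", "upgrade", "proxy-config"},
}

def roles_for(command: str) -> list[str]:
    """Return the sorted list of roles allowed to run the given command."""
    return sorted(role for role, commands in COMMANDS_BY_ROLE.items()
                  if command in commands)

print(roles_for("analyze"))
print(roles_for("install"))
```

The point of such a split is exactly what is said above: a user with only namespace-level permissions can immediately see which commands are expected to work for them.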
G
We've already seen the Istio upgrade survey for 1.9. Istio upgrade surveys are now a regular thing that happens with every release. Istio 1.10 has a link in several documents, as well as in the istioctl command output: when you succeed at an upgrade, you're asked to fill out a survey. So that's going to be an ongoing effort on our part. We also have some interview follow-up to the 1.9 surveys that was requested, and I'll be heading that up in the 1.11 release.
G
Next up, we've been talking a lot about feature maturity. We'd like to be able to tell a user: here are all of the non-stable things you're using in Istio. Currently we have the technology to do this with annotations (today it's disabled, and we'll get to that on the next item), but we need the technology to do this with APIs, labels, and other pieces. This is something that's come up in the field multiple times in recent weeks as a high priority.
G
I do want to call out that we do not have an owner for this. Nathan Mittler was driving it for a while; Jason Wang was also driving it for a while. Both of them have moved on to other topics or other tasks, so we are looking for an owner in that important space.
G
Next up, as a follow-up to that: we do have an alpha-maturity analyzer that exists and is disabled. Well, it only works for annotations today, and it's disabled because so many of the things that we use by default in Istio are alpha. If you just ran istioctl install with no options, this analyzer would report all kinds of problems in your cluster, which is not a great user experience.
G
So when we talked about it last in the TOC, I agreed to write suppression logic for anything that we use by default, and I'll be going through and doing that in 1.11, so that at least in the 1.11 release we will be able to see what alpha annotations are being used. I do want to call out, though, that as we begin showing this to users, it means we need to treat the things we use by default as though they were beta or higher.
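The suppression logic being described could look roughly like the following sketch: filter the analyzer's findings against a list of alpha features that Istio itself enables by default. The feature identifiers here are made up for illustration and are not real Istio feature names:

```python
# Sketch of the suppression idea: the alpha-feature analyzer reports every
# non-stable feature in use, and a suppression list hides the ones that ship
# enabled by default, so a plain `istioctl install` stays quiet.
# The feature identifiers below are hypothetical.
DEFAULT_ON_ALPHA = {
    "alpha.example.mesh-default-a",
    "alpha.example.mesh-default-b",
}

def suppress_default_findings(findings: list[str]) -> list[str]:
    """Drop findings for alpha features that ship enabled by default."""
    return [f for f in findings if f not in DEFAULT_ON_ALPHA]

findings = ["alpha.example.mesh-default-a", "alpha.example.user-opt-in"]
print(suppress_default_findings(findings))
```

With this filter in place, only features the user explicitly opted into would surface, which matches the trade-off raised above: anything on the suppression list is effectively being treated as beta or higher.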
G
I thought that what I just represented is what the TOC had asked for about four weeks ago, when we pulled it out of 1.10. If I'm wrong, maybe we need another short design session, but I don't want to derail the roadmap for that purpose. Does that make sense?
B
I remember the discussion to some degree, but I don't remember that specifically being the totality of the conclusion. Okay.
D
I don't think that means that that particular feature by default becomes beta or higher. Okay, right. So those two things are separate in my mind, and that's what I think Louis is also trying to say.
G
Okay, I'll make sure to revisit that before we begin execution on it, to discuss the status of these suppressed APIs. In the meantime, I'll continue along unless there are objections. Okay, all right! Next up: we've again heard from the TOC in the last few weeks that we need to support stable API promotion.
G
We do not have the API machinery in place yet today to label, say, VirtualService (or I know some of the security APIs are looking to be labeled) as stable. And that's not just a maturity level that we put on istio.io; that's actually the Kubernetes API version string moving to just v1 instead of v1beta1, for instance.
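Mechanically, that promotion is visible in the Kubernetes apiVersion string itself. A small sketch of classifying such a string by its maturity suffix (a simplified view, not Kubernetes' actual version parser):

```python
# Sketch: classify a Kubernetes apiVersion string by the suffix of its
# version segment. Promotion to stable means the version segment becomes
# plain "v1"-style, e.g. networking.istio.io/v1beta1 -> networking.istio.io/v1.
import re

def maturity(api_version: str) -> str:
    """Return "alpha", "beta", or "stable" for a group/version string."""
    version = api_version.rsplit("/", 1)[-1]        # e.g. "v1beta1"
    match = re.fullmatch(r"v\d+(alpha|beta)?\d*", version)
    if match is None:
        raise ValueError(f"not a Kubernetes version string: {version!r}")
    return match.group(1) or "stable"

print(maturity("networking.istio.io/v1alpha3"))
print(maturity("networking.istio.io/v1beta1"))
print(maturity("networking.istio.io/v1"))
```

The group/version strings in the example are real Istio values; the classification logic itself is only an illustration of what "the version string moving to v1" means.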
G
We're moving an owner from a P2 to a P0, so that's a good trade for us. Okay, I'm gonna move this to unowned. All right, next up: external control plane troubleshooting! This is something we have been targeting since 1.8, and there have been lots of architectural discussions, etc. We actually have the architecture in place now for the communication that needs to happen between an external control plane and istioctl, thanks to Ed, who made a ton of great progress in 1.10.
G
Now that that communication channel is in place, Ed has plans to make use of it across all of the eligible troubleshooting commands in istioctl. There are one or two commands that it won't work for; those are edge cases that we know about. But for the vast majority it should, and so we want to make use of it immediately, so that all of our users can use these troubleshooting commands, not just control plane operators.
G
Now we're getting down into the P2s. We would like to have some opt-in usage data collection: something that says how many times a given istioctl command was called, or how many times the various analyzers fired, to get an idea of what is useful for our users and what is not. This would be opt-in.
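A minimal sketch of what such opt-in counting could amount to; the command name and analyzer code below are hypothetical, and nothing here reflects real istioctl internals:

```python
# Sketch of opt-in usage counting: increment a counter per command
# invocation and per analyzer firing, and let the user inspect the
# resulting report before explicitly choosing to send it anywhere.
from collections import Counter

command_calls: Counter = Counter()
analyzer_fires: Counter = Counter()

def record_command(name: str) -> None:
    command_calls[name] += 1

def record_analyzer(code: str) -> None:
    analyzer_fires[code] += 1

record_command("analyze")
record_command("analyze")
record_analyzer("IST0101")  # hypothetical analyzer message code

# The user reviews this report; sending it is a separate, explicit step.
report = {"commands": dict(command_calls), "analyzers": dict(analyzer_fires)}
print(report)
```

Keeping the collected data this small and this visible is what makes the "explicitly send the file" model discussed below plausible.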
D
Mitch, my recommendation on this would be: if you're doing some surveys, try to tease out whether people are actually willing to opt in and actually send data to an open source community. It has been a hurdle in the past; I've not seen other open source communities do that. So if we do, we also have to figure out where we are going to house this data and who can have access to it.
G
We sort of expected to bypass some of the concerns by keeping it incredibly simple, like a spreadsheet that we can store in the Istio community drive. What that does is keep any one company from having ownership over the data; it belongs to the entire community. And again, because it's totally opt-in (you have to explicitly send the file), the user has a lot of awareness of what's being sent up.
G
And the last one on the list, newly unowned, in our discussion about API maturity. We heard (and I think Ed and I were both unaware of this) that, unfortunately, istioctl still struggles with some of the APIs that have promoted from alpha to beta in the last year in the networking space, particularly commands like analyze, which can accept config either from a live cluster or from static files.
G
There probably should be, and that's something I've had chats about with a few other people in the community, even across working groups, as the working groups get pretty thin. You'll notice that there is not a wide variety of names on our roadmap this release.
G
There could be some value in having a cross-project priority list of P0s, to understand the relationships. You know, Environments is coming next; I'm sure they have something really important related to revision-based upgrades, because those are really important for our users. So knowing the relative priority between that and stable API promotion would help organize our labor a little bit more.
G
I'll call it out in reverse, though: we sort of expected to hear from the TOC which things are most important. I have to imagine that's got to be the API stuff that the TOC values most.
B
I think the surveys are really useful. It's hard for me to balance between analyzers and feature maturity; I don't totally understand. I mean, I get the concept of rationalizing commands by role; I'm just trying to map it onto feedback that I've heard about the product, in my head.
G
So I guess there are two efforts there. One is: make as many of them work as possible, and that's Ed's work on external control plane troubleshooting. The other is: for the remaining commands that are just never going to work for all users, make it clear which users should expect those to function.
A
I guess I haven't found the need to check the status myself in the past few releases with Istio; I guess part of the reason is that it's been working pretty well. So I certainly agree with what you said, like being able to have stable API promotion, and the upgrade survey. Those are super important, because we constantly hear our users complaining about upgrades, and also about clarity around our APIs. I was just wondering about the value of the analyzer status.
G
Yeah, I think we could probably move that down to a P1. Ed, is that good with you? Cool, I'll reorganize that. I did want to call out, with regard to the APIs, since we're hearing that there's a lot of importance there: there were four or five people who have been very opinionated in the TOC meeting about API status and maturity.
G
We agreed to handle it in the User Experience working group. Last week, when we began discussion, we had only two of the opinionated people, and I worry that if two of us come to a conclusion and bring it back to the TOC, that's not a useful consensus for us to present.
C
I was intending to go, but I had a conflict, unfortunately. So I am interested in trying to help guide that as much as I can, Mitch. (Okay, thanks, Ben.)
D
I was just gonna add: same here; you know, feel free to tag me if you need help. And the other thing is, it looks like the feature maturity / API labels item really wants an owner. So if moving this priority gets us an owner, that's good. If not, I guess we can go look and see if there are other folks available to work on it.
G
Yeah, I could talk to Jason about moving; he's worked on both the analyzers and status, as well as the feature maturity. The challenge there is he's probably three months invested in the refactor, and so I'm a bit hesitant to pull him out of that, with all of the context that he's built up, etc. It sounds like he feels he's pretty close to completing the refactor, and that would be a lot of lost work.
G
Okay, yeah, and I think a one-off meeting is a good idea, especially this week, where we have, again, our Asia-friendly meeting time: Wednesday at 7:00 p.m. Pacific time, which is probably not going to be popular with a ton of you.
B
I'm just going to make a note for Shweta to come back when we do look, maybe, at some cross-product prioritization, because I think there are some items in here that are pretty valuable. And I captured, I guess, the general TOC feedback: we're super interested in seeing the feature maturity stuff make progress, the API and feature maturity stuff.
B
Okay, you said Environments working group, but it's not actually on the agenda today. Am I wrong?
M
And we can make it next week as well.
M
I think it was, technically, but we would be happy to do it next week. Okay.
F
We stopped documenting it; there's no code whatsoever anywhere in istio or istio.io about this .global stuff, but some people still have these custom EnvoyFilters that do this. So we have no test for it whatsoever, which is how it broke, obviously, and the change that we made basically just completely breaks it, because it depends on some Envoy filter that we don't even use anymore, because we changed how we do the multi-network.
N
Just to be clear, this has been discovered to have been broken probably as far back as, like, 1.7-ish. We never actually had tests for it, even before we kind of refactored multi-cluster.
F
So maybe it was working but not supported, and now it's not supported and broken.
F
No, there's nothing documenting *.global; there's nothing on istio.io at all. It's completely gone. So I guess the question is: do we still care about it at all? Was this just...
D
All right, so what I was gonna say is: I mean, I've been following this in the past two or three releases at least. Like John said, *.global is not in the current documentation, so the current multi-cluster support works without it. So if a user still uses a new release of Istio and wants to get this functionality, what are they trying to do? Like, I'm trying to understand what the use case is here, and if there is no tangible or real use case, I would be okay dropping it.
F
Yeah, it used to be that people would have (and correct me if I'm wrong, because I'm not actually an expert here) their internal... like, basically, if you're familiar with MCS: by default, everything is just cluster-local, and then you have clusterset.local, which is the entire mesh across all clusters, except we call it .global. So .global hits the entire mesh, whereas cluster-local hits just your own cluster.
N
Yeah, and just to be clear, this .global was only used in a multi-master setup. If you had, like, a primary-remote, cluster.local would get you mesh-wide anyway, so it really only applies to, you know, multi-master configurations.
N
So one thing I'm kind of proposing is: in supporting Kubernetes multi-cluster services, we're gonna need to support clusterset.local, and it's actually kind of one of the bullets here. So I suspect that if we sort out the plumbing for aliasing, effectively, cluster.local with clusterset.local, we can probably do the same thing with anything we want.
N
We can kind of make a generic aliasing mechanism where, if people really need, for some reason, to have .global, it will work. It obviously might not work the way it used to, because cluster.local will still get you mesh-wide, but if they actually have clients that are, you know, hard-coded to .global, it should at least work.
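A generic aliasing mechanism of this kind might amount to little more than a suffix rewrite. A sketch, where the alias table is an assumption for illustration and does not reflect any real Istio configuration:

```python
# Sketch of a generic hostname-aliasing mechanism: any configured alias
# suffix is rewritten to the canonical in-cluster suffix before routing.
# The alias table below is illustrative only.
ALIASES = {
    ".global": ".svc.cluster.local",
    ".svc.clusterset.local": ".svc.cluster.local",
}

def canonicalize(host: str) -> str:
    """Rewrite a host that ends in an aliased suffix to its canonical form."""
    for alias, canonical in ALIASES.items():
        if host.endswith(alias):
            return host[: -len(alias)] + canonical
    return host

print(canonicalize("reviews.default.global"))
print(canonicalize("reviews.default.svc.clusterset.local"))
```

Under such a table, a client hard-coded to a `.global` name would keep resolving, even though the routing behind it is the standard cluster.local path.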
M
To step back a bit here: the real problem, I think, is how we integrate or interoperate with external DNS servers, or with people who do not use Kubernetes DNS and run in other environments, where you have, you know, kind of random domains, random DNS servers, where you programmatically control them or do whatever you want. So it's a matter of interoperability, and, you know, kind of not being completely tied to the behavior of, you know, the default install, basically.
A
Yeah, I think that's totally the value of that .global, or whatever host a user might be able to specify, because the caller may reside outside of Kubernetes, or may be inside of Kubernetes. Being able to configure a particular address, the same destination address regardless of where your clients are, is pretty important.
B
I still don't think we actually have a succinct description of how .global was intended to be used. Right? The option was: we would give you cluster.local, right, which by default would be mesh-wide, but you had an option to make it literally cluster-local, and then you could use the .global name, right, to get global behavior even if that was the setting.
N
Not quite true. So, like, the example... yeah, there wasn't that much to it. If you had, like, multi-cluster with, say, one control plane but a bunch of remotes, cluster.local would still get you mesh-wide. It really was only a hack for DNS to make multi-master work.
D
So if you go to that repo: this was pretty bad in terms of management. I think the feature might have become, like, alpha, but some of the core pieces it needed were just removed.
M
So I have a problem with this CoreDNS implementation. I mean, there are projects, like kube external-dns or CoreDNS, that give that functionality with, you know, proper DNS servers.
M
Agreed. I mean, the CoreDNS plugin was definitely the replacement. What John is saying, with the agent intercepting DNS, is one solution; having a synchronizer that pushes the DNS entries into, you know, Google Cloud DNS, or whatever real DNS you have, is another solution. So there are, you know, kind of proper solutions for the split-horizon DNS that we're discussing here, or whatever it is.
O
If that's how this was really intended to work, though... because this was really just resolving a specific domain suffix to Kubernetes, to Istio services, and it was wired in directly with the Kubernetes cluster DNS as a delegate. And, I mean, it's not really like a true DNS server; it's really, you know, sort of popsicle sticks and bubble gum, kind of. No.
M
Yeah, but that was because you need to test it: you have the requirements for testing, and you cannot really have an external, you know, properly managed DNS that you put in the unit tests or in the... So what I'm saying is that the proper solution, quality-wise, is to actually have a proper DNS server with a proper domain.
M
You know, foo.com, and then have a synchronizer that is pushing entries into foo.com's proper DNS, which is, you know, a global DNS server, and then everything is the same thing. But in practical terms, to implement it in Istio, you need to have it working, kind of, and for that you are limited in what you can do.
D
I mean, yeah, there are products out there which do global DNS, right, so we don't have to solve those things. I think, again, the question that Sven had is: is this the thing we need to solve? Or do we need to say: hey, if you are using the .global suffix for doing multi-cluster routing, here is the new way, which means you will have to change your clients now; or, yes, you have to change some of your route rules or service entries so that they no longer need it.
D
So, actually, yeah, you're right. I mean, it's a bit of a churn, but I don't know if you want to give them a smoother migration path; that's what I was asking.
F
Oh, it was an EnvoyFilter, and we broke the EnvoyFilter because it depends on the existing filter that we removed, right? They can... I mean, there is an out: they can turn off the patch and go back to the insecure mode, where they expose every service on the public internet with no authentication. But that's a pretty...
A
Yeah, it would be, yeah. So I think that, with .global, there's definite value in a consistent user experience, whether the client is outside of the mesh or inside the mesh: they can call the single destination service with whatever name they like. It doesn't have to be .global; it could be, like somebody was saying, foo.com, right? So if I'm a client outside of the mesh, calling my services running inside the mesh, whether it's on my laptop or whether it's a pod within Kubernetes, it's the same endpoint.
A
That was not what .global did initially, but I think it had that intention when Shriram designed it in the first place. So the .global: I think it's a bad name, and also it was built on the CoreDNS implementation, which is bad, and for which we have a replacement, right, with the DNS proxy; it's more mature. So I think we should still enable people to do things if they want a consistent naming.
M
On solutions that exist: there is a Kubernetes project, external-dns, to sync Kubernetes to an external DNS server, with about 20 DNS servers supported.
M
It's not spoofing, because you are synchronized with the real domain: so you have example.com, and then, you know, names show up; you can go get an ACME certificate; you have, you know, your own... oh.
M
Like I said, .local would be, you know, kind of a well-defined name, but it's not taken out of proto.google.com.
O
I think that's pretty much it.
O
So the EnvoyFilter was just substituting .global with cluster.local, so when it comes in to the other gateway, it routes to the correct service. But if you're using passthrough on the gateways, you can just call everything with cluster.local, right? You have all the endpoints from the other cluster in your local cluster, so it's just going to route out there, and if it ends up going through the network gateway, then it ends up in the right spot.
B
And that was, again, mostly for people who were running stuff outside of a Kubernetes cluster, correct? And our preferred answer for them anyway is: please run inside the mesh.
F
It's not about the DNS, okay; it's the actual way Envoy is configured. So the EnvoyFilter basically doesn't work; it tries to do something that's completely... yeah.
N
Okay, there were also, during the 1.7.x time frame, a couple of folks who raised a similar issue, because .global was broken then as well. Every single issue I've seen raising this is about in-mesh traffic.
B
Yes, yes; no, no, I understood. Obviously it's a pretty edge case, right, but we are contemplating making a deliberate regression. Another regression; well, depending on your point of view, some customers will think it's a regression. The question is how many of them there are, and what their cost to remediate is.
B
I mean, we could just say that it was an experimental feature and it has been logically and semantically replaced, and going forward you should look at MCS, and DNS support for MCS, if you want to do this from stuff outside the mesh, which is already an edge case.
N
So I've already... I posted a link to the previous release note. I mean, if we want to expand beyond that, we can. Oh, sorry, did I miss that?
A
Yeah, so the problem with this: certainly we read this too, but we're still using .global, and Gloo Mesh, on 1.8, using the new model, Nathan, that you wrote, which is multi-primary. So you could potentially do multi-primary with .global, or whatever hostname you like, with the DNS proxy. That's the confusing part, right? This note was specifically about the replicated control planes model, which we're not using, and it seems to indicate, you know, that that's not supported. But it didn't really say anything about people still configuring .global with the new multi-primary model.
D
We can add clarification to this note. It looks like, as Lin was saying, there was some confusion, and we can publish it again through our social media channels if that's needed. But it feels like the first attempt was good; it just maybe wasn't clear enough. And then, obviously, we didn't have automation, so people didn't... you know, nobody reads upgrade notes.