From YouTube: Technical Oversight Committee 2021/04/26
Description
Istio's Technical Oversight Committee for April 26th, 2021.
Topics:
- Validation webhook config and revision update
- Networking WG Roadmap
A
B
So I'm filling in while she's out. I know the code freeze was technically moved to the 11th. This last week was supposed to be the last week of testing, but I think it's okay to continue for next week, since we've got the code freeze moved.
C
B
Those are already assigned. I've got one that I'm working on, Mandar has some that he's working on, Mitch has some that he's working on, yeah. We just need to complete them at this point.
B
And the release health is updated as well, so we still have four blockers: seven P0s and 17 P1s.
D
B
So I added this item to the agenda as well. For previous Istio releases, we had community managers write these with kind of a marketing spin to them, and they got linked to in various places and that sort of thing.
B
A
We can talk about this, maybe in steering. How much time do we have to resolve this?
A
Let me copy this over to steering, thanks.
E
Yeah, that makes sense. Is this for the release announcement blog that Dan Ciruli used to write?
F
A
E
D
On what specific point there?
D
Right, yeah. So the tag stuff, unfortunately, doesn't fix the validating webhook issue. We had a proposal for introducing an API to control which revision handles validation, and that's kind of stagnated; I don't think we have a good way forward on that proposal. Sam, why is this?
D
I'm curious if there's something we can do to help. Yeah, so there's a doc, and there's just feedback that I'm having a hard time resolving, pretty much — there's a lot of valid feedback there. It's the default revision proposal, by the way, that was going to control which revision handles istio-injection=enabled injection — the default injector — and also which one handles validation, which is the harder problem, probably.
G
The problem is not that anyone is blocking it; the price is that you don't have a solution that doesn't regress in some area. I mean, some ways are possible, but that would mean that each istiod instance will have more permissions — require more permissions to install — which is something that we certainly do not want to do. So basically we lack a good solution that satisfies, you know, that is moving us forward.
G
So maybe it's not a problem that we actually need to solve urgently, because if the definition of the problem is "I can install Istio without having a default revision," well, maybe we can wait a bit until we have a solution for that.
G
So we have a bit more flexibility for advanced users on how they configure this kind of stuff. Basically, the proposal is to move the validation out of the charts and have it as a completely separate standalone step, where the user is actually explicitly specifying what validations they want — because right now we have the spaghetti of charts and installs that we have, and with multi-revision and everything else it's getting more and more difficult to hack it, to duct-tape it.
E
What's confusing is, in the revision tag documentation we kind of showcase to the user: hey, this is your production revision, where you are today, and then this is how you move to a newer revision. So we also kind of tell people: you should put a revision on your old version, and then this is how you upgrade to a new revision.
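[A minimal sketch of the revision-tag pattern being described — a namespace points at a stable tag rather than a concrete revision, so an upgrade only re-points the tag. The tag name `prod-stable` is illustrative, not from the meeting.]

```yaml
# Hypothetical example: a namespace opts into injection via a revision tag
# ("prod-stable" is an illustrative name), so moving to a new revision means
# re-pointing the tag rather than relabeling every namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    istio.io/rev: prod-stable   # revision tag, not a concrete revision
```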
E
I just feel like the best recommendation wasn't super clear, and as a community it would be great if we can, you know, have clear guidance for the user: whether they should start with a revision in the first place, or whether they should just always do the default — but then later on they can always move to a revision once they've met the requirement, if the first release was the default.
D
So installing with an initial revision — that should work with the fix here. And one thing we could do, actually, is have the installer just create an istiod service, and then installing with a revision will just work. The problem there is it won't work for helm install, because it'll be a fix that's unique to our installer, and also we won't have an API after that for how to change which revision handles validation. That's kind of the problem.
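[A hedged sketch of the workaround being floated — a plain `istiod` Service fronting a revisioned control plane so webhook service references keep resolving. The revision name `1-10-0` and label values are illustrative assumptions, not from the meeting.]

```yaml
# Hypothetical sketch: an "istiod" Service in istio-system selecting a
# revisioned istiod deployment, so webhooks that reference the "istiod"
# service keep working under a revisioned install.
apiVersion: v1
kind: Service
metadata:
  name: istiod
  namespace: istio-system
spec:
  selector:
    app: istiod
    istio.io/rev: 1-10-0     # illustrative revision name
  ports:
    - name: https-webhook
      port: 443
      targetPort: 15017      # istiod's webhook port
```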
G
But in reality, the root problem is that we are relying on install hacks — I mean Helm template hacks — and there are kinds of restrictions that Helm places; istioctl has a different set of restrictions. So we are in a situation where we're, you know, doubling down on "everything must be done through Helm templates" and working around that, instead of taking a more, you know, drastic approach and saying: hey, if you want to control the revision, some things need to be controlled through an API.
G
Basically, you need to use kubectl apply, or you need to use a separate step where you configure it. You know, istioctl install does everything all at once for the default, which works perfectly fine. But if you want to do advanced things, you should use some tools to control which revision — I mean, not bundle the validation webhook with the templates. Basically, that's kind of the deeper fix.
G
I mostly agree, but with the exception that eventually we want this step of configuring the validation to be taken out of the base chart, basically — to be a stand-alone step. So if you are a user who's using revisions to have a safe update, you need to have istioctl set something, or, you know, apply a YAML that will control the validation and possibly the mutating webhook for the—
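[A hedged illustration of the kind of stand-alone YAML being proposed — a ValidatingWebhookConfiguration pinned to one revision's istiod, applied separately from the charts. The names and the `1-10-0` revision are illustrative assumptions.]

```yaml
# Hypothetical stand-alone validation config: the user explicitly chooses
# which revision's istiod receives validation requests, outside any chart.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: istiod-default-validator
webhooks:
  - name: validation.istio.io
    clientConfig:
      service:
        name: istiod-1-10-0      # revisioned istiod service (illustrative)
        namespace: istio-system
        path: /validate
    rules:
      - apiGroups: ["networking.istio.io", "security.istio.io"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE"]
        resources: ["*"]
    failurePolicy: Ignore
    sideEffects: None
    admissionReviewVersions: ["v1"]
```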
H
G
Instead of relying on templates and values files and kind of spaghetti in the install to decide what goes where.
C
So you want to split between an installed state and an activated state?
G
Yes. So you do the initial install, and after that you basically open it up: you change configs, you change labels, but it's not really an install operation. It's, you know, kind of a management operation.
E
G
And that's perfect, and that's where we are completely aligned — I think we are on the same page. The only problem is the actual implementation of that: right now it is still based on some hacks in the template. Basically, some problems with install — just rip the band-aid off, do it with, you know...
G
And both of them will effectively be a single command to do a beginner install, so there is no concern there — I mean, it's a default install. But once you start to do upgrades and operate and do more advanced stuff, and do revision-based upgrades — if you want zero downtime and everything else — then you'll have some commands to run, and maybe, you know, it takes a bit longer to do the upgrade; maybe not instantaneous, but maybe it will take a day or two, a week, to roll out each component at a safe pace.
C
Okay, I think I heard someone with some feedback, but it was almost inaudible.
G
C
G
E
I was just going to add — and I think specifically for 1.10 — I think we also want to know if what Sam was mentioning is reasonable.
E
Well, we would either ask a user to create the istiod service manually, through maybe kubectl commands, or maybe we'd make a change to istioctl to take care of that action for the user, so the user doesn't have to do that actual step. So that's a question, you know, just to ask the general TOC for feedback. Because today, you know, like I mentioned to Sam, if you look at that warning message — I stared at that message for like a couple of minutes; I couldn't make sense of it.
G
So I think my answer is both. I mean, istioctl should definitely do this, but we should also have the documentation saying you can also create this istiod service, or apply this validation webhook — basically to expose to the user what istioctl is actually doing, in case they want to do it through kubectl. Some people are more comfortable with CI/CD applying kubectl manifests than they are trying to use istioctl in a CI/CD system.
D
I think that's one of the things recommended in the warning as well, right? Yeah — create a service called istiod using this service as a template. The signaling is really bad, though, for 1.10, especially for our recommended install. I agree; I think we should move forward with a change to the installer and just get rid of this message, because it's not relevant.
A
D
I would say fix this doc first of all, because if it's not clear to Lin, it's probably not going to be clear to any of our users. So probably make that better and then change the installer — and I think that can probably make it in... well, it's kind of late for 1.10, I guess.
G
But for one thing, we can put up documentation with a sample YAML — just like we are doing for gateway injection, by the way; we are also moving towards people applying a YAML and then getting the result — and have an example of either the service or the validating webhook, or both, because both of them are useful to document.
G
A
G
A
G
Well, with Helm, I mean, the whole idea is that we document what you need to put in, you know, a Helm template or whatever you are using. For example, the current plan for gateways is that you will have a YAML file with some basic template, and you apply it with Helm, with kubectl, with whatever you want, and we do injection. So it's completely automated — all the complicated parts are automated. Same here: I mean, we document, hey, put this validation webhook in your — whatever you want: Helm, your own tooling.
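[For reference, a hedged sketch of the gateway-injection pattern being referred to — a minimal user-owned Deployment where injection fills in the proxy. Names and namespace are illustrative.]

```yaml
# Hypothetical minimal gateway-injection style Deployment: the user applies
# their own YAML and the injector fills in the gateway proxy, so the
# complicated parts stay automated.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-ingressgateway
  namespace: istio-ingress
spec:
  selector:
    matchLabels:
      istio: my-ingressgateway
  template:
    metadata:
      labels:
        istio: my-ingressgateway
      annotations:
        inject.istio.io/templates: gateway   # use the gateway injection template
    spec:
      containers:
        - name: istio-proxy
          image: auto   # replaced by the injector
```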
G
A
I mean, gateway is understandable, because you're creating a new instance of a thing, so you should have a YAML representing it — that's fine. And I think it's okay to have a YAML representing your Istio install, right? Like, that's how the installer is modeled. That's kind of why I'm asking for, you know, a doc that describes this, mostly with examples, so we can take a look and see if there are improvements we can make to the experience while keeping the same flexibility and power.
G
Okay. One thing that I think would help a lot is to clarify some goals and kind of priorities here. One recurring discussion is reducing the permissions and operating without cluster-wide permissions — I mean, by being able to—
G
As the operator, I can, you know, change your configuration and do stuff without having cluster-admin privileges. So far, my understanding was that it is a goal to have lower permissions and be able to do that, and cluster-wide resources — like CRDs, the validating webhook, the mutating webhook — are kind of part of this problem where we're having all this confusion. If the TOC says, hey, no problem, cluster-admin is perfectly fine, it opens up a lot of options in this area.
I
G
I
G
G
We don't have a lot of feedback from users on what Istio should do — I mean, it's kind of implicit. Yeah, I don't think anyone said, hey, I don't want to run as cluster-admin. There are some multi-tenant requirements where, you know, tenants definitely do not want all of them to have to be cluster admins, because that defeats the purpose.
C
G
True, but CNI is in the first bucket of cluster-wide permissions — so CNI, CRDs, cluster roles, cluster role bindings, that's in the cluster-wide permission bucket. Keep in mind there is multi-cluster.
G
External istiod — there are all kinds of situations where the operator itself, maybe it's a service provider, doesn't necessarily need to have full control over the cluster, and the owner of the cluster may delegate: hey, manage my workloads and do Istio stuff — with, you know, sidecars and whatever — in specific namespaces, but I don't want to trust you to have full permission to control the entire—
A
G
We know Helm, for example, strongly recommends that you don't even have the ability to set labels on namespaces. We know that there are companies doing multi-tenant where users, you know, just don't have those permissions, and they lock things down. Basically, GKE Autopilot is, you know, restricting what Istio or any application can do.
G
J
G
GKE Autopilot — how would you do it there? And not only that — I mean, most, many security-conscious clusters do lock down what you can install cluster-wide; they don't give you permission to escalate.
C
G
Look, I mean, if the TOC decision is that this is not a requirement, not a priority, then again, everything becomes super simple. We don't even need two templates; we can just have, you know, everything running as cluster-admin, and a lot of things are super simple. So don't get me wrong — I mean, I'll be very happy to have this restriction removed.
A
G
Splitting it up is hard. I mean, it's easy to put them together and remove it all; it took a lot of effort even to get to the step where we have the CRDs separated, and — I mean, it's very hard work to separate the secure stuff. Okay, that's... yeah.
A
So the separation — the separation for, like, external control plane, operator of the control plane versus the operator within the cluster — that separation makes total sense to me. But having this additional split within a cluster doesn't; like, I don't see the use case for that, because I think anyone who's installing Istio in a cluster kind of should be cluster-admin.
G
If they install cluster-wide, yes — but there are use cases where, you know, you may not want everything in the cluster to be exposed to Istio. You know, it's a walled-garden concept, that you know...
A
B
C
G
A
G
A
G
H
G
C
A
And by the way, I'm also fine with the operator — like, just like Red Hat, right, the operator model, where the operator has elevated permissions. That's fine to me, but I think there's a separation between the operator and istiod, right? Like, you can separate those, and the operator is optional, right? If you don't want to give a component running in your cluster that permission, you have the option.
G
A
Okay, so I think we need to follow up some more on this again. I would still like somebody to take this on and try to lay out what this should all look like, sort of holistically, rather than piecemeal, which is what we've been doing — we're fixing a piece here, fixing a piece there, but we don't have the whole picture.
A
Cool — and I think the whole "base install versus canary control plane install," right, like all that ties into it too, so it'd be good to have kind of the whole picture.
A
Okay,
john,
you
had
a
question
if
we
can
move
the
networking
roadmap
presentation.
J
Yeah, sorry, I didn't realize this until today — I won't be here next week, so it may be good to swap with someone. I mean, we're mostly ready; we're not in the spreadsheet, we're in a Google doc, but we have all the content if we want to discuss it at all today. But I won't be here next week, so...
A
C
H
C
A
C
J
Yes — I don't know if he reviewed it with the TOC lens, but he was there; we talked about it.
A
Yeah, let's do it. Okay, do you have a doc or something, or do you wanna—
J
C
J
I will say it's like 95% complete — we didn't fill in all the details on priorities and people yet, because I didn't plan to present today, but we have the high-level stuff there. So overall, this is not much different from what we discussed in the 2021 roadmap and the 1.9 roadmap. A lot of this is long-term stuff that we've been working on for a while, and it's just continuation.
J
So we have our — I'll go a bit out of order. We have had "CNI to beta" on there forever, basically, and we finally have owners, and they've already started the work as well, so this is actually finally going to happen, I think. So I think we'll actually promote CNI to beta this release, hopefully, or we'll at least get far closer than we have been before.
J
Then the Gateway API: once again, we're continuing the implementation of the Kubernetes Gateway API. We've made a lot of progress. Some of the things we're going to work on now are especially support for mesh traffic — so far we've always been focusing on ingress and, to some extent, egress. Other than that, just continuing to work with them on the API and making sure it fits our needs, et cetera.
J
And, oh yeah, also implementing gateway selection. Right now we only support one Istio ingress gateway, so we want to make sure that we support arbitrary gateways, so you can have multiple of them, and that we write to the status fields — so they have things like: is this gateway ready? Are there conflicts? Are there issues? Et cetera.
A
So, John, where are you on that last one — the complete design for mesh support, basically? How's that going?
J
All right, yeah — we've kind of gone back and forth, being stuck and not stuck. I thought we were really close to getting consensus, but then there was some last-minute disagreement, so we're getting close, but we're not quite there yet. I think it's kind of tricky, so...
J
C
Yeah, I'm spending a lot of time on this, then, to make sure that we don't get blocked.
A
Okay — I was going to say, it would be great to have a presentation, probably to the TOC, on this when we have kind of a plan that we're ready to talk about. I understand you guys aren't quite there yet, but once we have that, sort of disseminating that information widely will be helpful.
J
Some of the other work: MCS to alpha — Nate's been working a lot on this. This is multi-cluster service discovery, the new Kubernetes API. So we have a full design out for how we're going to implement this, and we're making some progress there. So I think we'll probably have — I don't know if it'll actually make it to alpha or just experimental, but I expect to have some support landed by this release. We're on 1.11 now, yeah.
J
The other one is delta xDS. This is mostly about performance optimization in general. We added experimental support in 1.10 — it is very experimental — and so we're hoping to slowly productionize that a bit. So it works, but it does not work well at all — well, it does work, but it's not meant to be efficient right now; it's meant to actually pass tests. So we're going to work on improving that. We have a pretty detailed roadmap for how we're going to incrementally adopt this and not break everything.
J
So I don't know exactly what end state we'll end up in for 1.11, but we're certainly making progress here. In particular, dual-stack support is non-existent right now in Istio, so we'll be adding that, and then in the process we'll be improving the pure IPv6 support.
J
So that's pretty much the end of the main ones — which is not indicated here in priorities, but in our discussion. Some of the other stuff: with DNS proxying, we've had it around for, I think, two releases now, and it's generally been stable for VMs, but we want to look into how we can actually improve it, especially around multi-cluster, multi-network, StatefulSets — some of those trickier areas.
J
The tentative conclusion is that we don't want to do that for now, but we may in the future, and the reason is just the risk — because, you know, there's a huge blast radius if we mess up the DNS in the cluster. So right now we're not confident enough to turn that on yet.
J
The rest is just more minor stuff that didn't really warrant a whole full section — just some important bug fixes and minor features. So there's, like, an issue about listener imbalances and how we can do some performance optimizations there. Kubernetes has this new node-local service API; we need to figure out how that interacts with Istio and whether we need to make any changes or support it. We also—
K
J
Yeah, they have — I think it's called service topology or something; maybe that's the old one. But I think it has, like, three values: you can be node-local or cluster-local, or — I thought there was a third, but I can't think of what it would be — and it basically means keep the traffic only local to the node. Oh, I think it's node-local only, or node-local preferred, or cluster-wide, yeah. So I think it's an alpha API. So the idea here is we—
J
We may not even support it, but we need to figure out what we're going to do here — so, a plan for a plan, yeah. This is brand new, so it's not as urgent as some of the other stuff. Yeah, thanks — yeah, no problem. Some other things we discussed in the past: the single outbound listener work — consolidating all the listeners to one listener, like we do with the inbound listener.
J
We may still investigate whether there are other benefits to doing it, but it's not a high priority at all — this is probably, like, P3 or P4, so probably not even worth being on here; it's more of what we won't do.
J
We also have a feature that we've had in experimental for quite a while, about scoping gateway clusters. So right now we send clusters for every single service to a gateway, which is a ton in a big cluster, and you can't scope it down like you can with sidecars. So we have this feature where we'll actually send only the ones that are used by virtual services, but this has introduced various bugs.
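[For comparison, the sidecar-side scoping alluded to here can be expressed with Istio's `Sidecar` resource — a hedged sketch with illustrative namespace names; the point is that gateways currently lack an equivalent knob.]

```yaml
# Hypothetical Sidecar scoping example: proxies in "my-app" only receive
# config for their own namespace plus "shared-services", instead of every
# service in the mesh.
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: my-app
spec:
  egress:
    - hosts:
        - "./*"                  # services in this namespace
        - "shared-services/*"    # one explicitly allowed namespace
```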
J
Yeah, I also talked about, like, a progression of the import and export, and how it may relate to MCS — which kind of has similar concepts — and the Gateway API. This is very vague, because the idea is to figure out what we want to do here; we don't have anything more than a vague statement for now, but I think there are definitely some improvements we can make here. And finally, we have some egress improvements to look into. Right now the egress gateway, I think, is kind of a known clunky area of Istio.
J
So we have various ideas on how that could be improved — whether it's kind of having the pass-through traffic go through the egress gateway, how we can better handle mTLS to the egress gateway when we do SNI routing, and wildcard SNI routing improvements, which currently doesn't work so well.
J
So this whole section is just kind of a grab bag of stuff that we may be interested in working on, but we don't really have things fleshed out now, so they didn't make it to the top level. And at the bottom are kind of our dreams that we very likely won't do — but we are interested in them, so we continue to discuss them a bit.
G
You know, just tunneling: there are some tunneling concerns, because they use TURN and they have their own DTLS and special routing, and it's a bit tricky to have, you know, traffic go where you want. Okay.
F
J
The Gateway API — again, that's a new feature, opt-in. CNI should just be improving things, so I don't think we're breaking anything — at least nothing that I know of yet, but we'll keep an eye out for that.
J
C
I
J
I mean, I think analyzer improvements are always welcome, but there's nothing that really stands out as new things we need. One interesting interaction there is with the new Gateway API: to some extent, they already have their own analysis — you know, they have their own status fields, which are not just a blob of conditions that we can write to. They have specific ones, like "is this listener ready," and they have specific APIs that we're supposed to report through for various issues, like—
J
"This is not ready because it conflicts with another one" — and so we may need to sync up a bit on that. I don't think there's too much to do, other than understand that we will implement that API, of course, and that it's not just writing Istio-specific status anymore.
G
One quick question on Josh's comment earlier: BTS has the potential, long term, to be a major incompatibility — I mean, for a few releases we'll keep BTS and the old system, but at some point, you know, we want BTS to be the default and then the old one to be deprecated.
G
BTS is a pretty different protocol. I mean, it's — you know, the traffic between workloads is going to go over BTS with, you know, different metadata; the hacky protocol that we prepend on top of TCP connections we replace with proper BTS, no?
F
G
Because at some point, if we want to stop the current protocol — I mean, we will keep both for a while, but it's a pretty high cost in terms of... right.
J
G
F
Right, so there might be some sort of — what it might do is constrain the upgrade path, where at some point, you know, there's a series of releases — it might be, like, 1.11 through 1.13 or something — where you can actually use both simultaneously, and you would have to hit one of those releases on your way to upgrading to something, you know, higher than 1.13. Okay.
G
And that's actually a very big, important issue for, you know, interoperability with other systems, like gRPC.
G
K
G
K
G
It's mostly implemented on the Envoy side, as far as I know, so we just need to tweak configurations and to try it in production; we don't miss any piece, as far as I know, in terms of, you know, upgrading the TCP connection to H2 — and I know other products that are using this feature.
G
C
G
If we want an Envoy filter — and you put in some, you know, user-defined features, basically.
K
So I think we need to sync with you, Chen, and/or Taylor, to get some real details, so that we can start testing using it. And there is a related telemetry item of what we do, and I think now we are much closer to actually moving on that item than we were before — that's the reason for my questions.
H
A
E
A
So then I think the question is going to be how long we need to keep the current way around, right — kind of like with whatever we were talking about right here, I forget which part — the new APIs, yeah. Anytime we have a new API, right, like anytime we have a new way of doing something — yeah, BTS, right — we've got to keep the old stuff around for a while, and I think there's going to come a time when we start discussing: when can we deprecate and remove old stuff?
A
Well, despite the name, you know, canonical services and WorkloadGroup — today they don't relate at all; they're completely independent today. What kind of interaction were you thinking of?
A
So the MCS names are kind of like Kubernetes service names — they're actually addresses; they're not the same concept as canonical service. That's why I wasn't sure, yeah — it's different enough that, actually, in canonical service we don't actually use service names at all to drive things; it's based on labels, not on service names.
C
Okay, all right, I think we're at time. Thanks for the presentation, John — seems to make sense to me.