From YouTube: Istio Networking WG meeting - 2019-08-29
Description
- Filter Chain migration
- Limitations around TCP Services
- Announcement about new more focused WGs on alternate weeks
- Review of items blocking Release 1.3
A
All right, so hello, everybody. Welcome to this week's networking working group meeting. We have two things to discuss on the agenda. The first one is the filter chain migration. This is an important change that has been merged in 1.3, and the author of the change will explain the details and also cover whatever we need to be aware of, especially for production environments. Hopefully we don't need more than 30 minutes for this, and then we can discuss some limitations around TCP services.
B
This first one is trampoline traffic prevention. We no longer run into traffic trampolining, because we isolated the inbound and outbound paths. Previously we had a risk that inbound traffic intercepted by the virtual listener would try to match among the pod IP:port listeners, fail to find a match, and then go out through the outbound listener — and that traffic would pretend to be traffic sent by the app, which is controlled by the sidecar container. The other consequences included infinite loops.
B
As this has landed, we no longer need containerPort to declare which ports are intercepted by iptables. containerPort was introduced to prevent trampolining in the iptables rules. Now, with the filter chain change, the iptables rules just need to capture all the traffic, and Pilot will assign the filter chains in the config so that, if traffic arrives on an undeclared port, it is captured by a pre-defined filter chain.
B
If the port is declared, we will treat it as if there were no iptables, so we no longer need the containerPort to determine which ports we are going to intercept. I know there are exceptions, where a user wants to say: definitely, I don't want iptables to capture my traffic. We still have another annotation to support that for particular ports.
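The exclusion annotation mentioned here can be sketched roughly like this — a hedged illustration with a hypothetical pod, using the `traffic.sidecar.istio.io/excludeInboundPorts` pod annotation that Istio's sidecar injector reads to keep iptables from redirecting specific inbound ports to the sidecar:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # hypothetical pod name
  annotations:
    # Ports listed here are NOT redirected to the sidecar by iptables,
    # so traffic on them reaches the application directly.
    traffic.sidecar.istio.io/excludeInboundPorts: "9300,9090"
spec:
  containers:
  - name: app
    image: my-app:latest  # hypothetical image
```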
B
It's not well defined how we should treat traffic that we pretend was not intercepted. Currently it's just passed through, with the protocol sniffer: we defined two filter chains for the default traffic, so we allow the user's HTTP traffic and TCP traffic to be identified, but currently there's nothing more specific than that.
C
In other words, we do not have permissive mode — or, I mean, it's possible, just like before. Before the change, I mean in 1.2, traffic to a port that was not in containerPort just went directly to the destination. In the new implementation it still goes to the destination, but we get telemetry and we can apply our policies.
C
Lina, if the containerPort requirement was removed from the docs, we need to add it back, because we still support it. I mean, if the user chooses to turn back to the old behavior — capturing explicit ports in iptables by changing the default capture mode — then the user will still be required to use containerPort. So we haven't removed the feature; opting in to the old behavior is still supported.
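As a rough sketch of the two capture modes being discussed (hypothetical pod and image names; the annotation is the one Istio's sidecar injector uses): the default `includeInboundPorts: "*"` captures everything, while setting it back to an explicit list restores the old behavior, where containerPort declarations matter:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-capture-app   # hypothetical
  annotations:
    # Default is "*" (capture all inbound traffic). Listing explicit
    # ports restores the old, opt-in capture behavior.
    traffic.sidecar.istio.io/includeInboundPorts: "8080,8443"
spec:
  containers:
  - name: app
    image: my-app:latest     # hypothetical
    ports:
    - containerPort: 8080    # still required when capture is explicit
    - containerPort: 8443
```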
E
I think that, in addition to that, if they believe they will need to reach the application directly, and they really don't even want the TCP proxy functionality of Envoy, then I think it's legitimate in this case — even if we don't know why they would want it. I mean, something due to performance, but yeah.
F
I have a question. It seems that this feature will manipulate the iptables, right? Meaning that basically the container doing this needs to be a privileged container, and it seems that it would break Istio CNI, because the iptables programming is done by istio-cni, and it assumes that the application will not have any privileged container to manipulate the iptables.
A
There is a change now, because before, all the traffic was sent to a single port, and now inbound is sent to its own port. So there is a change in the iptables, and if you take, for instance, the current 1.3 proxy and try to run it with some older control plane, you will run into problems.
C
As long as the changes that were required went into istio-cni, then we should inherit them. We're also working on a longer-term plan to just combine all of what's used in istio-init into it, with the same exact iptables programming logic, so I think from the Istio side we're going to try to merge those two together somehow, but that's still being worked on.
C
Sergei, I think I understand what you're saying. My understanding was that the CNI is also looking up the annotations for Kubernetes support, and it's expecting the annotations to be there, because otherwise a lot of other features that we have would not work. We have a lot of annotations that control exclude/include IP ranges and all kinds of other stuff.
I
Yeah, I think you're right. The sequencing is that the admission controller is the thing that inserts that annotation before the pod spec is even scheduled, so that the annotation is present for both the CNI and the init-container scenario. It's the same: the parameters are going to be exactly the same. Okay.
C
I would add that it would be very important for everyone involved in testing to pay close attention to startup behavior, what happens when pods are started, and security in particular, because we want to make sure we test and cover as many corner cases as possible.
A
So, let's move on to the second one, which is a GitHub issue around limitations related to TCP services that make Istio pretty unusable on larger multi-tenant clusters. Here I hope Sriram and Costin will pitch in, because I saw you added comments as recently as last night or yesterday.
J
So yeah, I think there was a temporary solution — or a solution that might satisfy the most common case — which was to generate an IP:port listener for every pod in the headless service, if the headless service has fewer than, like, five or six pods. In that scenario you can have any number of headless services on the same node or in the same namespace and whatnot, and most users tend not to have hundreds of pods for headless services like Postgres, etcd and so on.
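For context, a headless service is one with `clusterIP: None`, so clients resolve individual pod IPs rather than a cluster virtual IP — which is why the proposal here generates one IP:port listener per pod. A minimal example with hypothetical names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres       # hypothetical
spec:
  clusterIP: None      # headless: DNS returns the pod IPs directly
  selector:
    app: postgres
  ports:
  - name: tcp-postgres
    port: 5432
```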
J
This is sort of, you know, a minor optimization slash hack which will just work for them, remove the adoption burden, and improve the user experience.
The only thing is that, yes, if they have more than some threshold number of pods — let's say they have more than, like, six or seven pods — then we fall back to the existing behavior, which is that there's a port conflict: we only pick the earliest service and then we log the rest as a port conflict.
J
Yes, yes, but I mean — yeah, it's a trade-off against stuff being broken on day one, just being honest. I can see there are trade-offs on both sides. We can definitely log warnings saying: this headless service has fewer than five pods, so we are now doing this temporary thing, but please do make sure that the headless services are on unique ports; if not, bad stuff will happen.
J
I see. No, but I think there are other problems as well, right? Because if you create a headless service, the listener is the wildcard listener 0.0.0.0 and we have the cluster set to a pass-through cluster, which means, if that headless service happens to be on, let's say, port 443, then you effectively end up sending traffic for any external service on 443 into it, because it'll just end up in the pass-through cluster. That's one example.
C
What we would do: if you explicitly import a service on port 443, or a headless service for that matter, we do not generate a 0.0.0.0 listener. Instead, we generate listeners for all the IPs and keep them up to date — but only for the people who explicitly import that service. That's the key here. Otherwise, we'll crash and burn.
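The "explicit import" being described maps to Istio's Sidecar API (networking.istio.io/v1alpha3). A rough sketch, with hypothetical namespace names, of limiting a namespace to its own services plus one explicitly imported remote namespace:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: my-namespace    # hypothetical
spec:
  egress:
  - hosts:
    - "./*"                  # services in the local namespace
    - "db-namespace/*"       # explicitly imported remote namespace (hypothetical)
```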
A
But, like, I think we've discussed in the past about making that change using the pod IPs, and the main limitation was the scale, right? So if we combine this with the Sidecar, just as you suggest, Costin, we could reduce the scale impact and make it work. We will probably need to measure to see the point of breakage, but we have a good way of solving the problem, except that we're hitting some scale issues.
J
If your application happens to talk to another headless service in a different namespace on the same port, 3306, then the traffic will actually go through, but the mTLS will break, because the calling side is expecting the service account to match the headless service in the local namespace and not the headless service in the remote namespace. This is why people's headless-service mTLS keeps breaking — because we use the pass-through clusters.
C
But they will not use the pass-through — that's the point. That means they will explicitly get the list of listeners for each of the IPs that are running the service, so they will never hit the situation where they fall to the pass-through. If you explicitly import the two services from different namespaces or whatever, you will always get the full list of IPs and the proper configuration for each of them.
J
The headless services typically tend to be StatefulSets — yes, but I mean, the common case is what I'm trying to tackle, the more common case. That's why I said: if you do auto-scaling and your thing goes beyond five pods or whatever it is, we go back to the older behavior, which is going to be an issue, but for most people it won't be.
J
Fine — because here's the thing: many people don't use Sidecar imports, and many people also don't have these large headless-service pod counts. They typically tend to have one or two headless services in a cluster — a MySQL database or a Postgres database here and there — and usually one or two is not that much. To the extent that we would be able to handle this normal UX case for at least eighty percent of the population; and those that actually cannot, as Howard says, can actually go and flip this flag.
J
That basically says: you know what, I'll just start using the Sidecars to constrain it, and also make sure to use the annotations and so on and so forth, which can help them. In other words, it satisfies eighty percent; the other twenty percent will probably have to do additional work, but today it's like nobody's happy.
A
For going back to the older behavior, we should actually document that they could use the Sidecar API at that scale, because that's what we're going towards, right? The Sidecar is also good for enforcing policies and other use cases, so if it's really a big, big deployment, it's more beneficial to use the Sidecar. And some people don't need to understand how we configure Envoy; whether we configure with a wildcard listener or not — that's very much an implementation detail, right?
J
So in that case, we should be having this feature on by default, and just have a flag to turn it off and on for people who have a larger scale. Since they use the Sidecar, their blast radius will automatically be smaller, and within the namespace they'll just simply have a few more listeners for the headless services compared to just a 0.0.0.0 — but it's better, because we don't have wildcard listeners anymore.
J
And so that's the point, right? That's what I'm saying: if you only do this when you have conflicts, then you get flip-flop behavior. If you don't have a conflict, you have 0.0.0.0, and the moment you have a conflict, you flip to, like, you know, all actual listeners, and then it goes back and forth — like we had a year or two ago.
J
The thing is, once again, if it's a large-scale scenario where people actually configure the Sidecar, then they will only have one headless service imported — it will not be that many. And when they have that one headless service, and they find that it has like 500 pods, they can have a flag to turn this feature off and go back to the wildcard listener. And if they only have like 5 pods in the headless service, because it's just a small cluster, then for them this is almost no cost, but much more benefit.
J
If we have this and it becomes a problem at large numbers again, then the argument here is: if you have that big an entire mesh, then people will start using a Sidecar. If it's a large-scale issue, then they would have already been using a Sidecar, such that these pod changes only affect things in that namespace, not everybody else.
C
Again, I'd ask a customer security team to decide if this can be turned on for everything. I'm not entirely sure what happens — I mean, in the normal, ideal case everything is fine: small clusters, or clusters where everything is stable. The problem is we don't have enough testing for the cases where something flip-flops because there is a bug, and then this explodes, collides, and starts bringing down far more than it should. But again, it's something we should measure — sure.
J
And it is only in this scenario — I guess, if this is going to be a problem even in a smaller cluster, then yes; but the point here is that when somebody complains about that, we should probably go and tell them: look, you should have the Sidecar enabled, so that you reduce the blast radius only to the namespaces that are consuming that headless service, and not to everybody else. Because that's a nice segue to tell people that, look —
J
for a labeled issue like this, they should start using this and, you know, whatever default Sidecar. On a larger-scale cluster they would have already had the Sidecar, so that this flip-flop thing — the listener pushes and so on — would only happen for that specific namespace and not for everybody else. Really, the data needs to be presented to actually make the proper decision on what the default should be.
J
I had a PR for that as well, where we used the earlier push context and then, you know, basically just pre-computed values for a bunch of those things, and we could actually optimize that as well. In addition, if we store the CDS output and other things in that Sidecar scope itself, then we should probably be able to reuse that as well. Yeah.
A
Okay, and I think maybe we shouldn't worry about any scale — we should worry about this case. So Howard from Auto Trader opened this; I think we can just ask him about his scale. You know, he's probably at a few hundred services, I think.
J
I mean, it goes both ways. Remember that the moment you create a wildcard thing and wire it with the pass-through cluster, that means it used to simply allow any traffic on that port to go to the destination — and the moment you had mTLS on that particular listener, then, you know, it actually breaks the existing traffic, because traffic that was going to something out of the cluster will break, and all of the traffic that goes to a different namespace will again break, because we don't do the mTLS check. And that is actually broken even today; the goal of this thing was to actually resolve that, because even after we fixed the mTLS thing for a normal headless service, this thing kept coming up: people said the headless service is not working with mTLS and so on, and so forth — Cassandra doesn't work, or Elasticsearch does not work.
J
If you look, there was some issue where someone created a curated list of things, and almost all of them were actually headless services. If I find it, I could probably point out there are about six or seven different applications that don't work, among which Elasticsearch, Cassandra, MySQL.
A
So I think we got some good perspective and follow-up on this issue. John will test with Sriram's PR, and we can understand the breaking point, and then, based on the breaking point, we will see if we can actually enable the pod IP listeners for everybody, right? So we will likely need to discuss more in the next meeting in two weeks.
C
You're right, you're right — no, it was an example. I mean, all the stuff that we typically add: you have a lot of features that people turn on. If we test with no feature enabled, everything is perfectly fine; if we start enabling features, that's when it breaks — and we don't know it, because the customers we test with don't use those features.
C
Like a traffic split or some other stuff that is typically added, because an empty config gives a different result. We want a realistic config, and that's what we want to make sure: that a realistic cluster is not going to break under load and explode.
A
Yeah, and with this, maybe we need to push on Envoy towards incremental xDS and, you know, other goodies, because that's what's causing the problem: the fact that we cannot, right now, update a single listener — with its routes, clusters and endpoints — without doing a full push, right? That's what.
A
You know, so, if nothing else, I have an announcement to make — it will probably be more official coming from the TOC. There's been a decision in the TOC to create more focused working groups. For instance, there will be a data plane working group for networking, which will be very focused on Envoy in particular — that's the data plane. We will have to see exactly when the meetings take place.
A
Probably it's going to alternate weeks with this networking group, so this meeting will stay focused on Pilot and control-plane behavior, and we'll try to get more coverage for Envoy-specific things in the data plane working group. Also, the Environments working group is going with two separate tracks, so there is one with the discussion focused around the installer and everything like that, on alternate weeks.
A
That's needed, so I hope that works for everybody. Stay tuned for the meeting invites, and do attend those, please. And let's do, again, cross-group pollination: I try to attend as many as I can, but sometimes it's good, you know, if leads from one group come and bring awareness to other groups as well — that helps. Any questions?