From YouTube: Istio Networking WG meeting - 2019-01-17
Description
- Exposing telemetry for virtual services, gateways etc.
- Sidecar config and reassessing global policy
- DestinationRule resolution for 1.1
- Locality Load Balancing proposal
- Internal Interface PR #10378
A: Welcome to the networking community meeting. It's the second meeting of this year, but happy new year — I missed the first meeting — and many good wishes for 2019. On the agenda we have four items today, and maybe more to come. For the first one we have guests from the policy and telemetry workgroup to discuss an approach for exposing telemetry about virtual services and destination rules. Then we have Shannon to discuss a scalability issue. We have, I think, a follow-up on the locality load balancing from last week — I see something here — and I'm not sure exactly what the last item is, but I hope, Sebastian, you're here. So let's see — do they need more than 15 minutes? Okay, I guess fifteen minutes for each topic should be okay, unless we'll need more than that. Okay, all right, so, yeah.
B: So, basically, there's a sort of long-standing request to provide some harmony between the networking side of the world and the telemetry side of the world, and one of the ways that people are interested in doing that is tracking which virtual services and which destination rules fired for an event. So we're just looking for some way to do it, and we're happy to do it — we have resources to devote to implementation.
C: I think the key is that the virtual service part is easy. The win for the destination rule part is that we can actually add metadata to the cluster about which destination rule is being used; but then I think the resolution was that you have to change the proxy such that, in the mixer client, when it's about to add all the attributes, it checks the upstream cluster for that particular route and from that cluster extracts the metadata attached to it.
C: That way it becomes much easier and more tractable. Basically, if you patch the mixer client such that at runtime, before sending the reports or whatever it is, it queries Envoy itself for the information about the cluster — then, within that cluster, we can embed information about which destination rules are actually being used for that cluster inside the metadata block, and that way Mixer gets it.
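The mechanism C describes — Pilot stamping the originating DestinationRule onto the Envoy cluster so the mixer client can read it back at report time — might look roughly like this in the cluster's metadata block; the `istio` filter-metadata key and the config-path format are illustrative assumptions, not confirmed in the meeting:

```yaml
# Sketch of an Envoy cluster carrying Istio config provenance in its
# metadata. The mixer filter would look up the route's upstream cluster
# and copy this value into an attribute, avoiding any per-request
# config lookup.
name: outbound|9080||reviews.default.svc.cluster.local
metadata:
  filter_metadata:
    istio:
      config: /apis/networking/v1alpha3/namespaces/default/destination-rule/reviews
```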
C: Isn't this part already being done? I think there was somebody — Jeremy Brown — who actually added information so that for every trace there was this concept of an operation, where the operation name resolves to the virtual service name in that case; but then somebody came along and kind of undid the whole thing, saying that they needed the full host name and so on and so forth.
C: In other words — there's a PR for this — within the route there is a trace block which contains the operation name, and that operation name is either the default, hostname/*, or the virtual service name that was actually used for that route. Yes.
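The "trace block" being described corresponds to the decorator on an Envoy route; a route generated for a virtual service might carry, schematically (names illustrative):

```yaml
# Sketch: an Envoy route whose decorator operation names the virtual
# service that matched, falling back to "hostname/*" when none did.
route:
  cluster: outbound|9080||reviews.default.svc.cluster.local
decorator:
  operation: reviews-route   # or "reviews.default.svc.cluster.local:9080/*" by default
```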
C: You can talk to Gary Ronan and see if they're okay with adding the name there; or, alternatively, we can definitely add more metadata to every route to say which virtual service is actually being used, so that you can parse it out in the mixer plugin. That should not be an issue. Yeah.
C: The nice thing about this is that there are no caching issues here, because the cluster is tied to the destination rule, not to every single proxy. So the cache just continues to work as it is — it will not affect cache behavior anyway. The thing that actually affects the caches is our notion of adding the UID in every request, and that kind of makes it hard for us to cache the whole generated configuration. This stuff does not have anything to do with that.
I: Thank you — I don't need to share, I just have some follow-up questions to a thread that I've been conversing on with Louis and others about the work that we're doing to address scaling issues by filtering the config that is received by sidecars. I continue to be uncomfortable with the approach of tackling this problem by requiring app developers to declare their dependencies, when it really seems like something that should fall out of a policy declaration. It seems to me that this approach is being taken because we continue to assume a default allow-all policy, when this would be addressed quite simply if we assumed a deny-all policy, in which case sidecars would receive no config until a security policy was declared — and whitelists are much easier to manage than blacklists. I understand that maybe the approach is being taken for backwards-compatibility reasons, but I wonder if the group would consider offering scalability as a feature only when a default deny-all mode is enabled.
H: If you look in the agenda, I attached a link. There was a proposal — in the two iterations of the scoping discussion — about having services declare themselves as public or private, and there's a link to a GitHub issue basically refining that proposal to allow the service owner to declare the namespaces to which the service is exposed. So that's the producer-oriented whitelist of consumers — that sounds reasonable.
H: That was discussed in the earlier iteration of the namespace scoping talk, sorry, and we hadn't yet validated whether we just needed public/private or whether we wanted the thing that was more refined — and it became fairly obvious that we needed the more refined thing. I agree. So that's now on the table for 1.1.
H: That doesn't necessarily obviate the need for the consumer to have a further refinement of the default set of services that are exported to it. We still need Sidecar as well, in addition, for layer-3 through layer-6 controls from the consumer's perspective, because they still need to control the networking behavior in some cases. Why? Well, you have a variety of cases: port-oriented protocol wrapping that they need to deal with; they need to use ports as aliases for stateful sets; and other consumer-side concerns.
H: All right — so Kubernetes doesn't have a default deny, but Istio could; it still could. And so we're trying to at least make that possible in the API — in the Sidecar API. One of the goals around the eventual namespace isolation changes is to allow the administrator to configure what the default behavior is: whether you want the default behavior from the consumer side to be "import everything that's exported to me", "import nothing by default", or "import only my namespace by default".
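The three consumer-side defaults map naturally onto the egress hosts of the Sidecar resource that was being designed at the time; a sketch using the syntax as it later appeared, with the options shown as alternatives:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: my-namespace
spec:
  egress:
  - hosts:
    - "./*"        # import only my own namespace
    # - "*/*"      # alternative: import everything exported to me
    # (an empty host list would approximate "import nothing by default")
```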
H: Those are the three viable options. And Shannon — of those three worlds, the Cloud Foundry equivalent would be "import my namespace only by default", right? Yeah, yeah. So we want to get to the point where we have a configuration mechanism where that admin could say: look, the default behavior of the mesh, in namespaces where that behavior is not overridden, is this — say, import my namespace only.
I: Wouldn't that be the simplest route to scalability, as opposed to what seems like this workaround of putting the onus on every consumer? I'm imagining how a cluster operator is going to take advantage of this feature to accomplish scale: they have to go out and tell every app development team using the cluster. So...
I: So it seems to me that you've got different use cases. There is the auditability use case of an operator or security team who wants to know what the dependencies are of every workload — that is not a scalability concern. You also have the operator who wants to support many development teams. It seems to me that those could be accomplished in different ways: the strategy that seems to be being taken to accomplish scalability is really more appropriately addressing the auditability use case, and it would be, in my opinion, a more effective approach to scalability to assume a deny-all policy.
H: No — but then that means you have to go and configure a Sidecar in every namespace, which is fine, but that's probably not the right user experience, right? If you want a global deny-all policy, there should be one resource that, by default, represents a global deny-all. And then, if you want to change that on a namespace-by-namespace basis, you do that on a namespace-by-namespace basis, right.
C: And these deployments are still going to be running with the whole legacy behavior, which I guess is the concern. You could probably tackle that in a later release as a separate API, but until then we could still provide them with something. Currently there's actually no way to even say "I don't want to import anything — only what's local to me."
C: The point is, for every space — or every CF equivalent of a namespace — this would actually require all the developers to go and write that Sidecar correctly. For sure it means they'd first need to express what they import; and a Sidecar that imports from a namespace that hasn't been created yet is a problem.
C: The one big caveat, as Gabe pointed out, is that I've definitely seen CF namespaces — a namespace which has like a thousand-plus services — and that by itself actually makes things very unhappy. So you still need an option to turn it off.
H: So the thing here, Shannon, is that we're trying not to use flags; we're trying to use our APIs to define these default behaviors. So the general trend of solutions in this space is to say there's a kind of root namespace — part of the control plane, under the administrator's control — where a default Sidecar would live.
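Concretely, the "default Sidecar in a root namespace" idea would look something like this — a Sidecar with no workload selector, placed in the administrator-controlled namespace, acting as the mesh-wide default unless a namespace overrides it (namespace name and hosts are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: istio-system   # the administrator-controlled root namespace
spec:
  egress:
  - hosts:
    - "./*"                 # mesh-wide default: import own namespace only
    - "istio-system/*"      # plus control-plane services
```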
I: I think I understand the user experience. The user experience you're describing would be that a cluster operator or admin would write a YAML file which described the default configuration for all sidecars, and that config would be applied to a namespace that only they have access to.

H: Yes — whether that namespace actually has workloads or not is irrelevant.
I: I have heard from customers that allowing all workloads within the same namespace to talk to one another is a problem for them — they would prefer that no workloads, even within the namespace, talk to one another, and that requires an explicit policy. So, I understand it comes with additional cost, but having the option to lock it down even further would be valuable. Yeah.
H: A destination rule in the consumer's namespace referring to a service would take priority over a destination rule defined in the namespace that defines the service; and then, if there was again a similar kind of root configuration namespace, it would be possible to resolve a destination rule in that namespace using the standard host-based destination rule matching.
H: This is orthogonal to how destination rule resolution works today. So Sidecar and Gateway, plus the scoping stuff we just described, resolve service visibility and service consumption — but unfortunately DestinationRule is an orthogonal kind of hierarchical policy that applies to services after they've been resolved, and today it has a global fallback.
H: You create a service entry in namespace A, you create a destination rule that selects it — say it uses a wildcard — then you refer to it in a virtual service in another namespace. Okay: now namespace B will use the destination rule it finds in A, and then I go create a third namespace with a more specific destination rule match for the host name, and it wins.
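The hazard being described can be reproduced with two resources like the following, where under the global-fallback behavior the more specific host match in an unrelated third namespace silently takes priority (names and policies are illustrative):

```yaml
# Namespace A: the "owning" destination rule, selected via wildcard host.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: wildcard-rule
  namespace: ns-a
spec:
  host: "*.example.com"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
# Namespace C: a more specific host match that wins for consumers
# everywhere, because resolution falls back globally.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: specific-rule
  namespace: ns-c
spec:
  host: "api.example.com"
  trafficPolicy:
    tls:
      mode: DISABLE
```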
H: In the case where a Sidecar is present, we can change the semantics to not allow this craziness — which I think we've already agreed on — but we still have the default behavior when people aren't defining Sidecars, and we don't want that to be massively different from the behavior when they do. And the behavior as currently implemented in 1.0 is dangerous.
H: ...virtual services and destination rules, all right. And Sriram is right: there's a similar kind of scoping issue with virtual services, which we would also like to resolve. This bug is specifically about destination rules; I can open up another one about virtual services, at least with Sidecar and Gateway.
H: So the Sidecar resource is a trigger for changing the behavior, right. But if there's a new way of specifying the same thing that's so syntactically similar — and, in fact, they're not required to do it — then you'd have very trivial changes in API use causing behavior shifts without it being documented, and that's not a good idea.
O: In a simple way — and I think all you've done is the intermediate step of saying: instead of just randomly going to everything, go to the things defined first; and then you provide this convenient thing to say "never look down the whole list," and if you set this administrator root thing, then you don't bother walking up the rest.
O: I think the only change is — this is a bug fix — if you just say the default, instead of being istio-system, falls back to the old behavior: look everywhere. So you kind of say the administrator has to set this thing to get away from the bug completely, but you're probably going to get rid of the bug in those cases just by doing this. That's beautiful.
H: That's a separate API discussion, but this specifically is blocking 1.1, mm-hmm. I think Shriram is going to pay me a visit here this afternoon, because he's on the cloudy and wet West Coast, so he and I can take this up. But if people can take a look at this and provide their feedback on the bug, that would be helpful. Yeah.
P: My co-author is in China, so [he can't present]. So this is basically — I was looking at zone-aware load balancing separately, and my co-author was looking at locality-weighted load balancing separately and has some API implementation of that, and this is bringing those together. The TL;DR: the zone-aware side, which is the side I'm familiar with, is basically using priorities within localities, and that gives us the behavior we want; it's consistent with the locality-weighted stuff.
P: Just generally looking for feedback on the zone-aware side of things. I've had discussions with people around the API for the locality side of things, but these can be done separately — they use the same underlying locality information; it's purely a priority-versus-weighting thing.
P: At that point I probably need to sync up with Costin offline at some point — it's probably not worth doing in the networking group — about how the EDS cache is going to work with the sidecar stuff and the stability of it, and whether or not I can go ahead and implement it right now, or whether it's going to require the EDS work for the sidecar scoping stuff to be done first and then build it on top of that.
Q: Sounds like your folks are — I mean, you're following the design that we discussed in a separate thread, where everything's going to be expressed through the localities and priorities. That sounds fine. When you talk about "zone-aware," what are you actually talking about? Envoy doesn't know what a zone is per se — zone is just one of the aspects of locality.
P: We'll use annotations for those types of things to set the localities, and then we're going to prioritize local ones for the kind of cluster-wide locality stuff; and then, if you want to override specific services, that's where the locality weighting comes in. At that point we'll flatten the priority back to zero and we'll purely use weighting. So the locality weighting uses Envoy's locality weighting; the "zone-aware" side, in quote marks, uses priorities.
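The two modes being combined — weighting for explicit per-service overrides, priorities for "zone-aware" failover — later surfaced as the localityLbSetting mesh option; a sketch of both styles, with field names as they eventually shipped and values purely illustrative:

```yaml
# Locality weighting: explicit traffic split across localities.
localityLbSetting:
  distribute:
  - from: us-east/zone1/*
    to:
      "us-east/zone1/*": 80
      "us-east/zone2/*": 20
---
# "Zone-aware" behavior: local endpoints get priority zero, and traffic
# fails over to another region only when they become unhealthy.
localityLbSetting:
  failover:
  - from: us-east
    to: us-west
```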
P: Potentially — we haven't designed for that yet, but it could be extended to that, I guess. It gets quite complex at that point. I mean, yeah, we would probably have to do some serious API design on how we would go about exposing that, because that's quite complicated. The idea was to have a simple use case for customers that basically want to say...
Q: I mean, all of this — most of the design of Envoy's locality and priority structures is actually driven by an externalization of Google's internal load balancing, which we use, I guess. But, you know, we've found all of these combinations to be useful, and we actually make use of both weighting and priorities.
Q: Mainly, priorities are really used for the situation where you lose all the healthy hosts, or a very large percentage of the healthy hosts, in a given priority, and you want to fail over to a completely different set of them, with locality weighting as the next step — and you can pick, which is a bit different than what I hear folks are interested in using priorities for in the Istio context, where, at least based on the threads that we were on, people want to immediately pick the next priority.
Q: ...this is purely policy at this point. The question is — you can write this as code today, as an extension — how could we externalize this? Do we just write a proto which captures this in its different permutations, or do we actually write some dynamic thing that you feed? It would depend on how sophisticated the customer is, at the end of the day.
Q: Yeah, yeah — this is at, I think, Square? Yes, at Square. Okay, yeah, beautiful, yeah.
Q: Topology — okay, yeah. I mean, obviously: is your latency due to wire delay, or latency due to actual load, or some combination of the two? Is this a real-time measured thing? Or — I was just talking about the topology. Okay, so you'd express it through the locality concepts, but you're worried about how to express it.
R: So hey, my name is Sebastian, working at Red Hat on an open source project called KubeVirt. The project allows running virtual machine workloads on top of Kubernetes, and I created this PR like two months ago and didn't get any review or anything, so I just want to point out the PR here. So, when we create a VM inside the pod...
R: ...we create a tap interface inside the pod anyway, and we would like to allow this interface to act as an internal interface to Envoy itself. So my PR just adds some rules to the iptables bash script that runs as an init container before the proxy comes up, and configures a couple of rules there to allow a specific interface to be an internal one — not an external one — for traffic redirection.
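The sort of rule the PR adds to the init-container script might look like the following sketch; the variable name and exact chain usage are assumptions for illustration, not taken from the PR itself:

```shell
# Traffic from the VM's tap device enters the pod network namespace via
# PREROUTING, so by default it would be classified as inbound. Sending it
# to the ISTIO_REDIRECT chain (Envoy's outbound capture) instead makes
# the interface behave as an "internal" source of outbound traffic.
# INTERNAL_INTERFACE is a hypothetical variable, e.g. "eth1" or "tap0".
iptables -t nat -I PREROUTING 1 -i "${INTERNAL_INTERFACE}" -p tcp -j ISTIO_REDIRECT
```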
C: If I understand correctly — how does the KubeVirt setup work? A separate instance of the networking namespace, like the VM, will have its internal network interfaces, which is not something that you can touch; but then you would have to set up this whole redirect stuff for the other virtual interface that the VM is provided, and that is where you set up the in-and-out routing, yeah?
R: Exactly. So we have a few different types of connections on the machines. One of them is SR-IOV, and that one is out of scope here, because the pod namespace doesn't even see the interface. But we also have another option where we connect the tap device to a Linux bridge in the pod namespace, and then we just use masquerading.
C: So what I'm asking is: in that case, the way the iptables rules are set up is such that they simply intercept all traffic from all interfaces in the network namespace and route it through Envoy, and vice versa. So why would you need to modify that, if the VM is actually running in a network namespace and you're setting up the rules in such a way that all traffic that enters and leaves this network namespace will go through Envoy? Yes...
C: ...the virtual machine stuff, which is where the inbound traffic into the VM will appear as outbound traffic, and vice versa, in some way. So the traffic that exits a VM will sort of appear as inbound traffic to Envoy — no, right, we could look at it as actually being outbound traffic out of the pod network namespace. This is a very classic thing that happens with VMs and so on and so forth.
C: No — the way they were looking at it was that they wanted to attach different firewalls and so on in the pod, and whether they wanted to do full Ethernet capture. So they were bridging in a way that captures the frames, whereas this one actually shares the same networking namespace — it becomes the network namespace's Ethernet interface.
M: Sorry — Sebastian, I haven't looked at the PR, this is the first time I'm seeing it, but I agree that the problem is generic. I'm wondering if it might be easier to solve in the CNI. So we should look at how this could be done in the CNI as well, and see if it's quite straightforward to do it there.
H: I think, overall, we want the CNI plug-in to be able to set up these types of things on people's behalf. Whether you're running a VM or a container, there's a variety of different kinds of network configurations that CNI could set up for you, and the setup of all of those should probably be pretty well encapsulated. So you should probably look to merge this capability down into the CNI plug-in.
T: Yeah, definitely — we want to move more into CNI, and maybe merge it. Basically, have separate iptables scripts for different profiles, because it's a mess now with TPROXY and REDIRECT and other things; we may need to clean it up first. And definitely CNI will be another use case when we have different namespaces.
A: That's a new thing — I think somebody covered that over lunch, so I'm sorry, John, we won't be able to address that question. But there's a question on the chat, so I really suggest trying what we discussed: discuss.istio.io — oh yeah, yeah, exactly. There is no more mailing list, but there is a website called discuss.istio.io. It looks a bit like Stack Overflow; it's based on Discourse. Okay, thank you. Thank you, everybody. We have to leave promptly because we lost our room.