From YouTube: Antrea Community Meeting 05/22/2023
Description
Antrea Community Meeting, May 22nd 2023
A: Hello everyone, thanks for joining this instance of the Antrea community meeting. Today we have two main topics on the agenda. First, Kumar is going to present some findings about Antrea scaling; he ran some tests and he wants to present the results of those tests. Then Chen wants to discuss three Antrea issues that have an impact on users; I think one of them could be considered a bug, and the other two are more like feature requests. So Kumar, do you want to get started?
B: Thank you. Hello everyone, my name is Kumar Ratish and I will deliver a short presentation on this topic, an Antrea scale test: 10,000 VMs in agentless mode only. So we did a scale test in agentless mode only, and the results of the test will be shared in this short presentation. We will cover the data set which we used for the test, I'll share the test setup details and how I performed the test, and then the findings, the results which we got.
B: That is, how much time it took and how much CPU and memory were used. For the data set, I took reference from the performance benchmark tests which are already there in the upstream repository, so along similar lines I created the data set. That is, we had one namespace, and in it we had 10,000 Antrea Network Policies and 10,000 ExternalEntity objects.
B: An Antrea Network Policy B is applied on ExternalEntity B to accept the incoming traffic from ExternalEntity A; this is one group, and there are 5,000 groups in total. So in total we have 10,000 Antrea Network Policies and 10,000 ExternalEntity objects, and all are in the same namespace.
B: Which one is this? This is the example of the Antrea Network Policy and the ExternalEntity which I used. The IP is just randomly generated for each ExternalEntity, and labels were used in the Antrea Network Policy to select the ExternalEntity.
B: A local three-node kind cluster was used for the test, and the time was calculated from the Antrea Controller logs. While applying the Antrea YAML I increased the log verbosity to five, and then the objects were created. Once the objects were created, I monitored the logs until all the objects were processed, until all the Antrea Network Policies were processed. When you increase the verbosity of the logs, you can see that there is a timestamp when the processing of the NetworkPolicies, AppliedToGroups, and AddressGroups has all finished. So there is a timestamp.
B: So when all these 10,000 ANPs and ExternalEntities were processed, I restarted the Antrea Controller Pod, and after restarting it I checked the logs again, and the time it took to process all the created Antrea Network Policies was measured. So firstly, the Go program was run to create the objects and the logs were monitored until all the network policies were processed; then the Controller Pod was restarted, and after restarting, whatever time we got was measured again.
B: Moving on to the next point, CPU and memory usage: for this I wrote another Go program and used the metrics package to get the CPU and memory usage of the Antrea Controller Pod. This was measured at an interval of 10 seconds so that we could get the most accurate result, and the maximum CPU and memory were updated at each 10-second interval. So this is the way I performed the test. Does anyone have any questions on this?
E: Before the next measurement — okay, perhaps sometimes it shouldn't even take 10 seconds to finish the calculation. Is that correct?
B: I read somewhere that this metrics package does not calculate on its own; it queries the kubelet, and the kubelet calculates at an interval of 10 seconds. So while the objects were being created, the program was kept running, and I was in another terminal monitoring the logs. After every 10 seconds, whatever CPU and memory usage we were getting, we updated our maximum memory and maximum CPU data.
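The "update the maximum every 10 seconds" bookkeeping can be sketched as below. The `sample` type and values are illustrative; in the real program each sample would come from the Kubernetes metrics API (which, as discussed, the kubelet only refreshes at roughly 10-second granularity).

```go
package main

import "fmt"

// sample is one CPU/memory reading, as the hypothetical polling loop
// would obtain it from the metrics API every 10 seconds.
type sample struct {
	CPUMilli int64 // CPU usage in millicores
	MemMiB   int64 // working-set memory in MiB
}

// peakUsage scans the collected samples and returns the maximum CPU and
// memory observed, matching the approach described in the presentation.
func peakUsage(samples []sample) (maxCPU, maxMem int64) {
	for _, s := range samples {
		if s.CPUMilli > maxCPU {
			maxCPU = s.CPUMilli
		}
		if s.MemMiB > maxMem {
			maxMem = s.MemMiB
		}
	}
	return maxCPU, maxMem
}

func main() {
	// Illustrative samples only, echoing the magnitudes from the results.
	samples := []sample{{200, 100}, {1038, 277}, {400, 250}}
	cpu, mem := peakUsage(samples)
	fmt.Printf("peak: %d millicores, %d MiB\n", cpu, mem)
}
```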
B: Yeah, it took around 45 seconds for the Go program, but around 33 minutes according to the Antrea Controller logs — around 33 minutes for the first time, when the objects were created. The metrics were calculated after restarting; I'll show that on the next slide. Okay.
B: So here is the result. Firstly, the Go program was run to create the objects; that took around 45 seconds, and I was monitoring the Antrea Controller logs — it took around 33 minutes to finish processing all the network policies. Then I restarted the Antrea Controller Pod, and after restarting it took around 17.5 seconds to finish processing all 10,000 ANPs. During these 17.5 seconds, the maximum CPU usage went as high as 1038 millicores, and memory as high as 277 MiB.
B: During these 17.5 seconds — with the 10-second interval I was talking about — that program was running. It was started just before restarting the Antrea Controller Pod, and it was kept running until these 17.5 seconds were over.
E: Got it. If I understand correctly, if the test lasts —
B: It will not be a problem, but I think the result won't change, because I read somewhere that the metrics package which I used does not calculate it itself; it just queries the data from the kubelet, and the kubelet itself calculates at an interval of 10 seconds.
E: We could just measure the program directly with some other mechanism — this one only provides the data every 10 seconds; a per-second method would be more reliable to capture the peak usage during the process.
B: Yeah, that's correct; I also wanted that, but I didn't find it, so this is as close as possible. I also tried the Kubernetes dashboard, which provides metrics too, but at an interval of one minute, while this metrics interval is as small as 10 seconds. I researched but didn't find any one-second option; I'll try again for that.
B: Yes, that PR is open upstream. There I also used the same data set, this one only, but the time which I got there was seven point something seconds, along with the CPU and memory usage I got there. But that was done using a fake Antrea Controller — like the fake controller we have for unit tests; using that, the test was done there. So the PR is open upstream.
B: Yeah, because, as you see, the topic is 10,000 VMs, and the VMs are registered as ExternalEntities in Antrea.
B: When you want to apply Antrea Network Policies on VMs, you register the VM as an ExternalEntity in Antrea and then you apply the Antrea Network Policy, and in its appliedTo you use an externalEntitySelector with matchLabels. So the test was to check whether Antrea Network Policies can be applied to a large number of VMs, with the VMs registered in Antrea as ExternalEntities. That's why.
A: Okay, so that was specifically in the context of the Nephe project?

B: Yes.
A: Thanks Kumar. If there are no other questions, I think, Chen, we can move on to you; there were three GitHub issues you wanted to discuss.
E: Yes. Antonin, can you see my screen? It should be my browser.
E
Then,
let's
I
want
to
discuss
three
user
reported
issues
and
some
of
them
future
questions,
and
some
of
them
actually
is
problems.
In
answer,
let
me
start
with
the
first
one:
it's
supporting
DSR
for
loan
advance
and
I
have
prepared
a
Thrice
to
introduce
the
background
and
potential
design.
Let
me
show
my
screen.
E: The motivation is that currently AntreaProxy is implemented in a symmetric mode, in which the traffic always flows along the same path for both the request and the response, while traditional load balancers have a mode called DSR, direct server return.
E
It
works
in
the
way
that
the
the
request
will
be
learned
best
by
the
loan
balancer
to
the
back
end,
where
some
special
mechanism,
some
implementation,
implementation,
May,
might
just
change
the
destination
Mac
to
the
back
end
servers
and
doesn't
change
the
IP
address
and
the
higher
payload
and
then
the
back
end
server
could
return
to
the
client
directly
because
it
has
the
original
Source
IP,
the
client
IP,
and
here
it
also
knows
the
the
server
IP
the
client
was
accessing
and
some
implementation
might
use
tunnel
or
or
some
other
encapsulation
mechanism
like
they
could
just
encapsulate
the
request
in
a
in
your
tunnel,
or
they
could
do
some
dnet
in
the
loan
balancer,
but
they
could
encapsulate
the
the
original
server
IP
in
the
header
of
the
encapsulation
packet
and
forward
it
to
the
server
and
then
the
server
anyways
knows
the
original
server
IP,
the
client
that
was
accessing.
E: So the server can still respond to the client directly, getting the server IP either from the inner packet directly or from the encapsulation header. The typical traffic flow when running in DSR mode is like this, and the advantage of this mode is that, as we see from the picture, there is one less hop in the data path: the response doesn't have to go through the load balancer.
E: It is typically useful for traffic which has lightweight requests but heavy responses. If we imagine there are two backend servers and a client requests some payload from the service, the two backend servers can talk to the client directly, bypassing the load balancer, so the total bandwidth will be the sum of both backends' bandwidth. Another benefit of this mode is that the server can see the client IP.
E
The
can
see
the
original
client
IP
because,
typically
in
this
mode,
the
server
needs
to
respond
to
client
directly,
so
loan
balancer
don't
doesn't
do
masquerade
for
the
client,
ID
and
I.
Think
in
kubernetes.
The
typical
usage
of
this
mode
could
be
the
in
cluster
loan
balances.
I.
Think
typically,
external
advances
doesn't
need
this
because
it
might
already
performs
DSR
because
in
Q
Plus
implementation
it
just
assumes
that
the
external
advancer
has
already.
E
The
Ingress
traffic
to
the
node
and
the
destination
IP
will
be
the
service
run,
bus
IP.
So
typically,
you
might
already
in
working
the
DSR
mode
by
by
the
the
Instagram
external
romancer,
changing
the
destination
Mac
when
it
does
the
runtime
scene
and
typically
the
external
loan
balancer,
should
leverage
the
external
traffic
policy
mode
local.
E
And
based
on
the
how
the
DSR
mode
works
in
typical
learn,
balances,
I
found
that
in
current
and
sales
design
it
should
be
not
hard
to
implement
Implement
DSR,
as
described
in
the
future,
when
the
client
wants
to,
assuming
that
we
are
using
the
until
service
external
IP
future
to
announce
the
external
IP
of
the
of
a
load
answer.
E
Implementation,
we
will
just
deny
the
request
and
send
it
and
the
ton
of
the
post-dnet
and
that's
not
the
request
and
send
it
to
the
backend
Port.
So
the
back
end
port
could
respond
to
the
request,
but
because
the
design
destination
IP
will
be
the
getaway
IP
of
this
node.
So
it
will
be
forwarded
to
this
node
and
the.
E
Then
respond
to
the
client
with
the
SR
mode.
On
this
note
on
the
Ingress
node,
we
we
don't.
Do
we
don't
change
the
IP
of
the
request
by
the
way,
still
select
one
backend
endpoint.
E
We
could
choose
to
change
the
back
case
to
both
node,
node-based
or
and
pod
based,
and
we
could
aggregate
the
number
of
the
ports
on
a
node
and
improve
the
weight
of
a
packet.
Then,
if
the
package
is
the
the
the
package
that
is
node
based,
we
will
not
do
any
IP
translation,
but
we
will
just
tunnel
the
request
to
the
selected
backend
Port
back
in
the
node.
E: On the backend node, we select local Pods only, and after we select one backend Pod, we perform DNAT on this node and forward the request packet to the selected backend Pod. Now the backend Pod receives the traffic with the original source IP and with the destination IP set to itself, so it will respond to the client IP directly, and when the response packet reaches Open vSwitch, we will do...
E
So
the
package
will
be
the
the
destination
IP
of
the
response
will
be
changed
to
the
original
Source
IP
and
be
forwarded
to
the
client
directory
like
this
way
before
it
reaches
the
openly
switch.
E
The
response,
Source
IP,
will
be
the
port
IP
and
the
server
IP
will
be
the
current
IP
and
after
performing
anti-net
The
Source
will
be
translated
to
the
service
IP
and
the
server
IP
web
still
be
with
client
IP,
so
the
client
can
receive
the
response
and
I
have
wildfired
with
some
hacker
flows
and
basically
it
works,
but
I
also
found
some
two.
Some
some
some
caveats
and
the
first
and
is
is
very
typical
in
with
DSR
mode.
E: ...but it's okay for this node. Also note that in the host network namespaces of the ingress and backend nodes, the connection will be marked as invalid in conntrack, because each of them only sees packets of one direction; but that doesn't affect the function, because in the Open vSwitch rules Antrea installs, we don't check the connection state when forwarding between these two devices. However, there is a problem in the Open vSwitch of the ingress node.
E: After the first request is committed, this would cause different backend selections when we load balance the following packets, because normally we leverage conntrack to remember which backend node or Pod we chose for the connection, and now we cannot leverage that. After some investigation, I found that the learn action was provided before Open vSwitch supported conntrack, and it can provide similar functions, so I tested it.
E
How
tested
some
flows,
and
basically
it
works
that
we
we,
for
example,
this
flow
means
that
we
will
generate
one
along
the
flow
every
time
we
met
a
new
connection
and
it
is
a
source
Port
based
which
is
different
from
the
searching
Affinity.
We
have
implemented
for
services
and
then
we
need
to
set
the
theme
idle
timeout
to
a
small
value
so
that,
after
the
connection
close,
we
can
the
open
switch
could
remove
the
loan
flow
quickly
instead
of
waiting
for
the
idle
timeout.
Otherwise
there
will
be
massive
or
for
open
floors.
E
Even
if
the
connections
are
all
closed,
gracefully
and
I
hacked
the
data
pass
with
this
example.
Flows,
for
example,
I
ordered
the
the
the
package
of
a
group
to
actually
this
doesn't
really
change
anything.
E
The
most
important
change
is
that
in
endpoint,
the
net
flow
instead
of
Performing
the
net
I
I,
changed
it
to
not
doing
that,
but
in
service,
in
less
rate,
forwarding
table
I
will
check
the
red
mark,
which
has
set
during
the
in
the
series
lb
and
the
end
quantity
net
and
I
I
forward
the
package
to
another
note
where
the
tunnel
and
the
setting
the
tunnel
IP
to
the
backend
nodes,
IP
and
then
another
important
flow
is
that
we
should
stop
dropping
the
invalid
package,
but
this
could
be
a
down
conditionally.
E: For example, I set up a three-node cluster and one Service of type LoadBalancer, and its external IP is this one. Currently the ingress node is assigned to kind-worker, but the backend Pod is on kind-worker2. That's the setup. I also have two terminals; I will enter the network namespaces of the two nodes — the first one is kind-worker and the second one is kind-worker2.
E
yeah,
so
So
In
traditional
way
the
packets
will
flow,
both
the
the
request
and
the
response
will
flow
through
the
Ingress
node
and
we
are
not
appear
on
the
Ingress
node
physical
interface,
because
it
will
always
be
tunnel
to
the
in
Ingress
node.
Instead
of
responding
to
client
directory.
E
Let's
take
a
twine
and
we
can
see
that
the
pack
we
can
see
a
request
and
response
of
the
Ingress
node
and
and
the
back
end
node.
There's
no
package
showing
on
the
physical
interface
and,
let
me
add,
some
flows
to
make
the
DSR
Mode
work.
E
E
E
E
E
We get around 20 percent higher throughput with DSR mode, but there is also a problem: if you look at the test command I used, I used -k to keep connections alive, so that subsequent requests reuse the same connection.
E
This
is
because,
if
I
don't
use
keep
alive
under
every
request,
you
you
a
new
port
and
finally
it
the
performance
became
worse
with
DSR
mode
and
country
is
still
investigating
the
region
behind
it.
I
guess
it
might
be
related
to
two
times.
We
need
to
commit
a
connection
two
times
with
DSR
mode,
but
with
all
we
we
need
to
do
two
runs
of
learner
balancing
with
DSR
mode,
but
with
condition
mode.
Only
the
Ingress
node
will
will
do.
Learn.
Balancing
so
I
understand
even
skating.
E
This,
but
I
think
the
result
of
latency
is
actually
promoting
promising,
because
it
shows
that
with
less
jump
the
the
current
could
receive
and
the
response
faster
and
it
the
total
bandwidth-
should
be
also
benefit
from
this
mode
as
well
and
get
back
to
the
presentation.
E
I
listed
some
to-do's
regarding
this
feature
is
one
is
to
measure
the
latency
in
Wireless
mode
as
I
mentioned.
Currently,
it
works
better
with
a
single
connection,
but
it
works
worse
with
new
connections.
If,
if
every
request
use
a
Newport,
Newport,
Source,
port
and
I
also
want
to
investigate
how
much
benefit
it
could
bring
with
to
the
throughput
foreign.
E
I
have
tested
it
with
chair
service,
external
IP.
It
basically
was
and
I
also
plan
to
test
it
with
other
popular
in
cluster
loan
bands
like
meta
B,
to
see
if
there
is
any
gap
between
the
implementations
and
because
of
the
the
not
ideal
data
with
new
connections.
I
I
want
to
see
that
whether
we
could
improve
the
flow
to
to
make
it
always
work
better,
regardless
of
the
mode
of
the
traffic
I
I
guess
it
might
be
related
to
the
two
rounds
of
learn
balancing.
A: When we implemented session affinity in AntreaProxy at the very beginning, there was an issue, because OVS has a delay to install a flow after the learn action — it was at most 500 milliseconds, and then we changed a config parameter in OVS to reduce it to 200 milliseconds. What that means is, in the case of session affinity, if there was a new connection from the same Pod within 200 milliseconds, there was no guarantee that it was going to go to the same backend for the Service, because 200 milliseconds is how long it can take now in Antrea to install the flow after the learn action. That's because it needs to go to the OVS userspace, and then the OVS userspace needs to update the data plane. So, using the learn action and knowing about that limitation, do you think that will be an issue?
E: Yes, that will be a problem, and thanks for mentioning this. I suspect the current bad data with new connections might be related to it, because in the test flows I have only one backend, so it will be chosen anyway, but the following packets may still have had to go through generating a learn flow.
A: Yeah, I'll put the link in the chat. I think it's kind of a design decision in OVS, where learning was mostly used for MAC address learning, and for that use case having the delay is not an issue, because until you actually learn the flow in the data path you can just keep broadcasting the L2 traffic; but for our use case it could be an issue.
E
Then
we
either
need
to
a
further
investigate
whether
we
could
make
a
CT
States
available,
even
with
a
single
direction
of
traffic,
or
we
need
to
check
whether
there
is
a
workaround
for
this
loan
action,
or
perhaps
we
could
use
some
consistent,
Hashi
or
some
other
magnetism
to
select
the
back
end
to
guarantee
that
it
will
always
be
the
same
packet
with
the
same
with
with
one
file
typo
metadata.
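As a sketch of the hash-based alternative mentioned here: hashing the 5-tuple gives a deterministic, stateless backend choice with no learn-flow delay. Note this is plain hashing, not a full consistent-hashing scheme like Maglev (which would additionally minimize remapping when the backend set changes); all names below are illustrative, not Antrea code.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pickBackend hashes the connection 5-tuple and maps it onto the backend
// list, so packets of the same flow always land on the same backend
// without any per-connection state in the switch.
func pickBackend(srcIP, dstIP, proto string, srcPort, dstPort uint16, backends []string) string {
	h := fnv.New32a()
	// Feed the full 5-tuple into the hash; any stable encoding works.
	fmt.Fprintf(h, "%s|%s|%s|%d|%d", srcIP, dstIP, proto, srcPort, dstPort)
	return backends[h.Sum32()%uint32(len(backends))]
}

func main() {
	backends := []string{"node-a", "node-b", "node-c"}
	b1 := pickBackend("10.0.0.1", "10.96.0.10", "tcp", 40000, 80, backends)
	b2 := pickBackend("10.0.0.1", "10.96.0.10", "tcp", 40000, 80, backends)
	fmt.Println(b1 == b2, b1) // same 5-tuple, same backend
}
```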
C: Hey, I have a question. Normally it will improve the response time, right — from the server to the client directly, bypassing the load balancer. But what about the firewall and other things which are also there alongside the load balancer? Those will also get bypassed while responding.
E: This DSR is in-cluster, so whatever external firewall is there should still behave the same, I guess. But if you mean the NetworkPolicies we have, I haven't thought about whether it will affect policy enforcement.
H: I have a question on NetworkPolicy too. Previously, the source IP — the client IP — would be SNATed, and if we used an ACNP to restrict which source IPs can access a specific set of Pods from the external network, that was not possible, because the source IP would be changed. But this time, since it keeps the source IP, does that mean an ACNP can be used in such a case to block, for example, some source IPs from reaching the backend Pods of a LoadBalancer Service?
A: Sorry, I have a super quick follow-up question about what we discussed before. You said that for invalid connections, the CT mark and CT label are not available in OVS. But can you remind me why we need the CT mark and CT label on the left node here, the ingress node?
E
Let
me
think
about
it:
I,
remember,
I.
Remember
we
I
I
for
the
following
package,
because
we
don't
perform
net
right,
so
the
source,
IP
and
the
destination
IP
are
the
same
when
they
are
forwarded
out
of
the
Ingress
node.
But
we
have
to
choose
one
backend
node
right
and
to
remember
the
black
and
note
we
have
to
persistent
the
node
information
somewhere.
So,
okay,
the
connection
storage
is
a
city
Mark
and
the
city
label.
That's
why
I
need
it.
Oh.
A: If there are no other questions — we have about 10 minutes left, so I think, Chen, you can probably cover one more issue; I don't know if we're going to have time to cover both of them.
E
Okay,
I
will
try
my
best
I
think
the
following
two
I:
don't
have
a
real
Pro
Photo,
but
I
just
want
to
give
some
opinions
from
the
community
and
the
first
of
all.
Some
of
you
have
a
heard
of
that
after
enabling
their
policy
login
and
if
there
are
many
connections,
will
hit
the
policy
the
it
will
cause
package.
Jobs,
executively
I,
think
that
was
not
an
intention.
E
According
the
discussion,
but
currently
the
issue
exists
and
we
have
to
tell
users
to
not
enable
Network,
say
login
when
it
could
be
hit
by
many
traffic,
so
I
I
think
that's
not
an
idea
and
the
Anarchy.
That
is
a
creative,
critical
urgent
to
issue
so
that
we
can
prioritize
the
things
in
one
daughter,
13.,
I'm,
not
sure.
E
If
we
already
conclude
the
one
solution
or
who
is
going
on
working
on
that
I
see
that
there
has
been
some
discussion
between
Anthony,
Wiki
and
away
so
do
they
want
to
how
was
some
discussion
here
or
who
will
be
track?
Checking
this
issue.
E: However, you know that because we don't have control over the traffic accessing the node or generated from the node — since we don't bridge the physical interface to the Open vSwitch bridge — currently this is not possible. But I wonder, given that we have implemented the ExternalNode feature and we have a bridging mode in AntreaIPAM, whether it is possible that, even if we don't run AntreaIPAM, if a user sets enableBridgingMode to true...
E
We
could
just
Bridge
the
open
wave
switch
and
the
physical
interface
to
open
with
switch
bridge
and
and
then
it
is
possible
that
we
could
enforce
no
policies,
like
the
external
node
feature
and
note
that
this
this,
the
regular
traffic
is
still
is
therefore
working
but
actually
getting
phase
and
the
will
be
netted
by
the
host
Network
when
it
is
accessed
in
the
external
work.
It
still
works
in
layer
three
way,
not
the
L2.
Even
the
interface
is
Rich,
but
we
just
have
a
control
on
the.
E
Traffic
phone
and
to
the
node
in
node
physical
interface,
so
I
wonder
whether
this
implementation
makes
you
make
sense
to
you
and
whether
it
was
to
implement
the
future
in
some
upcoming
release.
Or
we
should
deny
such
a
request.
D: Earlier we thought about using TC to redirect traffic — could that be a possible approach? Actually, I forgot what we discovered with the TC approach when we did the AntreaProxy proxyAll feature.
D: Yeah, I think with moving to the physical-interface bridge we talked about before, the major concern I have is that it means we have to manage the physical interface configuration. Maybe in some cases it's easy, but it's hard to support all the cases.
E
Yeah
yeah
I,
understand
that
and
the
I
have
one
condition
for
this
feature
is
that
this
will
not
work
by
default,
because
we
already
have
one
configuration
enabled
bridge
mode.
Yeah
I
mean
that
only
if
user
enable
this
flag,
we
will
approach
the
interface
that
means
for
most
users
they
will
not
be
affected,
and
only
for
users.
They
do
want
to
enforce
net
policies
to
node
interfaces.
E: I just wonder whether we should keep tracking this, because users are asking when we could have it — or whether we don't want to support it at all.
D: I think the use case is worthwhile in my mind, but I don't know how we want to prioritize it or any particular approach we want to take.
E: Okay, I think time is up, and I have no more topics.
A: All right, thanks Chen for the comprehensive walkthrough; we're out of time. So if that's everything and there are no more questions, I'd like to thank everyone for joining, and we'll see everyone in two weeks.