From YouTube: Antrea Community Meeting 07/27/2020
Description
Antrea Community Meeting, July 27th 2020
A: Okay, so welcome to the July 27th meeting for Project Antrea. We have a pretty packed and interesting agenda tonight. First, I think Jake is going to present a tool he's been working on called antctl query — it's an extension to the antctl command-line tool. After that we're going to have an update on Antrea-specific network policy metrics by Chen, who's been working on this feature. Then we're going to discuss the performance of Antrea Proxy when it comes to ClusterIP service load.

Jake is a student at Brown University in computer science, and this summer he spent some time working on Project Antrea. He's going to show what he's been working on, and he is preparing a pull request for this feature. So Jake, you can go.
B: Hello everyone, nice to make your acquaintance. So, as Antonin was saying, I'm an intern this summer; I've been working on Antrea, building antctl query, which is an extension to the antctl tool.
B: I actually have a really nice demo that I prepared for the poster presentation — for those of you who are interested, the intern poster presentation will be on the 30th, and I can send you some information for that. Anyway, just give me one moment and I can pull up the video. Also, if the video doesn't really work when I'm sharing my screen, I have a OneDrive version, so I could share that with you guys. But just let me know if there are any problems with audio or anything like that.
A: Choppy to you? No — you're right, Jake. I thought you had some slides that you could show and talk over; I think that would be better than just playing the video.
B: Alright, so this was my poster for the poster presentation. In the first section — you guys all know about Kubernetes and how network policies apply to Antrea, so I can skip over that for now and instead talk about the problem that we're trying to solve with antctl query.
B: So, as you scale clusters to very large sizes, and you have perhaps hundreds of network policies applied to hundreds of pods, you develop a very, very complex networking structure with a very complex set of rules. To put that in perspective, say you would like to investigate the network policies which are being applied to one certain endpoint — perhaps for a security diagnosis, or really for any reason.
B: Well, every network policy has the ability to select that endpoint, and for each network policy there is a list of selectors which have the ability to select it: either the network policy could apply to the endpoint, or it could reference it in an egress or ingress rule. So you can see that if you have a very large number of network policies, then to actually examine which network policies are relevant to a certain endpoint, you have to iterate through the entire list of network policies, and as a user that could eventually be a burden. So our solution is rather simple, and it draws inspiration from some of the other relevant networking plugins for Kubernetes. As we've been discussing, the solution is antctl query, and the idea of antctl query is that it's a more sophisticated tool to track relevant resources.
B: So the one particular case we're interested in is being able to track the network policies relevant to a network endpoint, and for the first iteration of the design, a network endpoint is simply a pod in a namespace.
B: So the way it works — you can see from this sample output what the results of a simple endpoint query are, and I'll show you the whole demonstration of the tool shortly after I go over this. Given a pod and a namespace, you are able to see the policies which apply to that endpoint and those which reference the endpoint in an egress or an ingress rule. So I'll show you — hopefully the audio isn't as choppy as it was before.
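The classification Jake describes — given a pod's labels, partition policies into those *applied* to the endpoint and those *referencing* it from an ingress or egress rule — can be sketched roughly as follows. This is a simplified illustration, not Antrea's actual implementation: the selector matching here only handles exact label key/value pairs and ignores namespace selectors and match expressions.

```python
def match(selector, labels):
    # a selector matches if every key/value pair appears in the pod's labels
    return all(labels.get(k) == v for k, v in selector.items())

def query_endpoint(pod_labels, policies):
    # partition policies into those applied to the pod and those
    # referencing it from an ingress or egress rule
    applied, ingress_refs, egress_refs = [], [], []
    for p in policies:
        if match(p["podSelector"], pod_labels):
            applied.append(p["name"])
        for i, rule in enumerate(p.get("ingress", [])):
            if match(rule["from"], pod_labels):
                ingress_refs.append((p["name"], i))  # i = which rule selects the pod
        for i, rule in enumerate(p.get("egress", [])):
            if match(rule["to"], pod_labels):
                egress_refs.append((p["name"], i))
    return applied, ingress_refs, egress_refs

# a toy policy shaped like the one in the demo (names are made up)
policies = [{
    "name": "allow-access",
    "podSelector": {"app": "nginx"},
    "ingress": [{"from": {"access": "true"}}],
    "egress": [],
}]

print(query_endpoint({"app": "nginx"}, policies))    # applied, no rule references
print(query_endpoint({"access": "true"}, policies))  # referenced by ingress rule 0
```

The naive alternative — what the query command avoids for the user — is exactly this full scan over every policy for every question asked.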
B: Now I'll show you the actual demonstration of the tool, which I was able to show in the poster session, and then I can go over the steps.

Okay, so here we have a pretty simple setup. Sorry — is my video blocking your view here?
B: Oh yeah, I'll move that in a second. Okay, so we have a pretty simple setup here: we have a single service running just a regular nginx server, and we have a network policy — which you'll see in a second — that I've already applied to the cluster.
B: So the important thing to note is that the ingress rule specifies the label where access is set to true, and the pod selector selects app set to nginx. And I'll just go ahead and tell you that this nginx pod has that label, app=nginx.
B: All right, so are the audio and the visuals coming out a little bit there? What I'm doing is creating a busybox container running with access set to false, and what we're going to do is see the results of antctl query endpoint for both the nginx server and for busybox.
A: Can you stop the sound in the video? It's right there — just click on it and remove the sound, otherwise we'll keep hearing it.
B: Okay, and now we see the result when we query busybox: because the access label just defaults, we get no applied policies, and no egress or ingress rules which contain policies referencing busybox.
B: I should be querying it now — yeah, okay. And we see here that now we have busybox specified in the ingress rule, and this is useful because now we can see the return structure for ingress rules.
B: Yeah, so one important differing field is this index field; it basically helps the user see which rule of the network policy actually selects busybox. Since there's only a single rule and it's at the zeroth index, index is set to zero.
B: Yeah, so that wraps up a brief demo — I'll be sending a longer demo to Antonin shortly. Going on from there, I'll talk about future work. Some future work that we're considering is providing support for querying network policies, and you can sort of see that as working in the reverse direction, and as more abstract. Here, let me exit out of this video, okay.
B: And having a more general selection mechanism, so that instead of having to give just a pod and a namespace, you have the ability to say "return a list of endpoints." We also have some more ambitious visions that I think might be cool, like using this as a general policy filtering mechanism for other sorts of visualization. So that's how the tool works.
B: Now — Antonin, do you think it'd be a good idea to dive into the implementation now, or could that just be something that is reviewed?
A: Oh, we will let people review that when you open your pull request. Let's take five minutes for questions.
E: So I'm not sure I understand correctly, but basically — are we supposed to query the policies applied to an endpoint, or query the policies that reference the endpoint as a source? That's the important question.
B: Are you asking about querying network policies as opposed to querying endpoints?
B: Okay, yeah, that's a good question. So a Kubernetes network policy is able to reference endpoints in three different places. I think the first is just...
B: Actually, I could pull up the demo network policy here, so I can compare it to the result that I showed you.
B: Right, so you can see that any network policy is able to reference an endpoint in three different places. The first is the actual pod selector — which pods does this policy actually apply to — and that's determined by the labels in the pod selector. Then you could reference an endpoint in an ingress rule, which specifies communication from some pod, and in an egress rule, which is the opposite, to some pod. So that's what I mean by "a network policy is able to reference an endpoint," if that makes sense.
E: We already have a command to return the policies applied to a pod. So probably, I think, what you add is the two referencing cases — the "from" and the "to".
E: So, in the implementation, how do you do the resolving of the reference? Do you create some state in the controller, or do you compute it from the labels on demand?
B: Yeah, so you guys will be able to see my full implementation as soon as I open up the PR, but at a high level, what we do is add a new indexer to the AppliedToGroup store and the AddressGroup store.
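At a high level, an indexer like the one Jake mentions inverts the group stores so that answering "which groups contain this pod?" is a single lookup rather than a scan. The sketch below uses assumed, toy data shapes — Antrea's real stores are client-go-style indexers over AppliedToGroup and AddressGroup objects, not plain dicts:

```python
from collections import defaultdict

# toy groups: each maps a group name to the set of pods (namespace/name) it contains
applied_to_groups = {"atg-1": {"default/nginx"}}
address_groups = {"ag-1": {"default/busybox"}}

def build_pod_index(groups):
    # invert group -> pods into pod -> groups, so a pod lookup is O(1)
    index = defaultdict(set)
    for group, pods in groups.items():
        for pod in pods:
            index[pod].add(group)
    return index

atg_index = build_pod_index(applied_to_groups)
ag_index = build_pod_index(address_groups)
print(atg_index["default/nginx"])   # groups through which policies apply to the pod
print(ag_index["default/busybox"])  # groups through which rules reference the pod
```

From these group memberships, the policies that apply to or reference the pod can then be resolved without iterating over every policy.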
B: The thing is, I don't think it's usable by other CNIs, for the reason that we're using internal Antrea references.
A: Yeah, I think, Steve, that's a great question, because one could just write a tool which would connect to the Kubernetes API server, look up the full list of network policies, and then do some computation and answer those queries, right? But that would have required kind of replicating a lot of the logic that we already have written in the Antrea controller, in a way.
A: In Antrea that kind of stuff already existed, right, so we can quickly do those lookups. And I think another thing that we didn't mention is that we kind of want the tool to also work with the Antrea-specific network policies — like for different personas, right, so the cluster network policy and the Antrea-specific namespaced network policy — so yeah, the tool is going to work with those as well.
C: I'd suggest, maybe, when you submit this PR — I assume it's going to come along with some documentation — maybe put that in the design documentation, to explain it to others that follow. Because I can picture somebody going out there to evaluate Antrea versus other CNIs saying, "Oh, I see this feature in there — that would be valuable."
C: I wonder if it's in all the other CNIs too, but when you go through the backing description of your design and the reasoning that went into your decisions, it would become apparent to the evaluator that this is Antrea-specific, and they're likely to have that question in the back of their mind. So maybe answer it in the design docs, the documentation, or both. — Yeah, sounds good.
E: Yeah, I'm just thinking, if we really want, we could do this kind of policy analysis without installing the agent or switching the CNI plugin: we just run the controller, and then run antctl against it to analyze the corresponding policies, if you want.
A: Yeah, I mean, in a way we do leverage what's already in the Antrea controller. But if you were to build it from scratch, just as a generic mechanism for Kubernetes, supporting only Kubernetes network policies, you would build those maps from scratch — I think the logic...
E: Sure, one more comment here. I'm not sure if you already did some testing on this, but maybe you'd better check the overhead added to the controller if we add this.
B: Yeah, I believe I added several graphs, because memory was a concern for us, so we made sure to reflect that in the testing.
A: Yeah, so some exact numbers are in the issue on GitHub that Jake opened. But I think the memory footprint was around 10 to 20 percent of overhead from the extra indexer that we're building.
G: Okay, just one last question. I think I overheard that when you deployed all those pods, the nginx pod that you were deploying at the very beginning also had the access=true label — I don't know if I heard that wrong. But on the endpoint query for the nginx pod, on the ingress rule, I don't actually see the pod itself; since it's selecting ingress from that label, from my understanding it should also be selecting itself.
B: You're asking me if access is set to true for the nginx deployment?
A: Okay, thanks everyone — thanks, Jake, for attending, I know it's late your time, and thanks for the presentation. We're going to move to the next topic on the agenda, because we're running a bit out of time here: an update regarding cluster network policy metrics by Chen. So Chen, if you're ready.
D: Okay, I just created this issue last night and put all the design details there. It's actually a little different compared with the previous version: I assume the metrics data will not be persisted to a Kubernetes CRD, because of the performance impact.
D: But I can talk about the details later. Overall, the proposal is to collect and expose the statistics of network policies, and while working on this I'm thinking maybe not only for Antrea-specific network policies — we could also do that for Kubernetes network policies, because they follow the same model and only have a little difference in how they are enforced.
D: So it should be good to have it for Kubernetes network policies too. The metrics API will be exposed by the Antrea controller: the Antrea controller is responsible for collecting the metrics from every agent, summing them up, and serving them through its metrics API. Then monitoring solutions and users can access this data via the Antrea controller metrics API, and it could also be accessed through antctl, getting the metrics for a network policy, making it easy to use.
D: I'm thinking about one minute by default for the collection interval. For the detailed design, and for the scalability consideration: I'm assuming we want to support 100,000 policies and 1,000 nodes, and assuming each node will have 1,000 network policies applied.
D: So if the metrics are collected every minute, it means each agent will report 1,000 metrics per minute, and the Antrea controller will have to sum up 1,000,000 metrics for 100,000 individual network policies per minute. So this sounds like no performance issue when collecting and aggregating the data at that scale — but for storage:
D: Assuming we want to persist the data to Kubernetes CRDs, then we have to perform 100,000 API writes per minute. That means more than 1,000 API writes per second on average, and even if we consider making the interval up to 10 minutes, it could still mean more than 100 API writes per second.
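Chen's back-of-the-envelope numbers check out directly. The figures below are the ones stated in the proposal (1,000 nodes, 1,000 policies applied per node, 100,000 distinct policies):

```python
nodes = 1_000
policies_per_node = 1_000      # metrics each agent reports per interval
distinct_policies = 100_000    # individual policies the controller aggregates

# aggregation load: every agent reports once per minute
metrics_per_minute = nodes * policies_per_node
print(metrics_per_minute)             # 1,000,000 metrics/min summed by the controller

# storage load if each policy's stats were persisted with one CRD write
print(round(distinct_policies / 60))   # ~1,667 writes/s with a 1-minute interval
print(round(distinct_policies / 600))  # ~167 writes/s even with a 10-minute interval
```

Both write rates are well beyond what it is reasonable to impose on the Kubernetes API server for metrics, which is the argument for keeping the aggregated state in controller memory instead of CRDs.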
D: So I think it's not reasonable to persist the state to Kubernetes CRDs. And actually, whether we persist it in a CRD or serve this API from the Antrea controller, there is no difference from the user perspective, because either way they get this data from an API. The only difference is whether the data can be lost: if we persist it in a CRD, then after the Antrea controller restarts we can reload the state and keep counting.
D: And for metrics collection by the agent: each agent is responsible for collecting metrics from the OpenFlow stats. In the current network policy implementation, each network policy rule gets a unique conjunction ID, and each flow has two stats — packets and bytes — but the current flows only count the first packet of each session.
D: As we can see in the conjunction match flow, we only match one direction of the session: it always matches the sender's address as the source address and the receiver's address as the destination address.
D: So the following packets are not hit by this flow; they are hit by a generic rule that matches the packets of established sessions. One possible solution we have considered is to use ct_nw_src, ct_nw_dst, and ct_tp_dst — the conntrack source and destination fields — as conditions of the conjunction match flow, like this. This could solve the above problem, because the ct_nw_src field does not match the source address of the packet, but the conntrack source address of the session.
D: So even the reply packets will hit this rule. But we also see some drawbacks in this solution. First, the ct_nw_src field requires Open vSwitch 2.8, and I think some distros don't have it yet. Second, in the ingress table we are not using the destination address as a condition, but the OF port — so if we want to continue using the OF port, it is impossible to leverage ct_nw_src or ct_nw_dst. And the third one was raised by Srika.
D: I think he mentioned that currently we only need to match each session once, because the following packets can just hit the generic flow; but if we change the flows in this way, the first packet of the reply direction will also have to go through the pipeline and match the conjunction condition flows. The overhead should be minor, but it still has some overhead.
D: So I had a discussion with Wayne, and she proposed to persist the conjunction ID into the conntrack label, and to have dedicated metrics collection flows match the conntrack label. The flow will be like this. In this way, we don't need to change the original network policy flows a lot: we just need to resubmit the packets to another metrics collection table, and the other change is that we need to load the conjunction ID into the conntrack label.
D: Then, in the metrics collection table, we can use the conntrack label as a match field, and we can use the ct state — new or not new — as another condition to distinguish the first packet of a session from the following packets. The stats for the new state will be counted as the session count, the sum of them will be counted as the packet count, and the sum of the bytes will be counted as the byte count.
D: Instead of the controller pulling data from the agents, I propose to let the Antrea agent push metrics to an API of the Antrea controller — similar to the current way the Antrea agent pulls the internal network policies from the Antrea controller API. In this way, the same authentication and authorization mechanisms, and even the TCP connection for the internal network policy API, can be reused.
D: And another consideration: each Antrea agent could restart, and it will reinstall all network policy OpenFlow flows, so the OpenFlow stats will be reset after that.
D: So if each Antrea agent sends its whole stats to the Antrea controller, and they are asynchronous, it is very hard for the Antrea controller to aggregate the whole stats, given that each agent could restart and reset its data individually. So in this proposal I want to make the Antrea agent calculate the incremental stats and only report the incremental stats to the Antrea controller. So the controller can — oh sorry, this is a typo.
D: The Antrea controller could then simply sum the data up, and it doesn't need to consider the agent restart case.
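The delta-reporting scheme Chen describes — each agent remembers the last snapshot it reported and sends only the difference, so the controller can blindly add contributions without knowing about restarts — might be sketched like this. It is an illustration with made-up names and dict-based state, not Antrea's actual code:

```python
class Agent:
    """Reports only the increment since the last successful report."""
    def __init__(self):
        self.last = {}  # policy -> (packets, bytes) at the previous report

    def report(self, current):
        # per-policy delta against the previous snapshot; a freshly restarted
        # agent has last == {}, so its first report equals the reinstalled
        # (reset-to-zero) flow stats it has accumulated since coming up
        delta = {}
        for policy, (pkts, byts) in current.items():
            lp, lb = self.last.get(policy, (0, 0))
            delta[policy] = (pkts - lp, byts - lb)
        self.last = dict(current)
        return delta

class Controller:
    """Blindly sums deltas; never needs to reason about agent restarts."""
    def __init__(self):
        self.totals = {}

    def collect(self, delta):
        for policy, (pkts, byts) in delta.items():
            tp, tb = self.totals.get(policy, (0, 0))
            self.totals[policy] = (tp + pkts, tb + byts)

agent, controller = Agent(), Controller()
controller.collect(agent.report({"np-a": (10, 1000)}))
controller.collect(agent.report({"np-a": (15, 1500)}))  # delta: +5 pkts, +500 bytes
agent = Agent()  # agent restarts: flow stats reset, snapshot starts over
controller.collect(agent.report({"np-a": (3, 300)}))
print(controller.totals["np-a"])  # (18, 1800): earlier counts survive the restart
```

As Chen notes later in the discussion, traffic hitting the old flows during the agent's downtime is the one thing this scheme cannot recover.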
D: And because the policies could have the same name and namespace, they could conflict with each other, so I added a type field here to distinguish them, and the stats include packets, bytes, and sessions. The API endpoint doesn't need to follow the Kubernetes API pattern — actually, we cannot follow it, because we want to push the stats in batches, not by individual item.
D: Otherwise there would be API overhead. So each agent will report a batch of network policy stats every minute via this API, and then the Antrea controller will get these stats, sum them up, aggregate them, and store them in a cache in memory. And there is another group of APIs — I'm calling them the metrics API for now.
D: This API will follow the Kubernetes API pattern. It could support the LIST and GET verbs, and we could also register it as a Kubernetes APIService, so it could be accessed through the Kubernetes API.
D: So the API group for now, and the endpoints, will be like this. And I still have an open question: should we have separate API endpoints for Kubernetes network policies, cluster network policies, and Antrea network policies? The reason is, if we just use one endpoint, then it is difficult to have RBAC for this resource — you know, the cluster network policy is cluster-scoped.
D: So this is a question we could discuss. After having this, users can access the network policy metrics via this metrics API, and they can also use antctl to get the metrics, by specifying a network policy name and a namespace, or by not specifying a name to get the list of metrics. That's all for the detailed design — any questions?
G: Chen, I have a very high-level question — I don't know if it's right, though. When I look at this, I feel like your design basically has the statistics be sort of stateless when published over the API channel between the client — the agent — and the controller.
G: So I see that on the agent side, when we watch for AddressGroups and AppliedToGroups, for example, we sort of tend to disconnect and reconnect and everything. If the metrics you publish to the API server are sort of stateless, how would you guarantee the increments are added correctly — meaning we don't miss any increments or double-count any increments?
D: This is not a long connection — well, the underlying TCP connection is a long connection, but the HTTP connection is short. It just reports the data once per minute, and that sort of disconnects the HTTP connection.
D: So I assume that as long as the data is pushed successfully, we can consider it delivered; the agent can refresh its cache and use the latest value as the base of the next round.
G: So if that's the case — a new, fresh agent will install new flows for the network policies, whereas the old flows will sort of just be hanging there, right? The new agent will basically not be looking at these old flows, and in fact these old flows might still be getting traffic hits and whatnot during the agent downtime. So I'm thinking those will be the things that we're definitely going to miss if we have a stateless design.
D: For the first case, I'm considering that when the agent comes up, it should take a snapshot of the current stats and use it as the base of the incremental data; after a period, like one minute, it should collect another time, calculate the incremental part, and report the incremental part only. So yes — the stats between the time it goes down and the time it comes back up, within that window, will be lost.
D: We don't have a good way to distinguish whether we have reported them or not before, right? And for the other case it is actually the same, because the snapshot will have all-zero values for all network policies, and when it collects the metrics the second time it can calculate the incremental part.
A: All right, thanks for the presentation; please comment on the issue if you have any follow-up questions. Okay, so...
A: We have ten minutes left, sorry, so we may not be able to get to the end of the agenda. But the next item is to discuss the performance evaluation of the Antrea Proxy, and I think that's Chen again.
D: Okay, thank you, Antonin. I did some performance tests in my private test bed. It's KVM-based, and I don't think the absolute values are reliable, but we can tell the difference between the two modes: one is when we use kube-proxy to proxy the access to a service, and the other is when we have OpenFlow proxy the service access. And I have two tables.
D: One is for the intra-node case. As you can see from the pictures, the TCP stream throughput in the kube-proxy case decreases a lot when comparing pod-to-pod with pod-to-service. I think it is because in this mode we have to route the traffic to the host network namespace once, and then the packets have to go through the iptables rules in the host network namespace and also do a lookup.
D: So this is expected. And when I add a lot of backend services — kube-proxy and Antrea Proxy then create a lot of iptables rules and OpenFlow flows — the influence on the performance also shows. For the kube-proxy case we can see that, along with the increase of the service number, the throughput decreases a little, but for the Antrea Proxy case the throughput is consistent regardless of the number of services. And another metric is TCP CRR:
D: ...repeatedly connect, send a request, get the response, then destroy the connection and create another new connection — so it's for the short-connection case. When kube-proxy is used, we can see that the performance obviously decreases when the number of services increases, and for the Antrea Proxy case it is consistent. By the way, when I'm talking about kube-proxy, I mean the iptables mode of kube-proxy.
D: I think the performance gap is because iptables has to do a linear search of the rules to select the correct DNAT destination, while for OpenFlow the match only needs to do a tuple-space search, which is hash-based, so the performance is consistent. And for TCP RR — that is a long connection, but in request-response mode.
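The contrast Chen draws — an iptables chain scanned rule by rule versus a hash-based lookup whose cost is independent of the service count — can be illustrated with a toy model. This is purely illustrative: real iptables traversal and OVS megaflow/tuple-space classification are far more involved than a list scan and a set lookup.

```python
import timeit

# a toy "service table": 10,000 ClusterIP services keyed by (IP, port)
services = [(f"10.96.{i // 250}.{i % 250}", 30000 + i) for i in range(10_000)]

# iptables-style: the service chain is scanned rule by rule until one matches
rules = list(services)
def linear_lookup(key):
    for rule in rules:          # cost grows with the number of services
        if rule == key:
            return True
    return False

# OVS-style: a hash-based lookup whose cost does not depend on table size
table = set(services)
def hash_lookup(key):
    return key in table

key = services[-1]  # worst case for the linear scan
t_linear = timeit.timeit(lambda: linear_lookup(key), number=200)
t_hash = timeit.timeit(lambda: hash_lookup(key), number=200)
print(t_linear > t_hash)  # the scan degrades as the service count grows
```

This is why, in the tables Chen shows, kube-proxy throughput sags as services are added while the OpenFlow-based path stays flat.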
D: So that was the intra-node case, and the other is the inter-node case. There I didn't see a considerable difference between these two modes. I think it is mainly because, for the inter-node case, the main bottleneck is not in the iptables or OpenFlow match — it's in the encap and decap — so both of them have similar performance in the different scenarios.
D: But recently we have been comparing that with other solutions — that is different from this case, though; that is about the performance gap between Antrea and other CNIs. For kube-proxy versus Antrea Proxy, I think there's no scenario where the way we integrate with kube-proxy will perform better than the way we have OpenFlow handle service access.
D: Yes, I guess that IPVS would have consistent performance regardless of the service number, based on its underlying theory, but I didn't test it; I can have a test on that too in the future.
F: Hey, Chen — with OVS offload, would the inter-node numbers be better, right, than...
F: With OVS offload, the bottleneck will still be the...
D: Maybe we could test it in a no-encap test bed, because then we could remove the bottleneck of encap and decap too. Yeah, that's a good idea. Okay.
F: Okay — like, with OVS offload, what would be the expectation? I mean, I didn't get it: is there any expectation of Antrea Proxy doing better, or...
D: Yeah, I would expect that Antrea Proxy will do better than kube-proxy in that case too.
A: Okay, thanks again, Chen, for the presentation; we're out of time. The one item we didn't have time to cover on the agenda was discussing whether the Antrea Proxy implementation should be the default starting with the next Antrea release.
E: On Slack — I think we are doing some tests around this; probably I'll share offline some...
A: Results, yeah. So basically, if anyone has deployed Antrea with the proxy feature at scale, I think we would appreciate that feedback, and if anyone runs into any issues that we should take into account when making the decision of enabling Antrea Proxy by default, that's also something we would like to hear about, so we can make that decision.
A: Okay, well, thanks everyone — thanks for joining, thanks Jake and Chen for presenting today, and enjoy the rest of your day, or the beginning of your night.