From YouTube: Antrea Community Meeting 05/18/2020
Description
Antrea Community Meeting, May 18th 2020
B
So I thought it's probably a good time to go over the changes that we made to the spec, slight changes. Initially there were a couple of things. The first thing was around categories, or tiers. In the initial proposal, what we said was that the Kubernetes network policies will always be evaluated at the bottom, the tiers would always be created above those policies, and then users would be able to create cluster policies and assign them to tiers, which would always be above the Kubernetes network policies.
B
The requirement which you mentioned was that the Kubernetes set of network policies should also be placed in a tier, one which sits at the bottom. So instead of just saying that they are outside of the tiers, they will also be part of the tiers, and that tier will be the default tier at the bottom. Users can also create Antrea cluster policies and set the tier as the default, so they will be grouped in the same tier as the Kubernetes policies.
B
The other thing was that we should also allow some way or fashion to create cluster policies, or likewise namespaced Antrea policies, below the Kubernetes set of policies in the same default tier. That was one thing that we are going to consider as a phased item: when we start working on the next phase of items, which would include the tiering concept, we will include this particular task as well.
B
The other user story that was brought up was that we would also want the ability to set 'to' fields in ingress rules and 'from' fields in egress rules. If you look at Kubernetes NetworkPolicy today, it just has an ingress section and an egress section. The ingress section talks about the traffic coming in to the appliedTo pods, so in that case the 'to' is always the appliedTo set of pods. But think of a case where you may be selecting something from outside of the cluster.
B
You may have VMs or nodes which have multiple interfaces, and you will apply the policies on those nodes or VMs, and you may want to select only a subset of the interfaces that are associated with that node. So you might apply the policy on the node, which has all the interfaces, but in the 'to' section you want to granularly select only certain IP addresses to be allowed. So, in order to accommodate that particular use case, there were a couple of approaches available.
B
One was that we generalize: we collapse the ingress section and egress section into a very generic rule section that would have to, from, and direction. But that would be a little too far away from the Kubernetes NetworkPolicy capabilities and might add a learning curve for users. So we decided on a compromise: we still have the from and to, and we still have the ingress and egress sections in the policies, but they are also generalized, in that they will refer to the same structure internally.
B
So instead of, as earlier, the ingress and egress sections pointing to an ingress rule struct and an egress rule struct, each with the corresponding fields in it, we decided to still have the ingress and egress sections in the policy spec, but they both map to the same struct, which would have both the from and the to. In the first phase we will only allow setting 'to' in egress rules and 'from' in ingress rules, but we can add more support later.
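As a rough illustration of that shared structure, here is a minimal Go sketch; the type and field names below are assumptions for illustration, not the exact Antrea API:

```go
// Package v1alpha1 sketches the shared rule structure; all names are
// illustrative, not the exact Antrea API.
package v1alpha1

// NetworkPolicyPeer selects traffic sources or destinations, for
// example by pod selector, namespace selector, or ipBlock (fields
// elided in this sketch).
type NetworkPolicyPeer struct{}

// Rule is the single struct that both the ingress and egress sections
// point to. In the first phase, From may only be set on ingress rules
// and To may only be set on egress rules.
type Rule struct {
	From []NetworkPolicyPeer `json:"from,omitempty"`
	To   []NetworkPolicyPeer `json:"to,omitempty"`
}

// ClusterNetworkPolicySpec keeps the familiar ingress/egress sections,
// but both are lists of the same Rule type instead of distinct
// IngressRule and EgressRule structs.
type ClusterNetworkPolicySpec struct {
	Ingress []Rule `json:"ingress,omitempty"`
	Egress  []Rule `json:"egress,omitempty"`
}
```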
B
Another comment that came up was that we should also have the ability to set appliedTo on a per-rule basis, which would override the policy-level appliedTo. But we haven't found really good use cases, or a good UX, for this particular field yet, so it is something that can be incrementally added later; for now it has been put on the back burner. So those were the major items that were discussed and updated since then. I think that's about it, and I'll just go through the phases.
B
I have mentioned that we'll do this in a couple of phases, but the initial phase will target a ClusterNetworkPolicy at the cluster scope, which will allow network administrators to create these security policies using label selectors. The policies apply using selectors, not yet by matching names; that could come in the next phase.
B
We'll be limited to pod selectors and namespace selectors in the first phase for the different rules, and we have the to and from. And we will provide the ability to prioritize a particular ClusterNetworkPolicy with respect to the others with a priority field, so the user can set those priority fields, and then we can again have an ordering with tiers later on; it will be reflected within that.
B
Here you can have tier-specific ordering, and then you have ordering between tiers as well, so you can have a hierarchy; that will come in the next phase. Then, the rules that you create in the cluster policy will be ordered, so the rules that are created at the top will have the higher precedence.
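To make those ordering semantics concrete, here is a hedged Go sketch of how effective precedence could be computed from tier, policy priority, and rule position; this illustrates the semantics being described, not Antrea's actual implementation:

```go
package main

import "fmt"

// precedence captures the ordering described above: tiers are compared
// first, then the policy's priority within its tier, then the rule's
// position inside the policy (rules written higher up match first).
// All names are illustrative, not Antrea's implementation.
type precedence struct {
	tierPriority   int // lower value means the tier is evaluated earlier
	policyPriority int // the user-set priority field on the policy
	ruleIndex      int // position of the rule within the policy
}

// before reports whether a is evaluated before b.
func (a precedence) before(b precedence) bool {
	if a.tierPriority != b.tierPriority {
		return a.tierPriority < b.tierPriority
	}
	if a.policyPriority != b.policyPriority {
		return a.policyPriority < b.policyPriority
	}
	return a.ruleIndex < b.ruleIndex
}

func main() {
	a := precedence{tierPriority: 1, policyPriority: 5, ruleIndex: 0}
	b := precedence{tierPriority: 1, policyPriority: 5, ruleIndex: 2}
	fmt.Println(a.before(b)) // true: the earlier rule in the same policy wins
}
```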
B
We will also target the actions of Allow and Drop, and I think in the next phase, since there was a request, we will add the Reject action, which would give some feedback to the client, and an action to jump to a different tier or a different policy; that will be a follow-up. And then we also allow ipBlocks in the policy. So this is the first phase, and then when we add the tiers we will add the ability to set 'to' in ingress and 'from' in egress, and selectors which would match those workloads directly.
B
B
Then we'll think about services and DNS-based selectors. So these are the changes now, and for this particular item there are two controller-side PRs up for review, and the agent-side work is also in progress; so that's the status. If there's any question on this, please go ahead. After the status, I added another kind of proposal for the naming of the subgroups of the API groups.
B
To start with, there isn't a formalized way of naming these things for the new CRDs that are coming in, for example for the in-progress work. So we could make a standardized way of doing this; I had one proposal. If there are no specific questions on the cluster network policy, then I can talk about that proposal.
D
Can you clarify if there is any case in which this makes sense for what we support today, in terms of the appliedTo field? Because today the appliedTo field is going to select pods, correct? And so if I define an egress rule and I want to set a 'from' field, what is that 'from' field going to look like, concretely? Because we basically have a single interface per pod right now.
B
So, for in-cluster workloads it probably doesn't really make sense; that's why we didn't add that in the first phase itself. I think it makes more sense when you're applying policies to something like nodes or external entities, something which is outside of the cluster. Another use case that I can think of is maybe workloads with multiple interfaces.
B
So, as part of the CRD creation, in the first step we will ensure that there is validation at the schema level: whenever a user creates a policy with an ingress rule and sets a 'to' field, that will be considered an invalid cluster network policy, to begin with, until we have that support, or until we are ready to formalize these new use cases. We will only relax that validation when we support it, so a user should not be able to input that at all.
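A minimal sketch of what that schema-level check could look like, assuming the illustrative rule shapes from earlier; the function name and error messages are hypothetical, not Antrea's validation code:

```go
package main

import "fmt"

// Minimal stand-in types for this sketch (see the Rule sketch above).
type peer struct{}

type rule struct {
	From []peer
	To   []peer
}

// validateRules sketches the check described above: until the new use
// cases are supported, a 'to' peer in an ingress rule, or a 'from'
// peer in an egress rule, makes the policy invalid.
func validateRules(ingress, egress []rule) error {
	for i, r := range ingress {
		if len(r.To) > 0 {
			return fmt.Errorf("ingress rule %d: 'to' is not supported yet", i)
		}
	}
	for i, r := range egress {
		if len(r.From) > 0 {
			return fmt.Errorf("egress rule %d: 'from' is not supported yet", i)
		}
	}
	return nil
}

func main() {
	bad := []rule{{To: []peer{{}}}} // an ingress rule that sets 'to'
	if err := validateRules(bad, nil); err != nil {
		fmt.Println("invalid policy:", err)
	}
}
```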
B
Okay, so if there's no other question, I had one more item. If you see, we are creating a lot of CRDs, and there are a lot of other proposals coming in, and most of those CRDs are sort of either just arbitrarily creating some subgroup, or a group for a subgroup, for a particular CRD, depending on what it is. There is no formal way of specifying which CRD should go in which subgroup, so I thought about it.
B
For the subgroups (I'm not talking about the entire API group, that's probably another discussion), what I was proposing is that anything security-related, the cluster policies or, let's say, namespaced network policies, anything that is related to network policies, selectors, and all those, would go together in one subgroup.
B
The tools can be put into a troubleshoot subgroup, and if in the future we talk about doing IPAM within Antrea, any of those IP pool or IP block CRDs that we might introduce could probably go in a network subgroup, instead of every CRD having its own subgroup. So I just thought maybe we can open up a discussion around this, maybe not today, perhaps at some other meeting, but it's something that we should think about.
B
I think those should definitely be kept separate; mixing them can also create problems generating code. That definitely needs to be separate: the ones which are not user-facing, the ones which are mainly used for internal purposes, like the ones right in here, should definitely remain in the networking group which we already have.
G
So, thanks for the opportunity to speak about this proposal today. I'm going to talk about the flow exporter in the Antrea agent, and the main objective is to achieve network visibility in a Kubernetes cluster. This is essentially useful for various things, like network management and monitoring the enforcement of network policies. The existing CNI solutions like Calico and Cilium have some sort of solutions here: Calico has a solution only in its enterprise version, and Cilium has a project called Hubble, which provides visibility into flows and so on.
G
So, essentially, there are various questions that we can answer with this, as I listed: what are the network policies employed between two services, or between two pods, in the Kubernetes cluster; which service port or L4 port has the highest traffic on a node; which nodes are communicating, and what is the bandwidth between two nodes in a Kubernetes cluster; how many connections are there with only TCP SYN packets, and then SYN floods, etc.
G
So various questions can be answered through this network visibility solution, and the network flows in each node are the building blocks to provide this network visibility. So this proposal is going to discuss what network flows we are going to collect, how we are going to export them, and how we are going to visualize them. These are the things, as I said, for the flow visibility, and the user stories that I have written explain the use cases for this feature.
G
As the flows are collected, we are going to process the flow metrics and then export them through a Prometheus exporter. These flow metrics can be of different types: they can be the number of connections per pod, or the number of connections per node; these can be packets and bytes as well, instead of connections; and then, again, metrics per service, keyed by cluster IP, protocol, and destination port.
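As a hedged sketch of the Prometheus side, here is what registering metrics of those kinds could look like with the standard client_golang library; the metric and label names are assumptions for illustration, not ones the proposal defines:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Illustrative flow metrics of the kinds mentioned above.
var (
	podConnections = prometheus.NewGaugeVec(
		prometheus.GaugeOpts{
			Name: "flow_exporter_pod_connections",
			Help: "Number of active connections per pod.",
		},
		[]string{"pod", "namespace"},
	)
	serviceBytes = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "flow_exporter_service_bytes_total",
			Help: "Bytes seen per service (cluster IP, protocol, destination port).",
		},
		[]string{"cluster_ip", "protocol", "port"},
	)
)

func main() {
	prometheus.MustRegister(podConnections, serviceBytes)

	// In the agent these would be updated after every conntrack poll.
	podConnections.WithLabelValues("web-0", "default").Set(12)
	serviceBytes.WithLabelValues("10.96.0.10", "TCP", "443").Add(2048)

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9102", nil))
}
```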
G
So now I'm going to go into the design of the solution we are proposing. The flow exporter will essentially be running as part of the Antrea agent, and it's going to collect flows from the conntrack module. I'm going to discuss the alternatives that we looked at and why we picked the conntrack module a little bit later, but we have decided to go with the conntrack flows. So the flow exporter polls the conntrack module and gets the flows.
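A minimal sketch of that polling design in Go; the flow type and the dumpConntrack helper are hypothetical stand-ins, since the transcript does not name a specific mechanism:

```go
package main

import (
	"log"
	"time"
)

// flow is a stand-in for a parsed conntrack entry, and dumpConntrack
// is a hypothetical helper (for example, wrapping a netlink conntrack
// dump); both are assumptions made for this sketch.
type flow struct {
	SrcIP, DstIP string
	Packets      uint64
	Bytes        uint64
}

func dumpConntrack() ([]flow, error) {
	// A real agent would read the kernel conntrack table here,
	// filtered down to flows traversing the Antrea OVS bridge.
	return nil, nil
}

// pollLoop sketches the design described above: poll conntrack on an
// interval, then hand the flows to the IPFIX record builder and the
// metrics processor via the handle callback.
func pollLoop(interval time.Duration, handle func([]flow)) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		flows, err := dumpConntrack()
		if err != nil {
			log.Printf("conntrack poll failed: %v", err)
			continue
		}
		handle(flows)
	}
}

func main() {
	pollLoop(30*time.Second, func(fs []flow) {
		log.Printf("polled %d flows", len(fs))
	})
}
```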
G
And then this flow exporter builds IPFIX flow records and also processes the flow metrics after every poll. The IPFIX flow records will be sent to a flow collector, and then there will be a flow visualizer as well; here we are recommending ElastiFlow, which is an open-source solution and very popular in the community (a lot of people are using it), and it supports the IPFIX protocol, so the flow exporter exports to it.
G
Yeah, it's just for simplicity's sake as a first step. If we think that there are resource consumption issues, since it will be polling the conntrack module constantly, then we have to separate it out, either as its own container or as its own pod with a separate interface.
H
Let me speak first, sorry. In the graph I didn't see how one can specify which flows to export. Is there an interface for triggering the flow export?
H
For example, I want to set some criteria for which flows I'm interested in. Is there any way to do that, or is it exporting all the flows?
G
So, currently we are exporting only the Antrea-related flows, pod-to-pod flows, and service-to-service flows as well, going through the Antrea-created OVS bridge. We are not going to export the other flows in the system. That is the first thing. But I see, your question is whether there is any filtering, like if you want to see only the flows from a certain pod.
G
Yeah, we do not have that, but it can be integrated: when we are going to poll the conntrack module, we can give a flow match with certain parameters, so that we get only the flows from a certain pod. So it can be extended, but currently we are going with showing all the flows going through the OVS bridge created by Antrea.
G
Sure, yeah, we don't have any benchmarks as such as part of the proposal. We want to do a benchmark at the end, by having some sort of setup creating some 100k flows, or some very large number of flows, and seeing how the system reacts. So that is the plan, but we don't have any numbers yet on how it will look. The plan is to get this functionality in and then do some scale testing.
F
That's one on top, and also, we saw performance issues with OVS at customers', and we see a lot of cases where packets are punted from the data plane to the control process. If you can look at this kind of information, it would be great, I think, for troubleshooting performance issues.
G
And, to Edmund's point too, before going there I'll mention what the other possibilities are. One is OVS IPFIX: there is an IPFIX feature on OVS which can directly provide one-direction flow information from the data plane, and then it can create flow records. There is an in-built exporter, and we just have to configure that with a collector.
G
So it is used, but the problems are: it cannot provide visibility into the reverse direction; it cannot provide connection state, the TCP connection states, and also, sometimes, retransmissions; and another piece of information that would be missing is the network policy related data, which conntrack can provide. Then, similarly, there is OVS sFlow, another possible flow information solution. sFlow is somewhat different from IPFIX.
G
What it does is sample a packet out of many packets and send it to a collector, and then the collector will parse the packet and create flow records for each flow. So the mechanism is different, and there as well the cons are that it will be single-direction, and we cannot get network policy information. Those are the main cons with sFlow. There is a benefit, in that we can parse up to layer 7.
G
That could be useful for some HTTP-related statistics, but that will be missing with our approach; we have to think about whether it is required, and we could enhance things in different ways in the future. So these are the alternative solutions. Going over the benefits again, one benefit I missed concerns sampling on the OVS side with sFlow or IPFIX.
G
With sampling, we have to sacrifice accuracy, as we cannot collect all flows; and if there is no sampling, there will be overhead in the data plane. That is also a major thing, whereas with conntrack flows we are just going to poll the conntrack module, where the information is already there. There is no involvement of the data plane, as this is not implemented in the data plane, so there is no overhead in the datapath.
G
Those are the points. Another point is that with conntrack flows we can use different kinds of communication channels. We prefer the IPFIX communication channel because it's popular and a lot of collectors support it, but we could go with any other communication protocol. So these are the benefits; that's why we went with conntrack flows.
G
I think IPFIX also does the same thing: it also exports only the metadata, some fields. But sFlow, as you said, can get a packet, parse through it, and get a little more information, so we are missing out on that. If at all we want this L7-related information, we will have to enhance things in the future in some way, but that's kind of beyond the scope of this document.
A
One more simple question from me. I seem to recall that for the flows that we are exporting, we are just exporting the raw conntrack flows, so there isn't anything in the data that you're exporting that will allow the receiver to link a flow back to a particular pair of pods, or to a particular connection between a pod and a service, and so on, right?
G
So now I'll go through the flow record and how it will look. There are the basic fields we are going to export, like octetDeltaCount (how many bytes we have seen since the last export, compared to the previous export), then packetDeltaCount, then flowStartSeconds and flowEndSeconds, the source address, the destination address, the L4 port information, and the protocol. And then we are going to create new fields, which will have a different enterprise ID.
G
These are Antrea-specific: the source pod namespace, source pod name, destination pod namespace, and destination pod name. And then there are certain reverse fields to get as well: since we have bi-directional flow information in conntrack flows, we are going to get that reverse field information too. So these will all be there, and for every flow we are going to create a record with these fields and export it using IPFIX.
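Putting the fields just listed together, here is a sketch of the record layout in Go; the field names are illustrative renderings of the IPFIX information elements mentioned, not a definitive schema:

```go
// Package exporter sketches the flow record layout described above.
package exporter

import (
	"net"
	"time"
)

// flowRecord lists the fields described above: standard IPFIX
// information elements, the reverse-direction counters that
// conntrack's bidirectional view provides, and the Antrea-specific
// pod fields registered under a separate enterprise ID.
type flowRecord struct {
	// Standard IPFIX information elements.
	FlowStartSeconds   time.Time
	FlowEndSeconds     time.Time
	SourceAddress      net.IP
	DestinationAddress net.IP
	SourcePort         uint16
	DestinationPort    uint16
	Protocol           uint8
	OctetDeltaCount    uint64 // bytes seen since the previous export
	PacketDeltaCount   uint64 // packets seen since the previous export

	// Reverse-direction counters from the same conntrack entry.
	ReverseOctetDeltaCount  uint64
	ReversePacketDeltaCount uint64

	// Antrea-specific fields, scoped by a separate enterprise ID.
	SourcePodNamespace      string
	SourcePodName           string
	DestinationPodNamespace string
	DestinationPodName      string
}
```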
G
So that is the idea. There is one public library that we could use to export the IPFIX records, but that library is kind of old and it's not maintained; I haven't seen any recent check-ins. It also doesn't serve one use case in our proposal, where we have to intercept IPFIX flow records, add new fields to each flow, and then again send them on to a collector. We are kind of calling that an IPFIX forwarder.
J
Can I ask a quick question? Have you thought about anything around compression, in terms of flows that are repeated all the time? Is there some way that we can add a field that basically says we've seen this flow a hundred times, instead of exporting that flow a hundred times, depending on your collection period? I'm not sure how IPFIX works, but I'm curious if there are ways to compress it.
G
The flows are essentially five-tuples, so we'll be updating the flow information for each five-tuple flow in every flow record. Essentially, let's say the flow is there for ten minutes and we are sending flow records every two minutes: we are going to send the number of packets and number of bytes every two minutes, and then we are going to update the flow record; in the next flow record we are going to send it for the next two minutes.
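A small sketch of that per-interval delta computation; the names are illustrative, not the exporter's actual code:

```go
package main

import "fmt"

// counters is the cumulative packet/byte view read from conntrack.
type counters struct{ Packets, Bytes uint64 }

// deltas illustrates the per-interval export described above: each
// flow record carries only what was seen since the previous poll,
// obtained by subtracting the last exported totals.
func deltas(current, lastExported counters) counters {
	return counters{
		Packets: current.Packets - lastExported.Packets,
		Bytes:   current.Bytes - lastExported.Bytes,
	}
}

func main() {
	prev := counters{Packets: 100, Bytes: 150000}
	cur := counters{Packets: 160, Bytes: 240000}
	fmt.Println(deltas(cur, prev)) // {60 90000}: the next two-minute record
}
```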
K
The reason I was asking is that, while this is going on, I'm looking over at the Calico specs, and I found a document that talks about five-tuples in their flow logs. So I was wondering if it was the same concept as that. Their five-tuples are: source namespace, source pod, source labels (which is interesting, because I haven't heard us mention labels), and the same things on the destination.
J
The reason I was asking: say you have a lot of quick flows that open up an HTTP connection to do a quick GET and close down. They always look the same in terms of, you know, source and destination, but the reason we might want to compress them is if it's always the same thing and it's generating a lot of data, from an enterprise use case perspective.
J
Yeah, they would be different. The question I'm getting at is: the source port on the egress side, right, that could be changing, for example, and cause the whole five-tuple to change on every send, but we may not necessarily, from a practical standpoint, get more information out of that, for somebody who's actually interested in understanding the service graph, or looking at the flows from a service graph perspective.
D
That can be done at the collector level. I mean, we may generate a lot of IPFIX data, but that doesn't have to be exposed in a raw fashion directly to the user, right? Assuming we don't generate too much traffic, we can do that compression and remove that noise at the collector level, if we write a custom collector.
D
We would only include connections which are in an established state, potentially. In which case, if you're just doing port scanning, just sending SYN packets and waiting to see if you get an ACK or not, I'm thinking, if this is the case you're describing, maybe this wouldn't be too much of an issue, assuming we only send information about established connections. And separately, we can keep track, as Srikar described, of those connections that have seen a SYN packet but where the connection is actually never fully established.
G
Okay, moving forward, I think I have only five minutes left. As a first step, we are going to add some local Kubernetes resource information at the agent. We are not going to add everything: for example, for a flow, we may only know the source or destination IP of the pods, and only one of the source IP and destination IP is locally available. We are going to add that, but we are not going to add the remote pod name and remote pod namespace.
G
That is the first thing, and then we are going to add some network policy information, like the network policy UID and network policy names, for each flow; that will be there. And then, along those lines, there could be the cluster IP in a flow, to figure out which service the flow is going to. I am considering that as future work; we will focus first on these two things. For the cluster IP information in each flow, we have to keep track of that in order to get the service-related flows.
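A minimal sketch of that local enrichment step, assuming a hypothetical IP-to-pod index the agent maintains; the names are illustrative:

```go
package main

import "fmt"

// podRef is what the agent can resolve for IPs local to its node;
// remote pod names and namespaces are left for the aggregator that is
// described later as future work.
type podRef struct{ Name, Namespace string }

// enrich fills in whichever endpoint of the flow is locally known,
// using an IP-to-pod index; unknown (remote) IPs are left blank.
func enrich(localPods map[string]podRef, srcIP, dstIP string) (src, dst podRef) {
	if p, ok := localPods[srcIP]; ok {
		src = p
	}
	if p, ok := localPods[dstIP]; ok {
		dst = p
	}
	return src, dst
}

func main() {
	idx := map[string]podRef{"10.10.1.5": {Name: "web-0", Namespace: "default"}}
	src, dst := enrich(idx, "10.10.1.5", "10.10.2.9")
	fmt.Println(src, dst) // dst stays empty: remote pod info is added upstream
}
```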
M
So, our visualization solution for the flow visibility and the metrics is ElastiFlow, which is built on the Elastic Stack. It includes Beats to collect data, and Logstash to process and aggregate data and send it to Elasticsearch, which works as the database, and Kibana visualizes all the data in its dashboard. For the IPFIX flow records, we will use Filebeat to grab the data and do some customization, such as adding the IPFIX fields for Antrea, and Kibana has a pre-built dashboard for the flow records.
M
We have a PoC for that. As you can see, we have a Sankey diagram that can show the throughput from pod to pod, and the pod names and pod namespaces can be shown in this diagram. For the flow metrics there is the metrics page: Metricbeat, with its Prometheus module, can be integrated into our project to collect data from Prometheus, so that we can show our metrics data in Kibana.
G
And then, quickly, the future work, or the imminent work that we are planning: to add more of the rest of the Kubernetes information, there is some kind of a central aggregator, where every agent is going to send the IPFIX flow records to this IPFIX aggregator, or flow forwarder, where we are going to intercept the flow records and add the remote pod name, remote pod namespace, service names, etc., and then we are going to send them on to the flow collector.
G
One thing: we are working on the IPFIX library, which we are kind of writing from scratch. So that is one thing where we are not sure about the timeline. Currently the work is in progress, I think, and we are actually trying to push it into the release; whether we gate it as a PoC, I don't know the format yet.