From YouTube: Episode 23: Antrea Network Flow Visibility
Description
Come and join Anlan and Yongming to explore Antrea flow visibility using Grafana. It's a new feature added to Antrea 1.6!
A
Okay, hi friends, welcome to this week's Antrea Live show. Today we have two new friends joining us, Anlan and Yongming. They're from the Antrea team, and today we are going to talk about a new feature that is available in Antrea 1.6. Would you like to say hi to our audience?
B
Okay, hi everyone, my name is Anlan. I have been working on Project Antrea for three or four years, and I'm mainly working on the Antrea network flow visibility project. And I also have my team member Yongming here.
C
Hi everyone, my name is Yongming. I have worked on the Antrea team for one and a half years, mostly focusing on the flow visibility stuff. Currently we are building solutions for monitoring, to give users visibility into their pod traffic and so on. We also have other team members working on these features, maintaining them and creating new ones.
A
Okay, thanks for telling us a little bit about yourselves. We also have Zach and Scott here with us. Hi, Zach and Scott! And for all the audience: if you want to interact with us, please log in to your YouTube account so you can post your comments, and we can show them on our screen during the live show. Okay, so Anlan, would you like to go ahead and share your screen and tell us a little bit about this network flow visibility?
B
Okay, so you guys can see it right now? Yeah? Right, okay. So the main purpose of our project is to collect the network information in the cluster and to visualize that information, and today our main topic is to introduce a new visualization tool named the Grafana Flow Collector, so that we can visualize our network flows on dashboards in Grafana and help us understand what is currently happening in our cluster.
B
I think I'll start with a very helpful architecture diagram here. In our project we have two main building blocks. The first one is called the Flow Exporter. The Flow Exporter runs as part of the Antrea Agent, and its main functionality is to periodically poll conntrack.
B
So
contract
will
basically
like
capture
all
the
packets
running
in
the
node
and
also
identify
which
package
is
belong
to
which
connection,
so
that
our
so
that
our
full
exporter
can
can
read
from
those
contract
table
and
to
gather
those
kind
of
flow
record
information
from
contract,
and
then
our
flow
exporter
will
export
this
kind
of
flow
metadata
to
our
next
stop,
which
is
flow
aggregator.
B
This
is
our
second
main
building
block,
so
the
main
functionality
of
flow
aggregator
is
to
to
aggregate
and
correlate
the
flow
records
that
it
received
from
forex
porter,
and
so
that
means
for
aggregatory
smelling
doing
some
data
processing
with
those
kind
of
flow
records
and
also
then
then,
the
flow
aggregator
will
also
send
those
flow
records
to
our
final
stop,
which
is
flow
collector.
B
I think maybe we can stop here and look at what kinds of fields are included in those flow records, so that we have a better idea of the metadata in a flow record. For example, we have timestamps: flowStartSeconds, which represents when the flow started, and flowEndSeconds, which is the most up-to-date time seen for the current flow connection. We also have the v4 addresses and v6 addresses in separate fields, the ports, and also the packet count and byte count of the current connection, which record how many bytes and packets have flowed through this connection so far. And we also have the reverse counts. The packet count and byte count here mean the packets and bytes going from source to destination, while the reverse packet and byte counts mean the packets and bytes going from destination to source.
B
That's why they're called reverse counts. We also added some Kubernetes-resource-specific metadata, for example the pod namespace and pod name, the destination pod name and node name, the cluster IP and service port, and also a bunch of network-policy-related metadata: policy namespace, type, rule name, etc. We also have flowType, which basically represents whether a connection is intra-node, inter-node, or goes from the cluster to the external network.
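For reference, a sketch of the metadata in a single flow record, using field names from Antrea's IPFIX registry as described in this episode; the values are invented and the exact field set may differ by version:

```json
{
  "flowStartSeconds": "2022-05-04T10:00:00Z",
  "flowEndSeconds": "2022-05-04T10:05:00Z",
  "sourceIPv4Address": "10.10.1.5",
  "destinationIPv4Address": "10.10.2.8",
  "sourceTransportPort": 43160,
  "destinationTransportPort": 80,
  "packetTotalCount": 120,
  "octetTotalCount": 84000,
  "reversePacketTotalCount": 118,
  "reverseOctetTotalCount": 530000,
  "sourcePodNamespace": "default",
  "sourcePodName": "web-client",
  "destinationPodName": "web-server",
  "destinationNodeName": "worker-1",
  "destinationClusterIPv4": "10.96.0.12",
  "destinationServicePort": 80,
  "ingressNetworkPolicyName": "allow-web",
  "ingressNetworkPolicyNamespace": "default",
  "flowType": "InterNode"
}
```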
B
Currently we have three different kinds of flow collectors. The first one is the IPFIX flow collector. With the IPFIX flow collector we can print out the flow records in our terminal, but as you can imagine, that is not nearly as helpful as a UI. That's why we also have two other kinds of flow collectors. The first of those is the ELK Flow Collector. This is the one we previously supported, but now, due to some license dependency issues, we are replacing it with the new Grafana Flow Collector.
B
Basically,
we
have
quick
house
as
our
database
just
to
store
those
flow
records
data
and
we
have
graphana
as
our
ui
to
visualize
the
flow
flow
records
data
in
our
dashboards.
B
So the whole flow is like this: we have the flow records coming from the Flow Exporter and then to the Flow Aggregator, and we create a ClickHouse client inside of the Flow Aggregator.
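In other words, the client inside the Flow Aggregator writes the records into a ClickHouse table. A sketch of the kind of INSERT it issues, with illustrative column names (the real schema has many more columns):

```sql
-- Illustrative only: the column names are assumptions, and the real table is wider.
INSERT INTO flows (flowEndSeconds, sourcePodName, destinationPodName, octetTotalCount)
VALUES (now(), 'web-client', 'web-server', 84000);
```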
A
Yeah, now it's clear to me. So can we maybe start a Miro board to put everything you just mentioned together, so that the audience can have a better understanding of this? Let me set it up.
A
Yeah, so we can review the whole workflow here, for those who are new to Antrea. For example, we have a cluster here, right? Let me make it larger.
B
We have the Flow Exporter inside of every Antrea Agent. Yes.
A
And conntrack is the component in the Linux kernel, right? Yes. So we have this Flow Exporter watching this conntrack, and then we have the Flow Aggregator. Is that on another node in the cluster?
B
It
can
be,
can
be
any
node.
Actually
it's
running
at
the
service
and
the
pod
can
be
any
okay.
A
And then this flow collector will send the information to the Grafana, or is it...?
B
So
actually
flow
collector
is
just
a
concept.
So
so
here
we
we
implement
a
flow
collector
by
by
two
components.
So
one
is
the
quick
house
database
and
the
other
one
is
the
on
the
ui.
So
these
two
components
together
for
our
flow
cluster.
A
So
it's
click
house,
it's
a
database.
Yes,
it's
part
of
the
full
collector.
It's
like
yeah,
it's
just
like
a
con.
A
flow
collector
is
just
a
concept
so.
B
A
So the Grafana is the UI, right? Okay. So we have a client in this Flow Aggregator to send the data to the DB. Let me make this more clear: the client sends to the DB, and the DB serves the data to the UI.
B
So ClickHouse is a column-oriented database, where we also maintain data in a bunch of tables, and when we insert data into those tables, we insert it row by row.
C
Yeah, I think we don't do the aggregation inside the database. Actually, we have a component called the Flow Aggregator which gets the flow data from the Flow Exporters, and it does the correlation and aggregation as a service. Then the Flow Aggregator sends out the flow records, after aggregation, to the database. So on the database side we don't have any aggregation process.
C
It's not a typical SQL database, but we can still use SQL queries. For example, in Grafana we are still using SQL queries to build our UI, so it works, but it's optimized for reads.
A
Yeah, I think that's a clear answer. Oh, I didn't draw the ClickHouse client in the Flow Aggregator.
B
Yeah, that's nice, okay. So, as you just said, the Flow Aggregator has three main functionalities: collection, correlation, and aggregation, which I think answers one of the questions we just encountered. If we have time at the end, Yongming will go into more detail on those three functionalities.
B
So I think now we have a somewhat clear idea of how the pipeline works, so we can just jump into the demo.
B
Currently I'm running it on a cluster where we already have the Flow Exporter and Flow Aggregator deployed, so we just need to deploy the flow visibility components.
A
Is that good enough? Can you make it a little bit larger? Yeah, it's good. Okay.
B
So I'm going to apply the first YAML file, which is the ClickHouse operator. The ClickHouse operator is required to run the ClickHouse database in our cluster, so that is an essential part. I'm going to apply it now.
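The two apply steps being run here look roughly like this; the file names are assumptions, so check the Antrea flow visibility documentation for your version:

```sh
# Assumed file names, following the Antrea flow visibility docs of this era.
kubectl apply -f clickhouse-operator-install-bundle.yml   # the ClickHouse operator
kubectl apply -f flow-visibility.yml                      # ClickHouse server + Grafana
```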
B
It
might
take
like
30
to
40
seconds
to
to
wait
for
all
the
services
to
be
ready.
So
I
think,
during
this
waiting
time
we
can
also
create
some
working
load
traffic
in
our
cluster,
so
that
we
have
something
to
show
a
letter
on
the
dashboard.
B
I
have
prepared
to
ammo
file
to
create
those
kind
of
traffic.
The
first
one
will
create
some
part
to
external
traffic.
I
like
it
like.
We
can
see.
We
have
some
pot
here,
ping
and
external
public
ip.
B
And
I
also
have
another
one
here:
this
one
will
create
some
ink
cluster
networks,
basically
create
some
parts
and
they
will
send
traffic
to
each
other.
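A minimal sketch of what the pod-to-external traffic manifest could look like; the pod name and target IP are invented for illustration:

```yaml
# Hypothetical workload: a pod that pings an external public IP in a loop.
apiVersion: v1
kind: Pod
metadata:
  name: ping-external
spec:
  containers:
    - name: ping
      image: busybox
      command: ["sh", "-c", "while true; do ping -c 1 8.8.8.8; sleep 1; done"]
```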
B
I'm
trying
to
find
some
command
that
will
show
what
kind
of
resources
has
has
already
been
created
by
by
running
these
two
yammer
file.
For
example,
if
we
run
run
this
command,
it
should
show
us
all
the
resources
under
the
namespace
of
flow
visibility.
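Presumably a standard listing like the following; the namespace name comes from the episode:

```sh
kubectl get all -n flow-visibility   # list the pods, services, deployments, etc.
```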
B
Okay,
so
we
have
everything
ready
here.
We
finally
click
house
and
also
I
have
another
magical
command
that
will
print
out
the
note
ip
and
graphing.noteport,
because
we
have
exposed
grafana
as
a
node
port
service
here.
So
we
have
to
use
the
node.lpno
port
to
access
it.
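A sketch of such a command, assuming the Grafana Service is named grafana in the flow-visibility namespace (both names are assumptions):

```sh
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}')
NODE_PORT=$(kubectl get svc grafana -n flow-visibility -o jsonpath='{.spec.ports[0].nodePort}')
echo "http://${NODE_IP}:${NODE_PORT}"   # the Grafana URL to open in a browser
```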
B
We'll just copy it and paste it in our browser address line.
B
So here we can see we have six dashboards in total. The first one is the flow records dashboard. This functions like the base dashboard, which contains the total count of flow records we currently have in our database and also the time series data. We also have the pod-to-pod dashboard, the pod-to-service dashboard, and the pod-to-external dashboard; they are very similar.
B
The
difference
is
just
the
destination
pod
type
or
whether
it's
a
pod
or
it's
a
service
or
the
or
the
destination,
is
an
external
ip,
and
also
we
have
no
to
no
dashboard
and
also
a
network
policy.
Dashboard,
let's
see
currently
is
zero
because
it
takes
some
time
for
for
the
flow
records
to
be
exported
from
full
exporter
and
then
to
aggregator
and
then
to
graph.
B
So
currently,
the
flow
aggregator
is
is
an
arrow
because
flow
aggregator
is,
is
previously
deployed
and-
and
it
is
awaiting
to
be
connected
to
the
click
house
server,
but
we
we
first
start
the
flow
irrigator
and
then
we
start
the
clear
house
server.
So
the
aggregator
will
well
well
hit
arrow
when,
when
it
is
trying
to
get
connected
to
the
server
normally
it
will
just
recover
in
a
few
minutes.
B
Yes,
so
it's
running
now,
we
don't
need
to
redeploy
it,
so
I
guess
we're
just
waiting
for
the
flow
records
to
be
exported
to
our
final
step.
C
B
This is the first panel, this is the second panel, and this is the third one. The first panel shows the total count of flow records in our table, and the second panel shows the number of flow records we received every minute.
B
So it's time series data here. And the third panel displays all the fields of every flow record we have received, the fields we already introduced in the documentation: the time, the source IP, the destination IP, and also the pod name, namespace, etc. And to be noted, there are some filters that we can play with.
B
If we apply a filter, the total count will decrease. We also have a filter that takes effect on the whole dashboard. This allows us to select as many pods as we want; for example, if we just want to filter on the source pod name, I can ask for the source pod name equal to web-client-4-1.
B
Let
me
just
remove
it
for
now,
and
also
the
the
third
filter
is,
as
you
can
see,
we
have
a
small
filter
icon
on
each
field.
If
we
click,
if
we
click
on
it,
it
can
also
allow
us
to
filter
on
on
this
table.
B
Okay,
so
that
is
about
the
flow
records
dashboard.
B
We have two Sankey diagrams at the top. The Sankey diagram shows the source pod information on the left-hand side and the destination pod information on the right-hand side. If we hover our mouse over a link, it shows the source pod name going to the destination pod name and how many bytes are going through this connection: on the left-hand side are the bytes going from source to destination, and the right-hand side records the reverse bytes going from destination to source.
B
There are line charts showing the throughput evolution of the pod-to-pod traffic, and also pie charts showing the bytes grouped by the source namespace or the destination pod namespace.
C
Yeah, actually, in the UI shown in the flow records dashboard, we can see each row represents a flow record. It represents the traffic between pod to pod, pod to service, and pod to external, and currently we don't support external-to-pod traffic.
B
So
for
for
every
single
connection,
we
will
periodically
to
update
the
the
flow
records
and
also
send
the
most
up-to-date
flow
records
to
our
collector.
C
Right, right. I think we don't have traffic generated for that, but maybe we could show the documentation.
B
The documentation, yes, yes, that's right! So currently I didn't generate any deny network policies. If we had those, we could filter on this rule action field and try to find the connections that have a deny rule enforced on them. For the next release, we'll also add this part to our network policy dashboard.
B
So currently our network policy dashboard only visualizes the allowed traffic, but in our next release we will also add support for the denied traffic. The denied traffic will be represented in different colors to distinguish it from the allowed traffic. So that is a feature you can expect in the future.
B
I see Joey is asking: why did we move from ELK to Grafana? The reason is that ELK was previously open source, but then they moved to a more restrictive license. That means we cannot just depend on them anymore, and that's why we're moving away from ELK to Grafana. Another reason is that we found some performance issues with Logstash.
B
Logstash is the component that the letter L in ELK stands for.
B
Okay, I think we can move on to another topic, which is doing some customization on our dashboards. For example, if we want to add a new panel on our previous dashboard, how can we do that? We can just click on the add-panel button here and add a new panel, and we can select the visualization we want, for example the statistic one, and then we select our data source, which is ClickHouse, and let's write some queries here.
B
For
example,
if
we
want
to
select
count
from
table
floats
is
our
table
name,
and
we
asked
some
as
some
where's
four's
pod
and
space
equals
two.
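The panel query being typed looks roughly like this; the table name flows comes from the episode, while the column name and value are assumptions:

```sql
-- Count the flow records whose source pod lives in a given namespace.
SELECT count(*)
FROM flows
WHERE sourcePodNamespace = 'default';
```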
A
So the dashboards we saw previously are defined in files that Grafana loads when we deploy these flow visibility components? Yes, yes. And based on that, we do some customization of the graph we want to see and the SQL query we want to run, right?
B
Yeah, so for every panel we're just doing the same things: we write our queries, we select the visualization, and we select the data source. And we can export this dashboard into a JSON file, store this JSON file somewhere, and mount it in.
B
And we can just quickly save it here. But if we want to do something more advanced, for example create a dashboard from scratch rather than just customizing our previous dashboard, we can create a brand new one. In this new dashboard we're just doing the same thing: we add panels, and we can add multiple of them.
B
And
we
save
this
dashboard,
so
it
will
be
available
here,
new
dashboard,
but
one
thing
to
be
mentioned
is:
if
we
just
click
on
save
dashboard.
These
changes
will
only
be
saved
for
this
runtime,
which
means
if
we
just
stop
our
graffana
pod
and
redeploy
the
pod.
Those
changes
will
not
still
be
there
for
the
restart.
B
So
if
we
want
to
have
if
we
want
to
have
these
changes
saved
for
the
restart,
there
are
two
ways
to
do
that.
The
first
way
is
we
just
export
it
and
save
save
the
json
file
somewhere
and
the
next
time
we
can
just
import
the
json
file
so
that
we
can
bring
the
dashboard
back
to
our
graphing
ui.
So
that's
the
first
way,
let's
say
manually,
export
and
import.
A
I remember there is maybe a YAML file where all the predefined dashboards live?
B
Yes. So we have this as a ConfigMap; this is how the dashboard JSON files are mounted into Grafana.
B
It's very deep, I'm sorry about that. Dashboards... okay, we have all those JSON files. You just need to put the new JSON files under the same directory and also add their names to the kustomization file, so that it can help us build the ConfigMap.
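A sketch of that idea, assuming a kustomize-style setup with invented file names: the dashboard JSON files are listed in the kustomization file and bundled into the ConfigMap that the Grafana pod mounts.

```yaml
# kustomization.yml (hypothetical): generate the Grafana dashboard ConfigMap.
configMapGenerator:
  - name: grafana-dashboard-config
    files:
      - dashboards/flow_records_dashboard.json
      - dashboards/my_new_dashboard.json   # add your new dashboard here
```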
B
Yes, it's a real-time dashboard. As you can see, since the time we started until now, more than 20 seconds have already passed, and if we set the automatic refresh to five seconds, it will refresh every five seconds. As you see, the count has already increased by one.
A
Yeah, thanks Yongming for that. And I can see that Jay has a really great comment about this dashboard, because he noticed that when we show the flow records, we don't show the pod IP, we show the pod name, right? Yeah.
A
Yeah, that's what we do in the Flow Aggregator to achieve this. So would you like to tell us a little bit more about that?
C
Yeah, sure, sure. Anlan, could you help me navigate to the document of the Flow Aggregator?
C
Yes, thanks Anlan. So actually the Flow Aggregator behaves like it has three parts. The first one is called the collecting process: since there are multiple clients, the Flow Exporters, sending flow records to the Flow Aggregator, the first step of the Flow Aggregator is to collect these flow records. The second part is called the aggregation process.
C
With the 5-tuple we can identify a single network connection, and when flow records coming from the Flow Exporters have the same 5-tuple, we do the correlation and aggregation here. The interesting part is that if a flow record corresponds to an inter-node flow, where the source pod and destination pod are located on different nodes, then after correlation, the flow records coming from the source node's Flow Exporter will have the information about the source pod, and the flow records coming from the destination node's Flow Exporter will have the information about the destination pod. In this case we do the correlation here, and the flow records sent by the Flow Aggregator will have the pod name for both the source and destination pods. We can do more beyond the correlation, since we have more information here: for example, the flow records sent by the Flow Aggregator also include throughput information, computed based on the flow records.
C
We have that calculation logic there, and we also include the source pod labels and destination pod labels in the aggregated flow records, which will also be useful for users and our future flow visibility applications.
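A toy illustration of the correlation step and the throughput derivation, with all values invented:

```yaml
# Two records for the same 5-tuple arrive, one from each node's Flow Exporter.
record_from_source_node:      {sourcePodName: web-client, destinationPodName: ""}
record_from_destination_node: {sourcePodName: "",         destinationPodName: web-server}
# The aggregator correlates them into one record with both pod names:
correlated_record:            {sourcePodName: web-client, destinationPodName: web-server}
# Throughput is derived from successive updates of the same record:
#   throughput = (octetTotalCount_now - octetTotalCount_prev)
#              / (flowEndSeconds_now - flowEndSeconds_prev)
```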
C
And the third part of the Flow Aggregator is the exporting process. After we finish the correlation and aggregation, we send out the flow records to the flow collector. If it's a simple IPFIX flow collector, we send them over the IPFIX protocol.
C
If,
if,
if
the
flow
character
is
our
new
graphana
flow
flow
character,
we
will
use
the
clean
house
client
to
send
all
the
full
records
directly
into
the
database
yeah.
Basically,
that's
that's.
What
does
this
was
also
actually
do,
and
if
we,
if
you
are
interesting,
we
could
also
we
are
all.
We
also
have
the
uncutter
support
for
the
full
record.
If
you
do
not
want
to
set
up
a
grafana
flow
character
or
elk
flow
catcher,
you
could
simply
log
into
the
flow
aggregator
port
and
run
some
uncutter
commands.
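A sketch of that workflow; the pod name is a placeholder, and the subcommands follow the Flow Aggregator's antctl support as I understand it, so verify them against your Antrea version:

```sh
# Exec into the Flow Aggregator pod (names are placeholders).
kubectl exec -it <flow-aggregator-pod> -n flow-aggregator -- bash
# Inside the pod:
antctl get flowrecords     # dump the flow records held by the aggregator
antctl get recordmetrics   # show how many records were received and exported
```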
A
Okay, so basically what you are saying is that from this graph we can see that each Flow Exporter only runs on its own node, right? So it can only see the information within that node, but the Flow Aggregator here collects the data from every node. So the Flow Aggregator has the information at the cluster scope, not within each specific node.
A
Yeah, right. And also here we can see that this Flow Aggregator talks to this ClickHouse DB, but if we only want to extract data from the Flow Aggregator, we can also talk to this component directly.
B
And I see there is a question about the current behavior: the data are in memory, so when the data storage usage exceeds the memory allocated, I assume the pod will restart and the previous data will disappear. That is actually a very good question.
B
Could you let me share my screen? Yes. So we have added two data retention methods to prevent this kind of data loss. The first one is the TTL mechanism: TTL will periodically delete the expired flow records in our database.
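For reference, a minimal sketch of ClickHouse's TTL mechanism; the table and columns are illustrative, not Antrea's exact schema:

```sql
-- Rows older than one hour are deleted automatically in the background.
CREATE TABLE flows (
    flowEndSeconds DateTime,
    sourcePodName String,
    destinationPodName String,
    octetTotalCount UInt64
) ENGINE = MergeTree()
ORDER BY flowEndSeconds
TTL flowEndSeconds + INTERVAL 1 HOUR;
```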
A
Yeah, I think that is a great answer to this question. Are there any other questions, by the way? If I remember correctly, it's the first time on our Antrea Live show that we cover the flow visibility part, and it's really, really essential. Yongming told us a little bit about the aggregator, and Anlan about the dashboards, and we can of course use those for troubleshooting the cluster or for doing some statistics based on the beautiful dashboard views.
A
So
thank
you
both
for
telling
us
this
and,
and
also
I
don't
really
see
any
other
questions
in
the
comments.
A
So
if
there's
no
more
question
we'll
like
thanks
every
audience
for
attending
this
show,
and
if
you
like
this
kind
of
context
considering
liking
our
videos
and
subscribing
the
channel
and
okay,
so
I'll
see
you
guys
next
wednesday
bye.
Thank
you.