From YouTube: Antrea Community Meeting 07/13/2020
Description: Antrea Community Meeting, July 13th 2020
A: Perfect, so welcome to the Antrea community meeting. Today is Tuesday, July the 14th, unless you are in the U.S., in which case it's still Monday, July the 13th. For today we have two topics on the agenda: Srikar is proposing a demo of the Flow Exporter feature; on the other hand, we have Jacqueline and Wenying, who would like to discuss the IPv6 proposal. We have at least 20 minutes for the Flow Exporter demo and roughly 40 minutes for the IPv6 discussion, which means that these two topics are going to fill up the full agenda for today. So before we start the meeting, I would like to know if there is any other topic that the community would like to discuss today.
A: All right. So, if everybody agrees, my proposal is to start with the Flow Exporter demo from Srikar, which should take roughly 20 minutes, and then move to the IPv6 discussion. This will allow us to complete the Flow Exporter demo for sure and at least kickstart the conversation on IPv6. If we are not able to finish the IPv6 discussion, we can continue it either on Slack or perhaps have a second design review in the next community meeting.
B: A request: we have just started running traffic, so we are not quite ready to demo yet. We are thinking we'll go after IPv6.

A: Okay, I mean, yeah.
A: Okay, that's all right then, let's go with it. Yeah, okay, works for me. So let's start with the IPv6 discussion, and I see that Wenying and Jianjun are on the call. I know that you have a document to share and a lot of topics to discuss, so perhaps we can get started with that.
D: Hello everyone, I will share my screen with the IPv6 design.
D: Hello everybody, thanks for attending the discussion of the IPv4 and IPv6 dual-stack design. First, from the upstream Kubernetes community's requirements for IPv4/IPv6 dual stack, we got some requirements.
D: The first is: if the cluster is planned to support dual stack, there should be dual-stack pod networking. That means each pod may have a single IPv4 and an IPv6 address. The second is: the cluster should support IPv4 and IPv6 for Services.
D: That means, although we have both IPv4 and IPv6 addresses available for Services, each Service only has a single address family, so the Service address will be either an IPv4 or an IPv6 address. The third requirement is for pod-to-cluster-external routing: if the cluster is enabled with IPv6, the pods should be able to reach external services over IPv6 interfaces. Besides the three requirements from upstream, we have another two requirements for Antrea.
D: The first is for NetworkPolicy. As we all know, we are using pod IPs and ipBlocks in the NetworkPolicy rules. If the cluster is enabled with IPv6, we should support IPv6 addresses in the NetworkPolicy rules. And the last feature is Traceflow: as you may know, we are using packet-out messages to trace the OpenFlow entries of the Antrea functions.
D: So if we have enabled IPv6 for the cluster, we need to be able to generate IPv6 packets to trace the flow entries. Those are all of the requirements. Then, if we want to enable IPv4/IPv6 dual stack, some prerequisites are needed in the cluster configuration. First, we need to enable the dual-stack feature gate. The second item is the cluster CIDR configuration: there should be both an IPv4 CIDR and an IPv6 CIDR. The third item is the Service cluster configuration.
D: We could configure an IPv4 and an IPv6 CIDR there as well. The last one is the node configuration: if we want to support IPv6, the node should be configured with both IPv4 and IPv6 addresses. By default, if we have already configured the cluster CIDRs for the cluster, the pod subnet for each node is allocated with either a /24 mask for the IPv4 subnet or a /64 mask for the IPv6 subnet.
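The prerequisites above can be sketched as a small validation helper. This is an illustrative Go sketch, not Antrea code; the function name and the per-node mask sizes are taken from the defaults just mentioned (a /24 per node for IPv4, a /64 per node for IPv6):

```go
package main

import (
	"fmt"
	"net"
)

// checkDualStackCIDRs is a hypothetical sketch of the dual-stack
// prerequisite described above: the cluster CIDR configuration must
// contain one IPv4 CIDR and one IPv6 CIDR.
func checkDualStackCIDRs(cidrs []string) (v4, v6 *net.IPNet, _ error) {
	for _, c := range cidrs {
		_, ipnet, err := net.ParseCIDR(c)
		if err != nil {
			return nil, nil, err
		}
		if ipnet.IP.To4() != nil {
			v4 = ipnet
		} else {
			v6 = ipnet
		}
	}
	if v4 == nil || v6 == nil {
		return nil, nil, fmt.Errorf("dual stack needs one IPv4 and one IPv6 CIDR")
	}
	return v4, v6, nil
}

func main() {
	v4, v6, err := checkDualStackCIDRs([]string{"10.244.0.0/16", "fd00:10:244::/48"})
	if err != nil {
		panic(err)
	}
	// Per-node pod subnets are then carved out of these ranges:
	fmt.Println(v4.String(), "-> per-node /24")
	fmt.Println(v6.String(), "-> per-node /64")
}
```

The CIDR values are examples only; any non-overlapping pair of one IPv4 and one IPv6 range works the same way.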
E: So do we support IPv6-only clusters, or do we always support both IPv4 and IPv6?
D: For this design, we want to support dual stack, but I think if we have already supported dual stack, we could also deploy a cluster with only IPv6; that would be okay. Okay, now we come to the detailed design for dual stack. The first part is dual-stack pod networking. To support dual-stack pod networking, some changes are needed in the Antrea agent. First, it needs to read the IPv4 and IPv6 pod CIDRs from the node spec, and then it needs to configure the local pod CIDRs.
D: For the IPAM configuration, we will continue to use the host-local IPAM plugin for IPv6, and then, once the IPAM config contains ranges for both IPv4 and IPv6 addresses, it should be able to allocate a single address of each family for each pod, I mean both an IPv4 and an IPv6 address.
D: ...from the IPAM driver. The third item is how to configure the addresses allocated by the IPAM driver on the container interface. Currently, Antrea only picks the IPv4 address from the IPAM results, but if we want to support dual stack, we need to loop over all IP addresses in the result, and each one needs to be configured on the container interface. The fourth item is about the Antrea OpenFlow pipeline.
D: First is the spoof-guard table: it needs to support IPv6 addresses. Something I want to remind you of is that if we enable IPv6 on an interface, there will be a link-local address and a global address configured on the interface for IPv6. I mean, there will be at least two IPv6 addresses configured on the interface, so for spoof guard we need to allow both IPv6 addresses, including the link-local address.
D: We also need to allow packets that use the global IPv6 address as the source address when the packet is leaving the pod.
D: Another thing is the OpenFlow entries corresponding to pod networking, which involves the tunnel configuration, I mean if the node is configured with an IPv6 address and it joins the cluster with that IPv6 configuration.
E: So actually, I wonder: do you believe node IPAM, I mean a subnet per node, is the typical IPAM strategy for IPv6 too? There are some other ways to do IPAM for IPv6.
D: We have done some new investigation on host-local, I mean the IPAM driver we are using. Currently the IPAM driver also supports IPv6 address allocation, so I don't think we need to change that.
A: Okay, yeah. And from what I've seen, I have not actually seen any implementation using methods like SLAAC or DHCPv6 to assign IP addresses to pods. Honestly, I also have not seen any case of a global subnet, but it is also true that there are not many IPv6 implementations out there.
A: But the most important thing is that it seems static IP addressing is still the preferred way of assigning IP addresses. That is the point that I believe Wenying was going to discuss later. Is that correct?
D: Yeah, it is an open question, so I need to hear more thoughts from the community.
D: The second feature I'm trying to support for IPv6 is NetworkPolicy. If we want to use IPv6 addresses in NetworkPolicy, I have thought of two changes. The first is for the control plane and the Antrea agent.
G: For completeness we should make a note of that. Also, just a generic comment: there's a lot of validation being done with OpenAPI for fields which expect IPv4. Maybe we need to take a look at all those validations and ensure that IPv6 is also allowed.
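A hypothetical example of the kind of validation change being suggested: using the address parser rather than an IPv4-shaped pattern accepts both families uniformly. The function name is illustrative, not the actual OpenAPI schema code:

```go
package main

import (
	"fmt"
	"net"
)

// validateIPBlockCIDR accepts a CIDR of either family, returning which
// family it belongs to; an IPv4-only regex check would reject the IPv6
// case that dual stack now requires.
func validateIPBlockCIDR(cidr string) (family string, err error) {
	ip, _, err := net.ParseCIDR(cidr)
	if err != nil {
		return "", err
	}
	if ip.To4() != nil {
		return "IPv4", nil
	}
	return "IPv6", nil
}

func main() {
	for _, c := range []string{"172.16.0.0/12", "2001:db8::/32", "not-a-cidr"} {
		f, err := validateIPBlockCIDR(c)
		fmt.Println(c, f, err == nil)
	}
}
```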
D: Okay. Then the third feature for IPv6 is about Services. To support IPv6 Services, we are still using the OVS conntrack NAT feature to do the translation from the Service address to the backend pod address, and most of the changes are about checking the Service IP address family.
D: We need to use the right register to cache the selected backend pod IP address from the load-balancing algorithm. For IPv4 we are using register 3 to cache the IPv4 address, but the registers, I mean reg0 to reg15...
D: ...only have 32 bits, so they are not long enough for an IPv6 address. Hence we need to use the extended registers: each xxreg has 128 bits for the cache. So we need to enable this in the OpenFlow entries. Another change for Services is that we need to support using IPv6 addresses in the Service matching and action fields.
D: Then the last feature in Antrea is Traceflow, and the only change is that we need to support generating an IPv6 packet.
D: Then, besides the features we need to implement in Antrea, I still have two open questions to discuss with the community.
D: First is, as you may know, the IPv6 address space is large enough that in most cases NAT is not used for IPv6 traffic, especially for north-south traffic. So I want to have some discussion about how to implement no-NAT for IPv6 traffic, especially for north-south traffic.
D: We already defined some routing configurations for the overlay configuration. For example, if we want to send pod traffic without SNAT out via another node, we need to route the packet to the gateway first and then use the rest of the pipeline to process the traffic. But then how to coordinate the routes in that situation is the question.
E: Yeah, I'm just wondering: you talk about no-SNAT for encap mode, right? But if we do no-SNAT, that means the underlay already knows about the pod IPs.
E: Good question, by the way. When we're doing encap, are we assuming IPv4 to be the outer header? I guess that might be the case, since...
D: If we implement encap mode, the tunnel should work for both: it could carry IPv4 or IPv6 traffic over either an IPv4 or an IPv6 tunnel.
D: So if the node has joined the cluster with an IPv6 address, we might read the IPv6 address from the node spec; but how to use that with v6 tunnels is a question.
D: When I made the design, I just assumed that in Antrea encap mode is the default mode, so I didn't think about not supporting encap mode for IPv6.
E: No, I'm not saying that. I'm talking about the case where you want to do encap mode but you also want no-SNAT for it. I'm just wondering: do you need to support that or not?
D: So I think we should first reach an agreement on whether we need to support no-NAT for north-south traffic, and then we can continue the discussion about whether it should be supported in encap mode or noEncap mode.
A: Let me add something: the whole point of this discussion is that with IPv6, no-NAT topologies are way more common than NAT topologies, which means that a solution which does north-south NAT will definitely work, but it's probably not what most users will be looking for. Let me try to put the problem in another perspective. Let's think only about IPv4 now: with IPv4, in Antrea, are we able to fully support fully routed topologies?
E: Actually, for noEncap mode we assume the underlying network can route between nodes. That means some other solution will handle the routes; for example, maybe the Kubernetes cloud provider can program the routes into the underlying network. That's what we do with noEncap today.
A: So let's say that at the moment we don't really have a solution in Antrea to implement fully routed, no-NAT topologies by ourselves. Right, perfect.
A: So, in light of this, in my opinion our approach should be that for IPv6 we proceed with the same approach, meaning that we will use NAT, even though it's probably not the most common use case; and then we should start another activity for allowing fully routed topologies, which is the main use case for IPv6 and probably a fairly important one for IPv4 as well. I don't know what your feeling is here.
D: So Jianjun, your point is we should support NAT as the first step?
E: Of course. I mean, if later we have a way to do routing, which means we can populate the underlay with pod routes, it can be a complete solution. For now, maybe we can still assume some other solution handles the physical route interaction for us, like the Kubernetes cloud provider.
E: I'm not sure the cloud providers can do IPv6, though. I guess maybe some can, like AWS or maybe the GCP provider, but I didn't check that.
A: That makes sense; you have a good point about relying on the infrastructure provider for routing. It's the usual thing that the responsibilities of Antrea are limited to the Kubernetes cluster itself.
A: So when it comes to communicating with the infrastructure, which could be done either by injecting static routes or by announcing subnet routes, I really don't know if this is something that we want to add to Antrea or not, because it feels like we might end up adding code which is dependent on the infrastructure, and therefore opening a can of worms that we probably don't want to open.
A: I'm not sure; I probably just don't understand your question. What do you mean? Do we have to let the physical router know about our IP addresses?
A: Oh okay, that would be the case for SLAAC. Yes, that's correct, that would be the case with SLAAC allocation. Basically, yes: a router advertisement message comes from the infrastructure router, the station then auto-configures its IP address and sends a neighbor solicitation, and then that's pretty much it. With auto-configuration there isn't, in that case, a feedback loop like the station telling the router "hey..."
E: Got it. I'm just wondering: is SLAAC the typical way people do IP allocation in IPv6 systems?
A: If we talk about virtual machines or general end stations, that is definitely the most common way: SLAAC. But for container solutions, and especially Kubernetes networking, I have not seen so far any IPv6 IPAM leveraging SLAAC, from my experience, which anyway is fairly limited. Everything is still, I believe, static allocation.
A: Anyway, I hate to say this, but we are using quite a bit of time, so perhaps this no-NAT discussion is something that we can continue on Slack, unless there is something else that you would like to mention on this.
D: If we enable dual stack, a node should be configured with at least two addresses, one IPv4 and another IPv6; but from the node spec we might be able to retrieve only one address, so we might lose some information.
F: Actually, there's probably a shortcoming on my side, because I guess I don't know enough about dual stack; I need to look into it. But I don't see why supporting dual stack for pods means that each node now needs to have one IPv4 address and one IPv6 address, etc. It sounds a bit orthogonal to me.
A: It is not required, no. Indeed, it's orthogonal; it's not required. The thing is, as part of this design, Wenying also wanted to ensure that dual stack also worked for nodes, because maybe we want to be able to create either v4 or v6 tunnels, right?
A: I mean tunnel endpoints; I'm talking about tunnel endpoints. But you're right that for dual-stack containers, having the node in dual-stack mode as well is not required at all.
A: You're right: even for health checks, if the container is dual stack, then for the check to work the node can be either v4 or v6, as long as...
E: If the node doesn't have IPv6 but the pod only has an IPv6 address, then how could we reach the pod from the node?
D: Yeah, probably the node should be configured with both IPv4 and IPv6, and I think it should use the matching address family to do the health check: for a pod with an IPv6 address, the check should target the IPv6 address.
E: It's not just about pod reachability alone; it could also be node-to-pod, or the pod reaching the node via its IP.
E: You're saying that means even the node IP must have both IPv4 and IPv6?
D: We might not be able to configure the cluster with only a relatively small CIDR; if people have already configured dual stack, then the feature might not work.
D: Okay, the last thing I want to share is about a limitation: since the OVS on Windows doesn't support IPv6 in conntrack, this basic feature is not supported on Windows with the userspace datapath, so we will not cover Windows IPv6 in Antrea in this case. That's all for my part.
A: Thanks Wenying, that was very good. Now, for IPv6 we should also discuss the CI pipeline. I am a bit concerned that there might not be enough time in today's meeting to also include the Flow Exporter demo, but it would be a shame to leave the IPv6 discussion incomplete.
A: So I would say: let's try to complete the CI pipeline discussion in perhaps a few minutes, to leave at least, let's say, five to ten minutes for Srikar for the Flow Exporter demo. Please go ahead.
J: Can you see my screen now? Yes? Okay, thank you. So let's start the discussion of the IPv6 CI pipeline. Currently we have several jobs on three platforms. First is GitHub Actions, which has its cluster set up with Kind, and then VMC. These are the public CI jobs, so you can see their results. We also have some jobs, mostly the Windows jobs, on the private CI.
J: As for the GitHub Actions jobs, I found a tutorial that introduces how to deploy Calico with IPv6 on Kind, so I think it won't be much trouble to set that up with Kind in our GitHub Actions; that's the easy part. Then let's move on to VMC. Basically, we have discussed with the TKG teams how we can get such clusters. They mentioned that we may need to add the IPv6 CIDRs, including the pod and Service CIDRs, to our cluster templates.
J: There may also be some other configuration needed, but the issue is that the TKG team, who is responsible for the tooling we use to set up the testbed on VMC, are not sure whether we can successfully deploy an IPv6 cluster on VMC. We are going to have a meeting with them next week to talk about its feasibility.
J: We would be the first project for them to test this on, if it is possible, but we will see, and I will report the results in the repo. If it is not possible, maybe we can use another cloud to host these public jobs. And for the private Windows jobs in the private lab, the issue is that this private lab does not support IPv6.
J: Currently our office network doesn't support it, but we have two solutions. First, we may ask the IT guys to try to enable IPv6 for our testbed. Alternatively, we could have a simulation that makes our testbed cluster use IPv6 and send IPv6 traffic while it is actually in an IPv4 environment. But this is not the best choice, since we want to have a real IPv6 testbed, so we will try to get one.
J: We can edit the Kind test scripts so we can run the tests with IPv6 when we want to. As for the Jenkins jobs, with Jenkins Job Builder it is basically the same as our current jobs, with maybe some small changes. First, maybe we will have some new IPv6 e2e tests, so we may need some modification of our testbed setup.
J: Then, as for conformance tests, we are going to add the dual-stack focus keyword to our conformance test runs, which should include some IPv6-related tests, but we also need to add pod-to-node and some other cases which are not included in the dual-stack tests.
A: Yes, thanks very much for this introduction. As time is flying very fast, I think it's time to move to the Flow Exporter demo.
A: Sure. Oh, you believe you need more than eight minutes? I mean, if everyone agrees, I will not mind overflowing the meeting by a few minutes, unless people are busy, especially people in the USA, unless you want to go to bed.
B: Okay, yeah, thanks. Let me share my screen; I made a presentation. First I'll go over the overview of the feature: a few weeks back I presented a proposal for flow export from the Antrea agent, so this is a short recap, and then we'll do the demo and I'll talk about the future work that is to be done. The basic thing in this feature is what we collect.
B: The connections in the conntrack table are exported by the Antrea agent as flow records. We use the IPFIX protocol to send them to a flow collector, and then at the flow collector we provide visibility into the flows that are present on each node in the Antrea cluster.
B: For that, we build the connection store by polling the conntrack module periodically; we call this the poll interval. Every few seconds we poll the conntrack module and build the connection store, and then at a separate interval, called the export interval, we convert the flows in the connection store into flow records and send them to the IPFIX flow collector. For that we have written an IPFIX library in Golang.
B
It
was
not
available,
so
we
did
it
from
scratch
and
then
that
will
be
available
as
a
vmware,
open
source
library
under
github.com
and
and
if
people
are
interested
in
to
know
more
details
about
this
feature,
there
is
a
url
and
there
is
a
issue
in
andrea
as
well,
so
the
flow
records
mainly
contain
the
standard,
happy
fix
fields
which
are
like
five
people
and
some
stats
like
a
packets
pipes,
including
reverse
direction.
B
In
addition
to
that,
we
have
anterior
specific
phase
like
pod
name,
pod,
name,
space,
node
name,
etc
for
source
and
destination,
and
for
in
this
demo
we
as
a
first
phase
of
this
feature,
we
resolve
only
local
pods,
pod
names,
pod
name
spaces
and
local
node
names.
This
is
essentially
what
I'm
going
to
show
today.
B: We did not use ElastiFlow as such. This flow collector is deployed in the Kubernetes cluster where we are collecting the flows; it's running in the same cluster. Here we can select the time interval to see the number of flow records received, say in the last 15 minutes.
B
Let
me
refresh,
we
have
received
this
nine
floor
record
flows
every
minute,
so
our
export
interval
is
60
seconds
and
our
poll
interval
is
five
seconds.
B
We
poll
the
contract
module
every
five
seconds
and
and
then
we
export
these
flow
records
every
one
minute
from
the
connection
store
and
I'm
showing
here,
I
have
added
a
filter
default,
the
our
workloads
are
on
default,
namespace
and
so
the
source
and
destination
pod
name
spaces
are
default
and
we
can
see
our
workload
has
nine
flows
flowing
through
and
in
our
cluster
there
are
three
nodes:
master
node
worker
node,
one
and
worker
node
two.
B
So
these
are
the
this.
So
this
is
one
kind
of
a
dashboard
where
we
can
see
how
many
flow
records
have
received.
We
have
received
till
now
and
we
can
change
the
time
so
to
see
like
the
history
like
three
days
four
days
and
like
that
so
and
then
the
second
dashboard
we
have
is,
let
me
refresh
again,
we
have
is
with
the
flows
and
the
the
left
diagram
is
essentially
the
left
side.
B
Vertical
line
shows
the
source,
pod
names
and
board
namespace
the
default
web,
client,
etc,
and
then
the
right
ones
shows
the
destination
for
namespace
and
bot
names
and
the
the
this.
The
terabytes
is
basically
cumulative
sum
of
bytes
that
have
been
seen
in
the
flow
over
last.
How
many
hour
minutes.
We
are
going
to
select
here
type,
so
here
here
as
well,
I
have
chosen
the
default
default
so
and
then
the
next.
The
right
diagram
is
for
reverse
bytes
in
that
flow.
B: This is the forward direction, and this is the reverse direction. We have considered two workloads for this demo: one workload is intra-node flows, which are unidirectional.
B
That
means
the
client
is
sending
data
to
server
and
then
the
second
workload
is
inter
node
flows
where
the
client
and
server
are
sending
data
to
each
other.
So
the
two
workflow
workloads
and
I'll
focus
on
the
first
one,
the
intra
node,
which
is
the
web
client
one.
B
And
the
destination
is
also
the
web
server
yeah
so
anyway,
here
there
is
only
one
intro
and
outflow.
The
client
is
talking
to
server.
These
are
hyper
flows
and
we
can
see
that
there
is
a
lot
of
data
that
has
been
sent
from
client
to
server
and
very
comparatively
very
less
data
received
from
server
to
client
as
acknowledgements,
and
we
also
here.
We
also
show
the
time
series
of
the
flow
throughput.
So
here
we
have
four
graphs.
B
We
can
see
the
throughput
of
the
from
the
source
pod
tx
mbps,
how
much
data
the
source
part
is
sending
to
a
destination
part,
and
then
source
for
rx
mbps
is
how
much
data
is
receiving
from
at
the
destination
part
in
each
flow
record-
and
here
you
can
see
around
130
gigs
per
second
is
being
said,
and
then
the
the
reverse
direction
is
only
acknowledgement
which
is
around
136
mega
bits
per
second.
B
So
and
this
is
kind
of
redundant,
we,
we
kind
of
show
the
destination
for
tx
mbps,
which
is
like
the
rx
of
this
one
and
similarly
destination
for
rx
is
the
source
for
the
links.
So
there
are
these
four
time
series
based
plots.
B: Okay, so essentially what happens with inter-node flows is that each node will have a flow record for the same flow, and, as I mentioned, as the first phase of this feature we only resolve local pod names and namespaces; we don't resolve the remote ones. So here, as you can see, the web client is on worker node one and the web server is on worker node two.
B
So
the
here
essentially-
and
I
also
mentioned
bi-directional
traffic
here-
the
client
is
sending
data
to
the
server
as
we
cannot
resolve
the
remote
part.
We
show
the
ip
here
and
we
can
see
some
amount
of
data
is
going
from
the
web
client
to
this
ip
and
and
then
you
can
see
the
the
ip
of
the
web
client
is
also
here,
10.0.2.16.
B
so
and
then
the
same
flow
will
be
there
on
the
node
2.
As
I
mean
worker
node
2.,
there
we
resolve
the
local
pod
name
web
server,
and
then
we
we
cannot
resolve
the
remote
port
name
and
namespace,
and
so
essentially
we
have
two
flow
records.
For
the
same
flow
and
the
the
amount
of
bytes
that
are
sent
is
almost
the
same,
and
and
then
here
that
there
is
bi-directional
traffic,
so
we
see
the
reverse
direction.
B
There
is
some
data
flowing
as
well,
and
we
can
see
the
same
thing
here
in
the
time
series
or
like
the
flow
throughput
representation
and
the
the
no
traffic
from
is
around
five
gigabits
per
second
in
one
direction
and
three
in
the
other
direction.
B
Yeah.
This
is
somewhat
like
it
and
we
can
in
future.
We
can
probably
think
of
some
some
sort
of
since
the
the
flow
records
of
the
same
flow
are
there.
We
have
to
aggregate
them
or
we
can.
We
have
to
do
something
for
the
internet
traffic.
B
So
this
is
what
we
want
to
show
in
the
demo.
Any.
B: Oh, we don't create watchers, so we don't have that information in the Antrea agent; we kind of rely on the interface store.
B: Okay, yeah. The plan is to do this through a flow aggregator: send all the flow records to one component in the cluster, and there we will resolve the remote pod names and namespaces.
E: I have another question: for ElastiFlow, do we need any customization of the ElastiFlow views to filter by the attributes we append to the flows, or can ElastiFlow support filtering or search on any of them out of the box?
B: I think we have to customize; I think you can add more. But she actually moved away from ElastiFlow and used her own Logstash script, together with Elasticsearch and Kibana, because there were some license issues with shipping ElastiFlow in Antrea. She can add more on that.
C: The Logstash will collect the data, do some processing, and then store it in Elasticsearch. Kibana is only for the visualization, so we are using some built-in visualizations of Kibana.
E: Okay, so you mean the service is by ourselves: we have some service code that works with Elasticsearch to consume the data and then present it in Kibana?
B: Yeah. To give a specific example: we calculate the throughput using octetDeltaCount, and then we use the interval between the records received for each flow record, the time between those records. We use octetDeltaCount and the interval to calculate an estimated throughput. That is added in the Logstash script, in our code.
E: So you guys already created the PR for that?

B: Yeah, I think there is a PR.

E: Maybe I missed that one; I saw some testing code, since...
B: Labels? Yeah, if the information is available, we can send it as a field in the record. We have to explore how to do it, but if the information is available, we can send it as part of the flow record.
B: Let me talk about the future work. Essentially, we're planning to add network policy and network policy rule info to the flows as the next step. We're also exploring Antrea Proxy: it looks like the service mapping info is available at the agent, so we want to add the service name and so on to the flow record; that's another thing. And then, create some flow metrics out of this as Prometheus metrics.
B: These are the three things we are thinking of. And then, as I mentioned during the demo, a flow aggregator: flow records from every agent in the cluster will be received by this one component, where we are going to have watchers so we can resolve the remote pod name, pod namespace, node names, etc. And then, for kube-proxy scenarios...
B
We
have
to
resolve
the
the
service
information
for
the
for
the
for
each
floor
flow
record.
Essentially,
the
idea
there
is
there
are
two
contract
flows,
one
with
cluster
ip
and
one
with
endpoint
ip.
So
we
have
to
send
both
the
flows
to
the
flow
aggregator
and
then
based
on
the
mapping
between
endpoint
ip
and
the
cluster
ip.
We
have
to
correlate
these
two
into
one
flow
and
flow
record
with
the
service
names,
etc.
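The correlation step can be sketched as follows, assuming a simplified record shape (illustrative names only, not the aggregator's actual data model):

```go
package main

import "fmt"

// flowRecord is a toy stand-in for an IPFIX record: kube-proxy DNAT
// produces two conntrack flows for one Service connection, one keyed
// on the cluster IP (pre-DNAT) and one on the endpoint IP (post-DNAT).
type flowRecord struct {
	srcIP, dstIP string
	bytes        uint64
}

// correlate merges the pre-DNAT and post-DNAT records for the same
// connection into one record: the real backend endpoint stays as the
// destination, and the cluster IP identifies which Service was hit.
func correlate(pre, post flowRecord) (merged flowRecord, serviceIP string) {
	return post, pre.dstIP
}

func main() {
	pre := flowRecord{srcIP: "10.0.2.16", dstIP: "10.96.0.20", bytes: 5000} // cluster IP flow
	post := flowRecord{srcIP: "10.0.2.16", dstIP: "10.0.3.7", bytes: 5000}  // endpoint flow
	m, svc := correlate(pre, post)
	fmt.Printf("%s -> %s via service %s, bytes=%d\n", m.srcIP, m.dstIP, svc, m.bytes)
}
```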
B: That is the next step, with the flow aggregator; this is what we are planning to do. And for that we need more IPFIX implementation: we have to implement an IPFIX mediator, which can collect and add this information and then send it to the collector.
G: So for the network policy rule flows, is it possible to add additional metadata to the flow? For example, I recently saw in some demo that when a particular flow, let's say the default flow, is being hit by traffic, we can tag it as an "unprotected" flow, saying that this flow was not covered by any network policy: it was not explicitly allowed, it was allowed by default.
G: That way, with one view of the system, you can see how many unprotected flows are happening in the system, and that could give visibility. Is that possible with what we have today?
B: The idea here is to use the connection label: when we are adding entries to the conntrack table in the conntrack module, through the connection label we are going to attach this network policy metadata, the rule ID or UUID.
B: We are not sure exactly how it will look, but that's the high-level idea. So let's say the connection label is not there for a conntrack flow: that means it did not go through a network policy. So it should be possible to do that kind of thing. But until we implement this first part, I cannot tell for sure whether it can be done; at least at the high level, it looks possible.
B: Can you repeat the first part?
B: Oh, we can do that as part of the visibility here, through filters, but not at the Antrea agent. We get everything in the conntrack module that is relevant to Antrea, all the flows; we don't have any filtering at the agent itself, but we can do that filtering on the dashboard here.
B: I think we can definitely explore having that sort of filter, because conntrack definitely provides filtering there. But the thing is, it's probably easier to implement not showing certain things; you'll have multiple filters, right? So what I'm saying is that negation of a filter is probably easier to implement than saying "I want to see only these 100 flows" or something like that.
E: So the filtering is more about reducing the traffic for maintenance?
L: How do we want to make it configurable? How would the configuration happen for this filtering?
E: If you want to do some filtering, I think probably some CRD, in my mind. Okay.
A: All right, so I would like to thank Wenying, Jianjun Shen, and Srikar for their presentations and demo; we really appreciated them. This has also probably been the longest meeting in Antrea community history, so we set a record today. I am going to stop the recording now, and I wish everyone a good morning, good afternoon, good evening, or good night.