From YouTube: Antrea Community Meeting 01/19/2021
Description
Antrea Community Meeting, January 19th 2021
A
All right, perfect. So good morning, good afternoon, good evening, good night in some cases, and welcome to this instance of the Antrea community meeting. Today is Wednesday, January 20th, or Tuesday, January 19th if you are in the US. For this meeting we will have a couple of quick announcements, one from Jay. I myself will introduce the results of the poll for the additional Antrea community meeting that we discussed two weeks ago, and then the Intel team will present their solution on CNF, containerized network function, networking leveraging Antrea. So without wasting more time, in the interest of time, let's get started. We'll start with the announcements, and Jay, you're the first.
B
Sure. Matt, you want to go ahead and tell people about what you've been up to?
C
Sure, yeah. Hey everybody, Matt Fenwick here. I've been working on network policies in the e2e suite in Kubernetes, with Jay and Abhishek and other people, and we came up with this fuzzer to test network policies: generate a whole bunch of network policies and then test them. I was running that against Calico, and then they suggested that I should try it against Antrea and see what we could do there, in terms of just, you know, generating tons of network policies and testing those. So yeah.
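[Editor's note: a minimal Go sketch of the fuzzing idea described here, enumerating policies from a small feature grid. The helper names and the feature set are illustrative assumptions, not the actual tool or the upstream framework.]

```go
// Sketch: enumerate NetworkPolicy variants from a tiny feature grid. A real
// fuzzer would also vary namespaces, ports, CIDRs, and egress rules.
package main

import (
	"fmt"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// makePolicy builds one ingress policy that selects pods labeled pod=<target>
// and allows traffic from pods labeled pod=<peer>.
func makePolicy(name, target, peer string) *netv1.NetworkPolicy {
	return &netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: netv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"pod": target},
			},
			Ingress: []netv1.NetworkPolicyIngressRule{{
				From: []netv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"pod": peer},
					},
				}},
			}},
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeIngress},
		},
	}
}

func main() {
	pods := []string{"a", "b", "c"}
	// Every (target, peer) combination yields one candidate policy to apply
	// and then probe against the CNI under test.
	for _, t := range pods {
		for _, p := range pods {
			pol := makePolicy(fmt.Sprintf("allow-%s-from-%s", t, p), t, p)
			fmt.Println(pol.Name)
		}
	}
}
```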
B
It sounds like, from the initial results (I didn't see them yet, but Matt was telling me), it's interesting that we definitely don't have parity in some areas, so it'd be cool to basically get to parity there. Maybe, I don't know what you said, it was about 10% of the ones that you saw where the results were different, right, or something? Yeah.
F
Yeah, you go ahead.
E
I just wanted to see if you generate random policies, or if you start from a set of network policies and then validate their implementation. Because I think there was another upstream project, and I think Jay mentioned it in the GitHub issue, that was doing something similar. What was the name, illuminatio or something like this? I think they were starting from a set of... they were assuming some network policies in your cluster and then validating the implementation for those policies.
C
Yeah, let me take a look at that issue. Oh yeah, basically: generating a bunch of network policies, generating expected results, and then comparing those to actual runs against Antrea and looking for differences.
C
Yeah, it's under my profile right now, MIT license and all that. It's linked; I don't know if you guys saw the upstream issue. I guess I can drop a link to that in Slack, but yeah, it's open source.
B
Yeah, just the history on this, Stephen: we got the initial NetPol framework in Antrea ported over to upstream (Matt was instrumental in doing that), so we have all those Antrea tests now in upstream Kubernetes itself, under hack... I'm sorry, under test/e2e/network/netpol. And now Matt has basically extended that framework to do more than just run the tests: it actually generates thousands of policies and verifies them with these truth tables of connectivity across a matrix.
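[Editor's note: a minimal sketch of the truth-table idea just described: build an expected connectivity matrix from the policies, probe every pod pair, and diff. The types are invented for illustration; the upstream netpol framework's API differs.]

```go
package main

import "fmt"

// Reachability is a truth table: matrix[from][to] == true means the probe
// from pod "from" to pod "to" is expected (or observed) to succeed.
type Reachability map[string]map[string]bool

// diff returns the pod pairs where expected and observed disagree, which is
// exactly the "parity" signal discussed above.
func diff(expected, observed Reachability) []string {
	var mismatches []string
	for from, row := range expected {
		for to, want := range row {
			if got := observed[from][to]; got != want {
				mismatches = append(mismatches,
					fmt.Sprintf("%s -> %s: expected %v, got %v", from, to, want, got))
			}
		}
	}
	return mismatches
}

func main() {
	expected := Reachability{
		"a": {"a": true, "b": true},
		"b": {"a": false, "b": true}, // a policy should block b -> a
	}
	observed := Reachability{
		"a": {"a": true, "b": true},
		"b": {"a": true, "b": true}, // the CNI let b -> a through
	}
	for _, m := range diff(expected, observed) {
		fmt.Println(m)
	}
}
```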
B
So it's kind of an iteration on the upstream-aligned work we've already been doing.
G
Okay, great. I see that link in Slack.
B
Really quick: who should be the lead on working with Matt? I can get him started on some of the stuff, but at some point we may have to really upgrade the Antrea horsepower in terms of who's really looking at the issues. Should we just file issues, or should we...?
F
Is there any ticket, or some kind of document, to understand this test suite? Do you have some info for the community, including the code base?
C
I have some really bare-bones documentation. Yeah, I just threw it together in the last week, so I'll be improving that stuff. But if there's any specific question, feel free to poke me on Slack, and I'll figure out some way to write it up or just let you know.
A
All right, thanks, Matthew and Jay. The next topic is for myself: giving a summary of the survey that we circulated two weeks ago. This should take no more than a minute, hopefully. Let me share my screen; you should now see a screen with the results of the survey. We didn't get many votes, just seven, but that's still better than nothing.
A
As you can see from the first answer, everyone prefers an office hours meeting. Then there is a slight preference for the Pacific PM time, and there's kind of a tie between a 60-minute and a 30-minute meeting, with a slight preference for 60 minutes. I think it might be worth scheduling a 60-minute meeting; if there aren't any topics, or there aren't many attendees, the meeting can always be cut short. Asking for a preference for a specific time, we obviously got all sorts of preferences, but let's say this is in line with a general preference for a Pacific PM meeting. As for the preferred day of the week, no one likes Wednesday for meetings, for some reason, but I will say that three out of seven votes were for Tuesday, so we consider Tuesday to be the day.
A
Considering this, I believe the idea will be to schedule, every two weeks, an office hours meeting with a duration of one hour, on Tuesday at 2 p.m. Pacific time. That means 3 p.m. Mountain, 4 p.m. Central, and of course 5 p.m. on the East Coast. It will still be doable for European time zones, being 10 p.m. for Great Britain and Portugal, 11 p.m. for the rest of Central Europe, and midnight for Eastern Europe and Israel; I mean, it will be kind of late for that time zone. Asia will clearly not be doable: it will be like 6 a.m. in Beijing and 7 a.m. in Japan.
A
But you know, this will give us the opportunity of having another meeting with much better coverage for United States time zones. This will not be, however, a community meeting in the same style as this one, with a predefined agenda; it will mostly be an office hours meeting. Obviously we can still discuss design topics and implementation topics; it will just be that the main focus of the meeting will be answering questions for community members, who can come and ask for assistance, ask out of curiosity, ask for clarifications, and so on. So, if there isn't any objection, the procedure will be to add a note to the Antrea README, and we will start scheduling this meeting on the Antrea website, as we do for this meeting, starting next Tuesday at 2 p.m.
G
Yeah, I have one. I'm just wondering, and maybe this already exists and I'm oblivious to it, but if you look at the Kubernetes project, they have a community calendar, just using the free Google Calendar service.
G
There are enough meetings now in Antrea that if we don't have one of these, we should create one, so that people can open the webpage and just take a quick look to get the meeting times. And when you create it, you can click on the items and get them to load in an Outlook calendar, or your personal Google calendar, or whatever you use.
G
I don't know that it's allowed; they're not as liberal with it as they are with creating Slack channels. But there's nothing, I think, that would stop us from creating our own calendar for Antrea. As for the ownership, you already have a Google mailing list, so you give ownership of that calendar to the mailing list; that way, if one person goes away, you still have shared ownership of the calendar entries among multiple people.
A
Yeah, that should be doable; it's a good idea. I can set up this calendar, and maybe we'll just share a link to the calendar on the Antrea website, so that people can download the calendar and add bookmarks in their own calendars.
A
Perfect. So, since we're already a quarter into the meeting, I think we can go to the presentation around CNF. I don't know, Arun, is it you who is going to present?
F
Yeah. That's great, yeah, thanks. So I think, if you guys can recall, back in December, the first week of December, we came up with the final proposal from our side to support CNF, the CNF use case, on Antrea.
F
I believe it was well understood by the community, and we saw a pretty positive response from all of you. So, with that: even last time we said we were actually working on a demo to give a more visual confirmation of how these CNFs are expected to be implemented.
F
It took a little longer than we initially thought; there was also vacation time during Christmas and New Year. So we finally got time now to present it to the community.
F
We also had a quick internal sync with some of the key folks, so they may be aware of this already, but feel free to ask questions as you have them. Just to recap what we discussed last time: we're not going to go through the entire details of what we presented last time, so if you need more details, please review the corresponding tickets, where we have uploaded the PPT as well; I think Antonin uploaded the PPT there.
F
We can probably share the ticket details later in Slack. Just to start with the overview: in this case we try to address the CNF use case in Antrea, and as you can see, this is the high-level cluster scenario. You have microservices running as part of your cluster.
F
You have your provider network, which is your building LAN or WAN or whatever, and then you have a bunch of network functions running as containers, still under the same cluster scope. There is a load balancer, there is a next-gen firewall, and there is an SD-WAN CNF; we just took SD-WAN as a use case.
F
There are a few traffic patterns, as we saw last time. One of the key traffic patterns is to get the traffic out of your provider network to the internet, and vice versa. Another one is from the microservices themselves, from the cluster, to the external network, and vice versa. And another one is from the provider network to the microservices.
F
These are some of the traffic patterns which we see as feasible with this SD-WAN configuration. Any questions on this? I hope you can recall it from the previous presentation.
F
Right, okay. A little more of a deep dive here: this gives you an overview of how the SFC traffic flow, the service function chain flow for CNF, is expected to happen in this case. Basically, as we described last time, there are a couple of virtual networks in this case.
F
In this specific use case, we have two virtual networks to be configured for the traffic chaining, from the SLB to the next-gen firewall, and accordingly from the next-gen firewall to the SD-WAN. Otherwise, the traffic getting out of your provider network is going to be default-routed to your SLB; your SLB is going to be the next hop for any traffic going out of your provider network. So this is actually a physical interconnection.
F
In this case, these two are logical networks. This portion, which we're talking about now, is a CNF which is virtual and resides somewhere in the cloud. This is your building LAN, this is your internet, and so, to service-chain the network functions, these are actually...
F
These are actually mounted in your cluster, a Kubernetes cluster. We can deep dive on this diagram. Just to understand a little bit: we have a subnet defined for your provider network and its default route, in this case the 172.30.22.x subnet, with .22.200 as the gateway.
F
That is the subnet here, and the subnet for communication between the SLB and the next-gen firewall is 33.x; the communication from the next-gen firewall to the SD-WAN is 44.x. These two are the virtual networks which we create for traffic forwarding between the CNFs. Once the traffic hits the last node of the chain, it has to be destined towards your external router.
F
As per the configuration, there would be a separate external router, but for the demo needs, what we did was just use the Antrea gateway itself as the external router in our case. When we go through the example, I can come back here to explain further. As of now, you can just take this as the actual traffic flow and pattern which we are going to see in the quick demo with Antrea.
F
Right. So another aspect to understand here is the default route. When it comes to traffic forwarding at this level, let's take the example of the next-gen firewall, when the traffic has to be forwarded towards the east, or outbound in this case; let's assume east-facing traffic.
F
If the traffic is forwarded towards the east, it's going to follow the default route; any traffic forwarded from your provider network to the outside world is always going to go through the default route. That's a typical routing principle. At the same time, when the reverse communication happens, the CNF at level X should be aware of all the subnets on its left side.
F
That's the only way it can route the traffic back to the network, that is, route the traffic which is bound for the internal provider network coming from the external internet services.
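[Editor's note: a small sketch of that left-side routing rule under assumed types, not the PoC code: a CNF at position i in the chain default-routes east, and routes every left-side subnet back via its left-side gateway. The gateway addresses follow the demo's subnets but are assumptions.]

```go
package main

import "fmt"

// Route is a simplified routing entry: destination subnet via a gateway
// ("default" means 0.0.0.0/0).
type Route struct{ Dst, Via string }

// routesFor returns the routes a CNF at position i in the chain needs.
// chainSubnets are ordered left to right (provider side first), e.g.
// 22.x, 33.x, 44.x; leftGW/rightGW are the next hops on each side.
func routesFor(i int, chainSubnets []string, leftGW, rightGW string) []Route {
	routes := []Route{{Dst: "default", Via: rightGW}} // east-bound traffic
	// Everything to the left (provider network and earlier hops) must be
	// reachable back through the left-side gateway.
	for _, s := range chainSubnets[:i] {
		routes = append(routes, Route{Dst: s, Via: leftGW})
	}
	return routes
}

func main() {
	subnets := []string{"172.30.22.0/24", "172.30.33.0/24", "172.30.44.0/24"}
	// The next-gen firewall sits at position 1, between the SLB and SD-WAN.
	for _, r := range routesFor(1, subnets, "172.30.33.100", "172.30.44.200") {
		fmt.Printf("%s via %s\n", r.Dst, r.Via)
	}
}
```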
F
Is that clear? Okay, yeah, sure, thank you. Before we jump to the real demo scenarios, we just want to summarize exactly what changes we see have to happen to achieve this CNF use case in Antrea. What we have done is a PoC: basically, we modified the Antrea code, and some of the things are hard-coded, but ultimately, when we implement this feature, it's going to impact these areas. One is the Antrea controller update, as we discussed last month.
F
What we need here is a new controller module; currently we have a monolithic controller, so either we modify the existing controller or we introduce a new controller which would basically understand the CNF pod configuration. Basically, the CNF CRD has to be parsed, and it has to understand multiple-interface support.
F
If we look at the use case, if you take this CNF, which is basically a CNF pod in this case, it has to have an interface to support the left side, an interface to support the right-side traffic, and also an interface to support internal cluster communication. So basically we expect three interfaces to be configured when it comes to a CNF use case pod. What it means for Antrea is that, once it recognizes the pod is actually a CNF use case pod...
F
...it has to go and create multiple veth pairs, or VFs if we have an SR-IOV device to be supported, so we would have a VF to be configured. Ultimately, from the container perspective, it is about the network interface: either it's going to be a veth pair, like what we do currently for the default network configuration, or it's going to be a VF if it's SR-IOV.
F
Another thing is the virtual network update between CNFs. One part is the interface itself, and the other is what subnet is configured and what gateway information is configured per interface. These are the key aspects which the controller has to understand from the CRD definition of your CNF pods; once it understands that, it has to generate a pod annotation.
F
The basic use case here is that we expect the chain configuration to be supported either at init time or at runtime, both cases. So we expect a pod annotation: if a new CRD is pushed towards the controller, the controller parses it and generates the pod annotation, an annotation on the pod, to communicate what interfaces have to be configured and what subnets have to be configured.
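[Editor's note: a rough illustration of that controller-to-agent handoff. The annotation key and JSON schema here are invented for the sketch; the PoC's actual annotation format is not shown in the meeting.]

```go
package main

import (
	"encoding/json"
	"fmt"
)

// InterfaceConfig is a hypothetical per-interface entry the controller
// computes from the CNF CRD: which virtual network, which subnet/gateway.
type InterfaceConfig struct {
	Name    string `json:"name"`    // e.g. eth1 (left), eth2 (right)
	Network string `json:"network"` // virtual network the interface joins
	Subnet  string `json:"subnet"`
	Gateway string `json:"gateway"`
}

func main() {
	ifaces := []InterfaceConfig{
		{Name: "eth1", Network: "virtualnet1", Subnet: "172.30.22.0/24", Gateway: "172.30.22.200"},
		{Name: "eth2", Network: "virtualnet2", Subnet: "172.30.33.0/24", Gateway: "172.30.33.100"},
	}
	raw, _ := json.Marshal(ifaces)
	// The controller would set this on the pod; the agent watches for it and
	// configures the extra interfaces accordingly.
	annotations := map[string]string{
		"cnf.example.com/interfaces": string(raw), // hypothetical key
	}
	fmt.Println(annotations["cnf.example.com/interfaces"])
}
```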
F
There is also an ask: why do we need a controller in this case? Why can't we play with the pod configuration itself, at the node level or the pod level, during init? That is something we are exploring further; we need to see whether it really makes sense to go that route, or whether we still feel it is valuable to have a controller in this case.
F
Okay. The other major impact is on the Antrea agent itself. One of the main things is for the CNF to support multiple interfaces. The main thing we know now is that Antrea supports the network configuration for application pods, which means traffic is either destined to or originated at the pod. What we support right now is: we create a pod, and the traffic is either generated from it or destined to it. Basically, we are not supporting a forwarding use case.
F
What we need to do in Antrea's CNI is to implement multiple-interface support. We need to update some of the utility and library functions to add the multiple-interface veth configurations; otherwise, it's going to be mostly code reuse. We already have a predefined function which takes care very well of creating the veth pairs and such, so what we're going to do is just pass different interface names.
F
As of now, Antrea assumes that we have a single interface to be attached to the OVS bridge, so there are some naming conventions to be taken care of, some minor modifications to be made to the algorithm for generating the interface name. Having said that, we can reuse most of the code in this case, with some minor modifications that adapt it to implement multiple interfaces at the CNF pod scope.
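[Editor's note: an illustrative take on the naming problem, deriving a deterministic host-side veth name per interface by hashing pod identity plus an interface index, within the kernel's 15-character interface name limit. Antrea's real generator differs; this is a sketch under stated assumptions.]

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// hostVethName derives a stable host-side veth name for the idx-th interface
// of a pod. Linux interface names are capped at 15 characters, so we keep a
// short hash prefix plus a suffix that encodes the interface index.
func hostVethName(podNamespace, podName string, idx int) string {
	sum := sha1.Sum([]byte(fmt.Sprintf("%s/%s/%d", podNamespace, podName, idx)))
	return fmt.Sprintf("%s-%d", hex.EncodeToString(sum[:])[:10], idx)
}

func main() {
	for idx := 0; idx < 3; idx++ { // eth0 (default), eth1 (left), eth2 (right)
		fmt.Println(hostVethName("default", "ngfw", idx))
	}
}
```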
I
On IPAM: I think you once mentioned that you plan to use, I think it's Whereabouts, to do IPAM. Is that still the case? I remember you mentioned in an earlier meeting that you plan to use Whereabouts for IPAM.
H
Yeah, actually, we are considering two or three open source options; we have not yet decided.
F
I think as of now it's hard-coded. Yes, right now it is hard-coded. Basically, there is no controller implementation in our PoC; the PoC only covers this segment, and we have not implemented a controller for it. This is something we want to discuss, to decide and finalize the architecture before the implementation. As of now it's all hard-coded: when a pod creation event comes, the Antrea CNI looks at the pod name, and there is a predefined set of pod names for this communication.
F
Okay. Another main aspect here is that the Antrea agent and the OVS flows have to be reconfigured for the CNF traffic interfaces. We are also marking at the classifier level: at the OVS classifier level, we mark the traffic which is actually coming through the CNF interfaces, that is, anything which is not the default interface, which in our case right now means eth1 and eth2.
F
We mark it to differentiate the CNF flow from the traditional, existing Antrea default flow. As of now, as per the PoC, we are handling the traffic in the same br-int OVS bridge. There was also a proposal to use a different bridge, because of the SR-IOV support dependencies and such.
F
We may consider that. The current implementation, because it's based mainly on the application pod, makes an assumption in the SpoofGuard table. I hope all of us understand the OVS flow pipeline, which is well documented in the Antrea community: the packet goes through table 10, I believe, table 20 or 10... yeah, table 10, the SpoofGuard table. Once it hits the SpoofGuard table...
F
...we ensure that a packet hitting that interface is sourced with the MAC address and IP address of that specific interface. That will not be true for a CNF, because for CNF we are looking at a forwarding use case instead of traffic origination or termination. So what we had to do was modify the SpoofGuard flow to allow the CNF traffic: basically, don't look at the IP addresses and MAC addresses for now.
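[Editor's note: a hedged sketch of what such flows could look like, as ovs-ofctl-style strings assembled in Go. The table numbers follow the Antrea pipeline mentioned above, but the register choice and match fields are assumptions, not the PoC's actual flows.]

```go
package main

import "fmt"

// cnfBypassFlows builds illustrative OVS flow specs that (1) mark packets
// entering from a CNF interface in the Classifier table (table 0) and
// (2) let marked packets skip the per-interface IP/MAC checks in the
// SpoofGuard table (table 10) by resubmitting them straight to table 20.
func cnfBypassFlows(ofPort int) []string {
	return []string{
		// Tag CNF traffic with a register bit so later tables can tell it
		// apart from regular application-pod traffic.
		fmt.Sprintf("table=0,priority=210,in_port=%d,actions=load:1->NXM_NX_REG4[0],resubmit(,10)", ofPort),
		// Marked traffic bypasses SpoofGuard's source IP/MAC validation.
		"table=10,priority=210,reg4=0x1/0x1,actions=resubmit(,20)",
	}
}

func main() {
	for _, f := range cnfBypassFlows(7) { // 7: assumed OF port of a CNF veth
		fmt.Println("ovs-ofctl add-flow br-int", f)
	}
}
```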
F
Don't look into the IP and MAC address, as I said. Basically, we had to modify tables 0, 10 and 20 for the immediate needs, and table 70, the L3 forwarding table, has to be modified as well for the tunnel-specific communication. Your next hop, when you take this case, can either be on the same node or on some other node in your cluster.
F
So this one might be on node one, this one on node two, or these two are going to be on the same node. We need some logic to find out that your next hop is not local to you, not on the same node, and then we need to create L3 table updates for the Geneve-tunnel-based communication. This is what we discussed last time: when the next hop is actually on another node...
F
...we are going to use the Geneve tunnel, so we are going to follow everything as-is, as Antrea defines it right now. The CNF use case will be built based on how it works as of now; the only difference is that we support the forwarding use case.
F
Yeah, so these are the updates; I missed mentioning table 70 earlier. For the demo needs, what we have done is not support the tunnel use case yet: basically, we made sure, with pod affinity, that the pods get created only on the same node for the demo.
F
Another main thing which has to be updated is the Antrea agent, to configure the route table in the CNF pod scope and also in the host scope. In the pod scope it has to configure the default route, the same way we discussed. All this route programming has to be taken care of in the pod scope when it is a CNF, so Antrea needs to go and configure these routes...
F
...this route information in the pod scope, that is, the container scope. It also has to make sure to configure the iptables rule updates, if any, and the route table updates in the host scope. With this, the changes are at the agent, or OVS daemon, scope.
F
The firewall updates have to be taken care of for any masquerading needs, or any kind of packet flow acceptance required from the host scope, and the routing table has to be updated on the host scope as well. We can see this in the example directly. Any specific questions? If not, we can go to the live demo and see how it works.
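[Editor's note: a minimal sketch of the host-scope programming just described, printed as commands rather than executed. The subnet, gateway, and interface values are illustrative; the PoC's implementation is not shown here.]

```go
package main

import (
	"fmt"
	"os/exec"
)

// hostScopeCommands returns the host-side programming the agent would apply
// for one CNF virtual network: a route to the chain subnet via the CNF pod,
// plus a MASQUERADE rule so chain traffic leaving the node is SNATed.
func hostScopeCommands(chainSubnet, viaPodIP, egressIface string) [][]string {
	return [][]string{
		{"ip", "route", "replace", chainSubnet, "via", viaPodIP},
		{"iptables", "-t", "nat", "-A", "POSTROUTING",
			"-s", chainSubnet, "-o", egressIface, "-j", "MASQUERADE"},
	}
}

func main() {
	for _, args := range hostScopeCommands("172.30.44.0/24", "172.30.44.200", "antrea-gw0") {
		// Print instead of running; a real agent would execute these (or use
		// netlink / go-iptables) with proper error handling.
		fmt.Println(exec.Command(args[0], args[1:]...).String())
	}
}
```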
A
Yes, I have one quick question, still related somehow to IPAM, that I kind of asked earlier. The IP addresses for the virtual networks: are those, unlike every other network that we create, which is node-local, spread across all the nodes in the deployment? Is that correct?
F
Yeah, that's the idea right now. We will use an IPAM controller, that is what we are thinking about; we are yet to decide, but we are going to use the IPAM controller for sure for that.
A
I see, I see. And then the second thing, probably not part of this demo, but I don't know if it's part of the overall project: traffic steering, for selecting which CNFs a given traffic flow should go through. Is that something...?
F
Yeah, that's right, thanks. The controller approach: the reason we think of a controller approach in this case is that the controller has the global view of the cluster, so it knows which node is part of the chain and what subnet has to be configured. That way it gives a global scope, a global view of the chain. That's why we thought having a separate controller which understands the CNF use case would be nice.
F
I have a demo, yeah, sure. So, just to show: I have a cluster, and I have a chain that has been running for the last 40 minutes, I believe; just before the call I started the chain. As we discussed, there is a provider network, there is an SD-WAN, an SLB, and the next-gen firewall, as we saw in the diagram. I used the same names, and the PoC is defined in such a way that...
F
...if it sees these names, it knows what has to be done based on the subnets. So it's not fully hard-coded: I actually used some of the test functions to generate the IPAM; there are IPAM-specific test functions which I used to generate the subnets, based on the data structure defined for the specific subnet.
F
This is the CRD, or rather the pod spec, that we defined; it's very simple. All we have is the container, which is basically a busybox in this case, and it's a privileged container, obviously. We have the environment variables set because we are behind the Intel proxy, so we need those for things to work fine.
F
As I mentioned, we are not really supporting the tunnel, meaning node-to-node communication, in the PoC as of now. I'm working on it right now, adding support for it; it will be done soon. So, just to make sure everything works fine for the demo, we put a pod affinity in place: basically, all the chain nodes are going to be spawned on a single node.
F
For the provider network pod we also have the environment variables set, and I also had to configure the DNS, because we are behind a proxy on our internal network, so that communication works without any flaws and without additional manual steps. As you can see from here, all four of these pods were spawned on the same single node. For now we are supporting only this use case; ideally, in a real implementation, we need to support communication between nodes through the Geneve tunnel.
F
I have four terminal windows created here. This is the provider network, and this is the SLB. Okay, sorry, let me fix it... okay, it's here; this is the SLB.
F
This is the provider network case. If we go back to our diagram, we take this one... one sec, sorry about that. This portion here: instead of having a separate VM, in our configuration all these things are running in a single VM, and for the internet I also created a destination for us to reach, just for a typical ping or traceroute to show the flow. Instead of having another VM, I just created one more...
F
...sorry, one more pod in the same cluster, as "pn", the provider network, to initiate the traffic in this case, the westbound traffic from our side. If you see here, the key aspect to take care of in this case is the interface. What we plan to do is keep Antrea's default network configuration for the cluster-level communication.
F
This is the internal subnet CIDR assigned for the cluster, so Antrea picks it up and configures eth0, and the PoC code goes and configures the eth2 interface in this case. It's defined so that the left side is eth1 and the right side is eth2; it picks that based on the network direction, the interface direction. You can see that it configured the interface on the 22.x subnet, and the key aspect to see here is that the next-hop route has been updated with gateway .22.200.
F
This is what we see from here: any packet which is going out is destined towards the SLB, with the default gateway set to .22.200. Basically, we say that for any external traffic, the next hop is this guy. In the SLB's case, if you go to the SLB, you will see three interfaces: as we discussed, one is the Antrea default network, eth1 is my left network, and eth2 is my right network; .22.200 is the left network and .33.100 is the right network.
F
If you see the route updates in this case, it knows the 22 subnet, with .22.200, is on its left side, and obviously the 33 subnet is also known to it, because that is its right network.
F
If we go to the next-gen firewall, it's the same configuration: you have two interfaces, and if you look at the route configuration, you can see it knows about 22, 33 and 44. 44 is on its right side, and 22 and 33 are on its left side, as you can see from the diagram: from the next-gen firewall, 33 and 22 are on the left side and 44 is on the right side. That's what we see: 44 is towards eth2, and 22 and 33...
F
...33 are actually on the left side. Another main thing here is that the 22 subnet is not directly reachable: you don't have a 22 subnet on any interface here, so when you try to program the route, it's going to throw an error like "network unreachable". So we need to make sure that anything other than the immediately adjacent subnet is programmed as an on-link route, just to fake the reachability.
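[Editor's note: for illustration, here is how that on-link trick could look with the vishvananda/netlink library. The interface name and addresses follow the demo's subnets, but the code itself is an assumption, not the PoC.]

```go
package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// On the next-gen firewall pod: reach the 22 subnet via the SLB's
	// left-side address (.22.200), which is NOT on any locally attached
	// subnet here, so a plain route add would fail with "network unreachable".
	link, err := netlink.LinkByName("eth1") // left-side interface
	if err != nil {
		log.Fatal(err)
	}
	_, dst, _ := net.ParseCIDR("172.30.22.0/24")
	route := &netlink.Route{
		LinkIndex: link.Attrs().Index,
		Dst:       dst,
		Gw:        net.ParseIP("172.30.22.200"),
		// FLAG_ONLINK tells the kernel to treat the gateway as directly
		// reachable on this link, skipping the "is the gateway on a local
		// subnet?" check that would otherwise reject this route.
		Flags: int(netlink.FLAG_ONLINK),
	}
	if err := netlink.RouteAdd(route); err != nil {
		log.Fatal(err)
	}
}
```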
F
Right, so the last one here is the SD-WAN. As you can see from the routes, all three subnets are known here. What we do is just a deviation from what you saw in the diagram: in the diagram we say the traffic is actually going out to the external network, but to keep things simple, the actual traffic is going towards antrea-gw0, the Antrea gateway. The traffic is put back through gateway 0 on the same node.
F
So basically, the traffic is going through the chain, and finally it goes out of your antrea-gw0 interface, in our case. In an ideal scenario this would not be the case; we did that just to simplify the demo. And this is the nginx server which I'm using.
F
Yeah, so, as you can see, based on the routing information we have right now, the packet first hits .22.200, which is your SLB; next it goes to the .33 address, which is your next-gen firewall; then it goes to .44.200, which is your SD-WAN. These are the pods, the CNFs: these three subnets are the CNF subnets which the traffic is flowing through, and finally it goes out of the node from your antrea-gw0. This is that subnet.
F
www.example.com: it goes through the chain and resolves through our internal proxy and such, and it pulls the content, so that data is actually pulled through the chain right now. As for the actual use case: these are dummy pods, or containers, as of now. For the actual use case, if you take this diagram, you would basically go and install the next-gen firewall...
F
Let's say you install some open source firewall module here, you configure it, and you want to do traffic shaping; that can very well be done. If traffic is generated from a specific source, you limit the traffic, or drop the packets, or whatever; we can do that. So now the packet is put through the chain, which is configured through Antrea, and the rest is up to the service function.
F
Yeah, so that kind of summarizes the CNF flow in Antrea, and we also discussed the open challenges we have right now. We had an internal discussion as well, with Antonin and Jianjun, a few days back, and there are some inputs which we are looking to explore, to come up with the next level of information on that.
A
Perfect. I believe we still have 10 minutes left in the meeting, so we can probably open the floor for any question or comment that the community might have regarding this presentation. I would like to thank the Intel side very much for this PoC work; it's a very nice presentation. If you have any question or comment, please go ahead.
I
Sure, yeah, it's a very cool demo. Okay, one question; probably I missed it, you probably already showed it, but: when we configure the chain, we just need to change the CRD, right? We don't need to update the pod spec itself?
F
Right, right. So, the chain CRD: that's why we are thinking about the controller, for the controller to have a global scope of the chain at the cluster level. If we push it down to the pod spec, it may be relatively difficult, or not handy, for runtime configuration changes. Somebody needs to understand the entire picture of the chain and configure the corresponding pods; from the pod level, it's only about how many interfaces I have to create and what my subnet is per interface.
I
Got it. I remember in our first discussion you guys also mentioned some pod annotation, and also a virtual network CRD.
H
Yes, that is for the interfaces, actually. Basically... I think we have another slide right there, yeah.
F
So the pod annotation is expected to come in something like that: basically, we say virtual net one, virtual net two, what the interface is, what the interface name is. The agent needs to understand that; this is coming from the controller scope. The agent needs to understand this pod annotation and configure the interfaces. This is how we communicate what interfaces and subnets have to be configured.
H
Yeah, we create the virtual networks, and for each pod, which interface is part of which virtual network. I think we have an SFC CR right there we can see. Oh yeah.
H
Yeah, here you can see that.
I
I'm trying to understand the complete workflow. Basically, we still need to define the virtual networks used by the chain, and then we define a chain here using this network-chaining CRD, yes. And do we also need to annotate the pods for that?
H
For the pods, it's basically for the interfaces. This is additionally handy, so that when the pod is getting created, we can automatically create the interfaces with these subnets.
I
Okay, so this is also done by the user, right? The user needs to annotate the pods. Yeah, I see, I see. I was thinking, if we have a controller, maybe the controller automatically annotates the pods, or ultimately creates the interfaces on the pods, since you already have the chain...
H
The CRD, exactly; that's what Arun was saying. In case we want to maintain the entire SFC blueprint in the controller, we can skip the pod annotations.
I
I see, okay. So the virtual networks will still be predefined.
A
All right. So, thank you very much, and if there is no additional question, I think we can conclude this discussion on CNF in Antrea. I would like to thank the Intel side again for this work, and I really look forward to the integration in Antrea. Is there any final topic that we'd like to bring up for today? I'll wait like 30 seconds for community members to come up with something.
J
I think I have no important updates, just the issue that Perry created yesterday night. It's about an issue where the pod cannot access the Kubernetes API service; I think it's a big problem. I'm still working on debugging the OVS flows, but I think it's a hard question; I may need time to debug it.
B
Yeah, the Services stuff in general upstream... I feel like in general we need a good Service diagnostic; I think we need one for Linux. I mean, we have conformance tests and stuff, but the Service tests upstream are so complicated that it's hard to figure out what tests are testing what, and I don't think we have one for Windows either. But the particular bug Perry found was around Services.
J
The Service we hit in this case is different from a common Service, because the endpoint of the Service, the endpoint IP, is a node IP, not the pod IP. That's the difference.
J
Yeah, and we make OVS, our current pipeline, do both the SNAT and the DNAT on the same packet, so I'm not sure if these two operations happening at the same time cause the issue.
J
Thank you. Will you add more tests for that?
B
Yeah, we'll have more. I actually have network policy tests working on Windows for Antrea, using the network policy framework, and in that we found a new bug where you get a hang with agnhost; it's related to containerd on Windows not properly cleaning up log files.
B
We thought it was a network policy, Antrea-level issue, and then I dug more into it and found that I was just getting these pods that were hung all over the place, so there are a lot of interesting issues there. The probing containers also don't support UDP properly in agnhost, which is the normal end-to-end container that they use for all the e2e tests. So I'm digging through all that stuff.
A
All right, I will stop the recording now. I would like to thank everyone for attending this meeting. There will be an official announcement about the office hours community meeting for next week, and, as suggested by Stephen, we will also add a calendar for tracking these meetings. Thank you again for attending, and talk to you next week.