From YouTube: Antrea Community Meeting 09/14/2020
Antrea Community Meeting, September 14th 2020
A: We'll have to discuss a little bit of naming — yeah, it likely will come up. So let's see the topics that we have on the agenda today; you can find the agenda on the community Slack channel.
A: So first we have a discussion of a solution for audit logging for Antrea network policies, and that is from Machi — I'm sorry if I killed your name. And then for today we have a discussion of a short name for Antrea network policy.
A: This should be easy — no naming task is an easy one, but it should not take too much time, and Abhishek will lead that one. And finally, with Chen, today we are going to discuss a proposal for changing API groups. All right.
So these are all the topics that I know of. If there is any other topic that you would like to bring up, we can easily discuss it afterwards in the open discussion section of the meeting.
A: Sounds good, so if you want, you can get started.
C: Thanks. For today I'll do a presentation of audit logging for cluster network policy, and a demo; it's my intern project. The overall goal is to implement firewall logging, used for auditing and analyzing traffic, when it is enabled individually by each cluster network policy.
C: So, moving on to the details. First, we changed the CRD spec: we added a new field, enableLogging, in the ingress and egress rules of the ClusterNetworkPolicy. Another note is that we also start the packet-in handler if the AntreaPolicy feature gate is enabled.
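A manifest carrying the per-rule enableLogging field described here could look like the sketch below; the API group/version, the policy name and the selectors are all illustrative, not taken from the meeting:

```yaml
apiVersion: security.antrea.tanzu.vmware.com/v1alpha1  # illustrative group/version
kind: ClusterNetworkPolicy
metadata:
  name: cnp-allow-ping            # hypothetical policy name
spec:
  priority: 100
  appliedTo:
    - podSelector:
        matchLabels:
          app: server
  ingress:
    - action: Allow
      enableLogging: true         # the new per-rule field discussed above
      from:
        - podSelector:
            matchLabels:
              app: client
```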
C
At
that
time,
we
subscribe
to
the
channels
which
will
be
used
later
then
moving
on
to
andrea
controller,
convert
this
back
to
computed
rule
and
which
enable
includes
enable
logging.
So
from
this
spec
it
would
be
true
or
false
for
personal
network
policy,
and
it
will
be
default
to
false
for
network
for
kubernetes
network
policy
because
it
does
not
support
this
field.
C
Then,
moving
on
to
entry
agent,
we
register
pack
in
handler
with
openflow
under
reason.
0
and
1.
also
check
roll,
enable
locking
to
create
different
flows.
C
I
would
talk
about
this
later
and
we
initialize
a
custom
logger
that
will
be
used
to
lock
the
information
then
moving
on
to
obs.
We
load
this
position
in
register
seven.
C
So
this
is
that
we
created
a
new
register
to
particularly
load
the
allow
or
drop
information
to
extract
it
later
for
logging,
and
then
we
send
packeting
to
controller
with
a
corresponding
reason.
So
currently
we
use
zero
for
network
policy
and
one
was
previously
used
for
trace4.
C
I
will
also
talk
about
that
in
detail
later
also
in
the
entry
agent
here.
The
network
policy
controller
based
on
the
received
packet
reason
we
forward
the
packet
to
subscribe
channel,
which
was
subscribed
during
the
start.
Packaging
then
use
the
registered
handlers
to
handle
packets.
C
The
these
were
registered
in
agents
from
here
then.
Finally,
in
entry
agent,
we
handle
the
the
packets
reports,
the
information
to
of
the
packet
load
and
also
from
the
registers
retrieve
the
information
that
we
previously
loaded
then
use
initialize
the
logger
to
log.
C
An
example
of
this
is
like
it
would
be
date,
time,
file,
table
name,
cmp
name,
disposition
of
priority
source,
ip
destination,
p
and
protocol.
C
I
will
show
this
later
during
the
demo,
so
the
lock
tool
used
in
this
project
is
the
golden
lock
and
lumberjack.
So
lumberjack
is
maintained
properly
and
it
has
a
recent
release
version
2
in
2017..
C
I
have
set
the
file
rotation
to
be
500
mega
size.
So
after
that
the
original
file
name
would
be
renamed
with
the
current
timestamp
and
create
a
new
file
with
the
previous
thing
to
log,
and
there
would
be
of
maximum
three
backups
and
the
previous
logs
will
be
compressed
and
after
28
days
they
would
be
deleted.
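The rotation policy just described maps directly onto lumberjack's configuration fields; a configuration sketch, assuming gopkg.in/natefinch/lumberjack.v2 (the file path is hypothetical):

```go
// Configuration sketch only; the path below is hypothetical.
log.SetOutput(&lumberjack.Logger{
	Filename:   "/var/log/antrea/networkpolicy.log",
	MaxSize:    500,  // megabytes before the file is rotated
	MaxBackups: 3,    // keep at most three rotated files
	MaxAge:     28,   // days before rotated files are deleted
	Compress:   true, // gzip rotated files
})
```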
C: Then there is the packet-in reason. It is currently used to separate feature queues, and it still corresponds to the OVS reasons. After some testing: when we set the reason to 0, it corresponds to the OVS reason "no match"; if we set it to 1, it corresponds to the OVS reason "action"; and for the other values, OVS does not return a packet-in for a particular packet.
C: Currently the solution is to use 0, so we register 0 for network policy logging, while 1 was previously used by Traceflow. And finally, rate limiting.
C: Currently rate limiting is set on the agent side, with a rate-limiting queue. The problem is that it does not change OVS, so dropped packets still occupy the channel from OVS to the agent. For allowed packets a flow is established, so there will not be continuous packet-ins sent back from OVS; but if we have an attack with dropped packets, then OVS will continuously send packets back to the agent.
C: So we want to do this rate limiting. A better design is to introduce rate limiting on the OVS side, to rate limit the packet-ins sent to the controller action for network policy specifically, and we want to prioritize Traceflow over network policy.
C: So first, I'll do the demo.
C: Now I'm on the client and pinging the server.
C: We can see that here is the table name, 89, and this is the cluster network policy that I previously defined. The action is Allow; the priority is the same here as in the flow; the source is the client and the destination is the server; and the protocol, because I used ping, is ICMP.
C: There are several drops logged, which corresponds to what we previously discussed, because it tried several times. As for the rate limiting, we are still investigating the OVS side, but currently the rate-limiting queue is working.
C: Basically, with this file, after I curl this address it will redirect to the second address that I put in, so I'll try this.
C: Because the default rate limiter that I use allows 10 requests per second with a bucket size of 100, so...
C: Basically now it's like 88 and 89, but from the commands it's sending 101 calls per second, yeah. But for the rate limiting, of course, the better way is to implement it on the OVS side, which I will be looking at. I think that's all for my presentation. Thank you.
A: Thank you very much. That was really great; I really enjoyed it, really nice and slick. I don't have a lot of questions because it was extremely clear. In terms of the functional behavior of the rate limiter, I just wondered: will it cause the requests for logging — you know, the controller requests from OVS — to get queued, or will it cause the requests to be dropped?
A: That is right. And when a packet is rate limited — sorry — is the behavior like an HTTP rate limiter? Will it drop the controller request from OVS, or will the request be queued, just put in a queue?
E: This was fantastic, again. I agree with Salvatore that the logs were really clear from a user perspective. Two things that I think would help.
E: Once we have tiering fully enabled, also recording the tier name might be useful for downstream filtering, for parties that are only interested in looking at logs from a particular tier.
C: Currently I haven't thought about the tiering side of this, but I could look into it.
C: I'm not sure if there's a place for extracting that; I probably need to look into it. From what I have currently seen, I'm not sure there is a place to extract it.
H: It's possible to convert at least one IP to the name, but I'm not sure what the cost of doing that is; if the overhead is high, then maybe it's not worth it.
E: I expected there may be some overhead, yeah. If y'all could, as you're developing this feature out, spend some time looking into that, I think it would be a great usability enhancement for users to have both the IP address and the pod name; and if that's not feasible, let's look and see what that impact is.
H: I also have two questions. First, I'm thinking it doesn't make sense to keep the packet header — sorry, the log message header, like the packet-in one; I think it's probably not very interesting to the consumer of the logs.
H: Another one I'm not clear about: does it typically include some payload fields, like the size of the packet, in the logs?
E: Is it also possible to capture the port information? I didn't see that.
C: Port information — I looked into that, but it's not directly available from the packet. Although it's IPv4, the IPv4 structure that we currently use does not have a port in it.
I: And a minor point, but I guess related to what Jianjun said earlier: maybe the timestamp should be in a different format, with milliseconds, especially when you have a lot of packets like this.
A: Actually, I'm just thinking in terms of, let's say, packet processing overhead, which can be measured, say, in terms of throughput. Is this logging affecting throughput in any way, or is it pretty much transparent? To me it seems that, at least from the OVS datapath perspective, the impact on throughput should really be negligible, but I don't know if you're ready yet — if you actually measured it already.
H: But actually I was thinking in another way. I know people are talking about more context in the logs and a more user-friendly format, but actually I was thinking: if we believe performance is more important, should we use some way to compress the data a little? For example, we just use a fixed format and we don't use keywords like protocol, destination, source, priority; and even for the table we can use a short name or just an integer. Then we can probably reduce the size — or not.
H: Right, of course. I don't know how people typically do these traffic logs, so I'm just, yeah, especially...
B: Sure. I think another question that had come up in the past was whether we should include these logs in the support bundle by default, or whether it would be an explicit ask, because I think at the moment we are planning not to include this in the support bundle.
G: Go ahead — okay, yeah. So that's why, you know: do you plan to have a function to forward the logs to a syslog server? That is my ask.
E: Okay, I think that would be lower priority. I mean, I think if we log to the file, we can leave where they want to ship it as a user concern right now, because Antrea could ship it; we can build a lot of different options there, but I don't think that's the primary thing we should be focusing on.
E: Unless somebody has a different opinion. I mean, it should be a fairly straightforward task for a platform operator to pull off, I would think.
G: This is, like, from a networking operator perspective: with something like NSX, most customers log packets to syslog and look at what connections are flowing in the environment and what connections are dropped. So that's why I wonder whether we can get this function or not.
A: Are we good to move to the next one? I guess so. So thanks again for your nice presentation. And Abhishek, what is this story about the short name for network policies?
A: Okay, sounds good. So, Sean, would you like to discuss the renaming — the changes to API groups?
F: In previous community meetings we discussed the internal API name, and finally we chose "controlplane" as the API group name. So in Antrea 0.9.3 we made the change to rename "networking" to "controlplane", and as a result, network policy enforcement will be disrupted until the upgrade is done. Recently Jianjun raised the issue that this may be serious, because if the cluster is very large and the agents upgrade in a rolling-update fashion, it could take many minutes to upgrade the whole cluster.
F: So on some nodes the network policies of some pods will not be enforced correctly. So we came up with a proposal: add the deleted "networking" API group back, and serve both the old and the new API at the same time. In the implementation we will not add all the deleted code back; we only add the API group back, and we will serve its requests with the same storage as the controlplane API.
F
So
it
will
not
cause
much
code
redundancy
and
with
this
code
I
have
verified
that
we
can.
F: I also added a test for upgrades, so that the newer Antrea controller can work with older Antrea agents and pass the conformance and network policy tests. The compatibility of the components and the network policy tests were verified manually, because we don't have a good mechanism for that.
F: But I added a simple upgrade test that upgrades the Antrea controller only, and the basic functions still work. So this is the proposed change; if you have any objection or any better idea, you could share it. By the way, currently I'm thinking we keep compatibility with the two older minor releases, because we already have the upgrade test for the two older minor releases, and it also...
F: And if we agree with this, you could review the PR, and we could document the API compatibility and deprecation policy in the docs of the repo. So that's it.
H: A quick question here: so you are saying the whole rolling-update process can take like seven minutes?
F: Even if it's just a normal cluster, when you update the Antrea image, because we set the update strategy to RollingUpdate, it will execute in a rolling-update fashion: first update one, then wait for it to finish, then update another one, and so on.
H: Okay, this is faster than me, yeah. Okay, and then for the upgrade tests: are we able to easily enable tests for older versions of Antrea? I'm asking because we see requirements for upgrading from 0.7.2 to 0.9; maybe there isn't an issue if we leverage it. So I'm just wondering: could we just reuse your tests to support that?
J: Yeah, I agree. So this code is under a GitHub workflow — I got it — so we can discuss how we can create the Jenkins job for this before we release a new Antrea.
J: Sorry, I have another question. For example, if we deprecate some resources or an API group, and then in some new release we finally remove those resources, during the upgrade there must be someone to delete the old resources for us, right? So if there are some leftover APIService resources, they should not interfere with us, because there are actually no agents using that API group.
J: I mean the APIServices, the custom resources and the config, yeah — the APIService and custom resources.
A: And I think that's all for this topic; well, thanks for sharing this. I guess that if you have any other question or comment, or if you just want to look in more detail at what John is doing, just go to the GitHub PR and we can continue the discussion there. So our last topic for today's agenda is the renaming of the short name, sorry, for network policies. Abhishek, do you already have an idea in mind?
B: Yeah, so I think this came up because someone on the Slack channel was trying out the network policies and used the short name for Antrea network policies — that is, the namespaced network policies. He ended up using kubectl with that short name, and by default it returns the upstream Kubernetes network policies.
B: So we kind of let them know that both share the same short name, and that you need to append the group name after the short name so that you get the correct set of network policies. But, you know, talking offline, I kind of feel like if it's a short name then it should be short, and I don't want to append any more details behind it.
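For reference, the disambiguation described here looks like this on the command line. Suppose both resources share the short name netpol; the Antrea group name below is illustrative and may differ by release:

```shell
# The unqualified short name resolves to the upstream Kubernetes resource:
kubectl get netpol

# Appending the API group selects the Antrea resource instead
# (group name shown for illustration; check the installed CRDs):
kubectl get netpol.security.antrea.tanzu.vmware.com
```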
B: So that was one of the reasons why we thought maybe we should reconsider the name there. The other thing was that, you know, the cluster network policies have a short name of CNP, so perhaps, to be a little more uniform, maybe we name the namespaced network policies either ANP or...
K: I think we were originally going with CNP and ANP, and then, you know, when I opened up the PR, people started to see that ANP does not quite correspond to the Antrea namespaced network policy's full name, which is still NetworkPolicy. So people feel there is a discrepancy between the short name and the long name, and there was a discussion to rename it to netpol rather than ANP.
B: I think Jianjun talked about, you know, the kube... sorry, antctl also having the same networkpolicy name, right?
F: Right, yes. So are we going to change the antctl short name too, the one we use to get the internal network policy? I think it is "antctl get netpol" right now.
F: By the way, when supporting the network policy metrics, I had to define metrics names for cluster network policy and Antrea network policy too, because they are in the same metrics group, which also includes the Kubernetes network policy metrics. So currently I named them network policy metrics for Kubernetes network policies, cluster network policy metrics for cluster network policies, and Antrea network policy metrics for Antrea namespaced network policies.
E: One thing we should keep in mind is that, as we develop cluster network policy in an upstream v2 API, we may have some name clashes, so we may want to, you know, specifically call out Antrea in some of these names at some point.
I: Right, I mean, yeah: if there is ever an upstream cluster network policy, the problem is that if we do "kubectl get clusternetworkpolicies", it's going to return the Kubernetes ones, unless we rename ours to Antrea cluster network policies, and then there is no...
I: Do you think you can summarize the proposal on GitHub — you, or maybe Abhishek?
A: Thanks. All right, so I think this also summarizes the conversation on short names for Antrea network policies. Is there any other topic that you would like to bring up for today? Anything else that you would like to discuss?
A: And this is the sound of silence, which means that perhaps we don't have any other topics for today, so we can close our meeting here. I would like to thank everyone for attending this call and wish you a good morning, a good afternoon, a good evening or, especially for the folks on the west coast, a good night.