From YouTube: Antrea Community Meeting 03/04/2020
Description
Antrea Community Meeting, March 4th 2020
A
The agenda for today, the reason we're getting together, was primarily to give Jay an opportunity to share with everyone what he's been working on with the upstream testing, and then, if we have a few minutes after that, I thought we might quickly touch base on our plans and whether there are any other things we need to discuss before we chat about our project with Tim. If there's anything else on the agenda after that, we can definitely jump into those.
D
Sure. This is obviously a big, broad effort, as you can see; actually, I'll push the last commit to it. I appreciate all the help from everybody. We have two things in play right now: we have an upstream PR to really standardize the way that we look at CNIs and the way that we validate them, right.
D
We obviously have the first implementation of what we care about for that PR in Antrea as well, yeah. Here's the actual proposal we have, and I'll just view the file so I can have it over here on the right. This is what we're working on, and the main point of it all, as many of you know, is that the current test matrices in Antrea (sorry, in upstream Kubernetes network policies) look a little bit like this.
D
They just validate two things, and in order to really test a network policy properly, you have to validate that the entire state of the network is correct. We've seen several issues that allude to that in Antrea and Calico and other places, where you turn one thing off and other things don't work. The most interesting, notable one we saw recently was, I think, when you use Calico for certain policies with EC2 as the data plane.
D
You have issues where, like, if you have a pre-start hook, certain pods won't come up because the IP address hasn't been assigned yet. So there are all these complicated semantics that aren't really easy to reason about, and so we've built essentially a DSL, as Cody calls it.
D
The existing tests are very hard for anyone to interpret, and a single line like this could take 10, 15, 20 lines of code to implement, because they build up the entire struct, so it's very difficult to reason about how things should be connected in one part. That's what we're building, and thanks to the Antrea folks, we were able to merge the first copy of it into hack/ the other day; it's runnable as an executable. I know Antonin is going to start adding some things to it too, but yeah, the main work has been merged in. Sadef, Abhishek, me, and Matt Fenwick from another company kind of built all that out, and Antonin is doing a great job of helping us move it forward and get it into CI and all sorts of other things.
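A truth-table DSL like the one being described might look something like the following sketch. This is a hypothetical Python illustration of the idea (the actual framework is Go in the Kubernetes tree); the `Reachability` class and method names are invented here:

```python
from itertools import product

class Reachability:
    """Expected connectivity truth table over a set of pods."""
    def __init__(self, pods):
        self.pods = pods
        # Default: every pod can reach every pod.
        self.expected = {(a, b): True for a, b in product(pods, pods)}

    def expect_ingress_isolated(self, target):
        # Model a deny-all ingress policy selecting `target`.
        for src in self.pods:
            self.expected[(src, target)] = False

    def allow(self, src, dst):
        self.expected[(src, dst)] = True

pods = [f"{ns}/{name}" for ns in ("x", "y", "z") for name in ("a", "b", "c")]
r = Reachability(pods)
r.expect_ingress_isolated("x/a")  # one line of intent...
r.allow("x/b", "x/a")             # ...instead of 10-20 lines of structs
print(len(r.expected))            # 81 (src, dst) pairs
```

The point of the DSL is that each one-liner updates the whole expected state of the network, rather than hand-building policy structs per test case.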
D
So I can answer questions, or I can walk through how a test might work, or we can talk more about other sorts of stuff, but that's pretty much it for the broad overview of what we've been up to with this new framework. We're really looking forward to having this upstreamed into Kubernetes core, I'm pretty sure.
F
I'd like to add this into CI, because I think it's good for Antrea and it's good for this proposal, to make sure things don't get broken once it's in Antrea. So I request your review on this. Oh yeah, if you can take a look, that would be great; everything passed for me, and everything's working right now. Okay.
D
Great, yeah, and to his point, I think the main thing here, you know, there's a lot of stuff going on, but really there's one major thing in here, which is this probe function, and this whole repo if you think about it. What we've done is move away from the upstream tests. What they do is create a policy and then probe a pod: they wait for the probe pod to have a status of Success, and so there's a lot of lazy, asynchronous waiting-around-for-stuff-to-work type stuff.
D
You can't reason about what's going on, and for us, what Antonin is talking about in terms of speed is what we've done: we've moved to a probe functionality. So if you run this in cluster, what this does is: all the pods are statically made, and then this probe function runs, and you can see it actually prints out the exact kubectl exec expression; it goes off, jumps into the pod, does a wget, and sees if it can connect to another pod.
D
So we run, you know, 81 of these probe tests, because we have three different pods that each live in three different namespaces, so you have a nine-by-nine two-dimensional matrix of connectivity. That's why we're a lot faster. But yeah, I'll definitely review that for sure, Antonin, and I'm an easy reviewer: I just look at the code and I'm like, well, I guess if somebody cared enough to do it, it's probably going to work, and we can back it out later. But I will definitely look at it.
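Conceptually, the probe pass just fills in an observed matrix and diffs it against the expected one. A runnable sketch of that idea, with a stubbed `probe` standing in for the real `kubectl exec <pod> -- wget` call (the stub and the pod names here are illustrative, not the framework's actual API):

```python
from itertools import product

def probe(src, dst):
    # Stand-in for `kubectl exec <src> -- wget http://<dst>`; stubbed so
    # the sketch runs anywhere. Pretend pod "a" in each namespace is isolated.
    return not dst.endswith("/a")

pods = [f"{ns}/{name}" for ns in ("x", "y", "z") for name in ("a", "b", "c")]

# Expected truth table: deny-all ingress on every pod named "a".
expected = {(s, d): not d.endswith("/a") for s, d in product(pods, pods)}

# One probe per (src, dst) pair: 9 x 9 = 81 probes total.
observed = {pair: probe(*pair) for pair in expected}
mismatches = [pair for pair in expected if observed[pair] != expected[pair]]
print(len(observed), len(mismatches))  # 81 0
```

Because every pair is checked, a policy that accidentally breaks an unrelated flow shows up as a mismatch instead of going unnoticed.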
D
Very cool, that's great, because that was a big problem with testing, so that's cool. The big thing to discuss, I think, is that we want to have a generic scale-type test in here, and the folks from Google had mentioned that as well. I haven't thought architecturally about how we should fit that into this framework: should we make a DSL that allows you to specify an arbitrary number of pods and stuff? I've been going back and forth in my head about how generic to be, and I know Cody had some thoughts on that, like how generic and declarative should we go, because we can go further than we've already gone. Scale testing is something we obviously really, really care about, but it's worth trying, and testing even other plugins, like Calico or whatever else we support, right.
E
I think for the scale part it depends on what the eventual goal of this script or this deliverable would be, because if we want to replace the end-to-end tests, maybe with what we have, and improve the coverage with a different set of test cases, that would be fine. But if you want it to be a separate thing which does everything for network policies, then maybe we can consider a separate framework and do some fuzz testing.
D
We definitely want to replace what's there upstream. I'm open to ideas on this, of course, and I think all of us are, but from talking to everybody it seems like the consensus was that we do want all this stuff upstream: we want to build an upstream standard for high-quality CNI conformance, right. So I think we want to get it upstream, and yeah, that's the tricky thing: how far should we go before we go upstream? Antonin, you were going to do something around making it exit zero or exit one as a starting point, right, without necessarily adding test semantics into it? I think that's a good middle ground, yeah.
D
Yeah, I mean, it could just wind up being a bash wrapper around a pod, right, yeah. And then, to Abhishek's point, the more framework stuff we put on this, the harder it's going to be to back that framework stuff out and put it into upstream Kubernetes. So I like that idea of just making it exit one and running it as a pod. I don't know why I did it as a Job; I think I was just trying to be fancy or something.
D
Yeah, I like that idea; I think that works. And then we can start thinking about how to back this into Kubernetes. Now, one of the interesting things (I don't know if we've talked about this) was that when I was talking to Bowie about this over at Google, he was saying we should actually start writing all of the upstream tests this way, not just the network stuff.
D
Bowie's a reviewer on this; we've got some comments on it. Let's see, I haven't looked at them in a while; hopefully there's nothing nasty on here, like "this is a horrible idea, I can't believe you tried to change the way we do this." You know, nothing bad, just some ideas from folks.
D
You know, he actually introduced something interesting here. It turns out a lot of people have thought about this problem: someone had written this thing called illuminatio, which is essentially a similar approach, basically a tool that's specific to network policy validation. I thought that was kind of interesting, that someone else had already done this, or had already tried to build their own sort of tool for this.
D
This is another interesting thing he mentioned, which is the scale test comment. I think it's a scale-testing issue, but basically what he was saying is that we should make sure that the order in which you create policies versus create pods versus label them is completely irrelevant. So you could create a pod and then create a policy, or create a policy and create another policy and then create a pod, and stuff like that, and nevertheless you always converge to the proper security model.
D
So what I was thinking was: yeah, that's great, but the problem with that is there are so many different permutations of ordering. I feel like what really should happen is that's a scale-test scenario, where you would just say: create lots of pods and policies randomly, then back-calculate what you expect the state of the security matrix to be, and then see if it matches. And if you ever come up with a scenario where that fails, then you're like, oh.
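The ordering idea above could be sketched as: replay a small event sequence in every permutation and assert that all orderings converge to the same final state. This is a hypothetical illustration only; the event names are invented, and a real test would drive the Kubernetes API rather than a set:

```python
from itertools import permutations

# Invented events whose creation order should not matter.
events = ("create-pod", "create-deny-all-policy", "create-allow-b-policy")

def final_state(order):
    """Replay events in the given order and return the resulting state."""
    state = set()
    for event in order:
        state.add(event)  # a real replay would hit the API server here
    return frozenset(state)

# Replay all 3! = 6 orderings and check that they converge to one state.
states = {final_state(order) for order in permutations(events)}
print(len(states))  # 1: every ordering converged
```

For a real controller the convergence is the non-trivial part; here the set union is trivially order-independent, which is exactly the property the test would assert.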
E
I mean, the way network policy is implemented, you either get an add-pod event and do something, or you get an add-network-policy event and do something, and you might have a bug in either path if the order is switched. So I think if we do one scenario, not for everything, but just for a couple of test cases with the order changed, I think that'd be good. Good, yeah.
D
Yeah, so I CC'd you on that comment. If you can leave that feedback in the KEP, that would be awesome too, because, first of all, it's good to have some dialogue on there so that we can really start moving forward with it, but also I agree with you, that's a good point, right. So maybe we should have two specific, separate test cases around it.
D
So I just CC'd you on that comment, and then I CC'd Sadef on another comment, because she had an idea about node-specific tests, and that was something else that had been mentioned. And I think, if someone's interested, it would be interesting to look at inovex's illuminatio and see if we want to borrow some features from it. I probably won't have time to look at that.
D
But that's another interesting thing: do we want to divide up work at all around this? Antonin or Abhishek, or anyone else, do you think we should divide up work, or are we happy with what's there, and happy that we can maintain it? You know, I don't want to be that guy that does the drive-by PR and then.
F
So I was looking at illuminatio, and it looks kind of interesting. I think you define your network policies and then it kind of validates that your cluster implemented them correctly, which is kind of like the idea. When I talked about fuzz testing, I guess that's kind of like that stochastic approach, except there are different levels of doing it: you can have fixed network policies and then validate that they are enforced correctly, and you can also generate network policies randomly.
F
As soon as the PR is merged (I mean, obviously it ran as part of that PR), it will run for every other PR submitted to Antrea, running in parallel with the other tests that we do, so there's no performance penalty, and it spins up a Kind cluster. So if someone runs the script locally, it takes care of everything for them: creating a cluster, building the Antrea Docker image, pushing it to the nodes, and running the tests. Yeah.
E
Only one thing that I was wondering: should we do it? I mean, considering it's a POC right now, there's less pressure on it, but we are giving a cluster-admin role binding to the test's service account. Should we be more specific in the RBAC YAML, or do you think it's fine? You guys think it's fine?
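For reference, the kind of binding being discussed looks roughly like this. This is a hypothetical manifest, not the actual one from the PR; all names here are invented, and a namespace-scoped Role would be the narrower alternative being suggested:

```yaml
# Hypothetical RBAC for the test's service account. Binding to the
# built-in cluster-admin ClusterRole is the broad, POC-style option.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: netpol-test        # assumed name
  namespace: netpol-test   # assumed namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: netpol-test-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: netpol-test
  namespace: netpol-test
```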
D
That's another framework thing, right: there's a whole class of questions that are like, at what point are we building a framework for people to run the tests? Once you start doing RBAC, it's kind of like, okay, now you have to have instructions on how to modify the RBAC for a namespace, and then you're like, well.
D
Copy-paste the shell script and hack it up if you want to run it on your own; that's where I would land on that, but we could do something more fancy. The sticky problem is, someone wants to run it in this namespace versus another namespace or something, so you need to parameterize the YAML at some point, you know.
F
So, maybe to conclude: my take is that this is great, but we should actually present it at SIG Network so we can get concrete feedback, because we kind of need to know the direction. Sure, we can keep adding stuff; we can keep adding tests and making it better, but at the end of the day we're going to have to integrate it upstream. I mean, we have to know if we're going to be able to integrate it upstream, and then we have to do some work to make that happen.
D
I think we should build this to solve our problems and to really become a standard either way, and I think for now we can be optimistic. Who knows, after tomorrow maybe we won't be able to be as optimistic, but from talking so far to Bowie and Casey and a lot of other people, everybody wants this, right? So yeah, let's see what they say tomorrow.
D
They may say something like, for example, we'd like to do the framework changes first, in the existing tests, and then do the network policy updates. And you know, one thing's for sure: nobody really cares about network policies upstream. So part of the goal here is to get people to care about the way we validate them, because if people cared about how these things work to begin with, we wouldn't be in this situation. So it's an interesting kind of conversation.
D
Part of it is getting people to acknowledge that we have this code upstream that's not used, and then the other part is to get people to care about the fact that the code should be better, without going so far as for them to say, well, maybe we should just move this completely out of upstream, because that's not our goal. But that seems to be happening to everything else in Kubernetes that isn't perfect, right? They're like, well, let's just get rid of it, it's not in core, right!
D
So yeah, there are a lot of conversations to have, and it could be a few months (I mean, it will be a few months; it may take six months) for us to get this truly into upstream. So I think we have to think about how we're going to build the best framework we can to solve these problems in the interim, you know what I mean? Some of this stuff we're just going to have to do on our own, I think, but we'll see.
D
None of them were hard to implement; they just seemed kind of kludgy anyways. But I think we've got about 90% of them. Sadef actually made a spreadsheet auditing them, and looking at it, I think it stopped at about, yeah, like 75, 80, 90 percent, something like that. There are about 20 that we have comments on.
D
If it was up to me, I like the idea of coming up with ideas and pushing them into hack/. You know, I think the best thing we can do is innovate right now and be creative, because I think at some point it's going to be very constrained in terms of what upstream is willing to accept, and I think we've done most of the groundwork to make sure that what we have is compatible with upstream. That's what I would prioritize, but I'm kind of weird.
D
Or, like: here's what we actually need from Antrea, from a requirements perspective, and kind of get that in there. I think those are really two totally different ways to go about it, and depending on which, well, maybe we should just vote on that: should we prioritize for automation and CI, or should we prioritize for playing with ideas and stuff? I don't really have a strong opinion either way; I think everybody should probably weigh in on that. What do you think?
D
Just really taking the requirements that we have from Antrea, in terms of scale targets and other things, and making a compatibility matrix, like you said, upstream compatibility, all the base stuff that we know we could flesh out. I kind of think either way could work, or we could do one and do the other later, you know. Do we want to harden the codebase, or do we want to expand it? What do we really want to do?
D
Yeah, so the only thing I would say is: let's do that, but let's at least solve the scale problem before we harden it, because that's one thing we know we want to do that we don't support yet. I mean, we don't support the scaling stuff, but I think we should think about that, because there may be some ways to rethink how we build tests with the scale thing. Or maybe we could harden what we have and then solve the scale problem later.
D
I just, I don't know; I guess maybe we could just harden the codebase as it is, add a separate scale test, do that orthogonally, and accept that there's going to be a little code duplication there. Maybe that's the best way to do it. That way we do what Salvatore is suggesting, and we can say, yeah, there's a different test that kind of duplicates some of this logic, but that test is designed to do something different, I think.
D
Which is why I'm thinking maybe the scale test should be separate, and maybe the only thing the scale test does is reuse the k8s utils library, right? Maybe it doesn't do anything else; maybe all this stuff is not useful for the scale test. I mean, maybe some of it is, you know what I mean, but.
F
It depends on your problem, you know, because yeah, that's true: if you have a scale test where you end up having a lot of pods with the same labels, with the same network policies being applied to them, then you only really need to check one of them as the source to verify the policies, for example, yeah.
B
All right, so as Kobe was saying: going three, going two, going one, this is your last chance to talk. So the other thing is: are we keeping the meeting for next week, right? We are not going to skip it, is that correct? That's correct. All right, so talk to you next Wednesday, and have a good one then. Thanks.