From YouTube: Antrea Community Meeting 08/01/2022
Description
Antrea Community Meeting, August 1st 2022
A
So good morning, good evening, or good afternoon. This is the Antrea community meeting, and today is Tuesday, August 2nd, or Monday, August 1st, depending on your location. For today we don't have any pre-planned topic on the agenda, which means that we are going pretty much exclusively for open discussion.
B
A
Yeah, I just meant that, since we don't have any topic planned for today's agenda, I was just asking if there was any follow-up item from the presentation that we had in our last meeting. Unfortunately, I did not attend that meeting, so I do not know if there was any follow-up conversation for this meeting.
A
Yep, right, so we need to look for some other topic. I asked on the Slack channel yesterday, but it didn't seem there was any proposal for today. So I'll just wait maybe a few seconds in case anyone has anything they would like to bring up, and then, at least for my part, I can just provide an overview of the changes that we are planning for the upcoming release for Antrea flow visibility in the Theia project.
A
All right, it doesn't seem that we have any topic, so this looks like it is going to be a fairly short meeting. As I was saying, and since I love talking, I would like to give you an overview of the changes that we are planning for the upcoming Theia release, which is Theia 0.2, and which will be synchronized with Antrea 1.8.
A
We are expanding the capabilities of our CLI. In Theia 0.1 we had a very basic CLI for triggering network policy recommendation jobs and checking their status. Now we are adding more CRUD operations, for listing network policy recommendation jobs and also for deleting recommendation results that are not needed anymore. Let's say that with this we are satisfying a basic usability use case.
A
Then there is a lot of focus on improving the software in terms of test coverage. We are adding unit test coverage for all our Python code and also our TypeScript code. As you know, Theia is slightly different from Antrea in that we use a mix of languages.
A
It's not just Golang. At the moment Golang is used for the ClickHouse monitor and for another component that we are developing, which we will probably present in the next meeting. But all the Grafana plugins which are used for displaying graphs, like the Sankey diagrams and the chord diagrams that we use for network policy representation, are rendered using plugins written in TypeScript. And finally, all the policy recommendation logic is written in Python. So now we are providing unit test coverage for the Golang, Python and TypeScript parts, and we are also adding automated jobs, GitHub workflows, for running all of this.
A
As you know, for Python another important aspect is style checks, because tools like pyflakes or pep8, on the one hand, check Python coding conventions, but on the other hand they are also able to spot programming errors which normally would not be found, because Python is an interpreted language. Similarly, we are also improving the end-to-end tests for all the components by improving coverage, and we also have an open PR for adding Codecov integration.
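(Illustration of the kind of mistake such style checkers catch statically; since Python is interpreted, these problems would otherwise only surface at runtime, when the faulty branch executes.)

    # Sketch: pyflakes/pep8-style tools flag issues like these without
    # running the code.
    import os                           # flagged: 'os' imported but unused

    def filter_flows(flows):
        if not flows:
            return default_flows        # flagged: undefined name (typo)
        return [f for f in flows if f.accepted]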
A
Another
important
thing
that
we
are
doing
in
this
release
is
the
support
for
class
3d
click
house
deployments,
and
this
is
because,
if
you
want
a
let's
say
a
reliable
software,
you
don't
want
to
risk
losing
or
your
flow
of
data.
If
your
click
house,
if
your
click
house
database
is
lost,
so
we
are
adding,
we
are
adding
clustering.
We
could
do
to
provide
a
both
like
the
ability
of
scaling
horizontally
and
the
ability
of
replicating
replicating
data.
A
One thing that is worth mentioning is that we are considering not providing database clustering as an option; instead we are switching from non-clustered mode to clustered mode, which means that by default all Theia deployments will use ClickHouse in clustered mode. If you want to have, let's say, the same simple deployment that you had with the non-clustered mode, you will need to deploy Theia with ClickHouse configured with one shard and one replica. That will give you just one pod, which is pretty much the same as the non-clustered mode.
A
The reason for doing this is that we wanted to avoid having to deal with too much configuration complexity, and also because, unlike other database management systems, when you do replication ClickHouse requires you to use a different schema and a different database engine.
A
That was something that I discovered along the way; I had no idea, but it is actually true. The way in which you create the database has to be different depending on whether you want just a single replica or you want to replicate, which means that supporting both clustered and non-clustered deployments would have required us to maintain two schemas, and that was probably the main driver behind the decision of going just for the clustering mode.
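(Illustration of the point above, assuming the clickhouse-driver Python client and a made-up flows table rather than Theia's actual schema: the replicated variant needs a different table engine and extra replication parameters, so the DDL cannot simply be shared between the two modes.)

    # Sketch only: a made-up "flows" table, not Theia's real schema.
    from clickhouse_driver import Client

    client = Client("clickhouse.flow-visibility.svc")  # hypothetical host

    # Non-replicated (single instance) table:
    client.execute("""
        CREATE TABLE IF NOT EXISTS flows (
            flowEndSeconds DateTime,
            sourcePodName  String,
            throughput     UInt64
        ) ENGINE = MergeTree ORDER BY flowEndSeconds
    """)

    # Replicated variant: same columns, but a different engine with
    # extra arguments (ZooKeeper/Keeper path and replica name macros).
    client.execute("""
        CREATE TABLE IF NOT EXISTS flows_replicated (
            flowEndSeconds DateTime,
            sourcePodName  String,
            throughput     UInt64
        ) ENGINE = ReplicatedMergeTree(
            '/clickhouse/tables/{shard}/flows', '{replica}')
        ORDER BY flowEndSeconds
    """)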
A
About
database
clustering,
sorry
database,
the
other
thing
that
we
are
doing
is
that
we
are
adding
schema
management
logic,
which
means
the
ability
of
upgrading
the
schema
from
one
release
to
another.
You
know
to
take
into
account
the
changes
that
we
made
into
the
schema.
Given
example,
we
can
add
the
table
or
add
the
column
or
change
the
type
for
a
column.
So
you
we
want
to
be
able
to
transform
the
schema
seamlessly
without
forcing
the
users
to
destroy
and
recreate
their
databases
upon
upgrade
as
easy.
A
As
you
know,
as
it
was
with
the
pd
the
procedure
that
we
had
put
in
place
in
a
tia
0.1,
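(A minimal sketch of what such schema-management logic looks like in general, with made-up table, column and version names; this is not Theia's actual migration code.)

    # Track a schema version and apply incremental ALTERs on upgrade,
    # instead of dropping and recreating the database.
    from clickhouse_driver import Client

    MIGRATIONS = {
        # target version -> DDL statements needed to reach it (all made up)
        "0.2.0": [
            "ALTER TABLE flows ADD COLUMN IF NOT EXISTS egressName String",
            "ALTER TABLE flows MODIFY COLUMN throughput UInt64",
        ],
    }

    def upgrade(client: Client, current: str, target: str) -> None:
        for version, statements in MIGRATIONS.items():
            if current < version <= target:  # naive comparison, fine for a sketch
                for ddl in statements:
                    client.execute(ddl)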
A
Then, in terms of Grafana and the dashboards, we are doing some minor UI improvements to work around a few issues, like column name display and some minor issues with the throughput calculation; that is another thing which is currently under review. And this is the plan for the things that are going to happen in Theia 0.2.
A
Then we will have more enhancements in terms of user experience, reliability and serviceability in the coming releases. Thanks for listening, and if you have any questions, feel free to ask; [name unclear] is also on the call and will be able to provide all the answers that you need.
A
All right, it seems that my overview was either very clear or very boring, or probably a mix of the two. So that's pretty much it for the upcoming Theia release, and that's all.
C
Hey Salvatore, I do have a super quick question, not directly related: do we plan on keeping support in the Flow Aggregator for IPFIX export, or are we going to eventually remove that code, so that ClickHouse is going to be the only supported option there?
A
It's
just
because
it's
there.
I
think
that
it's
also
covered
by
end-to-end
tests,
but
clearly
in
in
the
coming
future.
If
we
are
not
going
to
have,
if,
if
we
don't
have
any
use
case,
for
you
know
exporting
this
data
to
a
an
ipfix
collector,
then
yeah
then
for
us
it
will
become.
I
cannot
say
it
will
become
a
dead
code
and
probably
in
order
to
avoid
the
bit
rot.
C
A
B
I assume you can see my screen. So far we have merged some PRs, and some of them are related to features. The first one I want to mention is that in the next release we will be able to support topology aware hints in AntreaProxy, a feature already implemented in kube-proxy; it will better load-balance Service traffic based on the location of the client and the backend.
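(Illustration of the topology aware hints feature just mentioned; the annotation shown is the upstream Kubernetes one for enabling hints on a Service, as it existed around that time, not anything Antrea-specific.)

    # Sketch: enable topology aware hints on a Service so the proxy
    # (kube-proxy, or AntreaProxy once supported) can prefer endpoints
    # in the client's zone.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()
    v1.patch_namespaced_service(
        "my-service", "default",  # hypothetical Service name and namespace
        {"metadata": {"annotations": {
            "service.kubernetes.io/topology-aware-hints": "auto"}}},
    )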
B
The next feature I want to highlight is that we will support the Helm chart installation method starting from the next release, and the next feature is about multicast support in encap mode.
B
I think there are still quite a number of open PRs. The first important feature, I think, is the support for namespace-scoped Groups for Antrea-native policies. Previously we only supported ClusterGroup, and it could only be used in Antrea ClusterNetworkPolicy.
B
So this is about namespace-scoped policies and groups, and I think it's already close to merge for this release. Last week I had one minor suggestion about the status field of this CRD, for better extensibility. It is currently implemented with statuses like Unrealizable or PartiallyRealized, but according to the Kubernetes API conventions it should be something like a condition called Realizable, whose value can be True or False, with a message or reason indicating what causes the condition to be false, so that it can be used in more scenarios in the future.
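(Sketch of the Kubernetes conditions convention being referenced; the condition type and reason below are illustrative, not the final CRD schema.)

    # Hypothetical status for a Group CRD, using a list of conditions
    # (type + True/False status + reason/message) instead of a single
    # enum-like state such as "Unrealizable".
    group_status = {
        "conditions": [
            {"type": "GroupMembersComputed", "status": "True"},
            {
                "type": "Realizable",
                "status": "False",
                "reason": "IPBlockConflict",             # made-up reason
                "message": "ipBlock overlaps with ...",  # human-readable detail
            },
        ]
    }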
The second suggestion is to add IP usage counters for the IPPool resource. Currently we already have this for ExternalIPPool, but not for IPPool, so this is to keep them consistent.
B
There
is
a
pr
to
do
that.
Next
is
the
open
flow
file,
dot,
one
dot
file,
protocol,
support
and
single.
We
already
merged
one
of
the
three
apis
that
are
needed
to
support
it,
and
with
that
support
we
I
would.
We
will
be
able
to
resolve
some
legacy
issues
due
to
the
limit
of
for
open
flow
1.3
like
the
ability
to
add
more
packets
to
a
group
with
this
new
version
of
protocol-
and
there
are
some
pr's
major,
mainly
these
two
pr's-
to
support
to
to
add
the
support
for
external
node.
B
This one is close to merge. First of all, this feature supports applying Antrea ClusterNetworkPolicy to non-Pod endpoints, to protect services outside the cluster.
B
This is something asked for by users: Antrea-native policies can have audit logging enabled for specified policies, and they want to have the same support for Kubernetes-native NetworkPolicies, and [name unclear] has a patch to do this. With that, a user can annotate a Namespace with a specific annotation, and all NetworkPolicies under that Namespace will log their traffic, so we will have audit logs for that traffic.
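(Illustration of the annotation-based workflow just described; the annotation key shown is an assumption based on Antrea's audit-logging support and may differ from the patch under review.)

    # Sketch: annotate a Namespace so that Kubernetes NetworkPolicies in it
    # get audit logging. The annotation key below is assumed for illustration.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()
    v1.patch_namespace(
        "demo",
        {"metadata": {"annotations": {
            "networkpolicy.antrea.io/enable-logging": "true"}}},
    )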
B
We also have some changes for multi-cluster features, but I have not been reviewing those PRs very closely, so maybe someone familiar with these features could add more details. For this list of PRs, we are trying to get them merged in this release, and I appreciate your reviews and hard work on that.
B
And note that we may freeze the code in the middle of next week, and we will release 1.8 at the end of next week.
A
B
Yeah, you are correct. I think the major problem is that it is implemented in such a way that there can be multiple NetworkPolicies applying to a Pod, and all of them can cause the Pod to be isolated and the traffic to be dropped. If we want to see which NetworkPolicy is responsible for it, we would have to randomly choose one; if there is just one NetworkPolicy, then it must be that one, but if there is more than one, we would have to randomly choose one.
A
No,
I
I
don't
think
I
think
it's
it
will
be
more
confusing,
but
it's
it's
also.
I
believe
something
that
it
it
doesn't
really
have
to
do
with.
It
doesn't
really
have
to
do
with
the
our
implementation
of
network
policies,
but
it
has
to
do
with
the
network
policies
themselves,
like
you
know,
if,
if
you
have
multiple
network
policies
on
a
pod
and
your
pod
does
not
match
any
network
policy,
then
it's
dropped
and
not
because
of
any
network
policy,
but
because
it
does
not
match
any
network
policy.
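(A contrived illustration of that point: two NetworkPolicies select the same Pod, and ingress traffic allowed by neither is dropped because the Pod is isolated, so the drop cannot be attributed to either policy individually.)

    # Two hypothetical NetworkPolicies selecting the same Pod (app=web).
    allow_frontend = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "allow-frontend"},
        "spec": {
            "podSelector": {"matchLabels": {"app": "web"}},
            "ingress": [{"from": [
                {"podSelector": {"matchLabels": {"role": "frontend"}}}]}],
        },
    }
    allow_monitoring = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "allow-monitoring"},
        "spec": {
            "podSelector": {"matchLabels": {"app": "web"}},
            "ingress": [{"from": [
                {"namespaceSelector": {"matchLabels": {"team": "monitoring"}}}]}],
        },
    }
    # A connection from a Pod matching neither rule is dropped by the
    # isolation behavior, not by "allow-frontend" or "allow-monitoring"
    # specifically.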
A
So
let's
say
the
the
correct
way
of
doing
it
will
be
either
not
just
logging
generically
dropped
by
kubernetes
network
as
we're
doing
now,
or
dropping
the
list
of
network
policies
that
that
are
applied
to
that
pod,
which
you
know.
I
think
it
would
be
not
very
easy.
I
mean
at
least
four
hours
in
our
current
implementation.
A
I
don't
think
it
will
be
very
easy
to
provide
a
list
of
network
policies
that
are
applied
to
a
pod
and
logging
that
list,
so
maybe
it's
better
that
we
keep
the
current
approach
and
yeah
and
because
I
I
do
not
think
that
there
is
a
anything
that
we
can
do.
As
you
said,
if
there
is
just
one
network
policy
that
applies
to
the
pod,
then
it's
clear,
but
if
we
have
multiple
there
is
no
solution.
B
A
I think the current message just says "dropped by Kubernetes NetworkPolicies", which is kind of generic. We can probably just work a little bit on the message, and then that will be it, because even if we tried to do something clever, it's not possible to associate the drop with any specific network policy. We would need some sophisticated logic that, if it's just...