From YouTube: Antrea Community Meeting 04/08/2020
Description
Antrea Community Meeting, April 8th 2020
A: It is recording now, and it's also sharing my screen, which is perfect.
So to summarize, we have a tie between choosing a single meeting and rotating the meetings. In this case, instead of asking for a tiebreaker, we will interpret this result as a preference to keep the meeting scheduled as it is today, therefore with a single meeting, but with a preference for a different time slot, which is 4 in the morning GMT. That translates to 9 p.m. Pacific, 6 in the morning Central European Time, and 12 p.m. China time. We will now start meeting at this time. We will share this with the community over the email channels, and I think we also need to update the corresponding pages in the code repository. The next meeting, therefore, will be in two weeks' time, on Monday, April 20th at 9 p.m. Pacific, or Tuesday, April 21st at 4:00 a.m. GMT.
A: That's right. By the way, we also had a tie in choosing the day, and according to contributor preference and the Kubernetes contributor calendar, it seems like Monday is the best day, so we will go for Monday. And with that we can conclude the discussion on the meeting time.
A: Okay, so should we now proceed to having a look at the release status, even though we know this is one of the most boring things we can do in our meeting? If anyone wishes to propose a more exciting topic for today, please step forward. Otherwise I will annoy you with the release.
B: So we watch for namespace, pod, and network policy updates, and we produce internal outputs that we send to the Antrea agents. Those objects are called AppliedToGroups, AddressGroups, and internal NetworkPolicies. An AppliedToGroup is basically a set of IP addresses of pods to which a specific policy is applied. An AddressGroup is also a list of IP addresses, but it represents the peers for the network policy, so the peers to which we apply the ingress rules or the egress rules. An internal NetworkPolicy is kind of an internal representation of a Kubernetes NetworkPolicy which, instead of using labels, uses references to AppliedToGroups and AddressGroups. And each output object has what we call a span, which determines the set of agents, or nodes, which need to receive the object. So we only send information to agents when this information is required by the agent for its implementation of the network policy.
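To make the description concrete, here is a minimal sketch in Go of these internal objects, with simplified, hypothetical field names; the actual Antrea controlplane API types differ in detail.

```go
package controlplane

// SpanMeta records the span: which nodes (agents) need to receive an object.
type SpanMeta struct {
	NodeNames []string
}

// AppliedToGroup is the set of pod IP addresses to which a policy applies.
type AppliedToGroup struct {
	SpanMeta
	Name   string
	PodIPs []string
}

// AddressGroup is the set of IP addresses acting as peers (sources for
// ingress rules, destinations for egress rules) of a policy.
type AddressGroup struct {
	SpanMeta
	Name string
	IPs  []string
}

// NetworkPolicy is the internal representation of a Kubernetes
// NetworkPolicy: instead of label selectors, its rules reference
// AddressGroups and AppliedToGroups by name.
type NetworkPolicy struct {
	SpanMeta
	Name            string
	AppliedToGroups []string // names of AppliedToGroups
	Rules           []Rule
}

// Rule is a single ingress or egress rule.
type Rule struct {
	Direction string   // "In" or "Out"
	From      []string // names of AddressGroups (for ingress rules)
	To        []string // names of AddressGroups (for egress rules)
	Ports     []int32
}
```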
B: The network policy computation can be divided into two steps. First, we compute these output objects, the AppliedToGroups, AddressGroups, and internal NetworkPolicies, and we publish them to a store. Then we have watchers for that store, and incremental updates to the store are distributed to the appropriate agents, based on the span, through an internal API server that we have. And instead of distributing the whole object all the time, we have incremental distribution. So not only do we only distribute the information to the relevant nodes for each update, we also only distribute the incremental updates.
B: So, for example, an IP address being added to an AppliedToGroup, and not the entire AppliedToGroup every time. And so the idea behind this investigation is: can we use DDlog for the first step, computing output objects and publishing them to the store? I have a slide here that I worked on a while back, which shows how our network policy computation works in practice in Antrea.
B: I don't think I'm really going to cover it in detail, but if you're not familiar with the implementation, you may want to look at that slide. Basically, it shows, starting from a simple Kubernetes NetworkPolicy which says that pods with label app equal to server can only receive traffic from pods whose label app is equal to client, and only on port 80, what the internal objects are that we're going to generate for a specific cluster, and how we're going to distribute them to the different agents.
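For reference, the example policy described on the slide can be written out with the standard Kubernetes Go types roughly as follows (the policy name here is made up):

```go
package main

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// examplePolicy builds the policy from the slide: pods labeled
// app=server may only receive traffic from pods labeled app=client,
// and only on port 80.
func examplePolicy() *networkingv1.NetworkPolicy {
	port80 := intstr.FromInt(80)
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-client-to-server"},
		Spec: networkingv1.NetworkPolicySpec{
			// The policy applies to the server pods...
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "server"},
			},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				// ...and allows ingress only from client pods, on port 80.
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "client"},
					},
				}},
				Ports: []networkingv1.NetworkPolicyPort{{Port: &port80}},
			}},
		},
	}
}

func main() { _ = examplePolicy() }
```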
B: So now, for people who may be unfamiliar with it: what is DDlog? DDlog stands for Differential Datalog. It's a project developed at VMware Research by Leonid Ryzhyk and Mihai Budiu, and I added two links here, one to the GitHub repository and one to a paper that presents DDlog in detail. And if we look at this paper, we see that basically DDlog is a bottom-up computation engine. So it starts from a set of ground facts, which we call the inputs.
B: So in our case, it's really the Kubernetes objects: namespaces, network policies, pods. And it computes all the possible derived facts, which we call the outputs, by following Datalog rules. Basically, those Datalog rules are kind of like transformation rules, like joins and so on, which constitute the DDlog program. The DDlog program is a set of Datalog rules which tells DDlog how to transform inputs into outputs, and this is done in a bottom-up fashion.
B: By contrast, a top-down engine starts from a query, "is this thing possible", and basically uses a top-down approach to verify whether that query is true or false based on the inputs it has; so instead of computing everything ahead of time, it computes in reaction to user queries. And DDlog is incremental by nature. So whenever it's presented with a change to the inputs, the ground facts, DDlog is only going to perform the minimum computation necessary to update all the outputs.
B: I can also give it a node that I want to exclude from paths, and then DDlog is going to compute the set of all existing paths, that is, which pairs of nodes can reach each other using the edges I've provided, while excluding the node I've provided from the paths. And here is an example of this program being used. I insert two edges: I say there is an edge between n1 and n2.
B: There is an edge between n2 and n3, and based on that, DDlog is immediately going to compute three paths for me: between n1 and n3... sorry, let me phrase it this way: between n1 and n2, and between n2 and n3. Those are the two edges I've provided, and of course an edge is also a path. And then I have a path between n1 and n3, which is just combining those two edges.
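What the path program computes is the transitive closure of the edge relation. Here is a small Go sketch of that computation, leaving out the excluded-node feature; the real program is written in DDlog's rule language and evaluated incrementally rather than from scratch:

```go
package main

import "fmt"

// Edge is a ground fact: a directed edge between two nodes.
type Edge struct{ From, To string }

// paths computes the transitive closure of the edge relation, mirroring
// the classic Datalog rules, evaluated bottom-up to a fixed point:
//
//	path(X, Y) :- edge(X, Y).
//	path(X, Z) :- edge(X, Y), path(Y, Z).
func paths(edges []Edge) map[Edge]bool {
	reach := make(map[Edge]bool)
	for _, e := range edges {
		reach[e] = true // an edge is also a path
	}
	for changed := true; changed; {
		changed = false
		var add []Edge
		for _, e := range edges {
			for p := range reach {
				if p.From == e.To && !reach[Edge{From: e.From, To: p.To}] {
					add = append(add, Edge{From: e.From, To: p.To})
				}
			}
		}
		for _, np := range add {
			if !reach[np] {
				reach[np] = true
				changed = true
			}
		}
	}
	return reach
}

func main() {
	// Inserting edge(n1, n2) and edge(n2, n3) yields exactly three paths:
	// (n1, n2), (n2, n3), and the derived (n1, n3).
	for p := range paths([]Edge{{"n1", "n2"}, {"n2", "n3"}}) {
		fmt.Println(p.From, "->", p.To)
	}
}
```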
B: So this is the work Leonid and I did to have our experimental implementation of the Antrea controller using DDlog. I wrote some Go bindings for DDlog, because DDlog only had a Java and a C API, and those bindings are hosted on the DDlog repo for now. Leonid wrote the Antrea controller program in DDlog, and I added the link here; it's also hosted in the DDlog repo for now, and it's used as a test in the DDlog CI.
B: Then I took both of those things and I worked on the DDlog integration directly in the Antrea codebase, and on writing some benchmarks. As a starting point for this DDlog implementation I used Antrea 0.3, which means that there is no named port support at that stage, but that's definitely something that's easy to add if we want to commit to DDlog. And the first thing I did was verify that the DDlog implementation was passing all the Kubernetes community tests.
B: And here are the different benchmarks I ran to evaluate the implementation. I wrote five tests, and not all of them may actually be representative of how an actual cluster is going to be used and of what actual Kubernetes network policies are going to look like, but I wanted to exercise a bunch of different edge cases as well. So if you have ideas on other kinds of benchmarks I could perform, I am happy to add a test for that.
B: All in the same namespace; that's why I said it may not be a realistic test, if you're familiar with the work that we've been doing on the Antrea network policy controller. Test perf 3 is actually the one that Chan has been using; it's the latest benchmark that he added to evaluate the network policy computation code. So test perf 3 is the actual benchmark we have in the Antrea code; I just ported it and was able to run it against the DDlog implementation as well.
Test perf 4 is kind of a different version of test perf 3, where you have more objects per namespace, because the original test only has four pods per namespace, and I don't know if that is a very realistic use case; so test perf 4 is the same idea but uses 50 pods per namespace instead. And test perf 5 I added recently: instead of using pod selectors for the network policies, it uses namespace selectors for the from and to network policy rule peers.
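A rough sketch of how inputs along the lines of test perf 4 and test perf 5 could be generated; the scale constants and helper names are illustrative, not the actual benchmark code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// genInputs builds namespaces, pods, and one policy per namespace in the
// style of test perf 5: rule peers use a namespaceSelector rather than a
// podSelector.
func genInputs(namespaces, podsPerNamespace int) ([]corev1.Namespace, []corev1.Pod, []networkingv1.NetworkPolicy) {
	var nss []corev1.Namespace
	var pods []corev1.Pod
	var policies []networkingv1.NetworkPolicy
	for i := 0; i < namespaces; i++ {
		name := fmt.Sprintf("ns-%d", i)
		nss = append(nss, corev1.Namespace{
			ObjectMeta: metav1.ObjectMeta{Name: name, Labels: map[string]string{"team": name}},
		})
		for j := 0; j < podsPerNamespace; j++ {
			pods = append(pods, corev1.Pod{
				ObjectMeta: metav1.ObjectMeta{
					Name: fmt.Sprintf("pod-%d", j), Namespace: name,
					Labels: map[string]string{"app": "web"},
				},
			})
		}
		policies = append(policies, networkingv1.NetworkPolicy{
			ObjectMeta: metav1.ObjectMeta{Name: "np", Namespace: name},
			Spec: networkingv1.NetworkPolicySpec{
				PodSelector: metav1.LabelSelector{}, // all pods in the namespace
				Ingress: []networkingv1.NetworkPolicyIngressRule{{
					From: []networkingv1.NetworkPolicyPeer{{
						// namespaceSelector peer: defeats a name-based
						// namespace index, forcing label matching.
						NamespaceSelector: &metav1.LabelSelector{
							MatchLabels: map[string]string{"team": name},
						},
					}},
				}},
			},
		})
	}
	return nss, pods, policies
}

func main() {
	nss, pods, nps := genInputs(100, 50) // e.g. 50 pods per namespace
	fmt.Println(len(nss), len(pods), len(nps))
}
```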
B: There is an API server, but there is no cluster. It uses exactly what I've been using for a while as well, what Chan is using in these tests, which is what they call the fake API server. And it's only testing computing the objects and publishing them to the store; it's not taking into account distributing the policies to the agents. Okay.
B: That's actually a good point, because I'm going to walk through those results, but you will see that for some tests, the time it takes to compute the policies, even though we have large inputs, is kind of small. So if the actual time is dominated by network policy distribution, if the CPU usage is dominated by the distribution to agents, then the differences in time between the DDlog and the current implementation may be negligible.
B: So this table shows what I call the computation time, which is the time it takes for the implementation to take all the inputs and generate all the outputs in the store. We start the clock once we've published all the objects to the API server, just before the Antrea controller starts watching the API server, and we stop the clock once all the updates have been published to the store.
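A sketch of that measurement loop; startController, store, and the expected count are hypothetical stand-ins for the real test harness:

```go
package main

import (
	"fmt"
	"time"
)

// store is a stand-in for the internal output store; Count reports how
// many output objects (AppliedToGroups, AddressGroups, internal
// NetworkPolicies) have been published so far.
type store struct{ objects int }

func (s *store) Count() int { return s.objects }

// startController is a stand-in: it starts the controller watching an
// API server that has already been pre-loaded with all the inputs.
func startController(s *store) { s.objects = 42 }

func main() {
	s := &store{}
	expected := 42 // number of outputs these inputs should produce

	// Inputs are already published; start the clock just before the
	// controller begins watching...
	start := time.Now()
	startController(s)
	for s.Count() < expected {
		time.Sleep(10 * time.Millisecond)
	}
	// ...and stop it once every update has reached the store.
	fmt.Println("computation time:", time.Since(start))
}
```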
B: The total CPU time kind of measures the impact that the computation has on CPU consumption. It's not very accurate, but it gives a good idea of which implementation is more CPU-hungry, more resource-hungry. And then the peak memory, well, that's pretty easy: we just measure the maximum memory usage while the test is running. And so you see that for test perf 3, which is a test case that Chan recently optimized in the Antrea network policy controller implementation, the current implementation behaves very efficiently and is actually better than the DDlog implementation. And for test perf 4, which is kind of a similar test,
the difference is not as big, and actually the current implementation uses more CPU resources: because there is increased parallelism, it can achieve better computation time. And if we look at the others, mostly test perf 1 and test perf 5, which may be really synthetic benchmarks, I don't know how realistic they are; or at least test perf 1 is kind of synthetic, I think test perf 5 is kind of a realistic use case.
B: Probably. You see that there is a huge difference: for test perf 1 it's like a 150x difference, and for test perf 5 it's like a 30x difference, between the DDlog computation time and the current computation time. So, overall, what we can take from this table is that maybe for the most common case, let's say test perf 4, DDlog is slightly less efficient than the native implementation, the current implementation, but the DDlog implementation is more consistent. And again, I think that's because... yeah.
B: In a given namespace, we had a performance improvement because you're going from n-squared complexity to n, for very large inputs basically, and so there was a very significant difference. So this slide is about that: it tries to explain the differences between the different results.
B: Basically, we have the notion of syncing AddressGroups and AppliedToGroups, and we recompute the entire object whenever we think the object may be impacted by a new pod or a new namespace. Whereas DDlog builds comprehensive indexes, including indexes not limited to just the namespace, but indexes by label. So everything is pretty much indexed, which explains the increased memory usage, but it means that we get more consistent performance across all use cases, because we didn't try to optimize for a single case.
B: In that case, test perf 3, which is the one I showed you before, well, we just have indexes by namespace. So if you try to run on a different input where you have more objects per namespace, and if you try to have more complexity with the different labels that you're using, you start seeing the difference in results: the native controller still has some functions which are linear in nature.
B: Basically, we iterate over a large set of inputs and do label matching, whereas in DDlog pretty much everything is indexed, so the amount of linear searching we do is actually minimal. And actually, I'll go back here to test perf 5: you can see that the result was like 35 seconds versus one second, and actually, surprisingly enough, the memory usage was the same between DDlog and the native implementation. Here, what the current implementation tries to do is, every time a new pod is added,
we iterate over all the AddressGroup subjects that we have in all namespaces, and we try to do label matching. This is because all the network policy rules are using a namespace selector, and the index we have for namespaces is useless: we actually want to iterate over all namespaces and check whether each one satisfies the selector.
B: In that case, we cannot use our index based on the namespace name. Well, in this case... sorry, yeah, in this case, DDlog indexes the labels as well, and it does not recompute the entire AddressGroup and AppliedToGroup every time. So that's why I said that the current controller is not really completely incremental.
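To illustrate the difference: with only a name-keyed store, a namespaceSelector forces a linear scan with label matching, while a label-keyed index answers a single-label selector with one lookup. The types below are illustrative, not Antrea's actual stores:

```go
package main

import "fmt"

type Namespace struct {
	Name   string
	Labels map[string]string
}

// linearMatch is what a name-keyed store forces on us: scan every
// namespace and run label matching against the selector.
func linearMatch(all []Namespace, selector map[string]string) []string {
	var out []string
	for _, ns := range all {
		ok := true
		for k, v := range selector {
			if ns.Labels[k] != v {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, ns.Name)
		}
	}
	return out
}

// labelIndex maps "key=value" to the namespaces carrying that label, so
// an exact-match label lookup becomes a map access. This is the kind of
// comprehensive indexing DDlog maintains automatically, at the cost of
// extra memory.
func labelIndex(all []Namespace) map[string][]string {
	idx := make(map[string][]string)
	for _, ns := range all {
		for k, v := range ns.Labels {
			idx[k+"="+v] = append(idx[k+"="+v], ns.Name)
		}
	}
	return idx
}

func main() {
	nss := []Namespace{
		{Name: "ns-a", Labels: map[string]string{"team": "a"}},
		{Name: "ns-b", Labels: map[string]string{"team": "b"}},
	}
	fmt.Println(linearMatch(nss, map[string]string{"team": "a"})) // [ns-a]
	fmt.Println(labelIndex(nss)["team=a"])                        // [ns-a]
}
```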
B: However, where the native controller wins in that range is that we actually try to reduce the number of syncs we do on AddressGroups and AppliedToGroups by using the Kubernetes workqueue. So basically, if you have multiple pod updates in a row, and those updates are close enough together, we're only going to do a single sync on the relevant AddressGroup and AppliedToGroup, basically coalescing those updates before the consumer, which is the internal computation engine, sees them.
B: This means that we can avoid recomputing over and over: if we get three pod updates at the same time, we're only going to update the affected AddressGroups and AppliedToGroups once.
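A small sketch of that coalescing behavior using the client-go workqueue; the key string is made up for illustration:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// The work queue deduplicates keys: if the same group key is added
	// several times before a worker picks it up, Get returns it once,
	// so a burst of pod updates triggers a single sync.
	queue := workqueue.New()
	defer queue.ShutDown()

	for i := 0; i < 3; i++ {
		queue.Add("addressGroup/web-clients") // three pod updates, one key
	}

	key, _ := queue.Get() // blocks until an item is available
	fmt.Println("syncing", key)
	queue.Done(key)

	fmt.Println("pending:", queue.Len()) // 0: the three adds coalesced
}
```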
However, this may benefit the initial network policy computation only, and not the regular updates that come through during the controller's lifetime. Why do I say that? Because all of this is measured using what we call the initial computation: we load all the inputs in the API server.
B: Then we start the controller, which is going to process all those inputs. However, in real life, that's only going to happen once, when we start the controller, and you may have updates during the controller's lifetime for which we're going to have to update our internal objects. In that case, obviously, the controller is not going to use 100 percent of the CPU, but the advantage of using DDlog there is more that maybe the CPU footprint is going to be reduced; it's not going to use as many resources.
B: And finally, I see a consistent 20 or 30 percent overhead, because DDlog has a C API and we need to invoke it from Go; because of how many API calls we make, we actually have a big overhead caused by switching from Go to C. We believe that can be eliminated by changing how data is passed between DDlog and Antrea, but that's kind of a significant effort, so that requires a commitment.
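This is not the actual DDlog bindings, just a generic cgo sketch of the issue: each Go-to-C transition has a fixed cost, so passing data in batches amortizes it:

```go
package main

/*
// A trivial C function to illustrate per-call cgo overhead.
static int add_fact(int x) { return x + 1; }

// Batched variant: one cgo crossing processes many items.
static void add_facts(int *xs, int n) {
    for (int i = 0; i < n; i++) xs[i] += 1;
}
*/
import "C"

import "fmt"

func main() {
	const n = 1000

	// One cgo call per fact: n Go-to-C transitions, each with a fixed cost.
	sum := 0
	for i := 0; i < n; i++ {
		sum += int(C.add_fact(C.int(i)))
	}

	// One cgo call for the whole batch: the per-call overhead is paid once.
	xs := make([]C.int, n)
	for i := range xs {
		xs[i] = C.int(i)
	}
	C.add_facts(&xs[0], C.int(n))

	fmt.Println("per-call sum:", sum, "batched last:", xs[n-1])
}
```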
B: Because in that case, we have only two network policies, and so we have only two big AppliedToGroups and AddressGroups. And that's exactly why I added this test case, to evaluate the impact of the workqueue: in that case DDlog is going to keep updating the AppliedToGroups and the AddressGroups, while the native controller is going to update those AppliedToGroups and AddressGroups much fewer times.
B: So this slide is the open question, the question we kind of need an answer to. I mean, it was fun for me to evaluate DDlog, and we can do more, we can improve the data passing, for example, but that kind of implies a commitment to DDlog, because that's more engineering resources. The big question is: is there enough upside to switching to DDlog?
B: If policy computation is actually not what's dominating CPU usage, but distribution is, then maybe having a little bit of a difference is not very significant. I doubt that distribution accounts for like three hundred and seventeen seconds, as in this case; so in this case, the difference between DDlog and the native controller may mean something. For test perf 4, it's 22 seconds versus 14 seconds: if distribution itself takes like two minutes, then that difference is not really significant. However, the memory usage difference may be significant.
D: As we add additional attributes by which we make policy decisions, is it going to be easier to keep up with the translation from those attributes into the DDlog form? Right? Basically, we wouldn't have to continually update the computation engine; we're just transforming attributes into the DDlog form. Does that get easier to maintain long-term than continually updating our own computation engine?
B: Pretty much; it's just in a different language, so you update that and we recompile it. I think what you mean is a very good point, because right now the DDlog program is a bit smaller, but you have a lot of conversion between DDlog and Go. And honestly, I mean, I'm not trying to be biased here, but the Antrea controller code is not very complex, so at this stage maybe it's the reasonable approach, actually. And as we add more complexity to it...
For example, if we have our own Antrea global network policy, or a cluster network policy object, then having something like DDlog may start to make more sense, if those new objects are actually going to drastically increase the complexity of our Golang controller implementation.
B: So that's something to keep in mind: if we can keep the complexity small and stay incremental, or try to be more incremental, even in our current controller, then maybe switching doesn't make sense. However, if the introduction of new inputs makes our controller more complex, because we have more different objects to handle and we don't have a common representation at some point in the API, then...
B: I think the consistent performance that we get is also... I think the point of this investigation, and we did a lot of work with Leonid on this, is to show that, from a computation perspective, DDlog is not going to be worse than the current implementation. And we do see the 30 percent memory increase, because you're bringing in that additional C library with its own state; the point was also to evaluate that the memory usage wasn't going to go through the roof.
B: We can try to improve, and handle that fifth case, for example, in our current implementation, but that may make the code more difficult. I mean, in general, it's very easy to build indexed stores, indexes; it's also not very complicated in Kubernetes, right, because we have all those client libraries, we use all the Kubernetes primitives, and it's actually pretty easy to also build indexes in the native Golang implementation.
B: I mean, we did... I mean, Leonid and I did write a bunch of tests, and we did find some bugs. Since then, Leonid did a lot of work to optimize the DDlog implementation. I think the optimizations we make there are more generic and kind of benefit all use cases, unlike what we could do in Golang; but there was a lot of optimization work on the DDlog implementation as well.
C: Well, I mean, that is somewhat generic. Sometimes you need to make some trade-off: sometimes you just want to optimize for specific cases, but for some other part of the code you're saying, okay, for some reason maybe you don't want to increase memory, or whatever, you don't want to optimize. And I guess with DDlog you cannot do that kind of fine-grained control.
D: Kind of in the future, the question I have is: can we take the model that DDlog is building? There are some really interesting papers on Tiros and Zelkova, which AWS had built, that used satisfiability solvers, and I think Datalog was one of those. They basically used the information to construct Horn clauses and check reachability queries. That might be another interesting application when we want to extend.
B: Yeah, and I guess the last point is: it would be a bit more difficult, not from a distribution perspective but more from a toolchain perspective, to adopt something like DDlog. There is an extra compilation step when you actually modify the DDlog program. I think this is not very significant, because everything is in our CI, so actually plugging DDlog into the toolchain wouldn't be very complex. And since it ends up in the final Antrea Docker image that we distribute, there is no significant difference.
B: So I'm not expecting to come to a decision today. I'm going to send the slides to everyone, so if you want to look at them, feel free to, and I'll send the recording of the meeting as well. I'd like to hear your opinions on this, and if you have a benchmark that you'd like, or a specific use case scenario, feel free to share it with me; I'm happy to add tests.
C: I'm saying it needs more thought, because now you have multiple priorities, and you can have deny rules with different priorities. You have to somehow translate this to OVS flows, and you have some limit on the OVS flow priority values; I think it's sixteen bits or something. So something could go wrong there, unless we make it dynamically allocated and adjust the priorities, something like that, I think.
B: ...which is very easy, because you just update your store in the same way as we do today. That's why it was so easy to compare the two. The only thing is if we add a bunch of different features to the controller, but I don't see that happening in the immediate future. And I mean, obviously, we would need to update the DDlog program; I don't think switching itself is going to be a big cost.
C: One more thing that I should say: I believe we have not actually decided to use DDlog. Honestly, I don't think that decision has been made.
A: We are at time for today, so... I mean, I was curious about the behavior of DDlog compared to the current implementation in the presence of failures, but maybe we can discuss that offline. Or maybe it could be another test case to define, something similar to injecting a fault in the controller, and then I guess we'll see whether the DDlog performance is comparable or not with the native performance. But that's a story for another meeting.
A: That's a discussion for another meeting because, unfortunately, for today time's up, and I would like to thank everyone. Unless you want to bring up some very last topic... going five, four, three, two, one, zero. And therefore with this we conclude today's meeting, today obviously being April 8th, 2020. Thanks everyone for attending, and we'll meet again at the new meeting time on Monday, April 20th, 9 p.m. Pacific time, or 6 a.m. Central European Time on Tuesday, April 21st, or 12 p.m. Tuesday, April 21st, China time. All right.