From YouTube: Antrea Community Meeting 06/21/2021
Description
Antrea Community Meeting, June 21st 2021
A: The recording is on, so we can start this meeting. Good morning, good afternoon, and good evening. Today is Monday, June 21st, or Tuesday, June 22nd if you don't live in the United States. Welcome to this instance of the Antrea community meeting, and thanks everyone for joining. Today in the agenda we have a presentation and a PoC demo of Antrea multicast support, which will be given by Ruochen and Lan. This is the only topic that we have pre-booked in advance in the agenda, so please go ahead with the presentation, Ruochen and Lan. Hi.
B: Okay, let's start. Hi everyone, my name is Ruochen, and this is my partner Lan. We are both from the VMware container team. Today we are going to present our design of multicast support for Antrea.
Basically, our presentation can be divided into four parts.
Okay, let's start. Traditional IP communication allows a host to send packets to a single host; we call this unicast.
If a packet is sent to all hosts, that is a broadcast transmission. IP multicast provides a third possibility, allowing a host to send packets to a subset of all hosts: a group transmission.
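The three delivery modes map directly onto the IP address space: IPv4 multicast groups live in the class D range 224.0.0.0/4, and IPv6 multicast under ff00::/8. A quick illustration with Python's standard library (the specific addresses below are arbitrary examples):

```python
import ipaddress

# Class D (224.0.0.0/4) addresses name multicast groups rather than single hosts.
print(ipaddress.ip_address("239.255.0.42").is_multicast)  # True
# A normal unicast address is not in the multicast range.
print(ipaddress.ip_address("10.0.0.1").is_multicast)      # False
# IPv6 multicast (ff00::/8) is the range managed by MLD instead of IGMP.
print(ipaddress.ip_address("ff02::1").is_multicast)       # True
```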
IGMP is used to dynamically register individual hosts in a multicast group on a particular LAN. Hosts identify group membership by sending IGMP messages to their local multicast router, and under IGMP, routers listen to IGMP messages and periodically send out queries to discover which groups are active on a particular subnet.
IGMP also has an IPv6 counterpart called MLD: MLDv1 is similar to IGMPv2, and MLDv2 is similar to IGMPv3.
We have also done some investigation of current CNIs and the multicast support they have.
Some of them already support this feature, but most of them only support multicast partially, and we did some investigation to learn how they implement it. OpenShift SDN implements multicast as follows: it allocates a certain VNID to each created namespace for isolation.
The VNID here is a 24-bit value, part of the VXLAN header. If a namespace is annotated to enable multicast, the OVS tables will be programmed to take actions on the multicast IPs. Specifically, the multicast traffic will be broadcast to all the nodes, but inside a node, only the local Pods that have joined the multicast group will receive the multicast traffic.
OVN-Kubernetes supports most features in terms of IP multicast: it supports both IPv4 and IPv6 multicast, with IGMP and MLD snooping. The implementation is based on OVN, so it shares some concepts with OVS.
It adds a default-deny ACL-level policy to drop IP multicast traffic from all Pods and, like OpenShift SDN, it uses a namespace annotation to enable multicast, so basically multicast traffic is forwarded only to the Pods in the same namespace. Okay, next we will talk about our proposed design for Antrea.
We have two main options. We call the first one the IGMP snooping plus broadcast solution, and the second one the control-plane based solution. Each has its advantages and disadvantages, and we will summarize them later. For each option, the encap, noEncap and hybrid traffic modes will be discussed.
We want to add a new feature gate, named Multicast, to enable multicast traffic inside the node and in the cluster; its default value will be false. When the feature is enabled, all multicast traffic inside a node will be allowed by OVS flows, but for the inter-node multicast traffic, the initialization will be different for encap and noEncap modes, and the details will be given in the following pages.
Some basic setup: when the Multicast feature gate is enabled, the agent on each node will set up a basic OVS flow rule to allow multicast, as in the green line below. This flow will allow the multicast traffic to be forwarded with the NORMAL action, which means multicast traffic will be flooded to all ports inside the node. Then we enable IGMP snooping in OVS to limit the traffic so it is forwarded only to the ports that have joined the group; you can see the command sample we should add in the green lines.
So in encap mode, by default we will create a flow for all the other nodes so that the multicast traffic can be broadcast to them, and here is a sample flow for two nodes via an OVS group table.
In noEncap mode, the agent will install a multicast daemon to handle the multicast traffic between nodes. The daemon can be mrouted, which is a DVMRP implementation for UNIX, or pimd.
We need to leverage the underlying network to support multicast across nodes when Antrea is deployed in noEncap mode. That means the physical or virtual routers in the underlying network must support multicast protocols like IGMP, PIM or DVMRP. Today we take mrouted as the example: by default, mrouted's configuration will add a multicast route on all multicast-capable interfaces, and it will broadcast multicast traffic to all those external interfaces.
The hybrid mode is a mixture of noEncap and encap mode, so there are no new rules to be discussed: when the user chooses hybrid mode, the agents will install the multicast daemons and create the OpenFlow rules for tunneling accordingly. Okay, we have finished the first option, which we call the IGMP snooping plus broadcast solution. Any questions here? Okay.
B: Okay, now we're focusing on—

E: On the interface? Actually, there's no multicast IP for a specific network interface. You can consider it a logical concept: as long as you bind the multicast IP address in your process, it will join the multicast group, so there's actually no need to do anything on the physical interface or any software interface. Yeah.
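What Lan describes here — group membership as a property of the socket rather than of any interface configuration — can be seen in a few lines of plain Python. The group address and port below are arbitrary examples, not anything Antrea-specific, and this assumes a host with a default route where multicast loopback (the kernel default) is available:

```python
import socket
import struct

GROUP, PORT = "239.255.0.42", 15007   # arbitrary class D group for illustration

# Receiver: membership is requested through the socket API; the kernel then
# answers IGMP queries on the process's behalf.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(5)

# Sender: just send a UDP datagram to the group address. With multicast
# loopback enabled (the default), local members receive a copy.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
tx.sendto(b"hello group", (GROUP, PORT))

data, _ = rx.recvfrom(1024)
print(data)  # expected b'hello group' when loopback delivery succeeds
```

Nothing about the veth or gateway interface is touched: joining and leaving the group is entirely driven from inside the process.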
F: Right, but for the Pod interface: usually on any interface, we enable multicast on it and we configure a multicast address, right? So how do we do that for Pods?
E: I will show the demo later, and maybe that will answer your question, but indeed there is a way, if you want, to allow the interface to be part of a multicast group. We can do that, but I think that is a process that belongs to the user's perspective rather than to Antrea, to my understanding. Yeah.
B: Okay, any more questions? So now we are focusing on the second solution, which we call the control-plane based solution. For the control-plane based solution, when the Multicast feature gate is enabled, the agent on each node will set up a basic OVS flow to drop all multicast traffic with low priority; you can see the green line. This is different from the IGMP snooping plus broadcast solution.
In the first solution we allow multicast traffic by default, but here we have an action to drop the multicast traffic at low priority. When it's encap mode and the Multicast feature is enabled, the agent will install rules to forward the multicast traffic to the container interfaces conditionally, and by default it will not create any flow for the other nodes.
Also for the hybrid mode, no new rules need to be introduced: when the user chooses hybrid mode, the agent will install the multicast daemons and update the OVS rules for tunneling according to the multicast group membership. Okay, we have discussed the two possible multicast solutions we propose, and here is the comparison of the two. The IGMP snooping plus broadcast solution is relatively simple to implement, and we don't need to involve the controller to do extra calculation. However, it's not a very good multicast solution, because all multicast traffic across nodes is actually broadcast.
C: Hello, I have a question here. Could you go to the last page, the pros and cons comparison? Yeah. My question is: even for option two, the control-plane based solution, is there no broadcast of cross-node traffic when the traffic mode is noEncap?
E: Sorry, I didn't quite catch the question. I think for the noEncap mode it's still broadcast if we use mrouted, but I'm not sure about other third-party tools like pimd; we haven't done the investigation on those. But I think, at least for now, with mrouted it's broadcast no matter whether it's the option one solution or option two: they are the same, both broadcast.
D: So for noEncap, and actually even for encap mode, I think we should also handle the case where the receiver or sender is outside the cluster. And actually, for both noEncap and for the case where you need to route the traffic outside the cluster in the encap case, I think Antonin and I are looking at a solution called IGMP proxy.
E: Okay, we haven't done any investigation on IGMP proxy. Maybe we can do more work on that one; for now, I think we just use the third-party daemon to do the demo. Yeah.
G: Yeah, it's a daemon that bridges the IGMP traffic, essentially, between the antrea-gw0 interface and the physical interface, so it relays the IGMP traffic so that the IGMP traffic that goes through the OVS bridge can be sent to the underlay.
C: Yeah, you just said we should consider noEncap, or we should consider encap, or we shouldn't consider noEncap? I think we should—
D: We should consider encap, at least. That's what we heard from the Amazon EKS folks: they are seeing customers who want multicast where the underlay doesn't support multicast, so we are looking at a solution over the overlay.
E: Sure, I think we can set some priority for that if there is any implementation started. Yep, okay. Okay, I will start sharing. Oh sorry, Ruochen, I think you need to stop sharing first. Sure, okay. Thanks, Ruochen. Now I will give the demo about how we can do multicast in Antrea through some manual steps.
So we can see how multicast works in Antrea, and you may learn and understand better what Ruochen presented. As you can see here, this is our demo environment. There are three nodes; we will call them the zero node, the first node and the second node. All three nodes are actually worker nodes, no matter master or not, and by default I have created the Antrea cluster in encap mode with the default Geneve tunnel type, as you can see here.
We have three agents here. Let's go to another window; I will show you that. Oh sorry, let's go back here. As you can see here, there are four Pods I already created for the demo. These two are running in the same node, and I will show you how the multicast OpenFlow rules work: when they are added, it can do the multicast. Pods b and c are running in the first and second nodes. Okay, let's go to the Pods.
I will use these two Pods to demo the multicast traffic inside a node. I will run a multicast server in Pod a2 and run a client in Pod a. Let's go back here; here are some scripts I will use for the demo. First, let's copy this one.
Okay, here you will see that I start iperf in client mode. This is the multicast IP address; it's one of the class D IPs. Okay, back to here; I will start Pod a2. In the same node, you will see that even though we start this and bind the multicast address, there's no connection between these two processes. Let's go back to the zero node, and I will execute this script.
It will actually run some commands inside the agent on the zero node. Let's add the default OpenFlow rule to table 0.
This one will allow all the multicast traffic in our OVS bridge with the NORMAL action. It means that any multicast traffic will be treated like normal traffic, as if there were no OpenFlow rules inside this bridge, and we will do more settings later to make sure that there is no broadcast flood. Okay, let's go back.
Okay, the connection actually is good. Let's go back; this sometimes happens, let me exit this one and also exit here.
Here's another Pod running in the second node, which I showed at the beginning. Okay, let me also start the multicast server first. Sorry, the multicast server first. You will see that... okay, I need to increase the TTL: once you want this traffic to be sent out of the node, you need to increase it. Okay, let's start it. You will see that, because we already have the flow allowing multicast inside the node, the local client can connect right now, but for the inter-node case we cannot get any connection here.
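The TTL detail is worth calling out: the default multicast TTL on a UDP socket is 1, which is exactly why the traffic never leaves the node until it is raised (iperf exposes this as a client flag). A minimal sketch of the underlying socket option:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# By default, multicast datagrams carry TTL 1, so the first hop beyond the
# local link (or, here, the path to another node) drops them.
default_ttl = s.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL)
print(default_ttl)  # 1

# Raising the TTL lets the traffic be forwarded beyond the local node.
s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 32)
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL))  # 32
```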
You will see that there is no traffic, no connected information here. Let's go back. Okay, for Geneve, I will add an OVS group first in the agent on the zero node. Okay, I added a group first. This group actually means that, once there is some traffic you want to send to this group, it will take an action to load the IP addresses into the tunnel destination field for these two remote nodes: these are actually the IPs of the two remote nodes, the first and the second node. And then we don't need to dump the groups; we just add the flow.
For now, you will see that there is still no connection, because we haven't set any configuration to allow multicast in the first node and the second node. So let me go back here.
Let me go to agent one, so we can execute some commands in Antrea agent one, and we will just add the simple OpenFlow rule to allow all the multicast traffic with the NORMAL action here. Another one is agent two; it's actually the agent on the second node, and we do the same action.
Okay, let's go back here. Great! You will see that there are connections between all four Pods: these two are across nodes, and these two Pods are in one node. Okay, let me go back here. I think that's all for the encap mode, and now I will show you the noEncap mode, so I need to delete the Antrea deployment.
Let me go to... so the Pods have exited. Let me go to agent zero first. In the zero node, we just need to simply add the NORMAL flow to allow the traffic inside the node, and the multicast daemon will do the job across nodes.
Oh, I think the terminal is stuck; I need to reconnect it.
Okay, it's actually all running and healthy. So let me go back. By default, this daemon will be responsible for setting up the multicast routes if there is any multicast IGMP message inbound or outbound, for example, one Pod binding the multicast address and trying to receive the multicast traffic for that IP group. Okay, since we confirmed that this daemon is active and running, let's go back. Okay, great, it's actually connected, as you can see. Let me rewind.
The mroutes are here, and all three nodes have similar information. You will see that the zero, first and second nodes all have mroutes added by this daemon, and there is more information if you'd like to use the control command line provided by this daemon; there is also the multicast forwarding cache table, and if there's no multicast inside the node, there will be no information or any record in this table.
C: Could you switch to the third terminal? Yeah, the IP: as I saw, on the first node and the second node, the connecting IP is the node IP instead of the Pod IP. Is it because the traffic is NATed by those nodes?
A: Thanks, Lan and Ruochen, that was a very informative presentation. You presented quite a lot of options, and that's great. I also share the concern that broadcast across nodes is probably not what users will be looking for from a multicast solution, but on the other hand, I also appreciate the fact that we want to be sort of self-sufficient with Open vSwitch and not start installing many open source components.
So I wonder whether, in your investigation, you believe that there is something else that Open vSwitch can do, or whether the flow settings that you have provided are everything that Open vSwitch can do for multicast. And my second question is that you mentioned the control-plane solution: did you do any experiment to verify whether that can actually allow us to avoid the cross-node broadcast?
Yes, technically both questions are about, let's say, the solution without a daemon. The first question is whether there is anything else that we can do in Open vSwitch itself.
E: For the first one, actually, I think OVS only supports IGMP snooping, and for cross-node traffic we can do the broadcast in encap mode, but for noEncap mode I don't think OVS by default can provide multicast support.
If you mean that we do some OVS implementation ourselves, I don't know if we can do that to support it; maybe we can do more investigation on that part. And for the second one, could you repeat your question? Sorry.
A: Yes, I would just like to understand a little bit more about the control-plane solution that you mentioned, to prevent multicast traffic from being broadcast across nodes.
E: Yeah, sure. The control-plane based solution is more about the encap mode. For noEncap mode, actually, there is no difference: at least for our current demo, we are still using the third-party daemon, and it actually does the same thing, broadcasting the traffic to all nodes. For encap mode, though, we can involve the control plane.
E: Oh yes, actually in our design we assume that the user will add some annotation, for example that this Deployment has a multicast application running inside, which means we can find out which Pods are running multicast in which node. That's our proposed design. You're right: if there is no extra information, we cannot know that.
A: Okay, yeah, thanks for the clarification. And finally, a final question, I swear this is my last one: for the noEncap solution, and clearly for the hybrid solution, is it possible to do it without mrouted or any other third-party multicast router, or, in my understanding, do we need that third-party component?
A: No, no. And so, as a follow-up on this one: could we instead use mrouted or IGMP proxy in the encap mode? Sorry, this is just my ignorance, and I want to understand if it would be possible to run the same demo that you've done with mrouted, but in encap mode.
E: I'm not sure about the details, but maybe... I think I can verify that, but I doubt it can work.
A: No, I know that the traffic obviously doesn't go to the external interface, so I was wondering if there could be a solution for having mrouted intercept the traffic before it goes into the tunnel, but probably that's not possible; I don't think there is a way of doing that, honestly. So yeah, that was probably a little bit of a dumb question. Anyway, thanks for all your answers.
D: I think you mean you want to run this multicast router protocol in the overlay network. I guess maybe that is possible; I'm not saying we should, but maybe it's possible to bind this mrouted... I don't really know mrouted that much, but assuming it can bind to an interface, maybe we can bind it to antrea-gw0 and then we run the protocol inside the overlay.
A: Yeah, that's pretty much what I meant. I don't know what is the best way of doing it, but I'm thinking that if we need to use mrouted anyway, then maybe we might want to use it both for encap and noEncap. But that would just be an alternative, and not really super important at this stage. So yeah, thanks, I have exhausted all my questions.
Sorry, every time I see an interesting presentation I end up with a lot of questions, so I apologize for using a lot of time. We still have three minutes in the meeting; is there any other question, comment or observation on the design?
D: Yeah, since we have some time, I just want to mention something I considered earlier. Although it's very similar to IGMP proxy, I was thinking that in theory we can also do it ourselves.
Let me see. I know that in the BSD TCP/IP stack, at least, you have a way to use a socket API, and I believe in Linux too you have a way to use the socket API to join a multicast group. So originally I considered a solution where we do the IGMP snooping solution in Open vSwitch and then our daemon calls the socket API to join the multicast group for the local groups. Again, I'm not really suggesting we should go that way, at least not now, since Antonin and I looked at IGMP proxy and I think that one sounds like a simpler solution, but I just want to mention it for you guys, just for reference purposes. I think there is indeed another solution: we can just do IGMP using the socket API without any third-party code. Sure.
A: Perfect. So before concluding this meeting, Lan and Ruochen, at this stage, what do you believe will be the next steps in your investigation, design and implementation work?
B: We will discuss it further.
E: Yes, we have an issue; the number is 2251.
A: Okay, thanks. So I guess we can also use that issue tracker for additional conversation.
A: Perfect. Okay, unfortunately we are at the top of the hour now. Is there any final question, comment, or anything else to report for today's meeting?
A: Perfect. So I would like to thank you again, Ruochen and Lan, for the nice presentation and proof of concept for multicast in Antrea, and I would like to thank everyone for attending this meeting. I wish all the attendees a good evening, good morning, or a good afternoon.