From YouTube: Antrea Community Meeting 04/26/2021
Description
Antrea Community Meeting, April 26th 2021
A: Hello everyone, and welcome to this session of the Antrea community meeting. Today is Tuesday, April 27th (or it's still April 26th if you are in the United States), and the agenda for today is rather simple, as the only topic that we have booked for today is Quan presenting the design to support failover for Egress. I'm going to share the reference GitHub issue in the chat, and with that said, I will leave it to Quan. The floor is yours; go ahead.
C: Okay, thanks for joining, and thanks for attending the design review for this feature enhancement. This is about the Egress feature that was introduced in the last community meeting, and, as you know, we released this feature as an alpha feature in the last major release.
C: We already mentioned in the last meeting that the Egress IP must be assigned to an arbitrary interface of one node by users themselves, and if one Egress node (by Egress node I mean the node that holds the Egress IP of the Egress resource) becomes unavailable, manual migration is required, because currently Antrea doesn't take care of assigning the IP or of failover. So we want to remove this limitation and make the feature production ready.
C: We want to do some enhancement in this release, which is supporting failover for the Egress IP. Currently the CRD looks like this, and it is quite simple: it has an appliedTo field, which selects the Pods that the Egress will take effect on, and another field is the egressIP, which is the SNAT IP that will be used by the selected Pods when they access the external network.
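As a rough sketch of the two fields just described (the apiVersion and exact field spellings here are assumptions; check the Antrea release for the authoritative schema), an Egress resource would look something like:

```yaml
# Hypothetical Egress resource matching the description above:
# appliedTo selects the Pods, egressIP is the SNAT IP they will use.
apiVersion: crd.antrea.io/v1alpha2
kind: Egress
metadata:
  name: egress-web
spec:
  appliedTo:
    podSelector:
      matchLabels:
        app: web
  egressIP: 10.10.0.100
```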
C: So in this release, we want to introduce a new field called failover policy, and the reason that we want to introduce a new field, instead of replacing the previous behavior entirely, is that I can think of two use cases that may still require the previous behavior.
C: For example, the first one: if you don't have any available secondary IPs, you can specify the Egress IP to be the primary IP of one node, so that you can still get this feature and all of your egress traffic will have the same source IP, but you don't have to allocate another secondary IP to do this. As this IP is the primary IP of a node, definitely no failover is desired. Another use case I can think of is that on some cloud platforms, secondary IPs must be unbound from an interface via the cloud API or UI before they can be bound to another instance, so even if we do failover in Antrea, it doesn't help in this case. I think in this case users have to have their own mechanism to detect the health of nodes and to bind and unbind the secondary IPs to VMs, managed outside of Antrea's control.
C: Most importantly, we need to first detect node failure events. To do that, there are basically two solutions. One is leveraging the Kubernetes control plane API, specifically the Node API; for example, we could have the Antrea Controller watch the Node API for status changes.
C: When a node that holds some Egress IPs becomes unhealthy, we could let the Antrea Controller reassign these Egress IPs to other nodes, and then the corresponding Antrea Agents could take action: they could watch the Egress API, see that the binding between an Egress IP and a node has changed, and act on it. In the failover case they will assign the new Egress IPs to their own node interfaces. In my test, it typically takes around 40 seconds to detect a node failure.
C: This is because Kubernetes gives some grace period before it marks a node as unhealthy.
C: This period is controlled by an option called node-monitor-grace-period, which by default is 40 seconds, and on some platforms or distributions this may be tuned higher or lower. But basically, if we go this way, we will strongly rely on this option, and we also rely on the Antrea Controller being healthy so that it can observe the failure and initiate the failover.
C: Another solution is to have a separate data plane mechanism, for example memberlist. In this way, the Antrea Agents could detect node failures by themselves. For example, memberlist uses a gossip-based protocol to probe the health of other members, and members report to each other and propagate node failures to their peers.
D: And yeah, good question on the previous slide: for the solution on the left, why does it rely on the Antrea Controller, given that all the agents are watching all the nodes anyway, in the node route controller?
C: Yes, good question. My original thought was that we need a leader to be responsible for assigning the IPs to nodes, so I was considering the Antrea Controller, but in fact it could be the same as the solution on the right: the Antrea Agents could watch the API and use the same mechanism to select the node. I think you are right; I should remove this, because it is not specific to this solution. Yeah. Okay, thanks.
C: So in this proposal, we would like to use the second approach, memberlist, to do the node failure detection, to reduce the failover time and have less dependency on the Kubernetes control plane, and I tried some code to do this in Antrea.
C: It is pretty simple; the memberlist library is quite reusable. We just need to construct a default configuration, and we could tune settings like the frequency of the health check in this config. We just need to set our own port that will be used for the data plane communication, and initialize a channel on which we will receive the node join or leave events. Then we just need to get the list of nodes, extract their IPs, and make this agent join the cluster.
C: After that, we can get the whole list of the healthy nodes in the cluster by querying the memberlist instance, and dynamic events will be sent to this channel, so we can process this channel and react, for example by triggering the Egress IP migration.
C: Another thing we need to consider is how we select the Egress node for each Egress. In the traditional way, we could just calculate a hash value based on the Egress metadata and then map it to a node with a modulo operation. For example, the first Egress maps to the first node, the second maps to the second node, and the fourth one, because it exceeds the number of nodes, wraps around to the first node as well. But in this approach, if one node joins or leaves the cluster, it will cause almost all Egresses to be remapped to other nodes, because the number of slots has changed, so their mappings will likely change. For example, when the first node fails, the first four Egresses will have to be remapped: the first Egress will now be mapped to the second node, the second Egress will be mapped to the third node, the third Egress will be mapped to the second node, and the fourth Egress will be mapped to the third node. The last two happen to remain the same. To avoid affecting most of the egress traffic of the whole cluster, we propose to use consistent hashing to reduce the number of Egresses that will be remapped.
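A minimal, self-contained sketch of the modulo scheme being discussed (node and Egress names are made up; the hash is reduced to an index for brevity). With 3 nodes and 6 Egresses, removing the first node remaps 4 of the 6, matching the example above:

```go
package main

import "fmt"

// assignModulo maps Egress i to node i % len(nodes), the "traditional"
// hashing scheme described above.
func assignModulo(numEgresses int, nodes []string) []string {
	out := make([]string, numEgresses)
	for i := range out {
		out[i] = nodes[i%len(nodes)]
	}
	return out
}

func main() {
	nodes := []string{"node-1", "node-2", "node-3"}
	before := assignModulo(6, nodes)
	after := assignModulo(6, nodes[1:]) // node-1 fails
	remapped := 0
	for i := range before {
		if before[i] != after[i] {
			remapped++
		}
	}
	// With 3 nodes and 6 Egresses, losing one node remaps 4 of 6.
	fmt.Printf("%d of %d Egresses remapped\n", remapped, len(before))
}
```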
C: It's not exactly consistent hashing, but something like it. For each Egress, we will calculate a hash value for each node, based on the Egress metadata combined with the node's metadata; for example, we could use the node name as the node's part of the input. We calculate the hash for each node and get a string.
C: Then we sort the strings and always use the first one as the Egress node of this Egress, so that, ideally, same as with traditional hashing, the Egresses will be spread across the nodes roughly evenly. But when a node becomes unavailable, only the two Egresses in this example will be remapped to other nodes, because, for example, when the first node becomes unavailable, we just lose that candidate, and for this other Egress the effective node is still the same.
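The scheme just described is essentially rendezvous (highest-random-weight) hashing. A minimal sketch, assuming hypothetical node and Egress names and a simple FNV hash rather than whatever hash Antrea would actually use; picking the highest score is equivalent to sorting the per-node hashes and taking the first entry. Each Egress ranks all nodes independently, so losing a node only remaps the Egresses that were on it:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// selectNode picks the Egress node via rendezvous (highest-random-weight)
// hashing: hash every (egress, node) pair and keep the node with the
// highest score.
func selectNode(egress string, nodes []string) string {
	var best string
	var bestScore uint64
	for _, n := range nodes {
		h := fnv.New64a()
		h.Write([]byte(egress + "/" + n))
		if s := h.Sum64(); best == "" || s > bestScore {
			best, bestScore = n, s
		}
	}
	return best
}

func main() {
	nodes := []string{"node-1", "node-2", "node-3"}
	for _, e := range []string{"egress-a", "egress-b", "egress-c"} {
		fmt.Println(e, "->", selectNode(e, nodes))
	}
}
```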
C
And
this
is
how
we
propose
to
select
the
equals
node
for
u.s
and
the
basic
workflow
for
an
entry
engine
and
share
agent
to
handle
the
the
egresses
that
has
our
policy
set
to
auto
is
the
agent.
We
are
first
join
the
memberless
cluster
on
startup.
C: So we may want to have a mechanism to specify which nodes can be the candidates, which nodes can be the Egress nodes. I am considering three options. For the first one, we just have a cluster-wide node selector in the configuration file, so you specify it before the Antrea Agents start; you could change it later, but that requires restarting the Antrea Agents.
C: But the drawback is that, if the user just wants to have the same configuration for all Egresses, they will have to duplicate the same configuration in every Egress. And the third one is that we don't make it too flexible: we just define a well-known annotation or label and ask users to annotate or label the nodes that are supposed to be the Egress nodes, and this will apply to all Egresses, so it is also a per-cluster configuration. We could discuss this more later.
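As a sketch of the third option, users would label the nodes that may act as Egress nodes. The label key here is purely hypothetical, not an actual Antrea convention:

```yaml
# Hypothetical well-known label nominating a node as an Egress candidate.
apiVersion: v1
kind: Node
metadata:
  name: node-1
  labels:
    egress.example.com/egress-node: "true"
```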
C: There are other ways to do it. And lastly, I think, to improve the user experience and the ability to troubleshoot Egress issues, it's good to report the current binding relationship to the Egress API, so we could have an Egress status struct and let the node selected for an Egress report the relationship to the Egress API.
C: We could also create Events associated with the Egress resource: when we migrate an Egress IP to another node, we could just create an Event, so that users could watch these Events to get updates on the failover events, and perhaps they could use some...
A: Quan, I have a quick question regarding memberlist. It's fairly quick at failure detection, but did you also verify the chances of false positives, like, you know, detecting a member as failed when it's actually still working?
C: Yeah, I guess there might be false positives in that case, because even the documentation of memberlist, or of the protocol it is based on, says it is just eventually consistent. So maybe there could be several nodes announcing the same IPs for some period, but I think eventually it would reach consistency.
C: But I haven't tested that. This library is already used in MetalLB and Consul, so I think it is reliable enough to be the solution.
A: Sounds good. The other question is about hashing. Do you think that, in terms of assigning IPs to nodes, we need to nominate some sort of active and standby node for every Egress IP, or should we just rely on the failover mechanism in that case? The reason why I'm asking is that I wonder about relying on the failover and then having a time of about, I guess, three seconds for reallocating the Egress IPs in that case.
C: Sorry, I didn't get that. What do you mean by a shorter period than three seconds?
A: Yeah, so basically I was thinking whether we could have a solution like having an active and a standby node for an Egress. I understand that this may also require some other changes in the design, but I was just considering whether this is something that might be good or not. You know, having a solution where, when a node is found to be dead, there is already a standby available where the IP reassignment can happen immediately. But first, yeah, go ahead. Sorry.
C: Some period is needed to detect that. I think memberlist is a different mechanism compared with, like, I think there are some other solutions, like keepalived and some other tools, that use the VRRP protocol or something like that to implement the active and standby mode, right? And actually, yeah, something like that.
C: Yeah, I think in the memberlist case, it's likely that all the nodes are candidates for this Egress IP; it's just that the IP is not assigned to the other nodes yet. But I think assigning an IP is rather fast if we have already detected the failure event.
A: Okay, yes, thanks, and sorry everyone, one small bit. Just my final question is in terms of Egress node selection. I don't think that it will make any sense to try to make it local to where the actual Pods are deployed, because, you know, the location of a Pod can change across nodes, and also in most cases the Egress CR is going to select multiple Pods, so I don't think that it makes any sense to think about any locality principle.
C: I didn't address this in the proposal, partly because of the implementation difficulty: if we want to do that, each node needs to know which Pods are the selected Pods of an Egress. But currently we don't transmit this information from the Controller to the Agents, and we want to avoid that, because if we want to do that, all nodes need to know all the members of the Egress, right? So it will increase the control plane communication a lot.
C: Any other questions about this proposal and the detailed design? I have also put the details in an issue, and you can find them in the comments there, if you have any questions or comments.
D: I just wanted to comment quickly that, for the allocating, I mean restricting, of Egress to a certain set of nodes, where you had, like, three options: it seems to me that we could have a combination, if there is a use case for it.
D: We could have a combination of, like, either the first and the second one, or the second and the third one, kind of like that: per-Egress granularity, plus some kind of more global configuration, either shared by all the agents using the antrea-agent configuration, or by being able to annotate nodes to exclude them from or include them in Egress selection.
C: There is also an approach that uses some CRD to define the Egress node pool, but I think it's basically similar to the third one; it's just another way to nominate the candidates. From an implementation perspective, I think the first and the third options are easier to implement, because if we go with the first or the third option, only the selected nodes are well known, so we don't have to join all the nodes in the cluster into the memberlist cluster. But for the second approach, because each Egress could select any node, we would have to join all the nodes in the cluster into the memberlist cluster and handle each Egress separately to determine its effective node. This is from an implementation perspective.
C: Another concern is about the traffic it will introduce, because if we use this with 2000 nodes and we go with the second approach, the 2000 nodes will all have to be members of the memberlist cluster, and I'm not sure how much traffic that introduces, but it will definitely be greater than with just 100 nodes, I think. Yeah.
D: That's why I said we could, kind of, I mean, once again, if there is a use case, have a combination of, like, two of those, right? Because maybe you want to say, okay, only this set of 100 nodes can be used for Egress, but then maybe you want to have some selection on a per-Egress basis within that set of 100 nodes. But, yeah, it's only if there is a use case.
C: I think memberlist doesn't generate traffic to all peers; I think it randomly selects some peers to probe, no matter how many members are already in the cluster. Even if you have 1000 members in the cluster, maybe a single instance will only ping, like, five or ten. I guess I didn't get into too much detail; I will check that.
D
Yeah
yeah
so
for
each
node
it
doesn't
scale
like
linearly
with
the
number
of
yeah,
okay,
yeah.
B: Yeah, I think, at least I was thinking, if you want to be very flexible, ideally we should have an IP pool concept that defines the pool of IPs for Egress, and then you have a way, maybe some CRD, to associate the IP pool with a node pool.
C: Okay, so this is also about IP allocation.
B: Yeah, I think, yes, I am talking about two things: the IP allocation and also the IP-to-node association. I think they can be separate, but, I mean, just from what I was thinking, I try to make it complete, and there can be two cases. In one case you do automatic IP assignment together; in another case you just have the user manually assign the IP for an Egress. In both cases, I mean, there is the question of the way to associate an IP to a node.
C: Okay, I haven't thought about the IP allocation part; maybe we could discuss it offline and share more details.
B: Sure, because I'm thinking, for example, if you look at, let's say, the TKG case, it's quite possible the cluster has multiple subnets; the nodes can be attached to multiple underlay layer 2 networks.
C
That
is
where
they
pi:
where
are
they,
where
the
the
subnet
of
the
vocals
be
quite
different,
that
they
are?
They
are
they're
in
different
subnets,
on
the
switches
or
in
different
zones.
B
They
can
be
different
under
the
layer,
2
network,
some
ugly
network
or,
let's
say
just
physically
land.
Now
they
can
be
independent,
physically
okay,
and
that
means
when
you,
when
you
allocate
the
estate
ipo
e
side.
Here,
probably
you
want
to
allocate
a
different
set
of
ips
for
different
underlay
summit.
B
Sure:
okay,
let's:
let's
look
at
the
one
one
specific
example
with
tkgr,
so
so
so
you
know
with
tkg
here
the
underlevel
can
be
nst
right.
B: So, for example, we start from /26 subnets, but once you have more than 62 nodes deployed, we create a new subnet that actually maps to a new NSX segment, connected to the same T1, but they are two different segments. So I was thinking, in this case, if we want to be very flexible and be able to, you know, balance the egress traffic across all the nodes, what can happen is that we allocate two IP pools, one for each underlay segment, and then, when we do the failover...
B: That is fine, because in our encap mode we tunnel the egress traffic to the SNAT node; it can be in the same or a different segment, it doesn't matter, since it is a tunnel.
B
It
can
go
through
the
layer,
two
segments
and
for
my
opponent,
in
this
game
we
probably
have
two
ports
and
for
the
each
pull
is
only
reachable
from
from
one
underlay
segment.
B: I don't mean that; I mean for that one you can be flexible if you want. You can... no, no, probably you cannot do it. That's one problem: you cannot, since we don't control the scheduling of the Pods to be based on the underlay segment. So I just mean we probably want to consider the Egress IP to node pool mapping.
B: Yeah, and I don't know how this can be done, by the way. For example, for MetalLB, I thought that they have an address pool concept; maybe I'm wrong.
B: I think they have a way, actually I forgot, but when I looked at that, I observed they have a way: in theory, you can create multiple DaemonSets.
B: Maybe, actually I'm not sure, but I saw that, for the service VIPs, maybe you can allocate the service VIPs so as to say they belong to a group or something like that, and then only the VIPs from that config pool, for your DaemonSet, will be allocated for this group of services.
B: And finally, only the nodes in the DaemonSet in this group will handle the VIP. But maybe I'm wrong; that's my impression of the way MetalLB works.
B: So, to be more flexible, probably we need to handle the IP pool to node pool mapping.
C: Or maybe we could do it automatically: we first select, from the Egress nodes, the candidates that can hold this specific IP, like, if an Egress node's primary IP is not in the same subnet as the Egress IP, we will rule it out; then this...
B: Then maybe the primary IP of the node is in one subnet, but it can have another subnet just for Egress IPs.
B: Not really, because, if your router supports it, right, I think many routers support that you can create multiple subnets on a router interface, okay? So even if it's a single layer 2 segment, you can say I have multiple subnets there, and then, in this case, as long as the traffic goes to the router, the router is able to send the traffic back to the VM.
A: That was a very good discussion, but unfortunately time is ticking and we are almost at the end of the call. As Quan said, we can continue the discussion on GitHub. For today, is there any other question on the Egress feature?
A: I think we can safely assume there isn't any other topic to bring up for today; at least from the Slack channel it doesn't seem like there was any other topic proposed for discussion for today. But, you know, if you want to bring up anything, we still have about seven minutes left.
A: And then it appears that this is really all for today. So, I mean, if there is nothing else that you would like to discuss for today, I would like to thank Quan for presenting the enhancements to the Egress feature, and thank everyone for attending. I wish everyone a good night, good morning, or a good afternoon.