From YouTube: DASH HA Working Group 20220517 (May 17, 2022)
Description: Reliable or stateless synchronization and state updates discussion
C
Yeah, I mean, I did submit a pull request, basically trying to explain the bandwidth requirements, that you potentially need to do something like six state updates per connection, and to explain why those state updates need to be reliable.
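A rough sketch of the kind of arithmetic behind that bandwidth claim, assuming illustrative numbers for the connection rate and message size; only the six-updates-per-connection figure is taken from the discussion:

```python
# Back-of-the-envelope estimate of state-sync bandwidth.
# Only "6 state updates per connection" comes from the discussion; the
# connection rate and message size below are illustrative assumptions.

connections_per_second = 4_000_000   # assumed new connections per second per DPU
updates_per_connection = 6           # state updates per connection (from the discussion)
bytes_per_update = 64                # assumed size of one sync message on the wire

messages_per_second = connections_per_second * updates_per_connection
sync_gbps = messages_per_second * bytes_per_update * 8 / 1e9

print(f"{messages_per_second / 1e6:.0f}M sync messages/s, ~{sync_gbps:.1f} Gbps")
# -> 24M sync messages/s, ~12.3 Gbps
```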
C
I think that's maybe putting the cart before the horse a little bit, because in some sense we really should be showing what the states are and what updates need to be made; then I think it would be clearer to understand why those state updates need to be reliable. But I believe that if you make the state updates reliable, it solves some problems.
C
If you accept the premise that the state synchronization messages should use a reliable transport, then the question becomes: what should that reliable transport be? One obvious answer is, well, why don't we just use TCP, and I think that would be an okay answer. But one of the things I'm trying to point out is that the bandwidth is so high that there should be no expectation that a single TCP connection can carry the state synchronization.
C
You would want the state updates to be balanced across all of the physical ports, so that you're not stealing the bandwidth from just one port to do the state update. For that reason as well, it seems that the transport for state synchronization updates needs to be multiple parallel streams between the two DPUs, and that's really the point of what's in this document. The second part of the document is: if you think TCP is too heavyweight for this, then under the operating conditions DASH runs in, where the two DPUs are localized, near each other, the latency between them is pretty low, the number of switch hops is maybe low, and the network is already pretty reliable but not a hundred percent reliable, there may be a simpler reliable transport protocol that can be implemented on top of UDP. So there's a description of something that is simpler than TCP, but I think that's kind of a secondary thing.
C
I
think
the
primary
thing
is
just
the
acceptance
that
that
the
transport
for
state
synchronization
needs
to
be
reliable
and
that
it
needs
to
use
multiple
parallel
streams
between
the
dpus.
I
think
like.
If
there
was
agreement
on
that,
then
I
think
it's
like
a
secondary.
It's
sort
of
a
secondary
matter
as
to
like.
Well,
what
is
the
appropriate
transport.
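A minimal sketch of the "multiple parallel streams" idea, assuming state updates are spread over a fixed set of streams by hashing the flow key; the stream count and the hash choice are illustrative assumptions, not something specified in the document under discussion:

```python
import hashlib

NUM_STREAMS = 8  # assumed number of parallel sync streams (e.g. one per port/queue)

def pick_stream(flow_key: tuple) -> int:
    """Map a flow's 5-tuple to one of the parallel sync streams so that the
    state-update load is spread across ports instead of pinned to one link."""
    digest = hashlib.sha256(repr(flow_key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_STREAMS

# Updates for different flows land on different streams:
print(pick_stream(("10.0.0.1", 443, "10.0.0.2", 51515, "tcp")))
print(pick_stream(("10.0.0.3", 443, "10.0.0.4", 61616, "tcp")))
```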
B
We can separate them. First of all, dividing the connection into streams can be solved separately, yeah. But then there is the question of whether we need a reliable connection, or reliable transport, and I don't really agree with that, for the reason that we are trying to solve for real-time data, and we cannot really guarantee, the way TCP guarantees for static data, not to lose any connections.
C
Right, so let me try to address that. I believe that the entire system has to be engineered to provision enough bandwidth, priority potentially, and dedicated buffering, or whatever is necessary, so that the path between the DPUs for state synchronization is not the limit on your connection-rate performance.
C
However, in some event where you're not able to synchronize state, because you're getting drops or you're not getting enough throughput through those streams between the two DPUs, I think you have to mitigate that by dropping packets, meaning data packets that are arriving at the DPU. Maybe you don't have to drop packets for established connections, but I think you would at a minimum drop SYN packets for new connections that are trying to start up. That is, in my mind, the only mitigation for not being able to synchronize the state, because if you do the opposite, if you say, okay, I never want to drop data packets arriving at the DPU regardless of whether I can synchronize state or not, I think you end up with broken state, and that broken state means that you've lost your ability...
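A minimal sketch of the mitigation being described, with a hypothetical `SyncChannel` standing in for the DPU-to-DPU sync path; none of these names come from the DASH code, they are only here to make the policy concrete:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    flow_key: tuple
    is_tcp_syn: bool

class SyncChannel:
    """Stand-in for the DPU-to-DPU state-sync path (hypothetical interface)."""
    def __init__(self, capacity: int = 128):
        self.capacity, self.in_flight = capacity, 0
    def can_send(self) -> bool:
        return self.in_flight < self.capacity
    def send_update(self, flow_key: tuple) -> None:
        self.in_flight += 1  # would be decremented when the peer acknowledges

def handle_packet(pkt: Packet, flow_table: set, sync: SyncChannel) -> str:
    """Forward established flows; if a new connection (SYN) arrives while the
    sync path is backed up, drop it rather than create state that cannot be
    replicated to the peer DPU."""
    if pkt.flow_key in flow_table:
        return "forward"
    if pkt.is_tcp_syn and not sync.can_send():
        return "drop"              # the mitigation discussed above
    flow_table.add(pkt.flow_key)
    sync.send_update(pkt.flow_key)
    return "forward"
```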
B
A simple example: the usual topology that we are looking at is two ToRs for high availability at the ToR level, and then each DPU is connected with one port to one of those ToRs, meaning that if we want to synchronize two DPUs, the state synchronization messages will go from one DPU to the ToR to the other DPU, right?
C
No, in that scenario, I would call that a failure scenario, and in that scenario one of the DPUs should be elected as the single available DPU and all data plane traffic should be directed to that DPU. If the synchronization link goes down, that's as much of a failure as a DPU failing, or...
B
Yeah, which means that every second we transport, what, like one and a half gig?
B
I don't really understand why we would be so radical as to drop the SYN packets if we cannot synchronize. We know that our state is desynchronized, so we cannot do failover for some time, until we catch up with all the periodic updates, because even if something happens, we still have the periodic updates, which will eventually synchronize the state. So why would we want to sacrifice new connections?
C
On the long-lived connections, Marian, the periodic rate for updating long-lived connections just has to be around the same as the timeout rate of the connection.
C
You just have to be a little bit faster than the timeout rate of the connection to keep it alive on both DPUs, right? And we're talking about something like four minutes as the timeout for a TCP connection. But if you're saying, no, I want to be able to re-synchronize the state of every single connection every one second, that doesn't scale. I mean, this is something like 64 million flows per 200 gig of throughput; that would take a lot more than the 12 gigabits per second to do.
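The scaling claim can be sanity-checked with simple arithmetic; the 64-million-flow figure is from the discussion, while the per-flow record size is an assumption:

```python
flows = 64_000_000        # flows per 200 Gbps of throughput (figure from the discussion)
bytes_per_record = 64     # assumed size of one per-flow sync record

full_resync_gbps = flows * bytes_per_record * 8 / 1e9
print(f"Re-sending the whole table every second ~ {full_resync_gbps:.0f} Gbps")
# -> ~33 Gbps, well above the ~12 Gbps estimated for event-driven updates
```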
B
No, sorry, I won't accept the premise that a link does not flap in the data center. At any point a link can flap; this is a common thing. We don't have a soldered connection between those two guys. The link can flap between any two points.
A
In the optical box that they have to go through, sometimes the technicians are performing maintenance and they bump wires. You know, I have seen a lot of link flaps in our data centers.
C
Okay, but what we're talking about here is a smart switch. You could talk about the backup being remote or whatever, but let's just talk about normal operation: two smart switches with a physical point-to-point link between them, and that physical point-to-point link between those two smart switches is there for the purpose of this HA path. So the link flap you're talking about is between two co-located switches with a dedicated point-to-point link.
C
I
think
the
answer
to
that
is
that
the
dpus
drop
syn
packets
during
that
period
of
link
flip
flap.
I
think
that
I
think
that
that's
the
answer.
I
think.
If
the
link
is
deemed
to
be
broken,
then
I
think
you
have
to
declare
one
gpu
the
master
and
switch
all
traffic
to
it.
D
So this is a related scenario: the link flaps, the link goes down, and there are asymmetric flows. Say a SYN packet arrives, and that's supposed to put an entry in the flow table and open up an entry in the reverse direction...
B
Really, the point there is that even if you look at the topology, you have two links from the DPU; even if one flaps, eventually BGP will redirect you to the other one. It will.
B
So again, you cannot assume you have a dedicated HA link. HA most likely goes over the same ports that the data does.
C
I think this is an unsolvable problem if you're not willing to accept that there is an engineered connection between the two DPUs. I mean, maybe you can accept a very small link outage time without failing over to one or the other, but I think it's unrealistic to assume that there are going to be unbounded periods of time where that link is unusable and that somehow you're still going to keep these two DPUs running independently. You will lose the synchronization state.
C
I'm afraid of the solution of just periodically updating the state. Periodically updating the state at the flow timeout rate is okay, that's minutes; but periodically updating the state every one second, because a link might have flapped, if you do the math on that, the bandwidth you're going to need is more than the bandwidth of the DPU. You can't copy the entire flow table state in one second.
B
How
can
it
be
more
than
the
bandwidth
of
dpu
if,
in
the
worst
case
scenario,
let's
take
the
worst
case
scenario,
we
would
just
mirror
every
flagged
tcp
packet,
it's
not
more
than
it's,
not
more
than
gpu
bandwidth
and
adding
to
the
and
adding
to
that
periodic
update.
B
Multiple
into
multiple
messages
into
the
back,
so
according
to
what
you
have
in
the
paper,
how
many
messages
can
you
pack
into
the.
B
Now you will also need event-based updates on top of that, to cover it, so that you are not desynchronized for a larger amount of time, which we don't want.
C
Right, so let's say you're doing event-based updates; that's maybe another, you know, million packets per second. Yes, okay, so you've doubled the amount of messages in this approach. And Marian, one of the issues here is not just the bandwidth of the messages; you actually have to process those messages. You know, if you're getting a periodic update...
C
You
don't
know
that
that
periodic
update
is
like
needed
or
not
needed
right.
So
the
only
thing
the
receiver
of
that
periodic
update
can
do
is
to
treat
it
as
if
it's
a
real
update-
and
you
know
you
have
to
go
into
your
your
flow
table-
you
have
to
like
do
the
actual
work
of
the
update
so
like
it
you're
like
using
a
lot
more
bandwidth
and
you're,
using
a
lot
more
like
dpu
resources
to
deal
with,
like
the
the
the
periodic
updates
and
like
all
of
this
goes
away.
C
Okay, I mean, with a really short one, the same thing would happen in both cases. If you have a short link flap, right, then in the one case, let's say SYN packets are coming in and you're processing those SYN packets, and those event-driven messages are now being dropped because of the link flap, right?
C
Okay, and it will take, say, one second for that synchronization to happen, and then I guess the ACK packet in the other direction will work.
C
Okay, if you take the reliable transport approach, you don't have to buffer much. You could decide how much buffer you're going to have for very short link flaps, but you don't have to buffer more than the window size, and if the link flaps and the window fills, then you start dropping SYN packets. It has the same net result, which is that the new connection is not opened up until you can get the state reliably synchronized.
B
Okay, yeah, so it's the same net result, okay, fine. But then I will ask you this time: give me the size of the window. What is the expected window size?
C
Yeah, I mean, I think the expected window size is small. These packets are large; I think the window size could be, you know, 128 packets, let's say.
C
We don't. What we'll do is flow control, and the flow control will cause us to not generate new state update messages, and the way that you don't generate new state update messages is basically to drop the packets that would have caused state updates.
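A toy sketch of the window and flow-control behavior being described, assuming a fixed window of 128 outstanding messages; when the window fills (for example during a link flap), `send` refuses new updates and the caller falls back to dropping the SYNs that would have generated them:

```python
class SyncSender:
    """Toy fixed-window reliable sender for state-sync messages (illustrative)."""
    WINDOW = 128                       # outstanding, unacknowledged messages

    def __init__(self):
        self.next_seq = 0
        self.unacked = {}              # seq -> message, kept for retransmission

    def can_send(self) -> bool:
        return len(self.unacked) < self.WINDOW

    def send(self, message: bytes):
        if not self.can_send():
            return None                # window full: back-pressure, drop the SYN instead
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        self.unacked[seq] = message    # a real sender would also transmit it here
        return seq

    def on_ack(self, acked_up_to: int) -> None:
        """Cumulative ACK: everything with seq < acked_up_to has been delivered."""
        for seq in [s for s in self.unacked if s < acked_up_to]:
            del self.unacked[seq]
```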
B
It goes to the ToR, from the ToR it goes to the backup, and the link between the ToR and the backup goes down. The primary doesn't know that the link went down. What will happen next? How will it stop in time?
C
Okay, so what will happen is that the sender, and, you know, there are senders, there are transmitters in both directions, but what will happen is the transmitter will send up to the window size and not have received an acknowledgement. The acknowledgements don't have to be for every packet, but they have to be at least as frequent as your round-trip time. They don't have to be for every packet, but generally, Marian, there will be packets flowing in both directions; synchronization packets will be flowing in both directions.
C
Just because, like you said, some connections originate on one DPU and some connections originate on the other DPU, there's, let's just say, a balance between each DPU sending state synchronization messages to the other DPU, and the acknowledgements piggyback on those data packets. So you'll have data and you'll have acknowledgements continuously flowing in both directions.
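A sketch of how acknowledgements could piggyback on the sync traffic flowing in each direction, using a hypothetical fixed header that carries both the sender's sequence number and a cumulative ACK of the peer's stream; the header layout is an assumption for illustration only:

```python
import struct

# seq (this direction), cumulative_ack (peer's stream), payload length
HEADER = struct.Struct("!IIH")

def pack_message(seq: int, cumulative_ack: int, payload: bytes) -> bytes:
    return HEADER.pack(seq, cumulative_ack, len(payload)) + payload

def unpack_message(data: bytes):
    seq, ack, length = HEADER.unpack_from(data)
    return seq, ack, data[HEADER.size:HEADER.size + length]

# A flow update from one DPU that also acknowledges the peer's messages up to 41:
wire = pack_message(seq=7, cumulative_ack=42, payload=b"flow-update")
print(unpack_message(wire))   # (7, 42, b'flow-update')
```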
B
TCP suffers the same problem. Well, not really the same problem; TCP cannot withstand that, because... So my point is that a reliable transport requires you to have end-to-end flow control, and you don't really have end-to-end flow control. Eventually we end up with the same thing: you are dropping new incoming connections that you cannot accommodate, because you cannot go back to the VM and say, please pause your application. No.
B
We do, actually. If you take really lossless end-to-end protocols like RDMA, that's what they do: they pause the application until they can transfer all the data.
C
So
marion
like
regardless
of
completely
independent
of
each
vm,
is,
will
have
a
connection
per
second
rate
right.
That's
been
provisioned
for
that
vm,
okay,
yeah.
What
happens
when
the
vm
exceeds
that
connection
per
second
rate?
How
do
we
tell
the
vm
that
to
stop
new
connections?
We
don't.
B
Right, as well as any other TCP packet, and it...
B
Yeah, but the difference comes down to the implementation. So probably that's why; because up till now only I have been asking questions, maybe other vendors want to put in their two cents, I'm not sure. If the end result is the same, what is the preferable way to implement this in the hardware? Because for me it is much simpler to have a stateless and lossy implementation, as opposed to what is proposed here, keeping a window and retransmitting.
B
This
seems
to
be
from
the
implementation
standpoint
much
more
complicated
if
we
agree
that
the
end
result
is
the
same
in
case.
If
we
have
a
link
flap
longer
than
the
window
size.
C
It doubles the bandwidth between the two DPUs, potentially, okay? And I don't think it actually is simpler, in the sense that it uses twice as much DPU resource to handle these periodic messages that 99.9999% of the time weren't necessary. The DPU is handling them, processing them, not knowing whether each one is a real state update or just a reflection of the state it already has. So...
B
Yeah
we
can
talk
about
the
the
periodic
updates
if
we
really
need
them
or
if
we
lose
a
packet
or
or
a
few
packets.
What
will
really
happen
then?
Can
we
resynchronize
only
those
without
really
doing
retransmissions?
C
I think the periodic messages are another form of reliable messaging, except that it's not selective, resending only the messages that need to be resent; it's sending all flows over and over and over again.
C
I think four minutes was a number that has been talked about in the past; this is what Azure currently has. I don't know if there's a better number to use than four minutes, and that's fine, but let's just say it's four minutes.
C
Okay, what you would do is send an update to your peer sometime before that four minutes expires, okay? If the flow is still active on your local DPU, you wouldn't keep updating your peer to say, oh, I'm still active, I'm still active, I'm still active; you know that your peer is not going to time out that flow for four minutes. So as long as you update your peer, let's say after three minutes and 58 seconds, to say, hey, this flow is still active, that's enough for the peer to add an additional four minutes of timeout. So you do need to have periodic updates, but only at the idle timeout rate; they don't need to be at a one-second rate.
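A rough sketch of a keepalive refresh at the idle-timeout rate rather than every second, using the four-minute figure from the discussion and an assumed safety margin; the flow-table layout here is purely illustrative:

```python
import time

IDLE_TIMEOUT = 240.0     # seconds, the ~4 minute figure from the discussion
REFRESH_MARGIN = 10.0    # assumed margin before the peer would expire the flow

def flows_needing_keepalive(flow_table: dict, now: float) -> list:
    """Pick flows whose peer-side entry is close to timing out but which are
    still seeing traffic locally, so one update extends them another period."""
    due = []
    for key, flow in flow_table.items():
        peer_expiry = flow["last_synced"] + IDLE_TIMEOUT
        still_active = flow["last_seen"] > flow["last_synced"]
        if still_active and now >= peer_expiry - REFRESH_MARGIN:
            due.append(key)
    return due

# Example: a flow synced ~235 s ago that is still active needs a refresh now.
now = time.time()
table = {("10.0.0.1", 443, "10.0.0.2", 51515, "tcp"):
         {"last_synced": now - 235, "last_seen": now - 1}}
print(flows_needing_keepalive(table, now))
```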
B
Okay, you still need to somehow pick exactly those flows, which is not clear to me. But then, going back to the original argument, if we were saying that we have...
B
Let me put it right: let's say, with the four-minute timeout, if we set the periodic updates to 10 seconds, that makes our primary and backup, or sender and receiver, be out of sync for at most 10 seconds; in other words, the ratio of the periodic-update period to the newly opened connections becomes one to ten, since we have 10 seconds.
D
So, Marian, I started to ask this question earlier, about how asymmetric flows are handled in the event of the HA link going down. I mean, say a SYN packet comes in and it can't be transmitted to the peer, and the reverse flow, that SYN-ACK, is going to arrive at that peer; how does that work?
B
Unlike
udp
interesting
question,
I
didn't
think
what
happens
to
udp
because
I
don't
really
know
the
use
cases
for
udp,
but
yeah.
D
Okay, so if the synchronization messages between the two peers, say you retransmit everything every 10 seconds, then after 10 seconds the peer's flow table gets populated, the ACK is being continuously retransmitted by the remote host, and then eventually the FIN-ACK, I mean, sorry, the SYN-ACK comes in, sees that the flow table is populated, and then the connection continues. Is that the idea?
B
Kind
of
like
a
state
machine
with
the
window
and
retransmissions
plus,
I
don't
really
see
how
it's
gonna
work:
okay,
we're
saying
that
we
will
receive
x
and
we
can
say
that
for
some
time,
if
we're
not
receiving
an
ack,
we
will
start
dropping
even
faster
even
before
the
link
goes
down
and
we
understand
what
was
happening.
C
Well, that seems like it might be a limitation of your hardware. I mean, I think if you just look at the bandwidth, you look at how quickly you get back to a synchronized state, and you look at how much DPU resource it takes to handle updates, the reliable transport is more efficient on all of those. If the issue is that you just can't implement a reliable transport, I don't know what to say.
B
I
don't
think
that
I
really
there
is
I'm
looking
at
it
as
a
caused
cost-benefit
scenario,
because
we
will
solve
for
one
thousandth
of
a
percent
of
traffic
in
the
network
or
less.
B
And we won't really gain much from it, because even in your proposal you're saying all the time that the network is well engineered and you don't really have any losses, except for what I can think of that could be common: the first would be a link flap, the second would be buffers.
C
Again, the amount of bandwidth and the amount of events and synchronization going through this path is very high; the processing load of the synchronization messages is almost as high as that of the data packets.
C
Yeah, I know, but I'm talking about flow table updates: doing a lookup in the flow table and updating the state. You're getting data packets, you're doing lookups in the flow table, and you're updating the state based on the flags and so on in the packets.
C
The HA state updates are generating a very large load in terms of flow table lookups and updates, and I would say it's equivalent to the load on the flow table from the data packets, because it's symmetrical, right, or in some cases asymmetrical. So the HA is a big source of work that has to be performed, right? And this is such a fine-grained state update that, if you were architecting a product, you would say, I just want a dedicated link between these two DPUs, and I...
E
That is a requirement, though, for a reliable transport to have for this control connection. I don't think we should have to insist on having a dedicated one. I'm...
E
My point is, I think that's the argument that throws off the whole theory. I am actually in favor of creating a reliable connection between the DPUs for the state updates, and in terms of whether it should be a live update or an incremental update, that path is the same between these two approaches, right? Should we update only incremental state or not? I mean, even in the case of the periodic update that Marian was proposing, I don't think it's updating every repeated connection, right? It's only the changes.
F
Right, there is this, you know, connections-per-second rate in the VM, and then there's the flow add rate, and we know that certain packets will be dropped. But the focus is that, for those flow entries that we have already added, we need to ensure it works, and to communicate over that connection between the HA pair: these are the connections that I am looking at, and these are the connection states, and that needs to be synchronized with the peer.
F
I haven't gone through the PR from John, which I will do, and I'll have some more comments for next time. But some method to ensure that the flow entries that have been added and are being tracked in one of the pair are communicated, synchronized, and kept in complete alignment with the HA peer is important.
A
I think, guys, we're running out of time for this meeting. Would you all agree? Maybe we need to have some more discussion next week.
A
Yeah, I'd love it, I like that, yeah, let's just do that; I'm kidding. Great. Well, I think I captured a lot of notes and a lot of questions; I'll put it in my notepad for next time. I think, you know, exploring both paths is a lot, but do you guys feel like we're at a good ending point for today? Yeah.
B
Yeah, maybe next time others can look at the proposal, other vendors, and think and say what they think, because look...
A
And I was reading it while we were talking as well, so okay, great, I'll send that out. Thank you, John and Marian, for all the work, and everyone for attending, and I'll give Gerald a summary; I see him tonight at seven o'clock, I'll give him a summarization of what we talked about.