From YouTube: DASH High Availability Working Group Aug 9 2022
Description
Review of XSightLabs proposal
SONiC Team attendance & discussion
SAI APIs to query flow state?
Performance concerns
A
Hi everyone, welcome to High Availability for August 9th. We can get started; I hope we have everybody this week. I think the action item was to go ahead and read through the documentation that was provided in the repo, the XSight Labs proposal from last time, and maybe think about AMD's implementation and come to this meeting armed with that. So maybe we could get started. I'm going back and forth trying to let everyone into the meeting, so, yeah. Do you want to get started, Marion?
B
Yeah, hi Christine. I thought we had the proposal on the agenda for today, or do we need to close something from last week?
A
Balaji created a PR for us to look at, if everyone had a chance; that's what I recall happening.
C
I mean, I think we could give an overview of it and then, if people want to, ask questions.
C
Good, yeah. So basically, what this is proposing is not any specific algorithm. You know, like Pensando presented last week, an approach where the packets are sent and returned back to the primary; I think that approach is well understood, it's reliable, and I would call that sort of an algorithm.
C
The end of the link that's transmitting state has the intelligence, and then the receiver just implements a simple kind of stateless operation. It either receives packets and processes those packets, or it receives state updates and processes those state updates, and it's really the sender that decides when to send state updates.
C
The sender also decides whether a reply should be returned, whether that reply should be the full packet that was sent, or whether that reply should be truncated. So essentially it's just the wire protocol for communicating packets or state updates, and again it really allows the transmitter to have the intelligence and the receiver to just do a simple, stateless operation.
C
One of the capabilities of this framework is the ability to batch multiple packets or multiple messages within a single transport packet, but the receiver decides what the maximum batch size should be. So if the receiver only wants to deal with one message or one packet at a time, the receiver specifies that it can do a maximum batch of one.
C
It's a way to try to allow for flexibility, but at the same time allow for interoperability, and also the ability to test each implementation: to really try to quantify each implementation in terms of its ability to not lose established connections, and to test how much bandwidth an implementation uses on the link. So that's kind of an overview of what this is attempting to do.
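The batching behavior described in this turn can be sketched in a few lines. This is purely illustrative; the `Message` type, function names, and limits are assumptions for the sketch, not part of any DASH specification.

```python
# Hypothetical sketch of the batching idea: the receiver advertises a
# maximum batch size, and the sender packs at most that many messages
# into one transport packet.
from dataclasses import dataclass
from typing import List


@dataclass
class Message:
    payload: bytes  # a full packet or a state update


def pack_batches(messages: List[Message], receiver_max_batch: int) -> List[List[Message]]:
    """Group messages into transport packets, honoring the receiver's limit.

    A receiver that only wants one message at a time advertises
    receiver_max_batch = 1, so every transport packet carries one message.
    """
    if receiver_max_batch < 1:
        raise ValueError("receiver must accept at least one message per packet")
    return [
        messages[i:i + receiver_max_batch]
        for i in range(0, len(messages), receiver_max_batch)
    ]


# Example: 5 updates, receiver allows batches of up to 2.
updates = [Message(payload=bytes([n])) for n in range(5)]
batches = pack_batches(updates, receiver_max_batch=2)
print([len(b) for b in batches])  # → [2, 2, 1]
```

The point of letting the receiver pick the limit is that a minimal implementation can opt out of batching entirely without falling out of the protocol.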
D
It doesn't have much detail, right? It's a generic approach, so I'm not sure how the vendors are going to implement this one.
C
Right, so I think this is a framework, and then, on top of this framework, we can specify, let's just say, modes. One mode might be the AMD mode, the AMD approach that was described last week; we could say that's one mode that can operate on top of this framework.
C
Let's just start with that; maybe every vendor is capable of implementing that mode. Okay, another mode might say: instead of sending just one packet at a time, I'm going to batch them up and send multiple packets at a time. So there are incremental improvements that you can make on top of that base mode, and there's no requirement that any vendor has to implement anything above that base.
C
But the vendors have the flexibility to do that. So this was really an attempt to create a framework for incremental improvement without having to define it all up front. Maybe I'll just throw something out there: what if you took the approach...
C
...AMD talked about last week, and you said instead of sending the packet, the sender will buffer the packet, send a digest of the packet, get an acknowledgment back, and then forward the packet when the acknowledgment comes back. Well, that seems pretty complicated, but it has some advantages.
C
It uses less link bandwidth, and this framework would allow for such an approach, because it only requires the receiver to do these very simple, stateless updates of its state; the sender could choose to do the buffering. It could hold the packet until it gets the acknowledgment and then send the packet...
C
...on to the endpoints. That's really what this is trying to do. It intentionally doesn't have the details of what algorithms you want to use, because it's allowing that to be developed by each vendor, or to be decided incrementally. And I could see that different deployments might have different requirements; some deployments have lower connections-per-second requirements.
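The hypothetical "buffer and send a digest" mode just floated can be sketched as follows. The class and field names are invented for illustration; the digest choice (SHA-256) is an assumption, not something any vendor stated.

```python
# Minimal sketch of the hypothetical "digest" mode: the sender holds the
# packet, sends only a digest across the HA link, and releases the
# original packet toward its destination once the peer acknowledges.
import hashlib


class DigestModeSender:
    def __init__(self):
        self.pending = {}  # digest -> buffered packet

    def send_digest(self, packet: bytes) -> bytes:
        """Buffer the packet; return the digest that goes to the HA peer."""
        digest = hashlib.sha256(packet).digest()
        self.pending[digest] = packet
        return digest

    def on_ack(self, digest: bytes) -> bytes:
        """Peer acknowledged the state update: release the buffered packet
        (returned here for illustration instead of being forwarded)."""
        return self.pending.pop(digest)


sender = DigestModeSender()
d = sender.send_digest(b"flow-create TCP 10.0.0.1:80")
print(len(d))  # 32-byte SHA-256 digest crosses the link, not the full packet
released = sender.on_ack(d)
print(released == b"flow-create TCP 10.0.0.1:80")  # → True
```

The receiver stays stateless here: it only applies the digest update and acks; all buffering and release logic lives on the transmitter, which is the division of labor the framework proposes.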
C
So we actually started this whole HA discussion by proposing that each vendor only be interoperable with themselves, and I think Gerald was really pushing for interoperability. I think he has reasons for it: he wants a common way to be able to test everybody's H.A., and he wants to be able to look at the link between the two DPUs and understand what's being passed on the link. So I think, probably...
C
...the community may be okay with sort of a high-level HA requirement where each vendor can be interoperable with themselves, but I think Gerald had challenged us to try to come up with multi-vendor interoperability.
C
He has made the statement that it probably wouldn't be deployed as multi-vendor, but that we should consider this goal of trying to make the implementations interoperable for the benefits he sees from it.
D
Yeah, I think I agree. HA is very complicated stuff, right? I remember when we were testing some equipment, not specific to this DPU but some other HA products, and the industry has the experience that even the same vendor, same hardware, but a different software version will not be able to interoperate with itself.
C
Right, no, no, but the way that this interoperates is by defining a very simple receiver, and that is invariant. From release to release, the receiver of the state update always performs a very simple function, and all of the complexity of HA, the burden of it, is put on the transmitter side. So from that perspective, the receiver that's designed on day one is interoperable with all future transmitters.
C
That's the goal of what's being proposed here.
D
But, as you said, the AMD approach, for example, is basically a subset of this approach, right? And when I look at the AMD approach, I have to admit, they divide it into different stages: there's an initial stage and there's the incremental update, and they leverage the data plane to send from one to another and then come back, so that you make sure the state is in sync.
D
So I think there are complexities in those receivers: how to parse the state, how to make sure the state is being reconstructed and re-established on their own side, and how to handle the failure cases. For example, if a packet is lost between sender and receiver, how does the state replication naturally recover in their approach? So I think there are a lot of details.
D
I mean, fine, at the concept level it's just the interaction between the sender and receiver, but when you boil down to all the details of how they choose to implement the protocol between the sender and receiver, there's a lot of complexity. So that's what I'm worried about: being able to interoperate between different vendors.
C
This proposal doesn't make any kind of assumption about how to deal with lost packets, or lossiness in the channel. It just provides the mechanism to be able to deal with it, and so all of those complexities that you're talking about do exist, but they can all be mapped on top of this framework.
C
That was the idea of what we are proposing here.
F
John, I have a question, because you mentioned the Pensando proposal will fit into this mechanism. From what I heard last time (I couldn't stay for the whole call), there were two types of synchronization mechanisms described there. One was the software path and the other one was the data path, which is inline. So how does this fit into both of those different ways of synchronizing?
C
So basically, the way this works is that every state update is its own independent message, and so it doesn't really matter whether that state update originated from some background process or from inline in the data plane. Each state update is its own independent message, and the sender basically specifies in the message how it wants the receiver to respond to the message, or whether the receiver even needs to respond to the message.
C
The receiver basically just does what it's told to do by the sender. The sender is instructing the receiver on what to do with the message, and so from that perspective it doesn't really matter whether the message originated from the background scanning of the connection table or whether it originated inline in the data plane.
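The per-message semantics described here can be sketched concretely. The enum values, field names, and reply modes below are illustrative assumptions; no actual packet format has been defined at this point in the discussion.

```python
# Sketch: each state update is an independent message, and the sender
# tells the receiver whether (and how) to respond. The receiver keeps
# no protocol state of its own.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ReplyMode(Enum):
    NONE = 0          # receiver applies the update, sends nothing back
    ACK = 1           # receiver sends a bare acknowledgment
    ECHO_OPAQUE = 2   # receiver echoes the sender's opaque data in its reply


@dataclass
class StateUpdate:
    key: str                        # e.g. a flow 5-tuple
    state: bytes                    # the new state for that key
    reply_mode: ReplyMode
    opaque: Optional[bytes] = None  # sender-private data, meaningless to receiver


def stateless_receive(table: dict, msg: StateUpdate) -> Optional[bytes]:
    """The entire receiver: apply the update, then do exactly what the
    sender asked for, nothing more."""
    table[msg.key] = msg.state
    if msg.reply_mode is ReplyMode.ACK:
        return b"ACK"
    if msg.reply_mode is ReplyMode.ECHO_OPAQUE:
        return msg.opaque
    return None


flows = {}
reply = stateless_receive(
    flows, StateUpdate("10.0.0.1:80/tcp", b"EST", ReplyMode.ECHO_OPAQUE, b"txn-42")
)
print(reply)  # → b'txn-42'
```

Because the receiver never decides anything on its own, whether a given update came from a background table scan or from the inline data plane is invisible to it, which is the property being claimed in the turn above.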
G
Yeah, can I make a couple of comments? First I want to share my screen; I'll just share it temporarily.
G
I just wanted to rehash my initial perceptions of this proposal. Can you see my screen? I remember sending this right away, because I was impressed with its simplicity and elegance, and I don't think it's in any way a vote against any other approach. But I endorsed this proposal of John's because it's so simple and elegant, and I think it would be very worthwhile to actually have someone proceed along this and actually implement it.
D
But we cannot talk about vendor interoperability with just this approach, right? We need the details.
G
You don't necessarily need the community's participation to do your own project, develop it further, show a little more detail, and actually maybe have a functional working model of it. And if people think it's worthwhile, people are free to go ahead and join in on this and let it prove its own merit; it doesn't need a giant debate. You can actually bring more to the table, and you'll also find out: does it handle these certain use cases? We can come up with use cases to test it against.
C
We also started that way. I mean, we've spent endless hours internally working through the details of how to do it, and you're right, it's a complicated thing. The problem is, when you come to the table with something detailed, there's immediate disagreement, because everybody's architecture has different trade-offs and everybody wants an HA that works well with their architecture. So we can come to the table with something that works really well with ours, and maybe it's the best you've seen; maybe it uses the lowest bandwidth and has perfect fault tolerance.
D
We can have an internal discussion, right? Because we also have this internally. I don't think we need any community member to have a strong ask that we need to interoperate between different vendors.
H
So the goal here was not to stop anybody from doing whichever mode they wanted to do, because we realized that, at least initially, it will be very difficult, given that the hardware existed without any guidance toward any particular mode. But the messaging that John's bringing forth could be used by anyone in any mode; it could be used by Intel, it can be used by Pensando.
H
It can be used by anyone, because it's very general, and it at least allows interoperability in that we can see the messages and we know what they mean. We didn't think we were going to get to interop at the actual mode level on this round. I don't see a reason not to have a common transport, unless somebody comes forth and says, "no, my hardware just can't even handle that transport," which I have a hard time believing. Then why would we not?
H
It should be a goal until people can come forth and say, "I would love to meet that goal, but I can't do it; my hardware just can't do that." If that were the case, then we could have that discussion, but I haven't heard anybody after John's proposal say that that was the case, that they couldn't use this common transport protocol, which is so flexible that it accommodates everything that anybody has ever talked to the community about regarding what their scheme might be.
H
It seems like it fits all of them, so that would be my ideal goal. If we can't reach that, we can't reach that, but I haven't heard anybody say yet that they couldn't accommodate this common transport protocol; not a single company has said that yet. I'd like to know why, because this is pretty encompassing of what every single company has said they would like to do.
D
But it's really different, right? Because if you talk about detail, there is an inline update and there is a control plane update. For one of them, I assume you're not going to use TCP; otherwise you're using individual packets, which is lossy, so basically the packet could be lost.
D
No, I don't think the messages will be the same. With TCP, if it's the control plane, there's a lot of flexibility; you can craft your own messages, it doesn't have to be frame by frame, so it can be a simple message digest of the connection. But when you go to the inline data plane, there's less flexibility with respect to the packet format there; due to hardware constraints, it cannot be aligned perfectly for each of the ASIC vendors.
H
Similar question: nobody's saying that they couldn't fit into this messaging scheme.
F
Gerald, one question that I've been...
H
No, we're not talking about interop of HA, Guohan. I think we all agree we're not going to quite get there this round; we're talking about how we get as close as possible this round, which was the APIs and the transport, and leave the interop... As far as the scheme, I mean, we're going to have to accept different schemes, but the APIs are common between everybody.
H
This messaging could be common between everybody, and then, yes, the details of how people do HA, I'm afraid, are going to be different this round; over time we can work to make them the same.
H
...northbound, and what the API must do. So we did that, and now we've moved on to the transport, hoping that we can do the same thing, knowing that the final HA solution, of course, is going to be different this time around. That doesn't mean that we don't strive for as much commonality as possible, though.
I
This is Bud here from XSight. So the purpose of this document was really to introduce the idea behind the message format, but not to define it. We were a little apprehensive about proposing too much at once, so the idea here is to get general agreement on the idea of the complicated logic living in the transmitter and a simple, stateless receiver; and if we can agree on that, then we can go to the next level and say: okay...
I
...this is our proposal for the actual format of those packets and the detail of what the receiver does. So just to take a step back: in general, we think that between the two HA peers there will be a control plane channel and a data plane channel. This proposal is really addressing the data plane channel, which is the more difficult of the two. On the control plane channel, we assume there will be an exchange of capabilities and some other things that go on periodically.

That's stuff that's low performance and probably easy; we assumed it would be easy for vendors to be flexible with. The most challenging part was the high performance requirements of the data plane channel, and that's what this is focused on. So you're right, there's no packet format proposed here, and there is a lot of detail missing, but that was intentional.
F
So in the data path, if you were to have inline synchronization between the master and standby, we would not be able to change the transport, right? Because it will just be mirrored. Can you please answer this question?
C
First of all, you would have to encapsulate the packet with an additional UDP header, so there's some encapsulation. But there's also the ability to not only send the packet but to send opaque information with the packet, and the ability for the sender to say that it would like to get that opaque information back as a response. So it's not simple mirroring; but, let's just say, mode one, which is what we think is the simplest implementation of HA, is pretty close to mirroring.
K
Hey Gerald and Guohan, considering Pensando's work, and I'm assuming from the way they were talking about it that it is a working model, that there is something functioning already: if they are willing to contribute it to the community, why wouldn't we start there and go to the next steps from there?
D
Yeah, I think that, to me, is a more progressive approach, so I don't see why we wouldn't start with that.
H
I'm not sure if they're on the call. I don't have any issue with Pensando offering anything that they want, right? That's up to them. Is anybody from Pensando even on this call?
A
We've got BJ and everybody.
E
Probably. I think we should work on the details of what would be the means of this contribution. The algorithm is there, the framework is there; we can probably dig more into what else we need.
F
It looks like Pensando's method also carries the metadata that we intend to carry, and that could also probably bypass the slow path by design on the standby. Those details would be good to have, and also, in terms of acknowledgments, whether there are any optimizations that we can do offline.
E
...batch them. I think some of this metadata that is sent is kind of specific to the implementation. But rather than exactly what is carried, I think we can definitely share what information we are carrying, the format and things like that. Implementation-wise, that might change; that might be the opaque part, analogous to whatever opaque fields are there in the data.
D
I have one comment on your proposal. I think there is an initial sync, which is more on the control plane, to sync those messages. For that part, do you expect your drivers to initiate those connections? Or, actually, could we use SONiC to query all this connection information, and then basically SONiC would be doing those...
D
...those message transfers between the active and standby. I think in that case we would just need to define some SAI APIs to query the state, and then be able to get the state and do the synchronization.
E
Yeah, for performance, this happens from the device module itself directly today, because that additional hop would add to the performance cost. So this happens directly today.
D
But I don't understand the additional hop. Your SDK, your SAI, has a bulk API to get that state directly, and then you just have the TCP sessions to do the transfer, right? And in that case, on the other hand, it's really improving the interoperability: basically, across different versions, SONiC would be controlling the message format.
E
Right, so I think one thing is, maybe we can have a little more detail there, but one thing is: whatever the control plane is doing, there is some optimization there. I think the control plane also keeps track of what state is changing in between, because it's not an atomic thing; while we are doing this, there might be connection state that is changing incrementally as we are syncing. We need to handle that.
E
There probably is some complication. Since it's running together today, we're able to do it; whether we need to expose it as an API, where some other external entity is going to do the bulk sync, that handshake between the bulk sync and the incremental sync, probably we need to think through.
D
Fine, yeah, but this is like the VM live migration scenario, right? At some point we have to freeze one side and then go to the other, because if you don't freeze, your memory keeps getting dirty, and you never get to a point where all the memory is cleaned up in a single pass.
H
We do a perfect sync, which means there's a time, you go from a timestamp or a color, and you send over all of the data from the bottom of the table to the top of the table. When you're done with that, you are done with the bulk; in the meantime, you have been doing inline sync all throughout, so it is in perfect sync at the end, and there's nothing more to do. Once you get to the top, you're done.
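The "perfect sync" walk just described can be sketched as a single pass over the connection table. This is an illustration of the idea only; the table shape and function names are assumptions, and the timestamp/color check that would mark already-superseded entries is left as a comment.

```python
# Sketch of the single-pass bulk sync: walk every entry once and send it.
# In a real system, inline sync keeps running concurrently, so entries
# updated after being scanned are covered by inline messages (optionally
# tagged with a timestamp or "color" so the peer can discard stale data),
# and nothing needs to be re-scanned.
def bulk_sync(table: dict, send):
    """Send every table entry exactly once, bottom to top.

    Reaching the top means the bulk phase is complete; the peer is then
    in sync, because concurrent inline updates filled in everything that
    changed during the walk.
    """
    for key in sorted(table):  # "from the bottom of the table to the top"
        send(key, table[key])


sent = []
flows = {"a": b"EST", "b": b"SYN", "c": b"FIN"}
bulk_sync(flows, lambda k, v: sent.append((k, v)))
print(len(sent))  # every entry crossed the link exactly once
```

The race between a scanned-then-updated entry and the bulk packet carrying its old value, raised later in this call, is exactly the ordering problem this simple sketch leaves open.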
D
But my comment is still: can we get those non-inline parts to be controlled by the SONiC software? So what would be the...
H
Yeah, I think that's a good idea, and then the coordination of how it signals that it's done; it just needs to be "I'm done now," because at that point you need to do the switchover, which is a separate step. As Michael wanted: it doesn't matter whether you're done or not; when you're done, then you need to signal it, and then the SDN controller will come down and say, okay, now advertise BGP.
H
...or a moment later, at a time of their choosing. So it sounds like it would be good to coordinate that piece, and, as you said, there's no reason in the bulk sync why you couldn't just use, let's call it SONiC messaging, whatever that means, because it really is just a transfer of data, and it's in the background, and it's actually not as fast as the inline path. It can be done as a background task, because it can take 10 seconds, 20 seconds, 30 seconds.
H
If you want to finish, you just go from the bottom of the table to the top of the table, and when it's done, it's done. So it sounds right that this would get us one step closer to interoperability: at least the bulk sync could be interoperable for everyone. But everybody will still want some metadata added to it, so that's the only caveat.
E
Yeah, so I think, from our experience, and I think Balaji can confirm this, the longer we take to finish the bulk sync, the more complicated it gets to keep the integrity of the two sides, so we try to finish everything quickly.
F
On the receiver side the packets will be queued, but on the sender side, if we batch them and we take time to pack the data of different flows into one packet, there could be an update to a flow while we are parsing the whole table, and that update may be sent outside of the bulk sync. In that case it would arrive out of order relative to the bulk sync packet at the receiver.
H
The one thing that we'll need to discuss, Martin, you brought that up before and it's a good point, is that it's possible to get a race condition there, and maybe we can work with Pensando on what they've done there.
J
There is an HA start, which comes from the controller, which is the triggering point at which the bulk sync basically starts; in the sense that the pairing starts and the bulk sync starts. Likewise, after the bulk sync ends, it doesn't automatically pair up or automatically advertise the IP address again; there is this "activate admin role" API. Once you push that, then the advertisement starts under the pairing and the role gets established, so basically it becomes active and standby. Yeah, you can go through that.
D
I get this part, but one of my comments is that some of this protocol logic could move up the stack, right? I think it would be simpler if the SAI gives us an API to query all that flow state, and those sync-start kinds of things are really moved into the SONiC implementation. So you just...
D
Just give us APIs to query all your flow state, and some kind of notification when the flow state changes; then we will get all of that and we will do the implementation in SONiC to draw this state and do this slow-path synchronization. But for the fast path, yes, I think we will basically rely on your inline mechanism to do that.
J
So the state, or the role, whatever we transition, is actually streamed through our listening and streaming mechanism, so it can actually be seen by the controller, and based on that it can invoke these start and activate-admin-role APIs. You can make it programmable on your side as to when you want to start the pair-up and start the role establishment, etc.
D
No, I still... I understand what you're saying, but I'm saying it's better to move that complexity, these phases of the implementation, the state transition diagrams, the logic, into SONiC, instead of into the SDK or SAI, because that part is just common. We implement it once and every vendor gets the same benefit; then we don't have to debug this over and over again.
D
So what we really need from your side, the SDK side, is to get that flow state, so we can start using, say, gRPC to transmit all the state from active to standby, instead of letting your SAI or SDK do that transmission, the wire protocol. Those things can be implemented at the SONiC layer instead of the SDK layer. On the other side, I think, is the inline part, which of course has to be done at your ASIC level.
E
Or could it be something like, to start with, an API to say "start bulk sync," where the DPU or the SDK can do the work? The reason I'm saying this is, and I don't have an answer on the specifics yet, but if you go back to the handholding, the coordination between the incremental data path and this bulk sync that we have, I think it might be a little more involved.
D
Yeah, for that one, whether we start with this one or start with the get-flows approach, I would like to hear more. But I think the benefit of doing the simple get-flows, with SONiC managing it, is that it's just one implementation and it will fit all the vendors.
D
...the interoperability that Gerald wants. Then we only have to worry about the inline part, which could be much simpler. However, my preference would be that we start with that implementation, because if we start with one approach and then later move to a second approach, that means we're duplicating our efforts.
D
So if people all agree on moving those state machines into the SONiC part, I think we should start with that. Basically, we define the APIs, and all of you can work with us on enough details, and we can implement that; we take care of the SONiC part, and it will benefit all of you.
B
How do we believe that we are going to bring in all the data, and be able to really store it, and be able to process and send out all this synchronization from there? What's your thought on that?
D
Yeah, that's a good question. I think my answer is: we understand there are some ARM-core CPUs there to do this, because it's running Linux, running SONiC, so there must be some CPU, and the memory is there. And also, I think what we're really doing here is that we don't have to get all the flows stored in memory before we start transferring.
D
We can create some kind of ring buffer, and then, as soon as we get flows from your SDK, we start transferring. So basically, you don't have to store all the flows in memory and then start the transfer; you get them as soon as you get them, you put them in the queue, and then you move them, you send them to your...
D
...gRPC channel to the other side. Then you can define, depending on your system, what the queue size should be, so you don't have to hold all of it. As soon as those flows are transferred to the other side, you can remove those messages from the queue, freeing space, and then you can get more state from your SDK.
H
Oh yeah, first of all, HA always has to be followed by a re-simulation, if that's what you're saying. We standardized on that just for good measure, because the policies could be slightly out of sync between the two sides, and it's just good practice to make sure that you re-simulate after you've done this bulk transfer, this perfect sync. So agreed on that: after the perfect sync there should be a re-simulation.

F
I have some performance concerns about this plan. I mean, I think it's logically feasible, but, like Gerald was saying, the bulk sync can take a long time; it can take 10, 20, or 30 seconds to complete.
H
It might be a minute; I don't know if anybody knows the exact time. We should actually load up our unit and figure that out. Yeah, I don't know how long it's going to take, I'll be honest. And if you're running a hero test, that means you have the maximum number of flows in that kind of test, and you did a switchover.
H
That's under switchover, but what they're saying is, when you do that... sorry, it doesn't only happen on a switchover, by the way, yeah. It happens when you're bringing up a new device, which has to undergo this. And so I think we don't know the answer to it; like, I don't know how long it takes.
D
Yeah, yeah, that's what I'm saying, right. So think about it: our syncd process and your SDK are running in the same process, right? Your SAI is running in the same process, right? So what I'm asking, what my thinking, my approach is, is that syncd will carry all those flows, right, incrementally, right.
D
So as soon as you get a flow, you set up the TCP connection and send it to the other side, right, the same way as you did, right. So therefore I'm not sure, if we're talking about overhead, how much overhead that would be. We just add a new API call to get all your flow state, which you publish.
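As an illustration of the kind of SAI-level bulk query being suggested, a paged wrapper might look like the sketch below. The function name, the cursor-based paging, and the `get_entries` signature are all assumptions for illustration; the actual SAI API, if adopted, would be defined by the community.

```python
def iter_flow_entries(get_entries, page_size=256):
    """Hypothetical wrapper over a SAI-style bulk flow-state query.

    get_entries(cursor, count) -> (entries, next_cursor_or_None) is an
    assumed signature standing in for the SDK's internal flow dump,
    exposed through SAI. Entries are yielded page by page, so the caller
    can feed them straight into the sync channel.
    """
    cursor = None
    while True:
        entries, cursor = get_entries(cursor, page_size)
        for entry in entries:
            yield entry            # hand each entry to the transfer queue
        if cursor is None:         # no more pages from the SDK
            return
```

Pulling pages like this keeps the wrapper thin: the SDK already tracks the flows internally, and the SAI layer only has to expose them and forward each page to the peer.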
D
You already do that internally in your SDK; I'm just saying, expose that through SAI, and then we get it and we set up the same TCP connection to do that thing, right. So it's just some kind of a wrapper from your SDK up to SAI to get that function call and send it, right. But the benefit is that all the vendors are aligned on the same implementation for the state transfer. Now.
E
The thing is to work out what the dependencies are, if we say that the incremental thing, this data path sync that we are doing, is going to be specific to, I don't know, the vendor or to the mode that we are in.
E
If you pull just the bulk sync out, but leave the data path sync to be independent per vendor, then I'm trying to see whether there are tie-ins between the bulk sync and the data path sync. How much advantage do we get from that? Maybe we can go back and see. I'm just saying that there is a tie-in between the two; we can go back and see what those tie-ins are, how much of a tie-in there is, when we are doing bulk sync.
E
What dependencies do we have with the data path sync? Maybe, depending on that, we can see whether there's an advantage in pulling just the bulk sync out and leaving the data path sync implementation-specific, or whether it makes sense to separate it out.
F
There is one open item, which is the metadata that is being synced; since that part is not fully defined, maybe we could use XSight's proposal for it, right, on how to define what metadata needs to be synced between active and standby.
F
That was one, and the second one was: in the case of a proposal where we have to mirror a packet with a separate type of encapsulation, will that affect CPS? And the third thing I wanted to ask about was the inline synchronization between active and standby: the acknowledgments in that part, can they be batched?
E
Yeah, for batching the inline sync, I think the problem will be the buffering in hardware; it might be difficult to buffer those.
B
One comment that I have: I don't believe that is compatible with the other proposal that has buffering.
C
The data... so this is John from XSight. It's possible for the data plane sync to be batched; it's up to the sender. If the sender has the ability to gather up multiple messages, whether those are entire packets or digests of the packets, it could send them all in one single bulk.
C
You know, to the receiver, to be synced. But like I said earlier, it's up to the receiver to decide what the maximum bulk size is. If the receiver is only capable of one at a time, then the sender will only send at most one at a time. So the proposal that we had allows for the possibility of bulking, if the sender has the ability to do bulking, and of course the receiver can decide whether it's going to accept bulks bigger than one. So there's an optimization there; I mean, we've done some math on it. There's an optimization if you can send multiple syncs in the same packet.
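A sketch of the sender-side batching John describes, where the receiver advertises the largest bulk it will accept (possibly just one) and the sender packs at most that many flow digests into each sync message. This shows only the framing logic, not the XSight wire format, and the function name is an assumption.

```python
def pack_sync_messages(messages, receiver_max_bulk):
    """Group flow digests (or whole packets) into bulks no larger than
    what the receiver said it can accept.

    A receiver only capable of one at a time (receiver_max_bulk == 1)
    degrades naturally to one message per sync packet."""
    bulk = max(1, receiver_max_bulk)   # guard against a bad advertisement
    batches = []
    for i in range(0, len(messages), bulk):
        batches.append(messages[i:i + bulk])
    return batches
```

The optimization mentioned is that each batch amortizes one sync packet's header over several flow digests, so larger advertised bulk sizes mean fewer packets on the sync channel.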
D
Okay, yeah, I think we're running out of time, right. So what do we do next? It seems like people are saying, okay, maybe let's start iterating on the same proposal and improving it, get it close, and sort out the details, right. So is that where the thinking is? Yeah.
K
It's a proprietary mechanism between the vendors, maybe, so if you can have those details shared with the community, I think that's a good start. I think everything else is the next level of detail, right: whether we should batch it, or who should accept it, all of those are next-level details. I think having a working design that meets most of the needs is already there; there is no need to reinvent it.
B
Yeah, I agree with that, because this is a working version, a lot of it. Basically, at least we will have one example, a working example, from which we can build further, and it's good to have a working example rather than something without one; without that, we all just have a lot of theory, right.
C
This is John from XSight. I don't think, again, I don't think that the AMD proposal is incompatible with the XSight proposal. So I would be perfectly happy to have the community work through all of the details of a working design and then see if we can map it into a framework that would then allow for future optimization and future interoperability, because, again, I don't think that what we propose is incompatible.
F
Agreed, and John has something that defines some of the metadata; I think it can be used in conjunction with what AMD has. And yeah, if we can keep the options open to optimize further, that would be perfect.
A
So: Sanjay to check out bulk sync and data path sync and extract details to share; John is happy to work with AMD on future optimization and compatibility; and I know Marion has some APIs that he's been working on. That's what I have written down. Do you guys have anything? I'm not the work group leader, I'm just taking notes.
B
I would ask maybe AMD to review them; I don't know if you were present at previous meetings.
B
I think we also, you know, if Sanjay and Balaji can bring in more detail, as we discussed, right, beyond just the presentation: if there is anything they want to open up in terms of the design spec or whatever, which has more detail in it, we can review that as well.