From YouTube: DASH High Availability Working Group, Sep 13 2022
Description
Pick up w/ SAI definitions on page 10
Q: Will this be represented as a P4 model?
A: Sanjay - this could be challenging. Will look for a way to formalize and represent it.
Q: Guohan - it would be great if we can have sample code to call the API.
A: SAI API common config examples (Marian)
Change CP messages opaque vs common
A
You know how it works and things like that. We talked about — Guohan had a few questions here, and I think I put these in the last meeting notes. So this next session we were going to pick up with the MD and then, if Guohan joins, maybe go into these topics here. So I'll stop sharing my screen and let you guys pick up.
B
Yeah, I think somewhere around here — we had just finished the state machine; that is where we had stopped, I think. Next, we can continue on from here. What's coming is we have some SAI definitions, given the background of the state machine we were talking about. So there are two aspects of messaging going on for control.
B
One is, there is some messaging going on between the state machine itself talking back to the SDK — these are the SAI SDK definitions that we have below. And then there is another aspect of communication, which is the communication happening between the boxes. So there is some communication happening within the node, which is the SAI SDK definition messages, and then there is messaging happening across the nodes, which is the control plane.
B
I mean over the control plane channel, and those are defined later on — I think we'll come to that section later, which is the gRPC definitions. So first, to touch up on the SAI case, which is the in-node communication.
A
Do these definitions come from the OCP or the Linux Foundation, or the SAI repo — or are we creating these?
B
So one is — if we look back at the picture we looked at, right, there was this.
B
There is this channel that we were talking about between the DPUs. These were messages that we are passing through. If you remember, we were talking about a stream, so there is that channel that we have to establish. The first call here is to register that CP channel — the register CP channel function.
B
So in this document I'm just suggesting we use something like a POSIX named FIFO kind of thing, so that it can simulate a stream. If there are other ideas that the community has on a better way to represent a stream there, please do suggest. But essentially this is to create a bidirectional stream between the SDK and SONiC — that is what I'm representing here with the register CP channel.
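The named-FIFO idea above can be sketched as follows. A FIFO is unidirectional, so one pipe per direction is used to fake a bidirectional stream; the path names and the `register_cp_channel` message are illustrative, not actual SAI or SONiC artifacts.

```python
import os
import tempfile
import threading

def make_channel(dirpath):
    """Simulate a bidirectional stream with a pair of named FIFOs,
    one per direction (a FIFO itself is one-way)."""
    to_sdk = os.path.join(dirpath, "sonic_to_sdk")
    to_sonic = os.path.join(dirpath, "sdk_to_sonic")
    os.mkfifo(to_sdk)
    os.mkfifo(to_sonic)
    return to_sdk, to_sonic

def sdk_side(to_sdk, to_sonic):
    # The "SDK" end echoes each message back, tagged, to show both
    # directions of the simulated stream working.
    with open(to_sdk, "rb") as rx, open(to_sonic, "wb") as tx:
        msg = rx.read()
        tx.write(b"ack:" + msg)

with tempfile.TemporaryDirectory() as d:
    to_sdk, to_sonic = make_channel(d)
    t = threading.Thread(target=sdk_side, args=(to_sdk, to_sonic))
    t.start()
    # "SONiC" end: send a register message, then read the reply.
    with open(to_sdk, "wb") as tx:
        tx.write(b"register_cp_channel")
    with open(to_sonic, "rb") as rx:
        reply = rx.read()
    t.join()
    print(reply.decode())
```

A real implementation would keep both ends open for the lifetime of the channel rather than opening and closing per message; this sketch only demonstrates the pairing.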
B
Then there is the get capabilities function. In the state machine we had this capability exchange, but before the exchange happens with the peer node there is a capability get from the local hardware implementation — from the local DPU implementation.
C
Just a quick question on the SAI API. As you know, for the DASH overlay APIs, as a rule we basically said we are going to derive these APIs through a P4 model, and we also said that the P4 model is going to be the source of truth from which we are going to auto-generate these APIs. Now, for the underlay APIs prior to these overlay APIs, in the OCP SAI, generally we have what we call it.
C
So with this discussion that we are having right now, are we just basically brainstorming here, and then eventually we are going to have some sort of a model — whether it's Visio based or P4 model based — so that we can officially have some representation saying, okay, this is how it's going to be depicted?
B
Yeah, so that's a good question. Actually, I don't know the correct answer. One thing that I know is that representing this as a P4 model might be a little challenging, in my opinion. Maybe I can work on that Visio model and see if that is a better way to formalize and represent it.
B
Yeah, because this is not truly a P4 behavioral thing, right? Some of these — get capabilities and all of these — are not truly a P4 thing. So maybe we can explore how to formalize that.
C
You could have some sort of examples of how you can show this thing. But all in all, I guess the way SAI actually represents anything is as a logical pipeline, right? So however you represent it, it's just to basically show that, okay, we need to get something done, and then eventually something should be represented.
C
Eventually, what has happened is that even for the underlay APIs, the PINS project has represented even the SAI — sai.p4 — in P4 as well. So those are good examples to see how, even though you may think that, okay, this may not be representable in P4, you may find some ways to represent it in P4 also.
B
Okay, cool. Yeah, so I think for documentation we can do that. Maybe now we look through — my purpose here was to put the messaging here so that we can visualize or understand it better. When we were talking about the state machine, we were saying: we do this handshake with the DPU; this is what we are going to send across to the node, etc.
B
I thought putting a little bit more structure to it would make it more easily understandable, so that everyone reading it can get a mental model of what's actually happening behind the thing. It's probably not the best way to document it; I think as a next step we can see how to formalize and document it.
B
All I wanted to show in this peer capabilities part is that there are two sets of capabilities that we have talked about: there are SONiC implicit capabilities, and there are some capabilities which are very implicit to the DPU itself.
B
Yeah, so the attributes are defined here. There are some which are explicit, and then there is one which is a list type — which is actually an opaque blob that comes through.
B
So that is one approach. From the last discussion, Marian, we thought we would have another discussion. We are thinking the DPU, right now in this design — because we were thinking the heartbeat can be pretty aggressive — so we thought we can offload it to the DPU itself to do the heartbeat.
B
Then there is the process peer capabilities call, which happens once the SONiC stack has exchanged the peer capabilities. What's happening is each local node is using get capabilities to get the local capabilities and then exchanging them across. I think the exchange happens via the control plane message.
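The three-step handshake just described — get local capabilities, exchange with the peer, then let the implementation judge compatibility — can be sketched like this. All function names, capability fields, and the compatibility rule are illustrative assumptions, not actual SAI symbols.

```python
# Sketch of the capability handshake; names are illustrative, not SAI.
def get_capabilities():
    # Step 1: query the local DPU implementation for its capabilities.
    return {"max_flows": 1_000_000, "bulk_sync_threads": 4, "opaque": b"\x01\x02"}

def exchange(local_caps, send, recv):
    # Step 2: swap capability sets with the peer over the control-plane channel.
    send(local_caps)
    return recv()

def process_peer_capabilities(local_caps, peer_caps):
    # Step 3: the implementation decides whether the peers can pair up.
    # The rule here is a placeholder for vendor-specific logic.
    return local_caps["max_flows"] <= peer_caps["max_flows"] * 2

# Simulate both nodes in-process with a trivial "channel".
node_a = get_capabilities()
node_b = {"max_flows": 800_000, "bulk_sync_threads": 2, "opaque": b"\x09"}
peer_of_a = exchange(node_a, send=lambda c: None, recv=lambda: node_b)
compatible = process_peer_capabilities(node_a, peer_of_a)
print(compatible)
```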
B
We will see that in a minute, but there is an exchange happening between the nodes, and once that exchange happens, each of the SONiC stacks can do the first level of validation that they are compatible and can actually peer up. After that, there was also this component where the capabilities are hardware specific — specific to the DPU, the underlying implementation.
B
So for that, this process peer capabilities function is a SAI API that gives a chance to the underlying implementation to signal whether it is compatible or not, depending on which DPU capabilities were passed.
B
So if the DPU-to-DPU control message comes via the control plane channel, it needs to be relayed back to the DPU, to the SDK. That is the process DPU control message.
B
Oh no, no. So, for example, let's say we're doing the bulk sync. The example that we were talking about in the last meeting is that each of these DPUs may have specific ways which are more efficient for it to do the bulk sync. For example, if there were multiple threads or multiple slices of flows, it might be much more efficient to get flows from one thread which is free to process flows.
B
So there might be some message that the DPU relays back to the peer side, so that some number of flows, or a specific slice of flows, can be pushed across on the bulk sync channel. So this is a message that is going from DPU to DPU. These are the kinds of messages we are talking about.
B
We can say there is a continuum, right? There may be things that are common — things that might be common across multiple DPUs, where we have the same set of requirements — that we could make into this common control message. But there can be some things tied to the implementation itself.
B
One implementation may have different slices for performance for the flows, because we don't want to hold a lock while we are processing the others. Another implementation may not have the same thing, or they may have a different set of requirements, restrictions, or optimizations that they can implement. If that is the case, the question is: how do you achieve this?
D
Because initially, if I'm not mistaken, one proposal was to sync the already opened connections from the active DPU to the one that is only coming up through the control plane channel, which would be universal. If you think there should still be some proprietary aspect to that, it ruins the whole point of having a common way to synchronize.
D
The open connections between two DPUs.
B
Not really. The connection itself — the flow sync messages — we will see in the control plane, in the following messages. The actual flows themselves are being synced; the flow sync messages during the bulk sync are being synced via the control plane channel completely.
B
There is no proprietariness in that. Everything that is being synced is —
B
— is all defined exactly as it is right now. The only thing that will need to be DPU specific is if there is any optimization to be had. We see some optimizations depending on the implementation, from our experience; there are some optimizations to be had by actually knowing what the implementation is. So if there are things like that, then you need some mechanism to be able to communicate.
B
You need some mechanism to communicate those optimizations, or to make use of them, and since these optimizations are cross-node — not within the node, but something that needs to be communicated across — that is the DPU control message. But the flow sync messages themselves, as we will see when we come to them, are all in the open. Now, it could be that one of the implementations does not need to pass any of the DPU control messages; that is fine.
A
I see. So whatever is needed that can be generalized and made common across different vendor hardware is specified in the non-highlighted portion, and —
C
Or you can have something like — basically, even if you have a control message, you can have a control message function which can process all the control messages, and one of the messages could be, say, a common message, such as, you know, speed, and —
C
— there could be some bits in there which can say, okay, if somebody wants to define it as an extension — a vendor extension, or proprietary — they can do that. So perhaps that would be the approach.
B
Yeah, so I think you can add additional things here. Right now there is an attribute type and some attribute data, and for the attribute type I was thinking there would be one op for whatever vendor-specific information there is. I haven't added other things, but I see the point: there could be, for example, a heartbeat here, or something else which we find is a more common denominator across multiple implementations.
B
Similarly, there is this flow sync message — this is the actual flow sync message itself; that other one was just the back channel. So the actual flow sync messages are happening in the, quote unquote, open, if you want to say that — in a well-defined format. So that is this.
B
Yeah, so this one is, for example — Marian, here, let me show the state machine and show where we could potentially use this. There are some of these things where we are controlling the move from one state to the other. So we are saying start sync, for example.
D
Yeah, but again, everything is opaque. I see type uint16_t — shouldn't we make it all standardized, like define the enum: start sync, and so on?
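The suggestion above — replace the opaque `uint16_t` with well-defined operation values — could look something like this. The enum names and values, and the vendor-range cutoff, are hypothetical, not from any ratified spec.

```python
from enum import IntEnum

# Hypothetical enum for the HA control-message operation field, replacing
# an opaque uint16; names and values are illustrative only.
class HaOp(IntEnum):
    START_SYNC = 1
    STOP_SYNC = 2
    BULK_SYNC_DONE = 3
    VENDOR_BASE = 0x8000  # values at or above this are vendor-defined

def describe(op: int) -> str:
    # Standardized ops decode to a name; anything in the vendor range
    # is reported as opaque rather than rejected.
    if op >= HaOp.VENDOR_BASE:
        return f"vendor-specific op 0x{op:04x}"
    return HaOp(op).name

print(describe(1))
print(describe(0x8001))
```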
D
You expect the control plane to actually set this operation.
D
You know, usually when we have a SAI proposal, there is an example of how the APIs are called.
C
You can split them into two, right? If you have 16 bits, maybe the higher eight bits you can reserve for standardized messages, and then the other bits can be for vendor extension attributes. That way, at least you have some means of really defining it, and the vendor part can stay opaque for any implementation, so to speak.
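The bit-split just proposed can be shown concretely: the high eight bits of the 16-bit field carry the standardized message class, and the low eight bits are left for vendor extensions. The layout is an illustration of the idea, not a ratified format.

```python
# Carving a 16-bit message-type field: high 8 bits = standardized class,
# low 8 bits = vendor-defined sub-type. Layout is an assumption.
STD_SHIFT = 8

def pack(std_class: int, vendor_sub: int) -> int:
    assert 0 <= std_class < 256 and 0 <= vendor_sub < 256
    return (std_class << STD_SHIFT) | vendor_sub

def unpack(msg_type: int):
    # Returns (standardized class, vendor sub-type).
    return msg_type >> STD_SHIFT, msg_type & 0xFF

mt = pack(std_class=0x02, vendor_sub=0x31)
print(hex(mt), unpack(mt))
```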
B
Got it, okay. Yeah, I don't see it in this, at least in the CP control message — for now I didn't see any vendor-specific part, but it would be a good thing to keep it that way. I think over time we will realize there might be some vendor-specific things that we want to do.
B
Cool. So I take two comments from here: one is the enum thing; the second is sample code on how it is called. This needs some work in terms of probably making it more SAI compliant, I guess, but the intent was just to draw a proper mental model of how it is working — the definitions.
B
I will look at documenting it in a more accurate fashion.
B
Okay, so moving to the control plane message definition. Again, like I was saying, the message definition that we are talking about here is for the control plane channel, and it is defined as a bidirectional gRPC channel.
B
So this is the gRPC channel we were talking about. It is a bidirectional stream, and the stream carries the sync message. The sync message can be either a flow sync message or a control message. The control message is one of: a DPU control message — the equivalent of what we were talking about there — or it could be a compat check.
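The message hierarchy just described — a sync message carrying either flow-sync data or one of several control messages — can be mirrored roughly in Python dataclasses. Field names approximate the draft being discussed and are not the final schema; in the real design this would be a protobuf `oneof`.

```python
from dataclasses import dataclass, field
from typing import List, Union

# Rough mirror of the draft protobuf layout; names are approximations.
@dataclass
class FlowInfo:
    vip_id: int
    key: bytes
    metadata: bytes  # policy results ride here

@dataclass
class FlowSyncMessage:
    flows: List[FlowInfo] = field(default_factory=list)  # repeated = batch

@dataclass
class CompatCheck: ...
@dataclass
class CompatResult: ...
@dataclass
class CpControlMessage: ...
@dataclass
class DpuControlMessage: ...

ControlMessage = Union[CompatCheck, CompatResult, CpControlMessage, DpuControlMessage]

@dataclass
class SyncMessage:
    # In protobuf this would be a oneof over these alternatives.
    body: Union[FlowSyncMessage, ControlMessage]

msg = SyncMessage(FlowSyncMessage([FlowInfo(vip_id=1, key=b"5tuple", metadata=b"")]))
print(type(msg.body).__name__, len(msg.body.flows))
```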
B
There is a compat results message, and then there is a CP control message. So you will find very similar equivalents to the SAI that we were talking about: the compat check and the compat results were tied to the capability exchange; then there's the CP control message, which we were just talking about a few minutes back; and then there's the DPU control message, which was the DPU-to-DPU control.
B
Then there is the compat check message itself. The compat check has one piece which is the VIP info: there is a VIP ID — for the two VIPs that we have — an IP address, an admin role, and then there's a protocol matrix. Sometimes, when we need to control things like BGP, there is a matrix that is needed.
B
So that is the protocol matrix, and then there is a DPU info which is specific to the DPU. This mirrors what we saw on the SAI side, where there is a heartbeat interval and a miss count, which we saw in the DPU capabilities, and then there are the opaque capabilities themselves — the vendor-specific capabilities we were talking about.
A
No, what I mean is, let's say you run an older version where you don't support this extension of the new message. What would be the behavior? That's what the versioning is for — for the message itself, so that you know what information is coming.
B
Yeah, so when you're defining on gRPC and protobuf, one way this is handled is that protobuf itself handles the versioning. You could have two peers which are working off two different definitions of the message. Some of the invariants — I think this is standard practice — are that the field IDs don't change. Whenever we want to make any extensions to the protobuf, there are some rules or guidance.
B
So the field IDs don't change, and protobuf itself will handle it: if it receives a message which has a field ID that the local definition does not understand, there is a way that protobuf handles it when it unmarshals it.
B
To be able to handle getting a message from an older version, for example, which is not specifying a parameter — having a reasonable default to be able to work with — those are things we can do. But specifying a header-version kind of thing, a message version, is not strictly necessary with protobuf.
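The compatibility behavior just described can be sketched by hand: an older peer's message simply omits newer fields and the receiver falls back to defaults, while unknown fields from a newer peer are tolerated rather than causing a failure. Field names are illustrative; real protobuf does this with field numbers on the wire.

```python
# Hand-rolled sketch of protobuf-style schema evolution; field names are
# illustrative, not from the DASH HA draft.
DEFAULTS = {"heartbeat_interval_ms": 1000, "miss_count": 3}

def decode(wire_fields: dict) -> dict:
    decoded = dict(DEFAULTS)
    for key, value in wire_fields.items():
        if key in DEFAULTS:
            decoded[key] = value
        # Fields the local definition does not know are skipped, mirroring
        # how protobuf tolerates unknown field numbers when unmarshaling.
    return decoded

old_peer_msg = {"heartbeat_interval_ms": 500}                   # older schema
new_peer_msg = {"heartbeat_interval_ms": 500, "jitter_ms": 20}  # newer schema
print(decode(old_peer_msg))
print(decode(new_peer_msg))
```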
B
The semantic — oh yes, okay. So I was thinking we could capture that in the SAI — yes, I think I misunderstood. So in the compat check itself we need something else, say an HA protocol version or something like that — is that what you meant?
A
Another question I have is: you seem to have this whole thing going through the SONiC stack. What component in SONiC is running the control channel protocol?
B
Yeah, so I think we need to decide which component will run the state machine. I think it would be the same component that the state machine is running in.
E
Yeah, so that's why I think we probably want to implement this in the syncd process. The reason being that, first, this is a generic SONiC feature, so you need to implement it in SONiC; and second, syncd is directly linked with the SAI.
E
Yeah, whether it's a new thread or other things — I think we're definitely open to that.
E
Oh, but by the way, I made a comment — see if this is right. Usually, accompanying a SAI proposal, people will have sample code to show how to invoke those SAI functions. Generally that gives people more of an idea of how those APIs are used, people can walk through that code, and it will be the basis for the actual implementation.
A
It's okay, yeah, that's fine — and I took a note to try and figure that out.
C
We are processing them, it's going to be a channel, and we are also literally getting things through gRPC, so we do have some implementation of gRPC as well. We have to see whether we want to reuse the gRPC implementation that already exists in SONiC, and how it really ties up with the stream channel that we are talking about here.
C
Yeah, so Guohan, what I'm talking about is: gNMI runs over gRPC, right? So if we already have gNMI, and gNMI has gRPC, do we want to duplicate it? Or do we want to just have one implementation and utilize that? I think those are the things that we need to really look at, with respect to all the packet I/O that we want to process.
C
There are basically only these two things: flow sync and control plane messages. These are the two things that we are talking about.
E
Oh, that is purely data plane functionality. There were two pathways: the bulk sync and — I don't remember what it is called — the incremental sync. Those packets, I think, are being encapsulated and sent to the other side directly from the DPU; those packets are not going through the control plane.
C
Okay, yeah — depending upon the rate at which it happens, we just have to be careful in terms of how much we want to expose through syncd versus how much we want to handle over some other means.
E
Yeah, so maybe this is related to the other question we raised — for example, can we reuse the gNMI container and those kinds of things. I think the overhead might be a little bit higher — the path might be a little bit longer, because it's a different container.
E
You need some kind of inter-process communication so you can send those flow records to the gNMI container, and from there you go to the other side, then get it back to syncd, then to the SAI. That path is a little bit long. So it's probably better to shorten that path so that we can sync directly between those two components.
E
We were thinking to just use syncd, because it basically links to the SAI library, you can get all those messages from the SAI using native API calls, and then have a gRPC channel to sync. That's probably the lowest overhead we can think of.
C
Yeah, clearly this may be the first place we can try it, but just keep in mind what that might entail, or any other dynamics within SONiC that we might run into. But you're right — we can try the first thing, see how it performs, and then look at reusability or scale, or whatever the case may be; we have to keep that in mind.
A
Is there an acknowledgement in the process of bulk sync?
B
There is no specific ack per message, unless the implementation needs something to say, okay, we have processed this, I need to get to the other one. But there is an acknowledgement at the end, which is the bulk sync done.
A
I see. And the acknowledgements for bulk sync — are they handled and processed in the DPU itself, or are they —
B
It passes through SONiC. SONiC can handle it, but it needs to pass it back to the DPU too, because there is some state on the DPU that we will potentially need to change. For example, like we were talking about in the last meeting, once bulk sync is done — for the flows that have already been synced across via the DPU sync messages — there might be a marker we need to put that the bulk sync is complete.
B
So that was the compat message that we looked at, and then there is the equivalent of what we were talking about last time. I think this is what Marian was also bringing up: we need an enum. I didn't know the right way to put it in, so I left it as a uint16, but here, as you see, it's a well-defined set of operations.
B
This is the analog of what we saw on the SAI side, for controlling the state machine on either side. Other than this, there is the flow sync message itself. The flow sync message is a batch of flow info messages — it's a repeated field, a batch — and each flow info has a key, the VIP ID it belongs to, and then the policy results themselves are carried in the metadata.
B
Right, so — give me one second — we looked at the flow sync message itself, and then there are the different incarnations of the control messages that we saw. Since we have about six minutes remaining, let's try to see if we can complete the different message flows; that way we will have gone through one round of the list. We can add more flows as we go.
B
We have the unplanned switchover and the planned switchover, and the node pairing and bulk sync. These flow diagrams are to clarify: the state machine that we saw had a node-centric view — that was more of what's happening on one node in terms of state. These ladder diagrams, these message flow pictures, are supposed to give —
B
The intent was to give a two-node view of how each of them moves in relation to the peer side. In this picture, the left side is one node and the right side is another node. The left side is the node that was already booted up and existing; it was in the standalone primary state and is actively forwarding traffic. At this point in time the other node comes up, and there is a peer connect.
B
This is where, on the heartbeat channel, we see a heartbeat, and at that point in time the SONiC state machine — which, from our discussion, is syncd, or wherever the state machine is running — establishes the control plane channel, the gRPC channel. Once that gets established, there is a notification going back to the SDK to register the CP channel, which we saw in the SAI SDK.
B
After this there is a compatibility exchange, and a compatibility result that happens here. Then there is a notification pushed when the newly coming-up node is ready to accept — so there's a start bulk sync message that is sent over the CP channel, and this is related to the implementation.
B
And then you see that there are a bunch of these flow sync messages being exchanged to sync the channel. Optionally, there could be messages — the DPU control messages — flowing in the reverse direction, if you needed to do any of the —
B
— flow control or anything like that; there could be these optional DPU control messages that we talked about. And once this is done, the primary side — the one that was already active —
B
— knows that the bulk sync is complete, and then the bulk sync done notification is passed back, relayed all the way through to the other side. One thing to note here: the black messages — all the dark ones — are messages happening on the CP channel. I also tried to represent the orange lines; those are actually DPU sync messages that are going on.
B
That was just to illustrate that, as we are doing the bulk sync, there will also be DPU sync messages happening between the two nodes — between the two DPUs. This will be for any new connections that are getting formed, or updates that are happening, depending on the state; I think we discussed this two weeks back.
A
So we're one minute away. Guohan, Prince, Marian — everyone — I'm not sure how you're doing on time, if you want to keep going or save this pairing for next time.
A
Okay, thank you for taking us through this, Sanjay. It looks like we'll just need one more session next week.