From YouTube: DASH High Availability Working Group Aug 2 2022
Description
AMD HA presentation
A
Good morning, my name is Balakrishnan Raman, and let's do an overview of AMD Pensando's high availability implementation. Sanjay, can you move on to the next slide? So the design goals that we had: all connections set up before switchover should work reliably after planned and unplanned switchovers.
A
So, basically, we support a very high CPS — connections per second — in standalone mode, and we wanted to retain the same with high availability as well. So we wanted to, basically —
A
Sorry — okay, so with the high connections per second, obviously the setup and teardown have to happen at a high connections-per-second rate.
A
We also wanted to do the sync of connection setup and teardown at the data path rate, so that we can support that high CPS rate. And since we are syncing with the data packets, we wanted to sync only the required packets — not all the packets — to the secondary, because we wanted to conserve the packets per second for the data traffic.
A
So these are the design goals with which we started to design and implement our high availability feature. Okay, Sanjay, you can go on to the next slide.
A
Okay, so with that we started exploring a few flow replication options. The three options we explored were: one, asynchronous flow replication; two, inline flow replication with secondary forwarding; and three, inline flow replication with primary forwarding.
A
The first option, asynchronous flow replication: flows are established between endpoints even before they are synced to the secondary.
A
What that means is that the syncing happens offline, after flows are set up in the primary. The primary sets up the flow and forwards the packet to the endpoint even before it syncs it to the secondary, so the secondary doesn't have the state yet.
A
Then, offline, the primary goes and syncs the state with the secondary. What this means is there is a possibility that connections could be missing on an unplanned switchover.
A
In other words, that is what we term here as not reliably synced, and those missing connections will not be able to resume after switchover, because there is no connection state. Without the connection, all the packets that come to the new node after switchover will get dropped, so they cannot resume.
A
So that's asynchronous flow replication, where the sync happens offline. Then we looked into the next option, where we do inline flow replication — meaning flows are synced inline.
A
When we receive a data packet that sets up the connection in the primary, we set up the connection in the primary and redirect that data packet, with some metadata, to the secondary, so that the secondary can also set up the connection; the state is thereby replicated to the secondary. In this particular option, the secondary itself will forward the packet: because the sync happened with the actual data packet, the secondary forwards the data packet on to the endpoint.
A
There is no acknowledgement from the secondary in this case to say that, yes, the state has been replicated: as soon as the first packet sets up the connection in the primary, it is redirected to the secondary, the secondary sets up the connection, and then it forwards to the endpoint.
A
The advantages here: packets are not let go to the endpoint before being replicated in the secondary, so the connection states are present in both primary and secondary — the sync is reliable in the sense that no connection is missed in the secondary, as flows are first synced before sending packets to the endpoint. Also, we use the data packets, and the data packets are redirected at the data path rate, so the sync happens at the data path rate and we can sustain the high CPS that we support in our data path. And we don't do any buffering, so there are no drops due to sync delays or anything like that, as we redirect the actual data packet itself to the secondary for syncing the state.
A
Let's say the connection is in the process of getting terminated — it's sending a last ACK or a reset packet. The TCP control packets are what we redirect to the secondary to sync the state, so the last ACK or reset will be redirected to the secondary to sync the state of terminating the connection.
A
If this last ACK or reset gets dropped before reaching the secondary, then what happens is: the primary has seen the last ACK or reset, so it will clean up the connection, whereas the redirected copy didn't reach the secondary, so the secondary will still maintain the connection. The connection is now out of sync between the primary and secondary.
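The divergence described here can be shown with a toy model. This is not the actual P4 implementation — the flow tables, keys, and function names are all invented for illustration — it just demonstrates why option 2 (no acknowledgement from the secondary) can leave the two flow tables out of sync when the redirected teardown packet is lost:

```python
# Toy model (assumed names, not the vendor code): option 2, inline flow
# replication with secondary forwarding and no acknowledgement back to the
# primary. If the redirected last ACK / reset is lost on the inter-DSC link,
# the primary deletes the connection while the secondary keeps it.

primary = {("10.0.0.1", "10.0.0.2", 443): "ESTABLISHED"}
secondary = dict(primary)  # state was replicated at connection setup

def on_teardown_packet(key, redirect_delivered):
    # The primary sees the last ACK / reset and cleans up unconditionally.
    primary.pop(key, None)
    # The redirect to the secondary may be dropped in transit...
    if redirect_delivered:
        secondary.pop(key, None)

key = ("10.0.0.1", "10.0.0.2", 443)
on_teardown_packet(key, redirect_delivered=False)

print(key in primary)    # False: primary cleaned up
print(key in secondary)  # True: secondary is now out of sync
```

With an acknowledgement (option 3, discussed next), the primary would keep the state until the secondary confirmed the teardown, so the tables could converge.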
A
So
so
that's
the
so
that's
a
disadvantage
by
not
having
the
acknowledgement
coming
back
from
the
secondary
to
know
whether
the
connection
has
been
really
synced
or
or
not.
So
then
we,
we
went
and
explored
the
other
option
where
we
we
do
inline
flow
replication.
A
But
what
about
getting
the
packet
back
from
the
secondary
and
let
the
primary
do
the
forwarding?
Finally,
and
not
the
secondary
this
way,
it
will
serve
as
an
acknowledgement
of
the
replication
state
replication
and
the
primary
also
will
do
the
I
mean
we'll
receive
the
inbound
packet
as
well
as
it
will
forward
the
outbound
packet
kind
of
consistent
there.
A
So this option is the same as the one above with respect to inline replication, with an additional acknowledgement: the secondary, after setting up the flow, redirects back whatever data packet was sent from the primary to the secondary. This lets the primary know that the state has been synced, and then it forwards the packet to the endpoint, knowing that the connection has been replicated to the secondary.
A
It has the same advantages of inline replication that we discussed in the previous option, and in addition it handles the issue of the last ACK or reset getting dropped: if it gets dropped, the acknowledgement won't reach the primary; as a result, the primary won't clean up the state — it will still maintain it — and retransmission will happen from the endpoint, and the primary still has the state in this particular case.
A
So it will again send the redirect to sync the state with the secondary, and that way it will converge.
A
So that way, the acknowledgement allows us to keep the connections in sync.
A
Yeah — so that's why I mentioned that the acknowledgement that comes back is the actual data packet, and that is the data packet which we forward to the endpoint. Yes, we expect the acknowledgement to come, and it is that acknowledgement, carrying the data packet, which is actually forwarded to the endpoint.
A
But again, you have the same issue, right? You forwarded the packet, so the endpoint is going to move forward and send the subsequent packets, or it's going to terminate the connection — but the ack might not come, because there can be drops.
B
Is this being done for all packets when you're in the established state, or only when there are state changes?
B
So doesn't that lead to another situation where, when you're in the established state, the primary doesn't time out — because it's not idle, the flow keeps moving — but the secondary could time out the connection, because it's not seeing any of the packets while you're in the established state? Do you have to periodically keep the secondary alive?
A
Yeah — basically we have coordinated aging between the primary and the secondary. Like you said, the aging process basically happens in the primary; in the secondary there is a kind of aging process too, but that timeout is quite large. So, as you said, there is a coordinated exchange between the primary and secondary to keep the connection alive if data is getting forwarded in the primary.
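The coordinated-aging idea can be illustrated with a toy timing model. The timer values below are purely illustrative assumptions (the talk doesn't give the real ones); the point is only the relationship — the standby's idle timeout is deliberately much larger than the primary's, and the primary refreshes the standby well inside that window:

```python
# Toy sketch of coordinated aging (illustrative values, not product defaults):
# the primary ages flows on a short idle timeout; the secondary's timeout is
# much larger, and the primary's periodic refresh keeps active flows alive
# on the secondary even though data only traverses the primary.

PRIMARY_IDLE_TIMEOUT = 30       # seconds, primary-side flow aging
SECONDARY_IDLE_TIMEOUT = 300    # deliberately much larger on the standby
REFRESH_INTERVAL = 60           # primary -> secondary keepalive cadence

def is_expired(last_activity, now, timeout):
    return now - last_activity > timeout

def secondary_keeps_flow(now, last_refresh):
    # The standby only ages out a flow if the primary stopped refreshing it.
    return not is_expired(last_refresh, now, SECONDARY_IDLE_TIMEOUT)

# Flow busy on the primary; standby last refreshed 120 s ago: kept.
print(secondary_keeps_flow(now=1000, last_refresh=880))   # True
# No refresh for 400 s (> 300 s): the standby ages it out.
print(secondary_keeps_flow(now=1000, last_refresh=600))   # False
```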
A
Thanks. So, in general, to your question: yes, it's a trade-off — to make the replication reliable there are some extra exchanges we had to do between the primary and the secondary. At the same time, we are trying to optimize which state-changing packets we actually need to sync between the primary and the secondary.
C
You may or may not have been here, but I think Xsight did some interesting measurements on what happens, at least in Azure today, when there are shorter aging timeouts. For example, if you receive a FIN and you don't receive the FIN-ACK within five seconds, I think they tear it down; or if you receive a FIN-ACK and you don't receive the ACK within five seconds, they tear it down.
B
Right — so suppose, for example, we get an ACK for a state that is in the teardown bucket, and we don't have that flow anymore, because we have sent it to the endpoint and have already deleted that flow. We can always ignore that, right?
A
Yeah, we can ignore that. The particular case that we were talking about was the one where we try to send the last ACK to the secondary and it gets dropped, but the primary has cleaned up the connection — with the second option, particularly. The endpoint wouldn't have received the packet, so it will keep retransmitting; that was the case I was talking about.
C
You know — six packets into one DPU, out of that DPU to the mate DPU, back from the mate DPU into the DPU again, and then out again. It's going through too many hops. If you count it all up, it's six control packets, each coming in and out twice; three million times six is 18 million, and then you double that and you're up to 36 million packets per second just in syncs. Doesn't that seem excessive?
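The back-of-the-envelope math in this comment works out as stated. A quick check of the arithmetic (3 million CPS is the figure used in the discussion, not a measured number):

```python
# The sync-overhead estimate from the discussion: six TCP control packets
# per connection, each handled twice (once by each DPU in the pair), at
# three million connections per second.
cps = 3_000_000
control_pkts_per_conn = 6   # SYN, SYN-ACK, ACK, FIN, FIN-ACK, final ACK
traversals = 2              # each sync packet crosses both the primary
                            # and the mate DPU

sync_pps = cps * control_pkts_per_conn * traversals
print(sync_pps)  # 36000000 -> 36 million packets per second just for syncs
```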
C
It
is
excessive
because
you're
not
going
to
have
much
left
over
for
user
traffic.
So
that's
why
I
think
we
need
to
I
get
the
benefits
for
sure,
but
we
need
to
optimize
it
because
it's
just
too
much
messaging
and
quite
frankly,
it's
sending
whole
whole
packets
and
talk
when
the
other
side,
all
it
needed
was
to
update
its
state,
not
not
3c,
pole
packets,
but
talk
about
that
later
too,
but
anyways
we
get
these
three
that's
good.
We
should
go
on
with
your
presentation,
so
we
can
absorb.
A
Okay, okay — so the third option is what we have implemented. So — can you go on?
D
A quick question: you talked about how, of course, only the control packets are synced. So, besides the connections, do we not want to synchronize any statistics or telemetry data between primary and secondary?
A
Statistics-wise — like packets transmitted and received — there are the control packets that we receive on the standby, but all the other data packets go only to the primary. So basically we maintain stats in both the primary and the secondary, and we present that to the controller.
A
Basically, the controller can present the stats as coming from the primary and the secondary. One example I can give you: before switchover, the packets and bytes transmitted and received will be incremented in the primary but not in the secondary; after switchover, for the same connection, they will be updated in the secondary — or rather, in the secondary which becomes standalone at that point.
D
So there are separate control packets that essentially synchronize all these statistics between primary and secondary, besides the database?
C
Yeah, this isn't even a goal for this. The SDN control plane gathers up the information and aggregates as needed. In a controlled failover that's not even a problem — it's just aggregated — and in a total failure, quite frankly, we don't know what's going to be where, but the control plane will gather it, or it will be pushed to the control plane periodically. This HA program doesn't have a goal of maintaining all the stats.
C
That's
to
be
excessive,
like
we
would
have
no
bandwidth
for
anything
at
that
point.
Not
it's
not
really
a
something.
That's
doable,
okay,
but
we
will.
We
do
have
the
notion
of
even
in
you
know,
even
in
sonic
you
can
have
a
pub
sub
interface
and
you
can
push
statistics
periodically
up
to
the
different
databases
etc,
and
that's
what
we
planted
it
and
aggregate
sdn
control
plane
will
aggregate.
D
I have a basic, fundamental question about when we replicate this state to the peer. I just wanted to confirm: when the first packet goes to the other side, it carries the derived data — for example, if it is VXLAN-encapsulated, the policy said encap with VXLAN and the ACL said allow — so I think we will be carrying this derived data, and the other side will just cache it, right? It is not going to do the CPS processing again.
A
Yes — we do carry metadata when we redirect the packet to the secondary, and that metadata has some of this information, like the policy evaluation result from the primary, which is carried to the secondary, and some rewrites. Yes.
A
With respect to the virtual IP: yes, we use a virtual IP — I will come to those details in the subsequent slides. There is a common virtual IP that is shared across these pairs.
A
I'll come to that in those slides — yeah, sure, thanks. So, with respect to the deployment topology: one topology I have shown here is a DASH appliance with one or more DSCs — DSC is the AMD Pensando terminology, equivalent to the DPU. Each DSC in the appliance is paired with another DSC in the other appliance, and these DSCs are dually connected to both the ToRs, and the ToRs are connected to the spines in a Clos network.
A
With our implementation, we have multiple sync channels between these DSCs. As you know, we have a P4 data path and we also have a software data path. There is a P4 data path channel, which is the one that does the inline flow replication: as and when the packets come, redirecting the packet with metadata is all taken care of inline in the data path, and that is the channel used to do that.
A
And then there is another channel, the software data path channel, which is used for things like bulk sync. Bulk sync happens when one DSC already exists with state, and a new DSC comes up and pairs with the existing DSC.
A
This deployment I have shown is an appliance with multiple DSCs, but we can also have other topologies: you can have just two DSCs doing the pairing, connected to the ToRs, or you can think about DSCs within a smart switch doing the pairing between them.
A
Okay — so the option that we have implemented is inline flow replication with primary forwarding, and I will go into a little bit of detail here. As you see in the picture, we have the primary DSC and the secondary DSC, and the idea is that we present the primary DSC and the secondary DSC as a single logical DSC.
A
What that means: whenever you have a connection set up between VM A and VM B — say you have only a standalone DSC — you set up the connection in that DSC and then you forward the traffic to VM B.
A
So the flow is set up and cleaned in the logical DSC before we forward the packets to the endpoint. The way we set up the flow in the secondary DSC is by redirecting the data packet — by data packet I mean, in the case of TCP, the control packets like SYN, FIN, and reset, and for UDP, basically —
A
— the UDP data packets themselves. The packets that are redirected also carry additional metadata, and this is all handled in our P4 data path between the primary DSC and the secondary DSC: the metadata carries the policy evaluation and the rewrites from the primary DSC to the secondary, and the secondary DSC uses that. We will come to the config-related things in the subsequent slides. So the coordinated —
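The metadata-carrying redirect described here can be sketched as a data structure. All field names below are our own assumptions for illustration — the real format lives in the P4 program — but the shape matches what's described: the secondary caches the primary's policy verdict and rewrites rather than re-evaluating policy, which also sidesteps the window where the controller has pushed new config to one DSC but not yet the other:

```python
# Hypothetical shape (field names assumed) of the sync metadata that rides
# along with a packet redirected from the primary DSC to the secondary.
from dataclasses import dataclass

@dataclass
class SyncMetadata:
    flow_key: tuple      # 5-tuple identifying the connection
    policy_result: str   # e.g. "allow", the primary's ACL evaluation
    encap: str           # e.g. "vxlan", the encap the primary's policy chose
    rewrites: dict       # header rewrites computed on the primary

def apply_on_secondary(meta, flow_table):
    # The secondary copies the primary's decisions verbatim; it only resolves
    # purely local resources (next hops, ENI bindings) by itself.
    flow_table[meta.flow_key] = {"policy": meta.policy_result,
                                 "encap": meta.encap,
                                 **meta.rewrites}

table = {}
key = ("10.0.0.1", "10.0.0.2", 6, 12345, 443)
apply_on_secondary(
    SyncMetadata(key, "allow", "vxlan", {"dmac": "aa:bb:cc:dd:ee:ff"}),
    table)
print(table[key]["policy"])  # allow
```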
C
I don't understand something. You're saying SYN, FIN, and reset — well, if you do a switchover, and you have the FIN-ACK and the ACK, how does the state machine know where it is if you're only sending the FIN? What if it sent the FIN and the FIN-ACK, and it's only missing the ACK — how would the secondary even know that?
C
Okay,
well,
that's
important
because
that
triples
that
triples
the
amount
of
data
being
sent.
So
this
this
kind
of
indicates
that
you're
sending
us
you
know
at
the
beginning
and
at
the
end
and
of
course
you
can't
close
out
the
fin
for
sure,
because
then
you
would
never
see
the
final
act
at
the
end
point,
so
we
should
at
least
write
in
what
we're
really
doing,
because
it's
not
you
could
say,
sin
sequence,
pin
sequence.
Maybe.
A
Yeah
yeah,
so
so
so
gerald.
C
A
You
know
so
we
our
base
implementation
was
all
six
packets
since
snack
hack,
finfinak,
all
of
them
are
synced,
and
we
are
also
looking
into
the
optimization
of
you
know:
reducing
the
number
of
those
six
packets
to
a
few
packets
to
get
the
same
effect.
Yeah.
E
All right, a question — this is Chris. You've been describing mostly the synchronization mechanisms, but what I haven't really heard is a discussion of what happens when a failure occurs. How does the traffic switch from the primary to the secondary, etc., practically?
A
So — coordinated idle aging is taken care of by the software data path between the primary and the secondary DSCs, and also bulk sync, which happens when a new DSC pairs up with an existing DSC.
A
When packets get dropped, those drops will be perceived by the endpoint as network drops, and that will trigger retransmissions, at which point the redirect and sync will take place again, the replication will converge, and the packet will go to VM B.
A
We also have periodic heartbeat exchanges between the primary and the secondary DSCs.
A
The surviving DSC takes over and transitions to a standalone role. When it transitions to the standalone role, it doesn't redirect any packets to replicate the state, because there is no peer at that point in time; thereby we won't keep dropping the packets once we know that the peer is down.
C
I think that leads to split brain, but I have another question before we get to that. You can sync to the secondary — you can send, say, a SYN packet to the secondary, and it sends it back to you, and then you forward it on, so now you're both synced.
C
But
then
it
gets
lost
between
the
primary
and
and
the
the
end
point,
and
so
it
has
to
resend,
but
now
that
it's
rescinding,
where
both
both
devices
believe
that
they
have
a
connection
already.
A
Yes — we do handle the retransmissions in general. That SYN will be retransmitted by the primary DSC.
A
Yeah
I
mean
we,
it's
not
exactly
same
as
first
I
mean
with
respect
to
our
implementation.
Yes,
we
have.
There
is
a
difference,
because
we
keep
track
of
the
state
in
the
primary
right,
because
there
is
some
change
in
the
state.
So
we
know
that
it's
a
retransmitted
sin,
but
yeah
it
is
handled.
We
don't.
We
don't
drop
the
re-transmission.
C
Okay,
so
if
you
on
that
note,
are
you
requesting
the
secondary
do
anything
but
copy
the
state
into
its
like
once
it
opens
the
connection?
Is
it
just
copying
the
state
and
because
it
doesn't
really
need
to
do
any
calculations?
The
primary's
already
done
so
he's
suggesting
that
the
metadata
would
just
say,
hey
just
go,
update
this
connection
with
the
state
and
forget
about
any
calculations,
you're
trying
to
do,
or
are
you
actually
asking
it
to
do
the
same
thing?
The
primary
already
did.
A
No
so
with
respect
to
I
mean
yeah,
this
gets
into
our
implementation,
so
some
of
the
things
we
need
to
do
see
we
have
to
tie
it
to
some
of
the
resources
that
have
been
allocated
in
the
in
the
secondary
right.
The
connection
has
to
be
tied
to
that.
So
for
that
you
need
to
do
the
corresponding
local
lookup,
but
mainly
where,
where
the
copy
happens,
is
the
policy
evaluation
result
right?
We
need
to
use
whatever
that
the
primary
assent,
because
the
config
that
is
getting
pushed
to
primary
and
secondary
they
are
not.
A
They
are
not
in
sync
at
any
instant.
They
are
eventually
becoming
consistent,
because
the
controller
is
kind
of
pushing
the
configuration
to
each
and
every
dac
one
after
the
other
right.
So
that's
why
the
policy
evaluation
result.
We
send
it
between
the
primary
and
the
dac
and
d,
and
the
secondary
dac
basically
takes
that
and
it
it
uses
that
for
setting
up
the
connection,
likewise
the
rewrites
it
does.
A
But
in
addition
to
that,
with
respect
to
our
implementation,
there
are
some
things
which
we
there
are
some
resources
to
which
we
have
to
tie
up
the
connection.
Those
results
are
what
we
are
doing
like
I
mean
some
ena
related
resource
and
things
like
that.
B
I think that fix-up is what Balakrishnan is referring to — there are some pieces where we have to tie it to the hardware resources, for example the next hops that we use and things like that. Those things happen on the secondary, but the policy itself — whatever the primary's decision was — is carried.
B
Okay,
hey
welcome
one
question
about
the
bulk
sync,
so
this
is
the
case
where
the
secondary
dse
is
newly
coming
up
and
I'm
assuming
you're
synchronizing.
You
know
a
whole
bunch
of
flows
from
primary
to
secondary
right.
How
does
it?
How
does
it
you
know,
ensure
the
state
that
properly
maintained
in
secondary
dac
right
and
if,
if
this
actually
works,
why
wouldn't
you
use
the
same
approach
for
even
the
inline
instead
of
sending
the
packet?
Why
wouldn't
you
send
the
states.
A
No,
the
bulks,
so
the
yeah,
the
bulk
thing
is
happening
kind
of
offline
right,
so
obviously
there
will
be.
We
cannot
keep
up
with
the
state
so
yeah.
I
will
come
to
the
come
to
the
bulking
part
where
we
will
kind
of
catch
up
to
the
state.
Like
I
mean
I,
I
I
tell
you
what
we
do
that
yeah.
C
I
think
we
also
described
that,
as
the
and
perfect
thing
doesn't
happen,
you
know
in
in
real
time
it
is
a
background
test
and
when
it's
done
because
you're
also
doing
the
the
synchronization
you're
talking
about
here,
then
you're
done.
You
do
one
perfect
sync
when
you
get
to
the
top
of
the
table.
You're
done
so
I
think
we
described
that
already,
but
I'm
sure
that
we
can
also
because
this
meeting's
only
an
hour-
we're
not
gonna,
but
we
can
discuss
that.
Definitely
there's
techniques,
but
it
is
asynchronous.
A
Yeah — the next slide. So, primary and secondary: it's basically the active/standby sync that we do, where the secondary is acting like the standby, in that it doesn't forward any traffic, and the active is the one forwarding the traffic. But at the same time, it's not like the hardware is acting as active/standby — what I mean by that is, it's not like DSC1 is completely forwarding the traffic and DSC2 is not forwarding traffic at all. The hardware is still active/active, and the way we do active/active is ENI-based active/active. What that means: we have a virtual IP — the PA IPs — a virtual IP that is shared across these DSCs, so both share the same IP, and we have two virtual IPs.
A
So
the
way
these
virtual
ips
are
configured
in
these
dacs
are,
as
shown
in
the
in
the
in
the
picture,
wherein
let's
say
we
have
whip
one
and
group
two
in
dac
one
whip.
One
is
primary
and
vip2
is
secondary
and
in
the
other
dac
it's
the
other
way
where
in
viewpoint
becomes
secondary
and
vip,
two
becomes
primary.
A
So
basically
dac1
will
be
getting
traffic
for
whip,
one
which
is
primary,
and
it
will
do
the
forwarding
for
that
and
whereas
the
dac2
will
do
the
forwarding
for
book
two
and
basically
how
you
attract
traffic
for
the
for
the
for
the
corresponding
virtual
ip
in
dac1
and
dac2.
A
It's
all
based
on
config
vgp
configuration
where
in
weapon
becomes
the
dsu-1
becomes
the
favorable
node
for
virtual
ip1
to
forward
the
traffic
and
the
likewise
virtual
ip2
becomes
a
favorable
node
for
forwarding
the
traffic
in
dac2
and
then
what
the
controller
does
is.
So
we
have
bunch
of
enis
that
are
you
know,
being
serviced
by
these
dses.
A
So
out
of
these
dna,
some
dnas
will
be
also.
The
controller
will
associate
some
of
the
enas
to
virtual
ip1
and
some
other
set
of
enis
to
the
virtual
ip2.
A
Each DSC does the forwarding for those respective ENIs. In case of any failure — let's say DSC1 fails and DSC2 is the only one surviving — then at that point DSC2 owns both the virtual IPs and it attracts and forwards traffic for both. So all the ENIs will be active at that point on the surviving DSC.
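The ENI-based active/active scheme just described can be sketched as two small maps. The names (VIP1/VIP2, DSC1/DSC2, the ENI identifiers) are illustrative placeholders; the sketch just shows the steady-state ownership and the takeover on peer failure:

```python
# Sketch of ENI-based active/active: each DSC normally owns one VIP and
# forwards for the ENIs mapped to that VIP; on peer failure, the survivor
# takes ownership of both VIPs, so every ENI keeps being serviced.

vip_owner = {"VIP1": "DSC1", "VIP2": "DSC2"}           # steady state
eni_to_vip = {"eni-a": "VIP1", "eni-b": "VIP1", "eni-c": "VIP2"}

def forwarding_dsc(eni):
    # The DSC that advertises the ENI's VIP attracts and forwards its traffic.
    return vip_owner[eni_to_vip[eni]]

def peer_down(failed_dsc):
    survivor = ({"DSC1", "DSC2"} - {failed_dsc}).pop()
    for vip, owner in vip_owner.items():
        if owner == failed_dsc:
            vip_owner[vip] = survivor  # survivor advertises both VIPs

print(forwarding_dsc("eni-c"))  # DSC2
peer_down("DSC1")
print(forwarding_dsc("eni-a"))  # DSC2: survivor now owns VIP1 as well
```

In the real system the takeover is driven by BGP advertisement of the VIPs, not a table update, but the ownership logic is the same.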
A
Right — so there is a heartbeat that is periodically exchanged between the DSCs. At any point, when one of the DSCs goes down, there is a disruption in the heartbeat exchange, and we have a configurable heartbeat interval and count; after that count-times-interval timeout period —
A
Basically
the
dac
declares
that
the
pr
dac
has
gone
down,
and
so
it
the
virtual
ip
whatever
that
is
configured
in
the
dac,
they
all
transition
to
a
standalone
role.
So
both
whips
like,
for
instance,
dac
one,
goes
down.
Ds
c2
will
not
get
any
rdb
responses
from
dac1,
so
dac2
will
basically
move
the
both
the
virtual
ip
one
and
two
to
standalone
role.
At
that
any
time
the
data
path
sees
that
the
virtual
ips
are
in
standalone
role.
It
doesn't
do
any
packet
redirection.
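The failure-detection rule described here — configurable interval times miss count, then all local VIPs go standalone — can be sketched directly. The numeric values are illustrative assumptions, not the product's defaults:

```python
# Sketch of the heartbeat-based peer-failure detection (illustrative values):
# detection time is heartbeat interval x allowed miss count, after which
# every VIP on the survivor moves to the standalone role, which the data
# path reads as "stop redirecting packets for state sync".

HEARTBEAT_INTERVAL_MS = 100   # configurable
MISS_COUNT = 5                # configurable
DETECTION_TIME_MS = HEARTBEAT_INTERVAL_MS * MISS_COUNT

def on_heartbeat_timeout(vip_roles):
    # Peer declared dead: all local VIPs transition to standalone.
    for vip in vip_roles:
        vip_roles[vip] = "standalone"
    return vip_roles

roles = {"VIP1": "standby", "VIP2": "active"}
print(DETECTION_TIME_MS)            # 500
print(on_heartbeat_timeout(roles))  # both VIPs standalone
```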
C
This is the split-brain state, by the way — there are lots of bad things that can happen in split brain, but let's move on, because there's only a limited amount of time left.
A
Okay,
yeah,
so
the
node
pairing
process,
so
basically
yeah
I'll
go
over
what
happens
in
our
you
know,
dses
when
the
para
happens
right
so
basically,
let's
say
the
ac1
dac2
and
dac1
is
the
one
that
comes
up
first.
Let's
say
it
boots
up
as
soon
as
it
boots
up
the
controller
configures
and
it
kickstarts
the
the
finite
state
mission.
It's
a
the
high
availability
ssm.
We
call
it
the
finite
state
machine,
so
there
is
a
apa
from
the
controller
to
start
the
fsm.
A
So
basically
the
controller
needs
to
configure.
We
provide
a
control.
So
there
is
a
control,
basically
a
gate
for
the
controller
to
when
to
start
the
kickstart,
the
fsm.
Basically,
it
can,
even
though
it
let's
say
it,
configures,
just
the
h,
the
high
availability
parameters,
but
it
doesn't
configure
any
other
thing.
It
doesn't
doesn't
mean
that
we
will
immediately
start
the
high
availability.
You
know
pairing
process,
so
if
we
pro
there
is
a
there
is
a
control
for
the
controller
where
it,
even
after
all
the
configuration
it
will
come
and
issue.
A
This
h
is
start
at
which
point
the
fsm
gets
kick-started.
It
binds
the
so
there
is
a
in
terms
of
configuration
there
is
this
inter
dac
ip
addresses.
That
is
the
one
that
is
used
to
communicate
between
the
dac-1
and
dac2
for
the
those
are
the
endpoint
addresses
for
the
inter
dhc
channel,
so
we
bind
to
that
and
then
the
virtual
ips
will
go
to
a
dorman
standalone
roles.
A
The
dharma
means
basically
the
I
mean
it
is
kind
of
I
mean
in
this
case
okay
before
before
explaining
dharma.
So
it
goes
to
standalone
the
reason
being
that
there
is
no
peer
at
that
point
right,
because
dac2
has
not
come
up
yet
so
it
goes
to
standalone.
A
In
addition,
we
have
another
another
control,
so
it
first
goes
to
a
dartmouth
state.
The
dartmouth
state
is
where
we
have
not
advertised
the
virtual
ip.
So
we
are
not
attracting
any
traffic.
We
are
in
a
role
in
a
dormant
role
only
when
the
controller
comes
and
it
issues
take
over
the
admin
roads.
That's
when
we
take
over
the
roads,
so
I'll
explain
why
we
have
the
darwin
rule
a
little
later,
so
we
take
over
the
roles.
A
So
at
this
point,
as
you
see
in
the
picture,
so
just
so,
as
you
see
in
the
picture,
so
both
the
virtual
ips
they
go
to
standalone
and
standalone
the
reason
being
that
when
the
fsm
started,
it
has
already
started
the
heartbeat
exchanges.
A
But
given
that
there
is
no
appear,
there
is
no
response,
so
it
goes
directly
to
standalone,
saying
that
there
is
no
peer
for
me
to
stay.
Do
any
state
synchronization.
So
it
goes
to
standalone
and
now
any
data
path,
traffic.
We
set
up
the
connections
and
we
start
switching
the
traffic
and
a
little
later,
let's
say
so.
All
the
states
are
being
set
up
at
this
point.
Little
later,
let's
say
boot
up
happens
and
the
same
thing
happens
in
the
dac2
where
it
goes
through
his
start,
at
which
point
it
is.
A
It
will
start
exchanging
the
heartbeat
behind
the
dhcp
ips
and
then
start
exchanging
the
heartbeat,
and
then
they
they
they
see
that
there
is
appear
as
soon
as
they
see
that
there
is
a
pier
the
dac
one
will
kind
of
go
immediately
active
for
both
of
them.
Basically,
the
point
being
here
is
that
this
is
the
dac
which
is
still
will
be
receiving
the
traffic
for
both
the
virtual
ips.
There
is
no.
A
This
dac
has
not
advertised
the
virtual
ip8,
because
in
darwin
state
we
don't
advertise
the
virtual
ip8,
we
don't
give
the
control
it,
so
it
becomes
active
and
then
it
starts
all
the
sync.
Basically,
there
will
be
existing
connection,
for
which
we
do
the
bulk
sync.
There
will
be
real-time
new
connections
that
are
being
set
up
and
then
connections
that
are
being
toned
down,
which
will
be
inline
synced
through
the
p4
sync.
A
So
all
those
things
that
have
will
happen
and
at
the
end
of
the
bulk
sync,
the
p,
forcing
is
kind
of
ongoing,
though
the
picture
doesn't
show
it
it's
kind
of
keep
going
as
and
when
there
is
a
state
change,
so
the
dac
2
becomes
a
dormant
and
then
the
controller,
as
I
said,
has
this
control
to
come
and
say
activate
the
admin
rule,
so
it
it
activates
the
role,
and
that
is
when
the
virtual
ip
takes
the
role.
A
So
in
this
case,
what
would
happen
is
virtual
ip1
becomes
standby,
so
virtual
ip1
in
the
dsc1
continues
to
be
active
and
virtual
ip2
becomes
active.
Basically,
its
admin
role
is
active,
so
this
is
kind
of
a
preemption
here
where,
even
though
vip2
is
active
in
dac
one,
the
role
is
preempted
by
dac2
and
it
becomes
active.
A
So
the
dharment
basically
is
the
is,
is
a
control
given
to
controller
where
we
don't
want
to
preempt,
and
you
know
dac
to
directly
take
over
and
start
switching
the
traffic
microsoft
wanted
not
to
have
that
kind
of
this.
You
know
behavior,
so
we
want
also
wanted
to
have
a
control
there.
So
that's
why
it
first
goes
into
your
tournament,
where,
even
though
it
is
prepared
to
be
ready
with
all
the
state,
it
still
doesn't.
A
You
know
take
over
completely
all
the
other
traffic
by
doing
preemption
so
and
then
once
the
control
comes,
then
it
will
go
and
do
the
creation.
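The dormant-to-active gating described above can be sketched roughly as follows. This is a hypothetical illustration, assuming made-up names (`HaRole`, `Dsc`, `controller_activate_admin_role`); it is not AMD Pensando's actual API, just the idea that a fully synced DSC stays dormant and never advertises the VIP until an explicit controller action.

```python
# Hypothetical sketch of the dormant/active admin-role gating described above.
# HaRole, Dsc, and the method names are illustrative, not the real API.

from enum import Enum

class HaRole(Enum):
    DORMANT = "dormant"   # synced and ready, but not advertising the VIP
    ACTIVE = "active"     # advertising the VIP and forwarding traffic
    STANDBY = "standby"

class Dsc:
    def __init__(self):
        self.role = HaRole.DORMANT
        self.vip_advertised = False

    def sync_complete(self):
        # Bulk sync finished: stay dormant; do NOT preempt the peer.
        # The VIP is still not advertised until the controller acts.
        assert self.role == HaRole.DORMANT

    def controller_activate_admin_role(self):
        # Only an explicit controller action promotes the DSC and
        # triggers the VIP advertisement (the "preemption" step).
        self.role = HaRole.ACTIVE
        self.vip_advertised = True

dsc2 = Dsc()
dsc2.sync_complete()
assert not dsc2.vip_advertised        # dormant: attracts no traffic
dsc2.controller_activate_admin_role()
assert dsc2.vip_advertised            # only now does the VIP move over
```

The design point is that readiness (all state synced) and takeover (advertising the VIP) are deliberately decoupled, with the controller in between.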
A
Yeah, okay. So, with respect
B
to one question: in the previous case, DSC1 and DSC2 are both independently in the dormant state initially, right, and only when the active is triggered would they start the heartbeat. So is there a race condition where both of them could be forwarding traffic or advertising the VIP for a small period, or is it fully controlled and well coordinated, with no two active at once?
A
No, see, the heartbeat exchange happens using the inter-DSC channel IP address. It is not using the virtual IP address, so the virtual IP addresses are not advertised. For instance, when DSC2 comes up here, until this point, the activate-admin-role step, the virtual IP is not advertised from DSC2 at all, so DSC2 doesn't get any traffic until this point.
B
A
Yes, heartbeats will go through, because they are not using the virtual IP address; they use a different IP address, which is the inter-DSC channel IP address.
A
Okay, so just to touch upon the sync mechanism that is happening during the pairing process. This is kind of an involved one; it is always a challenge, because during the pairing process there are existing connections that need to be bulk-synced to the peer, and at the same time these are all the things that are happening to the already-set-up connections.
A
There are tear-downs happening for the existing connections, through aging and endpoint termination of TCP connections, and then there are new connections getting set up during the pairing. And there is also, okay, something that I wanted to bring up here, so that
C
there is also, yeah, I think not everybody in this audience knows what re-simulation is. We are introducing it, and I have sent some documents to some of them, but we can talk about re-simulation at a different time. But definitely you've got it down: yes, it's the new connections, the tear-downs, flow re-simulation, and the already-set-up connections. All of this is true, and then we'll send out documents on re-simulation to everybody so that they understand.
A
Okay, sure. So, basically, in our implementation we do track the existing connections along with the connections that are newly getting set up. As I mentioned earlier, our software data path takes care of doing the bulk sync of the existing connections, and the P4 data path is doing the inline sync of the connection setup and termination.
A
So, like you said, while we are doing the bulk sync, there are possibilities where a change in state can happen to those existing connections. So we are basically doing a combination of both: the bulk sync at the software data path, as well as the inline sync from the P4 data path, to keep up with it.
A
Okay, so Chris, you asked about the heartbeat exchange; I just wanted to touch on that. The unplanned switchover is basically detected through the heartbeat. In the stable state, the heartbeat messages are exchanged between the DSCs periodically. We have a heartbeat timeout and a miss count, both of which are configurable. Let's say DSC1 goes down: DSC2 keeps sending heartbeats, and obviously it's not going to get any response.
A
So after a count of consecutive heartbeat misses, it basically transitions the role to standalone, at which point any received data packet is not redirected to the peer; it gets forwarded directly. So you won't see any more drops of those packets that were going between DSC1 and DSC2 and getting dropped because DSC1 is not there.
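The heartbeat-miss detection just described can be sketched as a small counter loop. This is a minimal illustration; the timeout and miss-count values below are made-up placeholders, not Pensando's defaults, and `next_role` is a hypothetical helper name.

```python
# Sketch of the heartbeat-miss detection described above: after a configurable
# number of consecutive missed heartbeats, the surviving DSC transitions to
# standalone and stops redirecting packets to the (dead) peer.
# Values are illustrative placeholders, not the product's defaults.

HEARTBEAT_TIMEOUT_MS = 100   # configurable per-heartbeat timeout
HEARTBEAT_MISS_COUNT = 3     # configurable consecutive-miss threshold

def next_role(current_role, responses):
    """responses: one boolean per heartbeat interval (True = reply received)."""
    misses = 0
    for got_reply in responses:
        if got_reply:
            misses = 0                      # any reply resets the counter
        else:
            misses += 1
            if misses >= HEARTBEAT_MISS_COUNT:
                return "standalone"         # forward directly, skip peer sync
    return current_role

assert next_role("active", [True, False, True]) == "active"
assert next_role("active", [False, False, False]) == "standalone"
```

Requiring several consecutive misses, rather than a single one, is what keeps a transient packet loss from triggering a spurious switchover.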
E
A
Correct, it should be in the millisecond range.
B
C
We happen to use six different priorities in our network, but different clouds will do that differently. I do want to go back a couple of slides, where you talked about the bulk transfer: if you update anything that's in the perfect-sync table, in other words the connections that were already established, not the new ones that are dynamically being established.
C
If you update, for any reason, one of the connections that were already established, it moves automatically to the dynamic sync. So you don't re-transmit state from an old connection after you've updated it dynamically; it would no longer exist in the bulk set. We described that in perfect sync as colors: you move to the next color and you send all of the old color across in the bulk. But if you update anything in the old, then it just basically becomes the new color, or in the case of Pensando,
C
I think they use timers. But that is a subtlety: anything in the perfect sync that gets dynamically updated moves itself into the dynamic range, and it is no longer in the perfect-sync bulk state.
C
Yeah, once you update it, it's dynamic now; you don't want to resend an old state. And you can do that. In perfect sync I described it by colors: you move toward incremental colors, and whatever is behind
C
whatever is behind you is a different color, but you can do it with timestamps as well. Once you update anything dynamically in that asynchronous table, which is the bulk-sync set, you have to move it: you no longer retransmit it in the bulk sync, you just move it to the dynamic sync, and then everything will be fine.
D
So no, I understand what you're saying; we're trying to avoid getting out of sync, by this color or timestamp. So the way I'm understanding this is that the bulk sync starts by taking a snapshot of the current state, and
C
Okay, got it, thanks. And it can be done asynchronously; it doesn't need to be done in real time. These are done fast, but it doesn't need to be as fast as the dynamic sync. You just go from the bottom of the table to the top of the table, moving everything across that's either before a time frame or before the color that you're on.
C
Only once. See, you only need to send it once: you go from the bottom of the table and you just analyze every connection and ask, was this before this time, or before this color, or whatever? And if it was, you send it across and go to the next one, and the next one, until you're finished. Now, we can argue whether you should send multiple
C
whether you send multiple records across in one message or not, but essentially that's what you're doing, and you're skipping over anything that's dynamic. If it's in a dynamic state, like it's after that time-period epoch or it's in the new-color epoch, you don't need to send it across; you only send the things that were from before.
C
So you need to transfer the table only once, and you only transfer the things that are either before the time frame or before the color change, and that's it. You do it once, and by the time you get to the top you're sure it's in perfect sync, because while you were doing that, you were sending all the dynamic updates. Therefore, as long as they keep up with the dynamic connection updates, and you just traverse the table once, you are in perfect sync at that point.
C
Okay, keep going. I know we're over time, but if people want to drop out, they can; I'll stay on to listen.
B
To the rest. Actually, it's just two slides, I think: we have the planned switchover and then the re-simulation. Maybe we can skip the re-simulation until everyone comes in. Yeah, I think it's
A
Yeah, so the planned switchover. Unlike the unplanned switchover, it's a controller-controlled switchover, where you basically want to bring down the pairing and do maintenance on one of the DSCs. So the controller goes and triggers the planned switchover. Once a planned switchover is triggered, there is a coordination between these DSCs for a short duration. So here DSC1 is active for VIP1 and standby for VIP2, and DSC2 is the other way around, so it is still forwarding traffic for VIP1 over here. As you see here, the addresses are not withdrawn; the virtual IP is not withdrawn here yet. But what it does is notify the peer DSC to become active.
A
What that means is: don't be standby for VIP1 anymore; if it sees the traffic at any time, it should start switching the traffic. So it basically prepares the other DSC to be active, and then it starts the FSM shutdown process. But first it will go and start withdrawing the BGP routes. Once the BGP routes are withdrawn, there is a convergence.
A
There is a convergence time for those BGP routes to be withdrawn, and after that the traffic will start going towards DSC2 for both the VIPs. But DSC1 has not removed the virtual IPs from the data path yet; the virtual IPs are still there.
A
So this is the period during which all the in-flight data traffic is flushed out between the DSCs. We basically do it using timers. Once all the data traffic is flushed out between the DSCs, then DSC1 will
A
finally bring down the FSM completely and close the connection between the DSCs, and DSC2 will at that point move to standalone: it doesn't do any syncing, doesn't send any packets, and doesn't look for any packets coming from the other side. So basically it becomes standalone. It has to be a very coordinated mechanism, where we need to flush out all the sync packets and whatever
A
in-flight packets are there between the DSCs.
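The coordinated planned-switchover sequence described above can be summarized as an ordered list of steps. This is shorthand for the behavior in the talk; the step names below are illustrative labels, not actual API calls.

```python
# Sketch of the coordinated planned-switchover steps described above, in
# order: notify the peer, withdraw BGP routes, wait (via timers) for in-flight
# traffic to flush, then tear down the FSM and let the peer go standalone.

def planned_switchover():
    steps = []
    steps.append("notify_peer_to_go_active")    # peer stops being standby
    steps.append("withdraw_bgp_routes")         # underlay converges to peer
    steps.append("flush_inflight_traffic")      # timer-based drain between DSCs
    steps.append("bring_down_fsm")              # close the inter-DSC connection
    steps.append("peer_moves_to_standalone")    # peer stops expecting sync
    return steps

order = planned_switchover()
assert order.index("withdraw_bgp_routes") < order.index("flush_inflight_traffic")
assert order[-1] == "peer_moves_to_standalone"
```

The ordering is the whole point: routes are withdrawn and in-flight sync traffic drained before either side stops honoring the peer, so nothing is dropped during the transition.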
E
Hi, this is Jay from Microsoft. I have a quick question: when we start the HA switchover, VIP1 becomes active also in DSC2. What happens if there are two active VIPs, in both DSC1 and DSC2? And why can't we just switch to standalone instead of both being active?
A
I mean, in standalone mode, basically we don't honor any sync packets coming in, and we don't send any sync packets. More than not sending sync packets, anything that is coming from the other side is also not honored. That means whatever in-flight packets are going between the DSCs will all get dropped.
A
So in order to do it in a graceful way, we basically put the data path in a kind of active-active state and we honor those packets, and then, in a coordinated way, we gradually move to the standalone state.
B
Yeah, if I may add: the route, whatever VIP the secondary advertises, is always inferior from the metrics perspective. So if your question was whether, from the underlay, there are two paths leading to both, that doesn't happen, because the primary withdraws whatever it has advertised first, and only after the primary withdraws does it switch over to the standby. It is not like both of them would be advertising at the same time.
E
When DSC2 changes VIP1 from standby to active, does it change VIP1 to be primary on DSC2, or is DSC1 still the primary of VIP1?
B
DSC1 will still be the primary administratively, but it will withdraw the route. There are a couple of other enhancements we're looking at, graceful shutdown and withdraw, but effectively it will withdraw the route.
C
I think that one probably requires a lot of discussion. I mean, even within Microsoft we talk about this, about what the best way to do this is. What's your last slide? I think it's the unplanned one.
C
Okay, so I think we've gone over time, but this is really good. I would like to see this as a pull request so that people can read it and think about it. By the way, most HA devices today use some technique like this. I think we can do even better, but this is a great starting point, and at least for the most part we know that this works pretty well for the switchovers.
C
I think we still need to talk about that; we even talked about it internally. But generally, this is what I always call mode one, the classic one, and then we can talk about optimizations of this: whether you need to send whole packets, or just metadata, or more optimized messages. But as a starting point I think this is excellent, and it's certainly logically true that it works, and so we should
C
We should definitely do it as a pull request and let people digest it for a little bit, because it's a lot to digest. Then, a couple of weeks from now, we can create topics around where people are unclear about this. Also, between now and then, I do want to give Xsight a chance to go over their pull request once more, which is not the full how-do-you-do-HA, but what is the
C
what is the transport methodology that you could use to optimize HA. I'd like to do that next week to give them a chance; they were going to do it this week actually, but I think this was important to bring. Then we could do the Xsight proposal, which I read and which sounds reasonable. I hope everybody is going to read their proposal for the transport protocol, and then, in the following week,
C
we can come back to this and talk about potentially unclear parts, questions, and even optimizations that maybe Pensando or others might think of. But, you know, it's excellent, of course, to bring something in such a complete form; I'm sure it's going to benefit the community tremendously.
B
Thank you, yeah. Let's plan for the Xsight paper next week then, and then we'll wait for the pull request, or however this content is distributed. Balki, do you think you'll be able to attend next week?
C
B
Stop the recording now, guys. It was a long one.