From YouTube: DASH Workgroup Community Meeting 20220119
Description
January 19, 2022
A: We define networking on top of our physical layer, so this is basically the overlay that we built. This overlay has been developed for many, many years already and, as you know, Azure grew rapidly, and we developed all the other features based on our needs and also customer needs. We did lots of optimization there.
A: Part of the work that we are doing with the DASH ecosystem is to try to standardize it, in order to reach a kind of next level for us, because the stuff that we were able to do in software...
A: ...it scales only to a point, and this is where we need the rest of the community to help with basically creating much, much more performance for customers. This is where the DASH community comes in, with a potential standardization of how the APIs will look and this kind of thing. As we are building our ecosystem, we of course develop our stuff from the Microsoft perspective; AWS has something similar, Google has, and other companies too.
A: So right now we released the first version of the standardization, open for comments, open for additions, and the work that Guohan and the team did to release SAI, which is the southbound-facing interface. So this is the interface that is basically facing the actual hardware, and I would like to talk a little about the next part of this high-level architecture that Christina mentioned.
A
So,
basically
the
how
we
will
be
controlling
this
right
so
say
something
that
will
run
in
is
basically
implemented
by
hardware
and
and
sony
system
will
basically
be
calling
psi
right.
However,
from
the
point
of
view
of
the
control
plane,
so
from
the
point
of
view
of
our
sdn
control
plane,
so
whenever
customer
clicks
something
on
the
ui
and
they
request,
for
example,
vinnette
creation-
and
they
request,
for
example,
those
ip
allocation
to
be
there,
they
request
service,
like
private
link,
service
tunnel.
A
They
set
up
peering
in
and
we
we
basically
discolorate
the
calls
to
us.
We
on
the
control
plane
allocate
all
the
required
resources
the
site
where,
based
on
the
capacity
the
vm
should
be
put
in,
do
we
have
its
official
networking
capacity
and
prepare
all
this
configuration
right
and
later
it's
end
up
in
a
way
that
our
control
plane
needs
to
actually
call
the
actual
hardware.
In
this
case
right
and
currently,
we
have
a
software
which
is
the
vfp
right
to
kind
of
configure
it
right.
A: So the calls that we will be making go to this northbound-facing interface, this kind of API through which things can be controlled in the same way as on a SONiC switch, where there is an API interface, gNMI, that is being used to control the switch behavior. In the same way, we want to standardize on this, so gNMI is the open format that we are basically extending.
A: Now, apart from the normal functionality that it offers right now, we will be offering an additional set of APIs designed on top of gNMI. They will do things related to SDN. Right now we are working on this; the ETA for a potential first version of this API being published is most likely around the end of January, maybe slightly into February. We'll see what it will...
A: ...what it will contain. In general, we'll have the API for working with the abstraction layer for the SDN, all the abstraction layers already described in the document that is on GitHub, which means that we will have a northbound-facing API for creating and deleting the ENI, the elastic...
A: ...network interface. So we'll have the specification for how to create the ENI and how to remove the ENI; we'll have the specification for how to create the ACL group and how to specify the rules; and then the interfaces to create the routing.
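To make the shape of these northbound calls concrete, here is a minimal sketch of what an ENI create/remove and an ACL-group definition could look like as gNMI-style set/delete payloads. Every path and field name in it is a hypothetical illustration; the actual schema is exactly what the workgroup is standardizing.

```python
import json

# Hypothetical northbound payloads; the paths and field names below are
# illustrative stand-ins, not the published DASH gNMI schema.
eni_create = {
    "op": "set",
    "path": "/dash/eni[id=eni-1]",
    "value": {
        "mac_address": "00:11:22:33:44:55",
        "vnet": "vnet-pepsi",
        "acl_groups": ["inbound-g1", "outbound-g1"],
    },
}

# An ACL group with ordered rules; a packet must be allowed by every group
# attached to the ENI (discussed later in the session).
acl_group_create = {
    "op": "set",
    "path": "/dash/acl-group[id=inbound-g1]",
    "value": {
        "rules": [
            {"priority": 10, "src": "10.0.0.0/8", "action": "allow"},
            {"priority": 100, "src": "0.0.0.0/0", "action": "deny"},
        ]
    },
}

# Removing the ENI would be the corresponding delete on the same path.
eni_remove = {"op": "delete", "path": "/dash/eni[id=eni-1]"}

print(json.dumps([eni_create, acl_group_create, eni_remove], indent=2))
```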
A
All
the
routing
is,
of
course,
like
rpm
base,
the
same
as
describing
the
documentation
ability
once
the
routing
gets
established
for
functionality,
which
is,
for
example,
customer
will
plumb,
let's
say
10
8
is
my
vin
address
space
right
of
course,
different
instances
of
the
of
the
vms
are
lying
in
different
physical
locations
in
the
data
center.
So
this
is
where
the
mappings
comes
in
a
place,
because
customer
usually
calls
I
want
to
connect,
for
example,
from
the
vm
which
is
1005..
A
They
want
to
connect
with
the
vm1006
right,
and
the
customer
just
thinks
from
the
point
of
view
of
of
this,
like
kind
of
overlays
ips
right.
However,
in
order
for
us
to
transfer
the
traffic,
we
of
course
needs
to
end
cap
this
using
either
nvgre
or
vxlan
type
of
type
of
encap
right
and
we
need
to
in
outer
header.
It
is
how
the
sdn
works
right.
In
outer
header,
we
need
to
basically
provide
specific,
correct
ips
of
both
the
source
node
when
the
vm
is
and
the
destination
node
in
this
case
the
destination
node.
A
If
this
is
sonic,
sorry
not
sony.
If
this
is,
if
this
is
dash,
will
be
basically
the
the
bgp
vip
of
the
car.
That
is
doing
this
right.
So
all
those
apis
will
allow
us
from
the
northbound
part
to
basically
control
this
behavior,
and
then
there
will
be
an
agent
that
will
develop
and
publish
this
as
an
open
source
community,
and
we
are
hoping
the
community
will
also
help
develop
this
agent,
which
will
basically
provide
the
translation
right.
C: So Silvano is from Pensando, and we have Chris from Keysight. So, Silvano, why don't you go ahead?
A: If those DPUs will, for example, later be built into one chassis or this kind of thing, we want to be able, from the control plane perspective, to address each of them separately. So each of them will expose a management IP, and this is how we will be able to control them; this is how we will be able to, for example, pair one card with another card from the point of view of availability. So, yes, that's correct.
A
This
will
be
also
true
in
the
smart
switch
or
not
yes
in
the
smart
switch.
So
there
is
things
to
this
right,
preferably
yes.
At
the
same
point,
we
know
that
there
is
a
problem
with.
We
don't
want
to
also
waste
lots
of
ips
right.
So
that's
that's
also
open
question.
If
this
is
ipv6,
most
likely
will
basically
be
the.
F: Hi Michael, thanks for the presentation. When you come up with the standards, are they going to be based on, for example, OpenConfig, or possibly non-OpenConfig-type models? Can you give kind of an overview of the gNMI schema without diving into the file necessarily?
A
Yeah,
so
so
not
exactly,
they
will
be
based
on
the
open
conflict.
So
I
look
at
the
open,
config
right
there.
There
are
some
things
that
open
config
is
doing
right,
but
at
the
same
point
we
are
actually
not
using
open,
config
type
of
schema
in
in
our
sdn
data
center
right,
I
think
ap
config,
like
open
config,
is
like
a
more
generic
right
here.
A
We
want
to
use
kind
of
like
a
incremental
approach
in
a
way
first
deliver
all
this
actually
minimal
stuff
requires.
So
so
don't
don't
create,
like
a
big
big
feature
set
as
the
first
staff
and
if,
for
example,
will
not
be
using
most
of
those
feature
sets
right
so
go
to.
Basically,
this
is
all
about
performance,
I
would
say
so.
A
There
is
also
no
point
to
have
asking,
for
example,
the
vendors
to
implement
all
the
complex
features
that
I
would
say
like
polluting
the
the
device
to
be
very,
very
generic
right
if,
at
the
end,
the
specific
clouds,
which
is,
for
example,
azure
and
later,
for
example,
aws
or
google
or
or
different
open
community
will
be
mostly
using
the
subset
of
this
right,
because
we
are
losing
performance
on
the
p4
pipeline
potentially
and
this
kind
of
stuff
right.
So
I
would
say
it
will
be
subset
of
the
functionality.
G: Hi Michael, this is Jai, I'm from Broadcom. So, a couple of questions; I think I'm going to do a follow-up on a previously asked question. So, in a smart switch, you said that if it's IPv4 we may want to optimize the IP addresses and present the whole thing as a single instance, meaning there is a DPU cluster. In that case, today the SDN controller does all the workload placement management and everything.
A
No,
no!
No!
No!
No!
No!
I
don't
expect
this
agent
to
run
out
of
switches
cpu,
so
the
actual
stuff
regards
to
management
of
the
capacity
right.
It's
we
have
this
in
our
own
sdn
controller,
which
is
which
is
separate
service
right.
It
doesn't
have
to
be
high
performance.
This
kind
of
stuff
has
more
memory
and
has
entire
inventory
and
this
kind
of
stuff
right.
A
So
this
services,
sdn
controller
management
service,
will
be
mostly
the
service
which
will
be
deciding
that
this
specific,
this
specific,
let's
say
card
needs
to
needs
to
be
programmed
right
and
it
will
be
calling
this
card,
but
most
likely
we'll
be
calling
this
either
through.
Maybe
the
sport
based
approach
that
silvano
mentioned
that
this
will
be
the
same
address,
just
we're
basically
exposing
this
interface
in
three
different
ports,
or
something
like
this
right.
A
So
I
don't
expect
any
complex
logic
to
run
on
the
switches
actually,
because
there's
there's
a
lot
of
of
complexity
with
the
gas
to
currently
managing
the
they're.
Basically,
allocation
devices
also
ability
to,
for
example,
if
one
device
dies
like
one
cars
dies
and
this
card
because
of
the
high
availability
was
paired
with
the
other
other
like
a
second
card
right,
then
we,
for
example
rma
discard,
but
the
rest
of
the
switch
is
still
working
right.
A
So
so,
if
this
card
was
paired
with
some
other
card,
the
other
card
that
is
still
working,
it
cannot
be
left
in
this
state.
A: There needs to be a way to upgrade firmware, for example, for specific cards, and not, for example, the entire switch together, because we would like to be able to, for example, shut down one card to upgrade the firmware on that card and then, for example, shut down another card to upgrade the firmware on that one; because updating, for example, the entire switch will shut down all the cards together, so potentially more traffic will be impacted.
A: So we should also keep this in mind: we want to have granular control over those cards.
G
You
got
it,
I
think,
that's
probably
related,
so
we
will.
G
Questions
so
I'll
leave
that
I'll
just
one
more
question
before
I
give
other
people
chance,
so
how
about
the
so
anytime,
you
provision
a
you
know
eni
over
a
vxlan
tunnel
going
what
happens
to
the
underlay.
So
underlay
management
is
also
with
the
sdn
controller
or
they
are
decoupled
completely.
A: That's correct. So each card, basically, will be sending its own traffic, since each card is connected directly to the ToR. What we are doing and prototyping on our side right now is that this kind of physical interface is set up at the beginning, once, for example, the card gets assigned.
A: This is, for example, the BGP IP for the card, the one on which the card will receive the traffic. In the case of HA, we actually want the cards to potentially be, long term, in active-active mode, so potentially there will be either one BGP address or two BGP addresses that we use.
A: But then the ENIs are kind of on top of it, which means that if we assign five ENIs or three ENIs, this kind of thing, there will still be the same BGP addresses.
A: Right, and now you are talking about the VXLAN. Each VNET has a different VXLAN ID, a VXLAN key. And one thing I want to add here, from the ENI perspective: at least on our side, the VXLAN is not really a property of the card. So it's not shared by the card; it's more the property of the ENI.
A: So in this case, for example, you can have multiple ENIs on essentially the same card, and each of those ENIs can either be in the same VXLAN or, most likely, they will be in, for example, different VXLANs. So the VXLAN is not a top-level object; it's more that the VXLAN is something that only an ENI references.
A: However, from the point of view of memory consumption, it is okay to have a VNI as a top-level object from the point of view of the mappings, so the mappings will be controlled per VNI. The VNI can be top-level, but at the same time the VNI doesn't exist by itself; it's more that the ENI exists, and an ENI may reference multiple VNIs. So the concept of adding an ENI or removing an ENI is basically: I'm adding an ENI with all its configuration, saying, hey...
A: ...this is your primary VXLAN for the routing for this address space, or this address space uses, for example, other GRE keys to communicate. So it's not a top-level object that is set up for the underlay. The only thing that the underlay knows, from the card perspective, like a top-level perspective, is the VIP that the traffic arrives on. Yeah.
G
Yeah,
okay
and
one
sorry,
one
last
question:
sorry!
Sorry,
since
I
have
the
opportunity,
so
how
about
the
the
the
bandwidth
allocation
etc?
How
do
you
know
that,
given
our
eni
or
a
workflow
placement
on
this
vm,
how
to
do
the
you
know
available
bandwidth
of
everflow,
which
you
use
or
some
other
mechanics.
A: Yeah, so, available bandwidth from the allocation perspective: we are actually assuming that the card bandwidth will be utilized close to 100%, because these will be workloads which are not targeting every single VM in the cloud; they will mostly be targeting bigger customers, and bigger customers are basically using more bandwidth. So based on this, we will know how to spread the ENIs. Also, long term...
A: ...the traffic goes to the same VIP, but on our side, when we are basically splitting the traffic that comes from the actual VM and from the node, there will be a consistent-hashing algorithm that spreads the traffic across, for example, three different VIPs. So each card, which advertises a different VIP to the physical network, the underlay, will receive, let's say, one third of the traffic, and this is how we can allocate it.
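As a rough sketch of the splitting just described: a stable hash over a flow's 5-tuple picks one of the advertised card VIPs, so each of three cards receives about one third of the flows. The addresses are made up, and plain modulo hashing stands in for the real consistent-hashing scheme, which would use a ring or maglev-style table to minimize remapping when the VIP set changes.

```python
import hashlib

# Hypothetical card VIPs advertised to the underlay over BGP.
CARD_VIPS = ["100.64.1.1", "100.64.1.2", "100.64.1.3"]

def pick_vip(src_ip, src_port, dst_ip, dst_port, proto):
    """Spread flows across card VIPs so each card sees roughly 1/3 of traffic.

    A stable hash keeps all packets of one flow on the same card; plain
    modulo is used only to keep the sketch short.
    """
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return CARD_VIPS[int.from_bytes(digest[:4], "big") % len(CARD_VIPS)]

# All packets of this flow land on the same VIP.
print(pick_vip("10.0.0.5", 51000, "10.0.0.6", 443, 6))
```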
A
So
from
the
allocation
perspective,
we
are
assuming
that
we
need
to
have
capacity
on
the
t1s
or
t0s
actually
to
handle
all
the
full
cap
bandwidth
of
all
the
cards.
So
the
card
is,
for
example,
200
gigs.
We
need
to
have
sufficient
capacity
to
handle
200
gigs
on
that
on
the
d0.
B: At least in our network, you can measure the traffic as it comes in or something, but we do rely, in our network, on the fact that the VM sits on a server somewhere. It's already metered; it's metered before it exits. So it's not like they can just grab any amount of bandwidth; they're limited in what they can put into the network, whether they're talking to another VM or talking through this appliance device.
H
Do
you
dominately
mark
to
a
different
vxlan
based
on
the
bandwidth
changes
consumptions
dynamically?
Is
that
no
we.
A: No, we will not dynamically redirect traffic to a different VXLAN based on bandwidth changes. The VXLAN is more the isolation identity, so it's kind of a one-to-one mapping with the VNET. So, basically, take two VNETs: let's say one VNET belongs to Pepsi and the other to Coke. They have completely different VXLAN IDs. So even if, for some reason, the mapping somewhere is outdated and traffic gets redirected to, for example, a different card...
H: Isolation is good for the tenant-handling purposes. Is it possible to instantiate another VXLAN to traverse a different path, just to handle the dynamic changes in bandwidth per customer? Something along those lines, yeah.
A: So can you tell me more about the scenario of instantiating another VXLAN? Because, based on how we are using VXLAN, a VXLAN is really a tunnel ID, used for the outer encap. So it's more like a key of the communication; but how do you set it up, between which source and which destination?
A: How we are using this in SDN, and how we want the device to actually use it: I know that, for example (I'm also familiar with Linux and FreeBSD, this kind of thing; I used to spend lots of time there), you can run ifconfig and set up, say, a gre0 interface or this kind of thing, where you are setting up a GRE tunnel between a source and a destination, and the tunnel is kind of static.
A: And if you want more, you set up more. But this is not how we are using SDN, because SDN is more like: if you have, for example, one million VMs in your big VNET, which, for example, belongs to one of the big customers, those VMs can be all over the data center. From our point of view, on our side...
A: ...even though, from the isolation perspective and the configuration perspective, the VM belongs to a VNET that has one million devices, usually one VM doesn't communicate with all the other million devices, so we don't pre-create those tunnels. The only thing that we have, which in our spec is called the mapping, is more dynamic, in the sense that only when we see the traffic going to some specific customer address (which is called CA, the VM's internal VM IP) do we do the lookup: okay...
A: ...we are going to this CA; okay, this is the physical address, called PA, which should be put in the outer encap, and this is the GRE key or VXLAN ID that you should use. And this is basically dynamically put in the encap. So we don't pre-create those tunnels as lots of interfaces, the way you would in Linux.
A: We just have one single interface: hey, it's the interface for the outbound, let's say. And as the packet arrives and is being processed, we dynamically stamp the outer destination IP and the GRE key. So this is how we are doing this.
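A minimal sketch of this per-packet behavior, with an assumed mapping-table layout and made-up addresses: look up the destination CA in the mapping and stamp the outer PA and tunnel key on the fly, instead of pre-creating tunnel interfaces.

```python
from dataclasses import dataclass

@dataclass
class Mapping:
    pa: str     # physical (underlay) address of the node hosting the CA
    vni: int    # VXLAN ID / GRE key for the tenant VNET
    encap: str  # "vxlan" or "nvgre"

# Hypothetical mapping table: (VNET, customer address) -> underlay info.
MAPPINGS = {
    ("vnet-pepsi", "10.0.0.6"): Mapping(pa="100.64.3.7", vni=5001, encap="vxlan"),
}

def stamp_outer_header(vnet, src_ca, dst_ca, local_pa):
    """Build the outer header per packet from the mapping; no per-peer tunnels."""
    m = MAPPINGS.get((vnet, dst_ca))
    if m is None:
        raise LookupError("no mapping: packet goes to the slow path or is dropped")
    return {
        "outer_src": local_pa,  # PA (or card VIP) of the sending node
        "outer_dst": m.pa,      # PA of the destination node
        "tunnel_key": m.vni,    # stamped dynamically, packet by packet
        "encap": m.encap,
        "inner": (src_ca, dst_ca),
    }

print(stamp_outer_header("vnet-pepsi", "10.0.0.5", "10.0.0.6", "100.64.2.4"))
```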
H: I see, yeah. I was thinking (I can discuss it with you offline) about using VXLAN multicast, where some of the nodes can join a particular group and then, you know, we distribute traffic; and we can have a full mesh there, with as many nodes as want to join, and then we can create a completely different set of nodes, another mesh, and spawn different fabrics.
A: Yeah, so let me address the multicast part. One of the reasons you'll notice that none of the big clouds support multicast and this kind of thing is just that our network is too big, and if all the physical devices started advertising their own information (hey, I'm joining this group, or I'm leaving this group)...
A: ...this kind of thing, there would be too much chatter on the network; and this is also one of the reasons why implementing multicast in the cloud is just very, very costly from the traffic perspective. So there is no multicast in the cloud; it's only driven based on the mappings (hey, this is the source, this is the destination), and this allows us to save bandwidth on the physical network and also to optimize the traffic.
A: Multicast will just not scale given the amount of traffic that we are handling. It just needs to be direct traffic between two VMs; we cannot afford the big chatter that would be happening on the wire. So, unfortunately, we cannot use multicast on the physical network.
E: Thanks, Christina. Yeah, hi Michael, I have two questions from the HA point of view on what you described so far. Is it correct to understand that the SDN controller is the only one having knowledge of HA, and that there are no specific requirements such as syncing across, you know, whether it's an SDN appliance or the smart switch?
A: Yes, the SDN controller is the one that will always be scoping the control plane state, which means that, for example, if card A and card B, which is somewhere else, need to serve the same ENI, the SDN controller will be the one calling gNMI on card A and calling gNMI on card B to say: hey, configure the ENI, and here is the configuration of the ENI; and then, for example, update the mappings for the communication. And it will be doing this for both cards.
A: Right, and with this, you need to be aware that in the cloud, in a distributed system, there is eventual consistency. So there are no guarantees that the state will land within, let's say, milliseconds or this kind of thing; the state may just arrive later. The only requirement that we have on the card, from the HA perspective, is to ensure that when we, for example, plumb the goal state to the card...
A: ...saying, hey, you are card A and you are paired with card B, and these are your respective IPs, then those cards set up the channel between themselves for the purpose of basically flow synchronization, to be able to basically replicate the flows. Because, in a high-CPS system, we, from the control plane, will not be replicating flows.
A
This
needs
to
be
part
of
the
of
the
design
that,
for
example,
either
either
the
traffic
arrives
to,
for
example,
one
card
and
this
cars
after
processing
send
the
traffic
out
plus
send
additional
packets
to
the
to
the
backup
device.
So
that
was
also
the
flow
right
or
the
solution
that,
for
example,
the
traffic
arrives
on
the
card
a
and
the
traffic
the
card
a
actually
doesn't
send
it
back,
but
but
doesn't
send
it
out,
but
basically
send
the
card
b
and
after
card
b
and
card
a
establishes
the
flow.
A: ...then basically it goes back to card A, and card A sends it out. There needs to be some flow synchronization, and this we kind of leave open for the vendor, to provide the most efficient way to synchronize it. And the most important part with DASH (we will have separate documents on this, this kind of more advanced stuff) is more about the fact that the two cards may be in different states at this point, so as part of the flow replication we cannot...
A: ...so there needs to be some additional metadata sent as part of the flow replication, to make sure the entire flow is established in the same way in case of failover. Because, from the customer perspective: if a customer has lots of connections and there is some problem with a device and the device dies, the customer is kind of okay. I mean, not fully okay, but if they see some retransmissions of the SYN, sure; that sometimes happens during failover, maybe a few SYN retransmissions.
C: ...the ones that are published publicly are the ones we decided on when we initially uploaded, and that's what we were going to share at that point.
B: Sure; you know, if there are better ways or if there are different ways, then we can discuss that, yeah.
C
Absolutely
michael,
and
I
will
work
on
that
lisa
are
we
finished
with
yours?
Should
we
move.
E
Yeah
one
one
one
more
one,
quick
question
regarding
the
extension
for
the
operational
api,
michael.
If
you
can
elaborate
more
on
that
a
little
bit
more,
maybe
at
a
high
level
in
terms
of
what
at
the
area
where
the
attention
is.
Is
that
mainly
for
the
flow?
The
hardware.
A: The new API is just the way that we can basically control it through the control plane, because the SAI API is something that you guys implement, but it's not exposed as gRPC or anything like that, and it's not in this gNMI format. So we need a way to call the card remotely, and this is basically gRPC; the entire community standardized on gRPC and gNMI.
A: So it's more about creating an abstraction layer on top of the SAI interface that you guys already have. There will be an API that gets called, and there will be some agent translating this, driving the goal state, and calling the SAI API to, for example, configure it. Okay?
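A minimal sketch of such an agent's reconciliation loop, with placeholder sai_* functions standing in for whatever the vendor's SAI binding actually exposes: it takes the goal state received over gNMI/gRPC and drives the device toward it.

```python
# Hypothetical agent skeleton: northbound goal state in, SAI-layer calls out.
# The sai_* functions are placeholders, not the real SAI signatures.

def sai_create_eni(eni_id, attrs):
    print(f"SAI: create/update ENI {eni_id} with {attrs}")  # placeholder

def sai_remove_eni(eni_id):
    print(f"SAI: remove ENI {eni_id}")  # placeholder

def apply_goal_state(current, desired):
    """Reconcile device state toward the goal state received over gNMI/gRPC."""
    for eni_id, attrs in desired.items():
        if current.get(eni_id) != attrs:
            sai_create_eni(eni_id, attrs)
    for eni_id in set(current) - set(desired):
        sai_remove_eni(eni_id)

apply_goal_state(current={}, desired={"eni-1": {"vnet": "vnet-pepsi"}})
```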
A: So that's my point; we are currently working on this, and we will share the first version of this kind of new gNMI interface most likely around the end of January or the beginning of February, yeah.
I: Oh, thank you, Christina. Hi Michael, I'm from Fungible. Thanks for the clarity on, you know, the SDN controller inserting flow rules into the card. So my question is: in this path, where do you see SONiC as a NOS fitting in? Do you expect SONiC to be running on the DPUs?

A: Yes, there is always... okay, and I will tell you...
A: ...let me finish and I can add some more stuff about this one.

I: Yeah, please go ahead; I think you're answering my question already.

A: So the answer is yes, and the main reason is that SONiC also provides more. For example, we will not be specifying information about telemetry; SONiC already has information about basically how to push telemetry and that kind of thing. It has lots of agents that actually provide this kind of basic configuration.
A: So SONiC is more like an open-source ecosystem, where, as part of the SONiC OS, Guohan will prepare: hey, this is the minimal set; for example, we need this, but we don't need this. It will run that, and on top of it we basically add an additional container that is responsible for the SDN. But at the same time we will not reinvent the stuff that is already in SONiC that we also need.
A: So, for example, we need telemetry, some packet capture, diagnostics, this kind of thing. SONiC already has it, so there is no point for us to reinvent it from scratch. That's why we kind of expanded the SONiC ecosystem from...
A: ...the physical network, and we are adding SDN capability. You don't need to compile it in if you don't want the SDN capability; and if you only want the SDN capability and don't need, for example, some switch-related stuff, you can basically remove some parts: remove some containers or add a different container.
I: Okay, okay, understood. And just a follow-up question to that: assuming the scenario where a physical server may have multiple SmartNICs in it, do you see each of them as different endpoints, or are there cases where you would have the traffic coming from the VMs be load-balanced across both the NICs, instead of having them act independently?
A: Yeah, so let me answer the question about the load balancing and this kind of thing. One thing that we really care about in the cloud is availability, and not having something which is called a SPOF, a single point of failure. So in this case, for example, even right now, when we are balancing, we usually have a server that is connected to, let's say, two ToRs, to basically have traffic go through one ToR and the second ToR in case one ToR dies. And it's the same here.
A: If, for example, there is a switch that has multiple cards, the load balancing it provides will most likely not be between multiple cards on the same switch, because that just doesn't provide any availability for the customer. If the switch dies (because it's connected to one power outlet, and there is no power on that power outlet), the customer has no connectivity. That's why, the same as traffic is balanced across multiple ToRs...
A: ...in the same way it will be balanced across multiple cards, yes; but it will mostly be, basically, one card lying in one switch and a second card that is, for example, in a different switch in very close proximity from the ToR perspective, latency-wise. The paired card will definitely be on a different switch.
A: The idea is to have basically two cards working in active-active mode, in a way where, for example, the transformations are programmed to both; and the two cards will need to have logic between themselves to basically sync their flows, so the controller will not be syncing the flows. There is just very high CPS, high connections per second, let's say a few million, this kind of thing, and the controller is just too slow to sync every single new flow that is created.
A: This needs to be one card talking with the other card over some direct channel to sync those. So we will not be doing the flow syncing; this is part of the card's responsibility. We will only be syncing the rules: let's say, for example, you want to have these route rules and this configuration, because the configuration doesn't change often. Configuration usually changes when the customer specifies something; let's say, for example, the customer wants to add an additional ACL or modify an ACL. So the configuration changes are relatively rare.
A: The most frequent configuration changes that we see are in the case where the entire virtual network is connected with, let's say, ExpressRoute or an IPsec gateway to the on-prem network. Sometimes the on-prem routes change relatively often, so sometimes we need to update them every, let's say, 30 seconds; so, change the routing rules. But it's not very high-performance updating; those updates are relatively rare.
A: I would say, from this point of view: because the cards will have the same goal state and the same kind of routing rules programmed by the control plane, and both will be in active state, and they will be syncing the flows between themselves, if one card dies, the other card, which is advertising the same VIP (because they were in active-active state), automatically picks up, because the ToR will just continue using only that one. So in the HA...
A: ...the control plane is only responsible for making sure that the control plane rules are provisioned to both cards. But then the data plane is all within the cards; plus monitoring the health of the cards and, if one card dies, raising some alert on our side so that we can, if it cannot be recovered automatically, try to recover it manually or potentially RMA the card; and if the card is RMA'd, automatically select a second backup card that can re-establish the pairing relationship with the other card.
I: Yes, understood. I think, you know, it sounds like this is an area we probably have to explore more, in terms of architecting it better from a SONiC perspective. We need to think about: does it make sense to have two independent instances of SONiC which are talking to each other to create this awareness, or to have one instance of SONiC with independent components to manage two DPUs, you know, as necessary? I'm not sure at this point if it's clear, if we have two instances of SONiC, how they will talk to each other to have this synchronization built in. I'm not...
A: Sure; so SONiC doesn't talk with the other SONiC from that point of view, because SONiC is software. It is too slow to provide flow replication.
A: So the flow replication needs to be a data path feature, because it needs to run at almost line rate. SONiC is more for the control plane: SONiC is more about providing configuration, getting metrics and statistics, this kind of thing, and monitoring the health. SONiC will not be implementing the P4 pipeline or the data path, that kind of thing; so, configuration, yeah.
C: Okay, let's go to Jai.
G
So
sort
of
a
segue
into
what
suresh
was
asking
it's
a
little
bit
of
a
deeper
question.
So
let
me
a
little
bit
spend
like
a
minute
just
on
the
on
the
concept
I'm
asking
right.
G: In the hardware, whenever we create state, the state is created either naturally by a control plane update, or some state is created because of the data path, because of data packets.
G: Most of the time (I mean, it depends on the feature) that will need some kind of CPU intervention to do some processing, which we call slow path processing, and then that state is constructed and is programmed into the hardware.
G: So my question is, and take an example: let's say that we are doing, you know, flow learning or some kind of learning, but before that there is admission control, there might be another packet transformation which is needed, and then that flow state is created.
G: So my question is: do we have a list of items in the feature set which need offload? Because if you push that function into SONiC or to the SDN controller, where the state is constructed and then pushed back, that's just not a viable solution; ideally you want it as close to the hardware as possible. So do we have an understanding, or a list of items, which need such functions, an expectation from the hardware that, you know, these are the things: I'm doing flow learning, this kind of state is needed?
A: Yeah, so let me answer this with regard to the controller and the control plane. Today, the piece managing this in software is definitely the control plane side, and it decides:
A: Okay, let's do this transformation, because those conditions are met; and, for example, is the ACL outcome pass or not; and based on this, it creates the flow: this is how the flow should look. And then it offloads this flow to the FPGA, which is hardware. Then, usually, when packets are coming, the packets first go to the hardware, which checks if there is an existing flow, and if yes, processes them and does this kind of thing.
A: This kind of offload, processing in the CPU and then offloading the already-constructed flow to hardware, is already what we have, and it solves the bandwidth problem; but it doesn't solve the high-CPS problem, which is the actual initial evaluation of the ACLs: knowing, when the customer sends the packet, basically, should we drop it, should we allow it? It doesn't solve the part regarding: okay, this is the address, for example, that I'm going to...
A: ...what should the outer encap be, based on the mapping, this kind of thing; because that still needs to be done, and it's still being handled in the CPU right now. So the main reason we need hardware here is not really to provide the offload of the already-constructed flow, but also, or mostly, the first part, which is basically the rule processing.
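A schematic of that split, under assumed names: a flow-table miss triggers the expensive rule evaluation (the high-CPS part), and the constructed flow is then offloaded so subsequent packets take the fast path.

```python
# Schematic of the split: slow-path rule evaluation on a flow-table miss,
# then offload so later packets take the hardware fast path. The names and
# the rule-evaluation callback are placeholders, not the actual pipeline.
flow_table = {}  # offloaded flows: 5-tuple -> cached decision

def process_packet(five_tuple, evaluate_rules):
    """Fast path on a flow hit; full rule processing plus offload on a miss."""
    decision = flow_table.get(five_tuple)
    if decision is not None:
        return decision  # established flow: handled entirely in hardware
    decision = evaluate_rules(five_tuple)  # ACL groups, mappings, transforms
    if decision["action"] == "allow":
        flow_table[five_tuple] = decision  # offload the constructed flow
    return decision

flow = ("10.0.0.5", 51000, "10.0.0.6", 443, 6)
print(process_packet(flow, lambda ft: {"action": "allow", "encap": "vxlan"}))
print(process_packet(flow, lambda ft: {"action": "deny"}))  # now a fast-path hit
```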
G: Right, right; so, if I may interrupt: you know, in the DASH appliances there's no CPU sitting there, right? The appliance itself is just doing the power management. But in the smart switch concept, you have a pretty beefy CPU sitting there, in the switch itself, and there's a lot of processing which can be optimized across the switch and the DPUs.
G: So my ask here is probably: is it possible to have that list of items which you think are good candidates to offload into the data path?
A: So, based on what I know for sure from the offloading, for me it's like this, I would say: I don't care what is really offloaded to hardware versus what is processed in software on a CPU, whether the CPU is on the switch or on the card. I don't really care so much; I care mostly about hitting this high CPS.
A: But the one thing I want to emphasize with you guys, from the switch perspective, is that we definitely need to think from the point of view of availability, because we know the cards die. Even right now we are playing with basically some cards on our side; let's say, for example, one card died because of something. And the idea, also from our point of view, is: what is our availability story as the cloud?
A: If the availability story is always the entire device, that it dies, because it can die, then that's a different availability story than one specific card dying. And if you guys have some awesome solution where, for example, on the card you can provide, let's say, 4 million CPS per card, but using the big processor gives, for example, let's say, 8 million CPS per card, those kinds of things...
A: ...that's the stuff to consider. But at the same time, for example, if people can deliver, let's say, four million CPS per card and give us an availability story of one single card being the single point of failure, not the entire device, that's also different. And there's also a question around this shared consideration, the beefy CPU: can we do this in a way that one card being highly utilized does not jeopardize the other card's performance?
A: Those are also open questions. So I don't have a clear answer to this, because we never tried using a very big CPU this way. Actually, let me correct that: we kind of tried it, because we have a chassis right now that has basically multiple FPGAs with normal NICs, and we are doing all the processing in VFP, which runs on the host, which is also a beefy host; but, based on our testing, it doesn't scale as well as dedicated hardware.
C: ...so I have that, and we can... you can reach out to me, just FYI. Now, is it Silvano or Gerald in the order of questions? Gerald, I think you dropped, but did you have a question, Gerald?
B: I wanted to comment on the "we don't care", and also answer one question about why we do SONiC first on the NIC, because it was a choice, and Guohan brought up a lot of good points on that. But you do realize that in the SONiC case we already have FIPS compliance, and it's already deployed in government networks, and that would be a big thing to lose. Also, in the SONiC case, it's already built upon container management, and all the management activities around that allow us to add and delete containers.
B: So that was just a couple of things. The second thing is: this program started with the premise that going to software, for any reason, is why we're having CPS issues; and so our implementation that we have today doesn't go to software even for exceptions. There are no exceptions that go to software, and that's why we get high CPS; and the reason we can do that is...
B: ...we implement it in P4, but just as the behavioral models; and by doing so, you can do everything in hardware. You never need to go to the software. Should you choose to go there, you're going to have HA issues and whatnot, and if you solve them, we're not preventing you from doing it; but we are giving you the recipe so that you don't need to go to software. You can do everything.
B: We don't see a SmartNIC on the market that could not process what we're talking about in the hardware data path, so that is the goal. But if you need to, for some other reason, or you have some secret sauce that you want to use, we don't prevent it. You just need to provide two things: the exact same behavior, including HA, and the CPS, because without the CPS, that's what this whole project... that's what it's about.
G: I agree with you; I mean, that's what my question was originally. So it looks like it is a requirement to handle the slow path processing in the data path, and then my question is: what kind of requirements? I was looking for some details on that; right now it's very, very abstract. I was about to say Fungible, but...
B: ...they are human-readable, and we already have, I think, three documents up there; but we'll get the HA document up there, probably within the next month for sure.

G: Okay, got it; sorry.
K: Yeah, I'm here; can you hear me? Yeah, okay. So, Michael, you mentioned, you know, that route updates, ACL updates, maybe mapping updates are relatively infrequent; but I would still like to understand what impact those updates have on established flows. For example, if an ACL was queried in order to insert a flow into the flow table, and now that flow is established, but then the ACL changes...
K
Is
there
an
expectation
that
that
flow
needs
to
be
re-evaluated
with
the
newly
programmed
acl
or
is
the
expectation
that
the
flow
as
it
existed
when
it
was
established,
it's
sufficient
to
maintain
that
flow
in
the
flow
table.
A: Yeah, so let me address this. Because we are designing something to be on par with the current SDN and with the behavior customers are experiencing right now: right now we do reprocess the flows which are already created, but what I'm talking about, what we're doing...
A: ...we are doing this in a slow path. We don't guarantee the customer that if they plumb the ACLs or change the ACLs, there's an immediate effect within milliseconds or whatever. So it's more that, let's say, some ACL changes to deny: we do reprocess those flows, but they are being reprocessed in a kind of slow way. And I can tell you how we are doing this right now in our path; it's usually in VFP.
A: There is something like a generation of the config, a generation ID of the config or of the mapping, this kind of thing; and when a packet comes in on an existing flow, there is a quick check against the generation. If the generation is different, then basically this packet gets reprocessed and the flow is dropped.
A: So this is how we are handling this; and there is some low-priority thread doing this in the background for the other flows too, to drop those flows, or we sometimes let them expire, this kind of thing.
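A sketch of this generation-ID technique with assumed field names: each flow remembers the config generation it was created under, and a mismatch on a later packet forces reprocessing and drops the stale flow.

```python
# Illustrative generation check; the field names are assumptions.
config_generation = 7  # bumped whenever ACLs or mappings change
flows = {}             # 5-tuple -> {"gen": generation at creation, "decision": ...}

def on_packet(five_tuple, reprocess):
    """Quick generation check on established flows; reprocess on mismatch."""
    entry = flows.get(five_tuple)
    if entry is not None and entry["gen"] == config_generation:
        return entry["decision"]  # config unchanged since the flow was created
    flows.pop(five_tuple, None)       # stale flow is dropped
    decision = reprocess(five_tuple)  # full slow-path re-evaluation
    flows[five_tuple] = {"gen": config_generation, "decision": decision}
    return decision

flow = ("10.0.0.5", 51000, "10.0.0.6", 443, 6)
print(on_packet(flow, lambda ft: {"action": "allow"}))
config_generation = 8  # an ACL changed; the next packet gets re-evaluated
print(on_packet(flow, lambda ft: {"action": "deny"}))
```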
A: So I guess the expectation is there, mostly because we need to make sure that the current experience is preserved for the customers; because the customer just basically deploys the VM and says, "I want this VM to be faster". They have no clue that we changed the stuff underneath.
C: Okay, so we have three minutes left, and Michael, the 10-minute presentation has turned into this, so thank you for coming. Could we take maybe just one more, from Narayan from Broadcom, if it's quick?
L: Yeah, thank you, Christina. This is, I think, just a clarification. So, Michael, in this diagram that you have: if I were to step back and look at the slow path functions, they are all match-action lookups, right? ACLs, yes; any route lookups.
A: So, the actions, but don't forget the match. A few things: in ACLs we have different ACL groups; so, for example, the packet needs to pass through all the groups and be allowed by all of them to, for example, finally leave. But, for example, there are some actions which... so, basically, there is matching and there is an action.
A: The matching is, for example: hey, I'm matching this specific flow based on, let's say, the VNI ID, or the VNI ID plus, let's say, the destination address, and this kind of thing. But then the action can be either a static action that says, hey, always basically apply this encap and this kind of thing; or it can be a second type of action, which is, for example, doing the mapping lookup.
A: So, for example, the outer encap is really built using dynamic IPs, depending on the destination to which you are going.
A: The outer encap will be constructed in a different way based on the mapping; and in the Private Link scenario, sometimes this encap also does an additional transformation of the destination IP address from IPv4 to IPv6. So the matching is definitely condition-based, but then there is usually a second-level lookup, for example for the routes, based on the mapping, based on the destination IP, and this kind of thing.
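A sketch of that two-level structure, with assumed rule layouts and a naive string-prefix check standing in for real LPM: the packet must be allowed by every ACL group, and a matched route's action is either static or resolved through the mapping table.

```python
# Illustrative two-level evaluation; rule layout and the naive prefix
# match below are assumptions standing in for real LPM/ACL structures.

def evaluate_group(packet, group):
    """Return the action of the first matching rule; implicit deny otherwise."""
    for rule in group:  # rules assumed sorted by priority
        if packet["dst"].startswith(rule["prefix"]):  # naive stand-in for LPM
            return rule["action"]
    return "deny"

def allowed_by_all_groups(packet, groups):
    """A packet must be allowed by every ACL group attached to the ENI."""
    return all(evaluate_group(packet, g) == "allow" for g in groups)

def resolve_route_action(action, packet, mappings):
    """Route actions: either a static encap, or a per-destination mapping lookup."""
    if action["type"] == "static_encap":
        return {"outer_dst": action["pa"], "tunnel_key": action["vni"]}
    m = mappings[packet["dst"]]  # dynamic: outer header depends on the dest CA
    return {"outer_dst": m["pa"], "tunnel_key": m["vni"]}

pkt = {"dst": "10.0.0.6"}
groups = [[{"prefix": "10.", "action": "allow"}]]
if allowed_by_all_groups(pkt, groups):
    print(resolve_route_action({"type": "mapping"}, pkt,
                               {"10.0.0.6": {"pa": "100.64.3.7", "vni": 5001}}))
```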
L
Sure
so
I
would
summarize
what
you
said
as
conditional
matching
and
some
sophisticated
resolution
right,
but
there
are
other
things,
for
example,
let's
say
in
as
part
of
the
packet
transformations
you
need
to
do
that.
Okay.
Now
that
requires,
let's
say
for
a
first
packet,
that
you
assign
a
new
mapping
from
a
pool,
and
this
is
a
function
that
cannot
really
be
simulated
as
a
match
action
right,
because
now
you
have
a
pool
of
ip
addresses
and
ports.
L
You
need
to
know
which
mappings
are
free
and
which
are
used,
and
you
need
to
pick
a
new
mapping
to
assign
it
to
a
new
flow
when
the
flow
dies.
You
need
to
be
able
to
release
that
mapping.
Listen.
A: Let me answer the NAT part. So we explicitly didn't... right now we specify more like static NAT, because this is what we need for creating the basic VIP communication that we all have right now. We have three different kinds of modes. One mode is that, basically, we are assigning something which is called an instance-level public IP, where all the traffic coming in on all the ports, for example TCP and also UDP...
A: ...always gets statically NATed, just the source IP and destination IP, to the specific, let's say, VM CA, and there is no need to do this dynamic port lookup. Then there is also a second type of NAT where, for example, traffic arrives on port 80 and needs to be redirected to port 8080 locally; it is also a static type of mapping and NAT, and the reply coming back from port 8080 needs to be mapped back to port 80, also statically.
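A sketch of the two static modes just described, with made-up addresses: a 1:1 instance-level public IP NAT that never touches ports, and a fixed port redirect with its symmetric reverse mapping. Neither needs dynamic pool allocation.

```python
# Mode 1: instance-level public IP, a 1:1 address NAT across all ports.
ILPIP = {"public": "203.0.113.9", "vm_ca": "10.0.0.5"}

def nat_inbound_ilpip(pkt):
    """Statically NAT the destination address only; ports pass through."""
    if pkt["dst_ip"] == ILPIP["public"]:
        pkt = dict(pkt, dst_ip=ILPIP["vm_ca"])
    return pkt

# Mode 2: static port redirect (e.g. :80 -> :8080) with its fixed reverse
# mapping applied to replies. No pool, no dynamic allocation in either mode.
PORT_MAP = {80: 8080}
REVERSE_MAP = {v: k for k, v in PORT_MAP.items()}

def nat_inbound_port(pkt):
    return dict(pkt, dst_port=PORT_MAP.get(pkt["dst_port"], pkt["dst_port"]))

def nat_reply_port(pkt):
    return dict(pkt, src_port=REVERSE_MAP.get(pkt["src_port"], pkt["src_port"]))

print(nat_inbound_port(nat_inbound_ilpip(
    {"dst_ip": "203.0.113.9", "dst_port": 80, "src_port": 51000})))
```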
A: The only place where we have dynamic NAT is more like the outbound SLB scenarios, and right now there are two things there. One thing is that we are not blocked on this, because there is an existing solution that the SLB team has, which means that right now we have a solution for them.
A: So we are not blocked on all the other advanced SLB scenarios. We do have a discussion on our side about bringing some basic, potential NAT management to the hardware, but this is a second step in the SDN. So even if we deliver, right now, a first version that just supports SDN functionality with static NATing of the IPs and ports, this will light up the majority of the scenarios, and the complex scenarios we'll be going through later.
C
Okay,
guys
and
who
has
10
o'clock
meetings,
I
hate
to
break
it
up.
Michael,
do
you
have
a
ten.
A
If,
before
christina,
I
would
like
to
finish
one
person
at
least
yeah.
A
No,
no,
I
think
nirajan,
I
think
you
didn't
complete
yours
or
you
completed.
L
No,
so
the
next
follow-up
question
was:
you
know
if
there
is
a
phase
two.
Is
there
a
timeline
for
that
and
in
addition
to
that,
are
there
any
other
functions
which
require
some
something
in
the
slow
path,
processing
that
doesn't
fit
the
match
action
paradigm?
L
I'm
trying
to
understand
the
broader
scope
of
where
this
project
is
going,
and
you
know
where
new
requirements
could
come
in
later.
C: He's trying to get VNET-to-VNET out the door, guys; I mean, we need to focus on that, but I can appreciate looking forward to the different scenarios. Is there a tiny bit of time to answer Renato's question about ICMP processing? Yeah?
M: The NAT that's done in Private Link: I'm expecting that to be done by the SDN control plane, setting up those NATs for, you know, Private Link, as opposed to being done in hardware. Is that... so, in other words...
A: So all the NATs for Private Link are mostly static stuff, so you plumb it as a mapping, and then in the hardware you can optimize it, yes. But there is also the part where, as part of the transformation for Private Link, some IPv4 address, for example, needs to be extracted and put in a certain place when the IPv6 address is being constructed; and that is normal, it can be done.
A: As for ICMP, it's not really my strong expectation, and I'll tell you the reason: the ICMP redirect is an optimization on top of a connection that is already established. So if you can quickly switch the traffic, sure; if not, then a few additional packets will go through the MUXes, not a big deal; because the ICMP packets, also on our control plane, sometimes get lost (for example, the ICMP redirect packets), and the MUXes need to retransmit them.
A: So I would say: if you can do this in hardware, that's super awesome. I don't think we have any strong requirements or guarantees to make sure it's done in hardware.
C: So I want to say thank you to Michael for coming and talking to all of us; Michael, thank you for your time. I don't expect... you know, we'd want you here every week; I know you have to deliver the functionality, but, you know, maybe in a couple of weeks or so. And yeah, so thank you, and thank you...
C: ...everyone else for attending, and for all the great engineering questions; I appreciate it. I'll take care of the recording and release you to your 10 o'clock meetings, and I'll type up notes like I usually do, by Friday.