From YouTube: DASH Workgroup Community Meeting 20220518 (May 18, 2022)
Description
May 18, 2022
SONiC on DPU progress
Demo
Q&A
A
And today we have quite a few on the call, and we are going to cover a few items with Gerald; we're covering SONiC on the DPU progress today. So we have the NVIDIA team: Liat, Alexander, Marion, etc. Anyone else who'd like to be introduced as we hop into slide three? And from Microsoft: Gerald, me, James, and maybe Prince will be joining.

So the agenda for today is: we would like to have NVIDIA present some information and give a demo of SONiC on the DPU, and Gerald Millhoppen will talk first about why SONiC and what we're working on there. We were hoping that would take about eight minutes, and then we'll do a demo with NVIDIA; we've allotted 20 to 30 minutes for that, plus I assume there will be some Q&A at the end. So, any questions on the agenda today, or should we get started?
B
Christina, hi. Maybe after NVIDIA we'll talk just a little bit about automating the pipeline; I think it'll actually be a nice segue after that.
A
Okay, great. So, moving on, I'd like to hand over control to the NVIDIA team; we have Alexander speaking and presenting right now. Would you like to take control, Alexander?
C
Often the CLI might look the same, but their operations were different, the monitoring was different, their OSes were different, and it took a lot of cost and people to manage them, because they really were different, and our software systems actually had to handle all the little variances and then handle all the security.

So what SONiC gave the switch, it will give the DPU: the ability for a major cloud, or even a large enterprise, to incorporate this technology from multiple suppliers without undergoing those high costs of operations and management, and that's very important. Some people will say, "Oh, I don't want to do that because it's hard," but then when they don't get the business down the road, they complain; there's a high cost to bringing in something with a different OS.
C
But one of the things beyond just the operational benefits and the cost to us to onboard is security. With security, imagine having three different suppliers, but they're all over the world (and they will be), in different places and in different quantities. Now we have to monitor three different operating systems, plus a SONiC operating system, and go and select which boxes need to be upgraded or patched or whatever; that's pretty cumbersome. We spend a lot of time on security and patching, and on making sure that our customers are safe, and we can't actually handle that many operating systems.
C
It takes a fairly large team already to go and do this work, and it's audited by the government, and quite frankly we have zero tolerance for security issues in our network. Keeping to one operating system allows us to basically repatch the entire fleet all at once when necessary, and we have tools in place to do that. But having multiple different types of operating system, where you have Debian on the switch and then a different operating system on the DPUs, different for every vendor, and different vintages of the operating system across multiple implementations: that's just not sustainable in the cloud, and it's one of the reasons SONiC has been successful in the cloud. And then we have a lot of government clouds; SONiC switches are in all the government clouds and are widely accepted by the government today, and by building these smart switches...
C
...anything you do in the cloud. So the other thing is that when we started SONiC, besides all the operational and security benefits, we started SONiC based on the fact that it was going to be containerized from day one, and we have a lot of infrastructure for being able to maintain containers and swap containers without hits to the underlying infrastructure. That includes things like BGP: we can swap out the BGP; if we don't like Quagga, we can swap it out for something else, and that ability is really golden to us. And so over time SONiC developed and matured and added more and more containers along the way, and we want to take advantage of that.
C
But we also want to take advantage of the test infrastructure that we have for SONiC. Yes, it needs to be updated, and thank you, Keysight and others, who are working on the QA part of this; but they have something to build upon, and that is invaluable, because we want these switches in our network.
C
GA, you know, by next year: we want them proliferated, thousands every quarter through our network. We can't afford huge delays building infrastructure and trying to rebuild container management and all those types of things; these are the things we get, which are very mature now, from SONiC. Which is why, in this group here, we call it Disaggregated APIs for SONiC Hosts: because it is based on the SONiC infrastructure.
C
Underneath, as we know, just like SAI, under the covers, even though you're running the same operating system, etc., the actual data path software is separate for each vendor; we're not insisting on any one approach or the other. But the overarching management of these units is going to be based on SONiC, and you will see today that not only is it achievable, but Mellanox is going to show you exactly what they've done so far, which is pretty significant. You'll see more demos along the way, and all of this knowledge, of course, will be shared with the community as we go along.
C
This is open source, so anything that is on the SONiC side of things is actually open: any container that is developed for DASH will be open, and you get a lot of benefit from that. So that's just my little pitch for why SONiC. But, like I say, DASH means Disaggregated APIs for SONiC Hosts, and this group is based on SONiC infrastructure; we decided that a long time ago, at the beginning, and we're not going back on that.
D
Thanks Gerald, very well said. I just want to add one more thing: this is actually not only within Microsoft. When we talk to many of the SONiC adopters in the industry, it resonates with them that the solution needs to be generic, so that it can be long-term. It's not a one-time engagement with one technology from one company; they can apply it for a much longer term.
D
So this technology enables acceleration using different technologies in general, through a standard API in a standard way, which enables it to go much further and be adopted by many other companies and users who are interested in this. Yeah, thanks.
C
Okay, so let's get on to the excitement for the day. Before getting to Q&A, I think Mellanox is going to show us something that they've achieved over the last little while.
A
Right, so we'll have Alexander present and share.
F
Okay, good. Hello everyone, my name is Alexander and I'm leading development of DPU SONiC on the NVIDIA side, and today I'm going to present what we have achieved so far.
F
So here on the diagram we have the DPU SONiC architecture. This diagram represents the current status of our implementation, and it covers only the underlying API implementation so far. As you can see, it is very similar to what switch SONiC has, but with a reduced number of components, which is what is going to be supported. In the diagram we have highlighted components with two colors.
F
The blue one is for the existing components that we took from SONiC, which are working without any changes. As you can see, we have the BGP, LLDP, SNMP and telemetry containers, which are exactly the same as in, let's say, vanilla SONiC. We have the SWSS container, which right now is the same as in vanilla SONiC but with a reduced set of features. We have the database container. The only two components so far that are modified in order to run DPU SONiC on our BlueField modules are the syncd container and pmon. So in the syncd container we have the application, which goes without changes and is exactly the same as in switch SONiC, and here we have...
F
...the SAI implementation, or the integration with the DPU SAI implementation, and our NASA SDK, which is actually controlling our DPU. So, in order to be able to run, we added our implementation of SAI, and we integrated our SDK into the container, together with all the underlying drivers that are controlling the DPU, and...
C
G
E
SDK? No problem; that's the acronym for our DPU SDK. So the SAI layer is the same SAI API that is used in vanilla SONiC, plus the DASH API; the SAI implementation is the NVIDIA SAI implementation, and all the APIs that are defined are implemented on top of the SDK; and NASA is, let's say, the SDK implementation, which is integrated into syncd.
E
If you are going to have another DPU, one not based on NVIDIA, the main work that those vendors need to do is to create their own syncd container. Part of that, as Alexander mentioned, is the SAI: that's the generic part; and part of it is vendor specific. If you go to the switch side, you will see the same.
E
C
F
And in the pmon container we have all the pmon daemons running, which are vendor independent, and we have our own implementation of the Python plugins.
F
Let's move on to implementation details a little bit. For DPU SONiC we have a reduced set of components, and the list of components that need to be supported by the DPU is defined in the HLD document that Prince presented some time ago. Here we can see that on the DPU we are running eight containers.
F
Among the eight containers we have telemetry, SNMP, LLDP, SWSS, BGP, pmon and database, and also all the other components that SONiC includes; providing the SONiC host side is not part of the DPU work that we are doing right now. We can also check the list of features that should be supported by DPU SONiC, and here we can see that, for example, the router advertisement container, teamd and the DHCP relay containers are disabled.
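For readers unfamiliar with how SONiC expresses this kind of per-image feature selection: container enablement is normally driven by the FEATURE table in config_db. The sketch below is an assumption of roughly what the DPU image's table would contain, based only on the containers named in the talk; the exact entries and state values were not shown in the demo.

```json
{
    "FEATURE": {
        "database":   { "state": "always_enabled" },
        "swss":       { "state": "enabled" },
        "syncd":      { "state": "enabled" },
        "pmon":       { "state": "enabled" },
        "bgp":        { "state": "enabled" },
        "lldp":       { "state": "enabled" },
        "snmp":       { "state": "enabled" },
        "telemetry":  { "state": "enabled" },
        "radv":       { "state": "disabled" },
        "teamd":      { "state": "disabled" },
        "dhcp_relay": { "state": "disabled" }
    }
}
```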
F
And now let's talk about the memory consumption of SONiC. We did some measurements, and what we wanted to understand is how much memory SONiC consumes.
F
We did the investigation in stages. Here on the slide we can see how much memory is used by the SONiC base OS; this is the case where no Docker containers are running and the Docker daemon itself is disabled. We can see that dockerd is not running, and the free command shows us that the base SONiC OS consumes only 159 megabytes.
F
This base OS includes all the kernel drivers that are required for our BlueField DPU to be able to run, so it's a base Debian plus the additional drivers that are required for the DPU.
F
This slide shows the memory usage of another, NVIDIA-specific, Debian image that we are running on the DPU, and as you can see here it has 721 megabytes in use, which is actually much more than what the base Debian OS in SONiC consumes. It's maybe not a very accurate comparison, but it shows that SONiC doesn't consume that much.

Next is the Debian OS plus dockerd running, and we can see that the Docker daemon consumes around 50 megabytes when it's just running, without any containers.
F
…gigabytes of memory, including the memory consumed by Debian and the Docker daemon; so we can calculate that the containers themselves, with all the applications running inside, consume around 1.1 gigabytes.
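Putting the numbers quoted so far together (the overall total on the slide is inaudible in the recording, so the figure below is derived from the stated components rather than quoted):

```python
# Rough reconstruction of the memory breakdown described in the demo.
# Stated figures: base SONiC OS ~159 MB (free, with dockerd disabled),
# dockerd alone ~50 MB, all application containers ~1.1 GB.
base_os_mb = 159
dockerd_mb = 50
containers_gb = 1.1

# Implied overall footprint of the running DPU SONiC image (an estimate,
# since the exact slide total is not audible in the recording):
total_gb = round(base_os_mb / 1000 + dockerd_mb / 1000 + containers_gb, 1)
print(total_gb)  # ~1.3
```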
F
We can also take a look at docker stats. This command reports the resource usage of the Docker containers, and we can see how much memory is used by each container. It actually depends on how many applications each container is running inside, but the usage is quite similar between all the containers.
F
Here we can see the result of the show platform summary command. We have our platform, our SKU, the ASIC, the ASIC count, the serial number and the model number, and we can also see that here we have a new field, called switch type, and the switch type is set to dpu.
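For those following along without the slides, the output being described has roughly this shape; the field values here are placeholders, not the ones shown in the demo:

```text
admin@sonic:~$ show platform summary
Platform: <platform-name>
HwSKU: <hardware-sku>
ASIC: <asic-vendor>
ASIC Count: 1
Serial Number: <serial>
Model Number: <model>
Switch Type: dpu
```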
H
A quick question on this memory consumption: what was the initial configuration? Did you have...
H
E
G
E
F
Next we have the result of the show interface status command. We can see the two interfaces, the lanes configuration for the interfaces, and the speed and MTU.
F
J
You mentioned the architecture, right; unlike regular SONiC, flow programming is very important for a DPU, right? I don't see anything like that mentioned in the architecture. Did we design that part yet, or is the current scope just to bring up the interfaces and things like that?
E
That is correct. The current status is that we had to make changes in order to have SONiC running on top of Arm, and SONiC running on BlueField, and then to make the interface configuration fully aligned with what SONiC expects from a user interface.
E
Now, there are additional phases which are still in progress. The next one is to show that we have traffic and a BGP peering that is working; and later on, based on the DASH HLD, you will see how the overlay, or how the VNET-to-VNET flow, will be integrated into SONiC.
E
It is possible that you can still use everything in syncd; you have everything needed in order to control the DPU, so you can still control it, even if it's not SONiC, just for testing or basic interoperability. But later, all of the flows that you are talking about will be available through the DASH API, through SWSS, through syncd, as yet another API, or whatever will be the way to configure the DPU itself.
J
Right. I mean, if you keep the kernel handling flows, it's really not going to scale anything, right? I believe other vendors are using VPP and other open-source tools in order to handle flows better. So I hope we consider the scalability.
K
J
G
We are trying to integrate with SAI DASH, right; so you're starting with a DPU on which you can implement SAI DASH. Now, how you implement that, whether in software or hardware, is per vendor: some vendors can do it in hardware, some need some CPU assistance, but that is actually underneath, right?
J
Okay,
I,
the
flow
programming,
is
not
really
clear.
Yeah,
I
think
when
you,
when
we
there
go
there,
maybe
we'll
discuss
better
yeah.
C
There are many other meetings where we talk about flow processing, and many documents you can read; there are a lot of documents, and if you go read them, then come to the other meetings where we talk about the overlay and the behavioral models. When we talk about the behavioral model, we're talking about the overlay, so you can come to those meetings; make sure you read the documentation that is in place when you come. This is ongoing. I think what NVIDIA has shown is amazing; it's a great start.
C
They
ported,
you
know
sonic
to
their
gpu.
The
infrastructure
is
there.
They
built
some
some
customized
containers
and
and
shown
the
community
how
this
can
be
done
and
they
did
it
in
fairly
short
period
of
time,
in
parallel.
Nvidia,
of
course,
is
just
like
others,
working
on
the
overlay,
behavioral
models
and
the
hld
and
as
those
do
complete,
I
I'm
sure
you're
going
to
see
a
very
equivalent
demo,
and
you
know
the
thing
that
you
should
understand
is
that
at
least
function
functionally.
C
We
have
this
working
in
our
network
in
pre-dash,
so
it's
not
like
it's
not
like.
We
have
to
guess
whether
this
might
work
it
does
work.
This
is
the
public
version
of
what
we're
trying
to
put
out
there
that
you
know
will
be
shared
with
the
entire
community
and
the
community
come
together
and
see
all
of
what
we
learned
over
the
last
year
or
two
and
standardize
on
it
and
be
able
to
produce
things.
C
You
know
products
that
other
clouds
can
use
easily,
but
whether
it
works
or
not
is
not
the
question
we
know
it
works.
This
is
about
the
standardization
or
in
the
open
community
about
how
to
do
this
in
such
a
way
that
we
can
have
multiple
vendors,
providing
the
same
functionality,
same
operations,
the
same
security
etc
so
that
the
consumption
of
these
technologies
can
be
cloud-wide
and
even
large
enterprises.
So
that's
what
we're
doing
those
are
different
meetings,
so
come
to
the
behavioral
model
meeting.
C
That's
where
we
talk
about
overlay
a
lot,
but
I
think
what
nvidia's
done
is
really
really
a
great
start
to
this
journey,
and
you
know,
hopefully,
by
the
end
of
the
summer,
we'll
have
like
some
really
cool
demos
of
overlay
and
things
of
that
nature.
But
it
will
take
a
little
bit
of
time.
H
Yeah,
I
concur
completely
general,
you
know
nvidia
folks,
great
work,
guys
a
lot
of
progress
has
been
made
and
then
this
is.
This
is
great.
So
just
you
know
quick
question
on
so
far.
L
Yeah
I
just
wanted
to
understand
is
a
good
great
demo,
but
I
want
to
understand
like,
like
is
anything
that's
being
demoed
today
like?
Is
there
like
a
plan
to
upstream
any
of
this
like
like
what
has
been
done
by
nvidia,
that's
like
applicable,
to
be
like
upstreamed,
and
I'm
just
thinking
in
terms
of
like?
Is
there
like
a
new
kind
of
target
like
build
image
for
for
dash,
or
I
I
just
want
to
know
if,
like
what
would
be
upstream,
that
like
was,
would
be
leverageable
by
french
yeah.
E
Yeah,
eventually,
the
plan
is
to
have
it
upstream,
since
it's
still
like,
as
you
saw
it's,
it's
a
very
let's
say,
major
milestone,
but
still
not
not
exactly
related
to
the
the
overlay
itself.
We
will
do
that
once
we
have
like
a
concrete
walking
environment
for
dash,
and
then
we
will
do
that.
It's
not
just
nvidia.
E
It
is
a
shared
like
f4
that
will
be
done
with
other
contributors
to
to
this
community,
and
the
purpose
eventually
is
to
have
it
as
part
of
the
sonic
vanilla.
H
Yeah,
so
thanks
christian
else,
so
question
for
for
nvidia.
Folks,
in
terms
of
you
know
this,
this
all
the
entire
exercise
that
that
you
guys
have
done
so
what
were
the
learnings
right?
Do
you
want
to
share
like
in
the
summary
of
what
did
we
learn
out
of
this?
This
exercise.
H
Yeah,
the
german
learning
in
general,
the
sonic
and
the
dpu
right.
Of
course,
there
were
some
objectives
that
you
wanted
to
achieve
by
bringing
this
one
in
so
that
it
you
know
moving
forward
it.
Just
really.
You
know
validate
certain
assumptions
you
made
or
certain
things
that
you
wanted
to
carry
it
out.
I
just
wanted
to
just
see
you
know
what,
if
you
have
any
insights
on
that.
E
Okay,
so
we
are
walking
in
parallel
with
the
let's
say
how
the
dash
api
will
be
translated
or
what
is
the
architect
the
software
architecture
of
the
sw
assist
dash
in
order
to
support
the
api,
I
can
say
that
one
of
the
things
that
we
are,
let's
say
in
the
process
we
are
learning,
is
what
exactly
is
needed
from
the
very
huge
list
of
features
and
a
very
huge
list
of
psy
apis
that
are
defined
on
the
switch,
what
they
are
needed
and
what
is
the
priority
for
them?
E
I
can
say
that
there
is
a
very
let's
say
there
is
a
process
already,
which
is
I'm
not
sure
if
it's
closed,
if
it's
almost
done
or
not,
but
it's
a
very
let's
say
in
in
a
good
direction
to
be
close
is
the
hld
for
the
swss,
and
I
think
that
that
one
will
be
shade
more
light
about
how
the
the
behavioral
model
that
is
defined
on
the
side
in
this
community
will
be
integrated
into
the
sonic.
E
As
that,
of
course,
we
need
to
take
into
consideration
the
amount
of
like
flows
and
configuration,
and
this
is
something
that
it's
been
also
under
discussion.
E
So
I
think
that
the
major
part
that
we
can
save
for
now,
which
I
will
put
the
execution
aside
of
the
architecture-
and
I
believe
that
mati
and
marian,
can
can
tell
you
a
little
bit
more
about
the
architecture
side.
But
from
the
execution
side
it
was
mostly
in
making
and
understanding
what
is
needed.
What
need
to
be
excluded
in
a
way
that
we
can
exclude
it
later
on
on
a
sonic,
vanilla
upstream
and
how
to
let's
say,
integrate
a
working
solution
on
a
specific,
let's
say,
specific
vendor.
E
The
nice
thing
is
that
sonic
is
already
available
with
all
of
that.
So
if
you
want
just
another
board
or
another
asic,
there
is
a
bunch
of
files
and
a
very
nice
wiki
that
can
describe,
and
if
you
are
familiar
with
sonic,
it
is
likely
to
be
easier
for
you
than
for
someone
who,
which
was
not
yet
familiar
with
that.
E
H
It runs on switches, and so far the use case was that SONiC always ran on an external CPU, right; and now we are getting into the model where SONiC is running on a CPU which is actually embedded in the DPU. And this is where, essentially, there is a huge difference for people who are coming in, and the reason I say huge difference is not from the perspective of whether or not we can run SONiC on the DPU, but the fact that it's embedded in the DPU ASIC, so to speak.
H
That
brings
in
you
know
some
some
sort
of
like
a
requirement,
part
of
it
to
say
that
hey,
you
know.
If
we
are,
you
know
going
to
say
that
okay,
we
require
a
dpu
with
a
certain
amount
of
cpu
and
an
embedded
memory
in
it.
What
are
these
right
so
so
far,
sonic
was
not
really
defining
at
least
as
a
as
a
requirement
or
as
its
specification
that
hey
you
know
you
must
have
yours.
You
know
the
cpus
by
by
this.
Much
of
you
know,
capability
or
memory
with
this.
H
...much size, right. However, now that we are getting into the mode where it's actually embedded in the DPU, for the CPU as well as the memory, we do need, as a community, to come out with what the minimum requirements are, from the CPU perspective as well as from the memory perspective.
H
That
is
the
one
that
we
should
come
out
as
through
this
exercise
of
of
really,
you
know
running
this
thing
and
say:
what
is
it
really?
You
know
to
start
with
with
base
configuration
plus,
you
know
when
you
scale
up
and
how
it
is
going
to
be
so.
This
is,
this
is
what
I
was
basically
looking
at
and
perhaps
as
a
community
we
should
we
should.
You
know,
strive
for
that.
C
There is preliminary work that we've done, in parallel, to estimate what we think is going to happen, and hopefully over time those things will converge. But that should come out of this working group for sure: the minimal requirements and what to expect from CPU and memory utilization, because we all understand that it is shared with the DPU's ability to do the overlay.
C
And
so
it's
important
and
that's
what
this
work
group
will
actually
in
the
end,
have
to
highlight
and
put
out
there
and
documentation
to
make
it
clear
and
and
also
we
will
optimize
this
as
we
go
along.
But
it's
a
good
point.
But
we
won't
have
time
time
today
to
actually
get
into
all
the
other
details
of
memory,
utilization
and
cpu
utilization,
but
we'll
definitely
get
to
what
you're.
Talking
about
over
time.
H
Yeah,
okay,
thanks
general
yeah
and
also
there
is
a
use
case-
part
right,
like
smart,
sewage
versus
appliance
versus
nick,
where
in
some
cases
you
have
an
external
cpu
versus
the
one
where
you
don't
right,
there
are
some
differentiation
that
we
will
we'll
also
touch
upon
that.
But
I
I
get
your
point,
but
thank
you
thanks
a
lot.
Thank
you.
A
A question? Oh, go ahead, Chin. Oh, go ahead; someone had their hand up before me. Oh, it was Lisa Wen who has her hand up, and Venkat and Malinga, but we want to get yours in too. Yeah, Lisa?

I
Sure, thank you, Christina. So far we have had SONiC running on x86, so this is the first time the NVIDIA team has shown that we have SONiC running on Arm. Great demo, thanks for showing it. My question is: is that a smooth transition from x86 to Arm, or is there a low-level platform infrastructure change that you have to go through to get it working?
E
If
I'm
not
mistaken
and
alexander,
please
correct
me,
there
was
some
infra.
There
is
some
infra
in
the
sonic
with
for
alma.
Not
exactly
the
same.
I
think
it's
arm
f,
if
I'm
not
mistaken,
but
the
transition
was
not
that
let's
say
I
think
that
there
will
be
switches
based
on
arm
as
well
as
for
dpu.
E
The
main
difference
is
that
our
switches
are
based
on
x86,
while
the
dpu
is
arm
arm64,
so
we
had
to
do
the
transition
and
making
sure
that
everything
is
placed,
including
the
build
server.
What
I
can
tell
you
that
cost
compilation
is
much
much
much
slower.
So
if
you
want
to
bring
up
and
an
equipment
which
is
arm
64.,
it's
better
that
you
will
have
a
server
that
it's
a
native
one
native
compilation
and
not
a
course
compilation,
because
then
it
takes
much
more
time.
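As a point of reference for anyone wanting to reproduce this, the upstream sonic-buildimage tree already supports selecting a target architecture at configure time; something along these lines, where the vendor name is a placeholder, and where a native arm64 build server avoids the slow emulated cross-compilation described above:

```shell
# Configure the SONiC build for an arm64 target (vendor name is a placeholder)
make configure PLATFORM=<vendor> PLATFORM_ARCH=arm64

# Build the image; on an x86 host this cross-compiles under qemu emulation
# and is far slower than running the same build natively on an arm64 server.
make target/sonic-<vendor>-arm64.bin
```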
A
I
Yeah. Is the plan that you will eventually share this platform infrastructure piece of software as well, the changes that you made?
E
Yes,
that's
my
my
what
I
mentioned
is
that
eventually
everything
will
go
upstream.
I
don't
have
right
now
in
eta
when
and
it
depends
on
the
progress
and
as
well
as
the
let's
say,
the
the
completion
of
the
swss.
So
it
is,
it
is
a
work
in
progress
once
it
will
be.
Let's
say
I
won't
say
stable,
but
we
will
feel
that
this
is
the
time
to
upstream.
We
will
do
that.
No,
we,
we
believe
in
the
upstream
way.
A
Thanks everyone, and thanks for showing, Alexander. And Shen had a comment.
D
Oh,
my
comment
is
very
small
that
followed
and
I
believe
you
might
plan
to
do
that
for
the
dpu
side
right
once
we
standard
this,
we
need.
We
should
also
take
you
to
the
site
community,
the
broader
side
community,
so
that
we
talk
there
and
get
built
into
the
site
releases
like
I
mean
not
from
the
dash
site
only
but
for
the
general
site
community.
E
D
A
Great. So, Venkat: we have seven minutes left, unless we want to take a break before our next meeting. Venkat, go ahead.
J
Yeah,
so
this
is
undoubtedly
you
know,
demo
great
starting
find
to
enable
sonic
on
the
dpu.
So
my
point
more
on
you
know,
I
know
what
you
are
doing
in
the
you
know
dash
basically
using
the
sdn
controller
and
doing
stuff,
but
when
it
comes
to
you
know,
non-stn
controller,
you
know
if
you
want
to
control
everything
from
the
dpu
like
flow
handling
and
everything.
I
think
we
need
a
you
know,
because
the
dpu
has
different
capabilities.
Not
every
dpu
has
same
capability.
We
cannot
generalize
it
and
we
need
to
come
up
with.
J
...the differences, and see how best we can handle them in the syncd container and things like that. I see a lot of unknowns here. From the DASH perspective things are contained, but when we enable SONiC on the DPU in general, there are a lot of other things we need to...
J
From the DASH perspective, I see the other SONiC architecture as well, and how things are being handled from the SONiC side; but when we enable the DPU in general for SONiC, then we need to really consider other things, and if we are not sure on those things, I think we need to start putting things up for discussion.
C
Said
before
come
to
the
behavioral
model
meeting,
so
such
things
that
this
is
not
the
meeting
for
that
there's
other
test
infrastructure
and
behavioral
model
meetings
where
we
discuss
what
the
dp
requirements
are
from
an
overlay
point
of
view,
which
is
right,
which
is
what
dpus
specialize
in.
So
I
think
you
know.
Definitely
you
should
come
to
those
meetings
and
you
know
there's
a
lot
of
work
being
done
in
those
meetings
and
there's
a
lot
of
contributors
to
those
meetings.
C
So
it's
not
like
mel
knox
only
as
leo
said,
there's
many
contributors
to
the
behavioral
model
for
the
overlay
and
to
the
test
harness
that's
being
created
right
now.
So.
E
C
Meetings
are
more
appropriate
to
discuss
like
the
dpu
capabilities
and
and
stuff
like
that.
J
So
general,
when
you
say
you
know
the
the
deep
view
that
here
here
we
are
talking
about,
is
meant
only
for
the
overlay,
not
for
other
traffic,
because
when
I
see
this
dpu
sonic
architecture,
I
I
assume
it's
for
everything
like
a
regular
switch.
You
know
the
all
the
use
cases.
You
know
star
replacement
a
lot
of
things
we
can
handle
right.
So.
C
You
you
can,
and
we've
listed
a
lot
of
use
cases
already
in
the
documentation,
and
today
we
we
actually
run
this.
The
equivalent
of
this
on
dpus
sit
on
a
server
there's
multiple
of
them.
That's
it
on
a
server
and
now
we're
building
switches
where
we
put
dpus
on
switches
and
you
could
even
run
dpus
and
host.
So
there's
lots
of
applications
that
are
even
beyond
you
know
the
switch.
C
But
again
those
are
things
that
we
discuss
more
in
the
behavioral
model
and
test
harness
groups.
Thanks.
C
H
Yeah
so
let's
go,
you
know,
I
think
this.
This
is
probably
for
for
gerald,
and
maybe
others
also,
but
initially
you
know
when,
when
the
entire
vision
was
shown,
it
seems
like
you
know,
we
wanted
to
run
sonic
on
a
dpu,
irrespective
of
whether
you
know
whatever
be
the
the
use
cases
such
as
either
appliance
or
or
nick
or
or
smart
switch
in
appliance
and
make
it
make
sense
right.
C
I
think
I
covered
that
we're
not
going
to
have
anything
in
our
network
and
I'm
sure
other
clouds
won't
either
that
we
can
support
operationally
or
from
a
security
point
of
view,
and
you
can't
ignore
the
fact
that
dpus
are
running
operating
systems,
they
are
themselves
running
operating
systems
and
they
need
to
be
managed
and
they
need
to
be
secure
and
they
need
to
be.
You
know
I
have
a
lot
of
common
functions
that
we
don't
want
to
duplicate
across
multiple
different
operating
systems.
C
It's
not
sustainable,
it's
not
what
any
large
customer
would
want
with
this
takes
millions
of
dollars
to
support.
You
know
an
operating
system
and
the
tools
around
it,
and
so
from
the
beginning
we
decided
that
a
switch
you
know
will
have
the
same.
You
know
operating
system
as
sonic,
whether
it
be
dpu
or
whether
it
be
on
the
switch
for
those
reasons
and
and
more
so
we're
not
we're
not
going
to
go
back
on
that.
It's
not
like
really
debatable
item.
C
We can't support having multiple operating system choices being made by suppliers, and then also multiple vintages of operating systems from those suppliers. It's not...
H
My thing is that there is a little bit of a different architecture, even in the HLD that is with the community right now for a smart switch, whereby certain components are split between running on an external CPU versus running on a DPU and so forth. So my thing was: it's still a little different of an architecture when it comes to an appliance or a box which has an external CPU versus the one that doesn't, right?
C
If that turns out to be the more efficient way, we can do that. What we're really talking about is where certain containers run, and depending on what we learn, we might run containers slightly differently than what is shown today; we'll find out over time. If we find that running, say, the DASH container on the switch doesn't scale as well as we thought, then we'll move it to the DPUs and do things slightly differently; but we're open on that one.
A
Gerald and team, I'm going to call time; it's 10, and I don't know if everyone has another meeting; I know I do. But maybe we can pick up some discussion next week. Is that okay with...