From YouTube: RobotPerf subproject - meeting#2
Description
Weekly meeting of the RobotPerf (https://robotperf.org/) subproject of the ROS 2 Hardware Acceleration Working Group (https://github.com/ros-acceleration).
RobotPerf is an open reference benchmarking suite used to evaluate robotics computing performance fairly, with ROS 2 as its common baseline, so that robotic architects can make informed decisions about the hardware and software components of their robotic systems. The group meets weekly in this room and discusses robotics computing architectures and benchmarking across various compute substrates.
Minutes of the meetings are available at https://docs.google.com/document/d/185Cy1xjpAOgJygEOnlf5OCgOQTywmF0qgSpS3GiW16Q.
A
Right, so we are live, and this is the second meeting of the RobotPerf project. Hi, everyone. Jason, back to you. Awesome.
B
Cool. So one thing we were thinking would be important on our side would be to speak about the nomenclature and iron that out, so I'm going to go ahead and share my screen. There we go.
B
All right, so for those of you who haven't been keeping up: here's the link to this GitHub pull request. We went through and tried to define some of the terms we're going to be using frequently in this benchmark, just so that when we use these words, we all mean the same things.
B
So the first thing we defined was an application, which is a particular use case in the real world. An application could be an autonomous car, an industrial robotic arm, or a quadruped, something of that sort, something you'd see in the real world. Then a benchmark is a standardized test or a set of tests; it measures some set of metrics that we have in our standards. Multiple of these benchmarks would make up a benchmarking suite. And then we've been talking about this question of dataset versus rosbag.
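The application/benchmark/suite relationship described above can be sketched in a few lines. This is a minimal illustration of the nomenclature only; the benchmark names and metric lists below are invented for the example and are not part of RobotPerf.

```python
# Illustrative sketch of the terms just defined: a benchmarking *suite*
# groups multiple *benchmarks*, and each benchmark measures some set of
# metrics from the standards. All names here are made up.
from dataclasses import dataclass, field


@dataclass
class Benchmark:
    name: str
    metrics: list[str]  # e.g. latency, throughput, power


@dataclass
class BenchmarkingSuite:
    benchmarks: list[Benchmark] = field(default_factory=list)


suite = BenchmarkingSuite([
    Benchmark("stereo_image", ["latency", "power"]),
    Benchmark("localization", ["latency", "throughput"]),
])
assert len(suite.benchmarks) == 2
```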
B
So datasets are going to be syn... I can't say that word, sorry: synonymous. Rosbags and datasets. Basically, we want a method of recording the ROS message data and playing it back, so that when we run our benchmark we have the same data to evaluate different hardware on. These datasets would most likely come from a specific application.
B
So you can imagine a car driving down the road, and we are collecting all this sensor information, whether it's lidar, the position of the vehicle, cameras, things of that sort.
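The record-once, replay-everywhere idea behind datasets can be sketched as below. This is an illustration of the concept, not RobotPerf code: the message tuples and topic names are invented, and a real dataset would be a rosbag of actual ROS 2 messages.

```python
# Sketch: a recorded dataset replayed identically for every hardware
# target, so each benchmark run sees exactly the same inputs.
from typing import NamedTuple


class Message(NamedTuple):
    stamp: float   # seconds since start of recording
    topic: str     # e.g. a lidar or camera topic (illustrative names)
    payload: bytes


def record_dataset() -> list[Message]:
    """Stand-in for recording sensor data from a running application."""
    return [
        Message(0.00, "/lidar/points", b"scan-0"),
        Message(0.05, "/camera/image_raw", b"frame-0"),
        Message(0.10, "/lidar/points", b"scan-1"),
    ]


def replay(dataset: list[Message]) -> list[Message]:
    """Replay yields messages in recorded order; the hardware under
    test has no influence on the inputs it receives."""
    return list(dataset)


dataset = record_dataset()
run_on_target_a = replay(dataset)
run_on_target_b = replay(dataset)
assert run_on_target_a == run_on_target_b  # identical inputs, fair comparison
```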
B
And then, of course, we can dial it down to a specific package, which is just a basic unit of code that has ROS nodes, libraries, a launch file, configuration files, all the necessary nitty-gritty stuff to make it a ROS 2 package. So then, within our benchmarking suite, we could be evaluating a specific meta-package, and each one of our benchmarks would evaluate a specific package within that meta-package.
B
So it could be a specific function, like stereo image processing, something of that sort. And then, of course, we're going to evaluate that based on the standards which have been proposed in REP 2014. Finally, that brings us to the reference implementation, which is just a standardized implementation of a benchmark that we can use as a baseline for comparisons with other implementations.
B
So you evaluate this dataset on a specific piece of hardware, and that will be your baseline; then, once you add different types of hardware, you can compare them against it. That's what a reference implementation is.
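The baseline comparison just described boils down to a simple ratio. A minimal sketch, with invented metric values; RobotPerf's actual metrics follow the standards proposed in REP 2014.

```python
# Sketch of "reference implementation as baseline": other hardware is
# compared against the reference result for the same benchmark and
# dataset. The latency numbers below are illustrative only.

def speedup(reference_latency_ms: float, candidate_latency_ms: float) -> float:
    """How much faster (>1) or slower (<1) a candidate runs the same
    benchmark, relative to the reference baseline."""
    return reference_latency_ms / candidate_latency_ms


# Reference implementation on baseline hardware: 40 ms per frame.
# Same benchmark and dataset on an accelerator: 10 ms per frame.
assert speedup(40.0, 10.0) == 4.0
```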
B
Yeah, let me pause here. Is there anything so far that is very out of line, or anything that the group thinks should be defined in a different way?
A
So maybe I can just comment on a couple of things. Well, taking a step back, I love that you took the lead on that, Jason. Let's see if we all agree; let's dive into trying to converge on this first, because I think it has priority.
A
The contribution is great, and I'd love to maybe set some time to also do the same with REP 2014, which has received some contributions from some of you. So maybe all of us can also take a dive into that, not a deep dive but at least a dive, and somehow get things going as we progress. Now, regarding the contributions, I just must admit: I love it. So thank you, Jason, and also everyone else involved, frankly speaking.
A
Something like this was very needed. Matthew, also, thank you for the follow-up; I love it. The only concerns I have are regarding some of the definitions that were originally included concerning tasks, task hierarchy, and subtasks.
A
What is your take on that? I do understand that there is some heritage here you're trying to incorporate, but frankly speaking, my only comment would be: let's not force things just for the sake of keeping the heritage from MLPerf, because we're speaking about a totally different domain here, and there's some nomenclature already present in the field. If we just introduce new terms or rename things, it's going to make things very hard for people to grasp.
A
So maybe it's worth asking: can you elaborate on task, task hierarchy, subtasks, and why we need them?
B
Yeah, Victor, you're completely right. Our initial idea was to keep the terminology very similar to what's been done in MLPerf, but the more we talked about it in our group, I think we're converging toward what you're talking about, because in robotics, yes, we are talking about a completely different domain.
B
If we start defining things in a really absurd way, it might just add unnecessary complications. Having said that, there are a few other things we want to discuss. Let's start with the robotics pipeline, and actually, let's start with task. We were initially imagining that a task would be a part of the robotics pipeline, like perception, localization, control, whatever. But, as you said, I think in the robotics community those different parts of the pipeline should be referred to as categories. Is that a better way of referring to them, in your opinion?
A
When I read "task", what comes to my mind is node. I don't know if everyone else would agree, but it does sound like a specific computation to me.
B
Okay, I see how you could understand it that way, but that's not what we're trying to go for. What we're trying to go for is each part of the robotics pipeline: perception would be a task, but I think it's better defined as a category. So we'd have multiple categories, and then we can have subcategories. I kind of drew this out so we can think it through better, and I already changed the terminology here from tasks to categories, because I definitely agree with you, Victor.
A
Before we move forward on this, and this is just based on my humble experience: there are various ways to categorize things in the robotics pipeline, and there's no unified agreement on how to actually define it. Certainly perception, manipulation, navigation, localization, and control are often included as part of the robotics pipeline. Actually, I have a picture here which shows this; can I real quick share it?
A
This comes from a patent application we filed a while ago, and it does get inspired by other publications, but I can tell you, you can come up with counterexamples. This is yet another representation of the robotics pipeline, and it has a number of the tasks that you often see within frameworks like ROS. There are various ways to depict and represent this.
A
If everyone agrees, I would only ask us to maybe hold a bit on the latest items regarding tasks, task hierarchy, subtasks, and workload. Maybe we can push these into a different pull request and have a deeper conversation on them, because right now it feels a bit confusing how we're mixing the robotics pipeline with the benchmarks that we plan to do. For each one of the categories we can have multiple benchmarks, and I guess each of those would match with a task, so we'd have an overlap between the definition of a task and the definition of a benchmark. That's a concern that I would like to raise. Again, from my perspective, this is good to go.
A
I would only ask for tasks, task hierarchy, subtasks, and workload to maybe be pushed into a different pull request, and then we can discuss them more deeply. Workload is also something I had a bit of a hard time understanding. Can you maybe comment on that real quick?
B
Yeah, so within a specific task and subtask, we were thinking a workload would be... ah, how should I say this. Let's say we're taking a package like the stereo image package; then the workload would be that package on a certain set of data. But as I was reading over it, I think workload is also adding some confusion, also in my mind, so I'll separate this pull request.
B
I think the first half kind of makes sense; I'll separate out the second half, and we can take another pass through it and try to make it less confusing. Yeah.
C
The way I was thinking about it when I wrote it was that a task would be a part of the pipeline, like perception, and that perception has a bunch of other tasks associated with it.
A
Okay, I don't quite grasp that last bit you mentioned regarding various applications. That was my gap: maybe as part of this additional pull request, we can make a table that maps each one of these words to an example, like task, then subtask (rectify and resize), and then workload. I guess I'm missing that piece.
A
Yes, yeah, okay, a bit confusing, but I think we can follow what we agreed on, if that's okay with you, Matthew and Jason, and move forward.
B
Is there a way you can share the resource for that robotics pipeline figure? Yeah.
D
About the dataset: I think last time we mentioned we were using rosbags, but we moved to "dataset" as a generalization. In the definition right now, though, it's used synonymously with rosbag. So if, in the end, the input is going to be a rosbag, then why not just use rosbag? But if you're using "dataset", we can generalize it a bit.
D
Let's say, with video, you can just mention that the dataset can be a video, but it needs to be converted to the ROS message format for it to be an input. So if you're just using it synonymously with rosbag, then why not keep rosbag. But if you're using "dataset", we can generalize it a bit more: the input can be a set of images, say, but you need your own conversion to the standard ROS message format for it to be an input to this framework.
A
Yeah, I think you have a point there: indeed, if we are using it synonymously, why don't we just use rosbag? My reaction is that I think we want to consider more than just rosbags down the road. Correct: we need to slightly modify the wording, and not say that we are using it synonymously with rosbags, but just point out that rosbags are one of the possible examples of a dataset.
A
And yeah, it may change down the road even further, as we have new robotic operating systems coming up. In general, I like the "dataset" wording; I think it also matches what lots of people operating in the benchmarking world use. I think, Jason, you and Matthew are more of, let's say, systems-architecture-native folks, and I guess you can confirm whether "dataset" or "rosbag" makes more sense.
B
Sure. I like how you phrased it, where a rosbag would be a type of dataset; that would make it easier to abstract later on if we want to pass some other sort of information to a package. Yeah.
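The "rosbag as one type of dataset" abstraction can be sketched with a small class hierarchy. This is only an illustration of the design idea under discussion; the class and method names are assumptions for the example, not RobotPerf APIs.

```python
# Sketch: any source of ROS-message-formatted input hides behind one
# Dataset interface, so rosbags are just one implementation and other
# sources (e.g. a converted video) can be added later.
from abc import ABC, abstractmethod


class Dataset(ABC):
    """Any source of ROS-message-formatted input for a benchmark."""

    @abstractmethod
    def messages(self) -> list[tuple[str, bytes]]:
        """Return (topic, payload) pairs in playback order."""


class RosbagDataset(Dataset):
    def __init__(self, recorded: list[tuple[str, bytes]]):
        self._recorded = recorded

    def messages(self) -> list[tuple[str, bytes]]:
        return self._recorded


class VideoDataset(Dataset):
    """E.g. a plain video, converted into ROS message format before it
    is fed to the framework, as suggested in the discussion."""

    def __init__(self, frames: list[bytes]):
        self._frames = frames

    def messages(self) -> list[tuple[str, bytes]]:
        return [("/camera/image_raw", f) for f in self._frames]


bag = RosbagDataset([("/lidar/points", b"scan-0")])
video = VideoDataset([b"frame-0"])
assert bag.messages()[0][0] == "/lidar/points"
assert video.messages()[0][0] == "/camera/image_raw"
```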
A
Okay, so I'll just be sharing my screen here covering that, and maybe we'll have a few minutes to touch on next steps afterwards. Okay, so REP 2014. Am I sharing? Yes, I believe so, yeah. So, as I was saying: lots of people commenting, lots of people having opinions, so sometimes it's a bit hard to progress documents like this, but I do appreciate the effort that Raisa put into making this suggestion. So, would you like to comment on this, or should I go through it?
E
Well, this is like a test suggestion. One of my first suggestions in the comments was to put the tracing and benchmarking section, where you define what you're talking about, earlier in the document, because I think in terms of logic it makes sense: even if someone doesn't know what we're talking about, we define it early on. So this change was basically just moving that section upwards in the document, but I don't know if that's clear from within the suggestion.
A
No, I personally like it; I don't know if anyone else has an opinion. Technically speaking, I would recommend this view for everyone aiming to review the pull request: it lets you check comments in a somewhat organized manner, unlike the discussion, which tends to be very messy.
A
So I do like it. We have, essentially, the abstract, which is short and concise; then we have an introductory section which dives into, essentially, the value for stakeholders; and then there's a section here touching on tracing and benchmarking, because it's a pretty relevant topic. I believe this is a subsection, right?
E
Yeah, it is. I would like to switch it to a section, and maybe my next suggestion would be to add a rationale section where we, how do I put this, justify writing a REP on benchmarking, which is, I think, what Andrew suggested in one of the comments.
A
Yeah, I would love to review that. And just to get everyone on the same page: technically, the actual contribution is taking the definitions that right now are here, so these definitions are being pushed into here, right?
A
I would go for it. Any concerns regarding this?
A
There we go, all right. So that's seen already, and that's the one I wanted to review; there are a bunch of others that I need to go through again. Maybe some of these... actually, this one also removes it, so I'm just going to commit this one as well. Thank you, Raisa. And with those two out of the way, I think I didn't leave aside any of the others, right?
A
The other ones that you sent previously were just suggestions in text, which need to be formatted as...
A
Awesome, yeah, thanks a lot. And do feel free to start adding your name somewhere in the document; maybe you can get that in with the next contribution as well. I'm encouraging everyone to also take a dive into this, especially, you know, the Harvard folks, Jason and Matthew: as you are diving into nomenclature, it'd be awesome to maybe map this document against it, and any sort of edge case you detect, feel free to contribute and polish it. That's a great way to round it out and to make sure that we have both the standardization approach and the RobotPerf implementation, somehow, in the same place.
E
Well, about nomenclature: there is one comment, I think it was from Ken Chen, I don't really remember the name, saying something about not using the name "black-box performance test". Yeah, that's the one. What do you guys think about that?
A
Yeah, so I did see this one. I didn't answer, because I didn't have time and I forgot; thank you for bringing it up. Actually, I do have a different opinion than this person, not sure if they're a she or a he, and I pointed this out in the... yeah, here it is. There's a document here which was reviewed by community members in the past; it's titled "Performance Testing in Wall Street".
A
It was built by a group comprising various companies, some of whom got acquired since, and they provide a really nice overview of performance testing which can be applied directly to benchmarking. They categorize things based on this nomenclature, and it also comes from the literature.
A
So this wording has actually been used in the literature for quite a while. I do get that, depending on which field you come from, it may sound confusing, but it sounds like this is the trend in the testing world, so I don't see a big reason why we shouldn't keep it. However, if someone has a counterexample... that's the reference I would provide, and maybe the best reference for "black-box"...
A
Awesome, all right. I guess we just filled out the half hour with both of these things, which is what I wanted, so that's fantastic.
A
Any next contributions that we should note down? I can get started from our side: from Acceleration Robotics, and from my side, we will continue contributing DevOps and benchmark tooling, which I presented last time, if you guys remember, with the tutorial, so you can expect some additional contributions there. The next thing we're working on is closing the loop on the CI pipeline.
A
That's going to push the data automatically to the actual benchmarks. Right now we're targeting two reference platforms: we're testing things out with AMD's technology and NVIDIA's technology, with two instances of each one. But just to be very clear:
A
The moment we get the loop closed, we can add many boards real quick, which we have, so it shouldn't be a bottleneck. I can see Jason self-assigning the nomenclature aspect; thank you, guys. And I guess there'll also be some contributions from your side regarding following up with reviewing REP 2014.
A
Awesome. So, anyone else?
A
Okay, Michelle, Nafis: do feel free to let us know if you have some bandwidth and would like to get started with anything. My recommendation would be to maybe follow up with Jason and Matthew on catching up with the actual RobotPerf benchmarks repository, and maybe start thinking about whether you can produce a first 101 benchmark by following the tutorial that I gave last time on how to create your first benchmark. So yeah, this recording, I think we're going to point people to it a few times, so do feel free to review that, Nafis in particular. But yeah, looking forward, guys, to getting your contributions, and hopefully very soon we can start speaking about scaling this up.