From YouTube: RobotPerf subproject - meeting#4
Description
Weekly meeting of the RobotPerf (https://robotperf.org/) subproject of the ROS 2 Hardware Acceleration Working Group (https://github.com/ros-acceleration).
RobotPerf is an open reference benchmarking suite used to evaluate robotics computing performance fairly, with ROS 2 as its common baseline, so that robotic architects can make informed decisions about the hardware and software components of their robotic systems. The group meets weekly in this room and discusses robotics computing architectures and benchmarking across various compute substrates.
Minutes of the meetings are available at https://docs.google.com/document/d/185Cy1xjpAOgJygEOnlf5OCgOQTywmF0qgSpS3GiW16Q.
A
All right, welcome everyone to the fourth RobotPerf weekly meeting. As always, let's start with the minutes, so I'm sharing those in the chat, and let me also share my screen and get started with that. Here we are.
A
All right, so we had a few actions from last week, so let's knock them down. We have advanced a bit on our side with the continuous integration pipeline. As reported last time, we're focusing on AMD and NVIDIA FPGA and GPU SoCs for starters, and we will be scaling to more platforms. Last week we pushed and managed to get the infrastructure to a decent stage, but it's still not fully ready to output results directly to the benchmark.yaml files on GitHub.
A
It's going to take a bit more hacking to get that done, and I think we'll continue right after today's working group meeting, so expect more updates in that regard. We've certainly been progressing and getting ready to run as many benchmarks as possible. We'll report on this again next week; I don't want to share specific expectations because we're encountering hurdles along the way, but in principle the pipeline is working.
A
Sorry! So, besides that, I think we have exciting updates from Jason. Jason, do you want to take it over?
B
Sure. So we've gotten a third benchmark into the perception category of RobotPerf, and I thought it would be helpful to go through the benchmark and demonstrate it, so that people who want to contribute their own benchmarks can get a better idea of how to set one up. If anyone has questions, we can talk about them, and I'd love to get your feedback on this benchmark.
B
What should we change, and what should we add, before we push it into the main branch of RobotPerf? Having said that, I'm going to introduce this benchmark: it's a stereo image benchmark.
B
Basically, we are looking at the stereo_image_proc package in the ROS image_pipeline. This package takes in two images, a left and a right image, and outputs a disparity map. A disparity map shows how deep the objects in front of you are, computed from the left and right images you pass in, and that lets you know how far away you are from things.
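For intuition only (this is standard stereo geometry, not something taken from the benchmark code): depth falls out of disparity as Z = f·B/d, so nearby objects produce large disparities. A minimal sketch with made-up camera parameters:

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Depth in meters via the standard stereo relation Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical camera: 320 px focal length, 10 cm baseline.
near = depth_from_disparity(64.0, focal_length_px=320.0, baseline_m=0.1)  # 0.5 m
far = depth_from_disparity(8.0, focal_length_px=320.0, baseline_m=0.1)    # 4.0 m
```

A large disparity (64 px) maps to a close object and a small disparity (8 px) to a distant one, which is exactly the heat-map behavior seen later in the demo.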
B
So that's a bit of background on what the stereo_image_proc package does. I'm going to go ahead and demo it. I have all of the steps in the README: how to install it, and all the commands you'll need to run through everything you're going to see. Let me demonstrate. The first step is to play back a ROS bag that I've recorded.
B
We can just take this command and replay the ROS bag in a loop, and then, for visualization purposes of this demonstration, I'm also going to start rqt_graph to see what's going on. Right here we have the rosbag player node.
B
The next thing we can do is visualize the feed the rosbag player is publishing, so let's go ahead and do that.
B
All right, so here you see a left and a right image of a little robot navigating through a Gazebo world. I've set up the configuration file for this RViz view, so all of these settings will come up if you run the command I showed you, which helps you check whether your rosbag player is working, and things of that sort.
B
As you can see, the robot is navigating in the world, and if you want to reproduce the simulation, I have all the steps here. Basically, it launches a Gazebo simulation and then records all the topics being published, and the world looks something like that. Cool.
B
As of now, let's refresh our graph and see what's happening. We have the rosbag player node, and we can see that it's publishing a left image and the camera info for that left image, and also a right image and the camera info for that right image, all being produced from the ROS bag. All right.
B
Let's see what the next step is. Next we can start our benchmark. We have a launch file inside this directory that you can run with ros2 launch, the package name, and the launch file. So let's take that command and throw it in here. Okay, cool. Our benchmark has now technically been initiated, so let's refresh our graph and see what we're working with.
B
This is what our benchmark added to the graph, so I'll go through it quickly and explain what's happening. We have a node that subscribes to the topics published by our rosbag player; it takes in both the raw images and the camera info. Every time a new image is published on these topics, its callback function fires a tracepoint and then republishes that same image to another topic. So the purpose of this node is to emit a tracepoint every time a new image comes in, which helps us contain everything we're measuring inside the benchmark. That brings me to this part of the graph: once we are tracing each input...
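The pass-through pattern described here (a callback fires a tracepoint, then republishes the message unchanged) can be sketched in plain Python, independent of ROS; the `publish` callable and the event name below are stand-ins for illustration, not the benchmark's actual API:

```python
import time

class TracepointRelay:
    """Sketch of the input node: on each incoming image, record a
    timestamped trace event (standing in for an LTTng tracepoint),
    then republish the image unchanged to a downstream topic."""

    def __init__(self, publish):
        self.publish = publish      # stand-in for a ROS publisher
        self.trace_events = []      # list of (event_name, timestamp_ns)

    def callback(self, image):
        # Fire the "input" tracepoint the moment the image arrives...
        self.trace_events.append(("image_input", time.monotonic_ns()))
        # ...then forward the image untouched, so the benchmarked
        # stereo node receives exactly what the rosbag published.
        self.publish(image)

received = []
relay = TracepointRelay(publish=received.append)
relay.callback("left_frame_0")
```

In the real benchmark the same pattern is applied to both the left and the right image topics, which is why two input tracepoints appear per frame pair.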
B
...we can then take the information being published onto these topics, and our stereo image node will use it. As I mentioned earlier, the stereo image package takes in this information and outputs a disparity map, and that's what we're benchmarking: it takes the inputs, does all the necessary calculations, and outputs the disparity map. And then...
B
Finally, when that happens, we also have an output node, which fires another tracepoint, so that we can measure how much time passed from when an image was first input into the node until all the calculations ended. Basically, we're just measuring latency. Okay, cool. Let's also visualize what's happening, just to make sure we fully understand what this node is outputting.
B
To do that, we can run this command; basically, we're just going to view the disparity map. So let's pop that into the terminal.
B
Here you can see the same images of the robot navigating the world, but this time, instead of a regular raw image, you're seeing a disparity map, which shows how close you are to objects: as you get closer to objects, the heat map changes. Cool, so that's what's being output from our graph here. We can refresh this; this is basically me subscribing to the node to view the disparity map.
B
So that's the full graph of our benchmark. We can go ahead and end the benchmark and look at the results. To do that, we can take a look at this command. To my understanding, and Victor, you can correct me about the log location, most of the information is stored within this directory, so you can navigate to it, and then we can look at all the output produced from those tracepoints by using this command.
B
All
the
information
from
the
trace
ones
I
have
yet
to
like,
take
this
data
and
analyze
it,
which
is
like
a
Next
Step,
but
that's
basically
the
full
pipeline,
the
stereo
image.
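The deferred analysis step (pairing each input tracepoint with the corresponding output tracepoint to get a per-image latency) could look roughly like this; it assumes the trace events have already been exported as (name, timestamp_ns) tuples, which is not the raw babeltrace output format, and the event names are illustrative:

```python
def latencies_ms(events):
    """Pair each image_input event with the next disparity_output event
    and return per-image latencies in milliseconds (FIFO pairing)."""
    pending = []   # input timestamps awaiting a matching output
    result = []
    for name, ts_ns in sorted(events, key=lambda e: e[1]):
        if name == "image_input":
            pending.append(ts_ns)
        elif name == "disparity_output" and pending:
            result.append((ts_ns - pending.pop(0)) / 1e6)
    return result

trace = [
    ("image_input", 1_000_000),
    ("disparity_output", 5_000_000),   # 4 ms after its input
    ("image_input", 10_000_000),
    ("disparity_output", 16_000_000),  # 6 ms after its input
]
# latencies_ms(trace) -> [4.0, 6.0]
```

FIFO pairing is the simplest choice and assumes the stereo node processes frames in arrival order; a real analysis script would match on message identity instead.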
B
So if you guys have any feedback, you can provide it now.
B
Yeah, if anyone has any feedback or questions, I'm happy to take them. There is also a current pull request to add this benchmark to our repo, so if you want to go there and add some comments, feel free.
A
Well, first of all, congratulations are in order, I think. This is a fantastic contribution from Jason and the team at Harvard, so very nicely done, folks. I think that with this work we will finally set the pace at which we can contribute more and more benchmarks. Now we just need to get to a hundred and we'll have a hell of a benchmark suite, I guess. This is looking great, well done, good work overall. Maybe a few remarks.
A
I need to look at the pull request a bit deeper; I just had a look at it when I woke up this morning, which I think was exactly when you were going to bed, because we were exchanging messages very late for you and very early for me. It was a bit fun, and I think I kept you awake a bit longer, so sorry about that. But that's natural.
A
So, three remarks, I guess. First, the READMEs are auto-generated. If you guys remember, we have this vision of the benchmark.yaml file capturing everything and then generating the README from it periodically. The rationale behind that is that the benchmark data we contribute lives in these yaml files, so every time one gets changed, we will update the README automatically and reproduce it. I just looked at your branch and I didn't see the benchmark.yaml file.
B
Exactly, yeah, I realize I still need to do that; I was doing the READMEs manually. A good question: in my README I added a simulation reproduction section. Does it have to follow the exact sections here? Can we add some more to the yaml file, or should we just stick to this and not add anything extra?
A
We can add more sections to it; we just need to revisit the source code that takes care of automatically producing the corresponding README. I can point you to where that is happening; it's in the repo as well. We just need to make sure that's contemplated. I think if you go to...
A
Yeah, no problem, no problem. But nevertheless, please keep in mind that adding the benchmark.yaml file is probably important. Also take note of the fact that if you put in fancy GIFs or images, they may need to be added into the yaml file in a specific way as well, which is worth noting. And I noticed that in your source code you rewrote the input and output components of the benchmark.
A
So, at the end of the day, the idea is that we should have a common package with general-purpose input and output components. If it's general-purpose images going in and out, we should reuse that, because...
B
Yeah, so the reason I rewrote the image component is so that I could subscribe to two different image topics, but now that I'm thinking about it, maybe I could have just created two of those image components. Okay. But then I also created a disparity input/output component, because the message type published by the stereo image processing package is a disparity type, not an image.
A
With that I agree; my remark was more towards: let's try to reuse these components, because at some point we will need to revisit these input and output bits and push them into a common ROS package.
A
Then most of the benchmarks can reutilize things, or contribute input and output components to that package, instead of enlarging the amount of code that just gets written again and again. So for the input components you can reuse the one that we have in, I think, a1, and you can just use remapping to change the topic names. That's as far as it gets.
B
That brings me to one quick question, sorry to interrupt. Wait, where's my bookmark? I can just go to it here. So, I added a tracepoint for each of the images, that is, two tracepoints. Is that okay, or should we just have one tracepoint?
A
That's a good question, and I guess it brings me to whether we need a new component or not for the input data. I need to think about that a bit more. In principle, if it's just an image tracepoint, it's an image tracepoint.
A
Don't
think
we
I
don't
think
we
need
to
know
which
one
is
what
we
just
need
to
account
for
the
fact
that
there's
two
Trace
points
before
the
actual
Benchmark
starts
like
two
identical
image:
Trace
points,
but
that
that
is
something
that
you
will
figure
out
once
you
start
analyzing
the
data
and
that's
the.
A
Which brings some input regarding how you produce your script for analyzing things: take a look at what was produced for a1, which is essentially building a launch file that leverages babeltrace. Much of that you would be able to just reutilize over there and analyze.
A
Much of that could be reutilized; some of it might not be useful, but you can decide that on your own. So yeah, overall I'm excited about this. I don't mind getting this in now that it has been cleaned up, but it's up to you, Jason. Would you like to work a bit more on the analysis and then we merge it all, or would you prefer that we get this in right now?
B
I don't mind either way, honestly; I'll leave that up to you, whichever workflow you think is best. I can go ahead and clean it up, and then we can just do one merge.
A
Let's do it like that, then. Let's just have a look at it, see if you can reutilize the components for images at least, and then ping me and we'll get it in.
B
Yes, yes, oh yeah, I was going to talk about that. Thanks for reminding me; let me just pull it up real quick.
B
So, last time we were talking about how we don't want people overlapping, working on the same benchmarks; we kind of want to know what the community is doing. So we created a template for creating an issue. Once this is merged into the RobotPerf repo, you can go to the Issues tab and create a new issue.
B
I
haven't
created
a
new
issue
and
then
you
can
use
the
new
robot
curve
Benchmark
in
progress
announcement,
issue
templates
and
basically,
what
that
will
do
is
have
you
answer
a
few
questions
which
is
like
specify
the
Ross
package
you're
working
on
provide
a
link
to
that
Ross
package.
Like
what
category
does
it
fall
under
like
what
metric
are
you
planning
to
measure
the
estimated
date
of
completion,
Hardware
you're,
going
to
use
to
run
the
reference
implementation
and
then
some
information
about
the
person
who's
submitting
Ben
Benchmark?
B
But this is just a draft, so I'm happy to get some feedback about what an issue template should look like.
A
So my feedback on this is: this is fantastic. I think it's what we want, and I'm very happy to get this in right away. My only comment would be: if you can remove this from the existing pull request and push it as a separate pull request, we'll get it merged right away, and if anyone has any contributions to this template, do feel free to submit follow-up pull requests on it.
A
I can see some minor bits that I'll probably sharpen a bit, but overall I think this is fantastic, and it's exactly what we want. So yeah, if you can submit that in a different pull request, just to make sure we keep things tidy, that would be fantastic.
B
Awesome, yeah, that sounds great.
B
The reasoning behind this is that we would like RobotPerf to be utilized by our community, and we think that one of the major steps to making a benchmark popular is to get some of the big players in the room. Not to say that we're restricting people from submitting benchmarks, but I think it's definitely helpful, for RobotPerf to be taken very seriously, to have some of the big robotics players.
B
Hey Victor, let me know if that aligns with your vision.
D
I can give you another example later, offline, maybe, because we use one like this from the OE 40: a template for when people want to create issues, which gives us the context, like which machine was used, as you mentioned, the hardware, and, if possible, the steps to reproduce these things, etc. I can give you some insight, definitely. Cool, yeah, I'd love to chat regarding your benchmark: which platform did you use?
B
I
was
using
like
a
RTX
well,
actually,
I
wasn't
using
like
GPU.
It's
just
using
a
regular
CPU
I
can
add,
like
the
specifications
of
it
to
The
Benchmark
once
I
had
the
results.
I
was
planning
to
do
that
once
like
I
kind
of
analyze
and
like
get
a
number
for
that
latency.
D
About
the
your
PR
but
yeah,
we.
B
Will
do
yeah
that'll
be
awesome
if
you
can
throw
it
into.
You
know
that
pull
request,
and
we
cannot,
you
know,
continue
the
conversation
there
thanks
thanks.
A
Yeah, I mean, on that regard again, everyone's welcome to include whatever they want. We won't punish someone who doesn't want to describe his or her organization, but I do think there's merit in what Jason was describing, in the sense of making sure we coordinate among ourselves, but also, down the road, provided this grows in popularity, I think it's great that, you know...
A
Maybe
there
are
various
divisions
within
your
big
organization
that
are
working
on
benchmarks
actually,
and
you
would
want
to
know
who
is
working
on
that.
So
that's
One,
Good
Reason,
also
for
competitive
reasons,
it's
fantastic
to
show
that
there's
like
big
or
small
players,
putting
their
resources
into
a
specific
direction.
That
gives
not
only
a
business
insights
but
also
interest
insights
from
a
community
perspective.
So
I
think
there's
good
reasons
why
you
know
it's.
A
This is a community, and it's kind of a best effort at this stage, at least. So yeah, I would support it. Let's nevertheless move the discussion to the pull request, and the same goes for the benchmark. Just kudos to Jason and the team, nicely done. Keep it up.
B
I also want to thank you, Pratik. I was definitely stuck on publishing the sensor data, so thank you so much for that help; I definitely want to give you a shout-out for that.
A
It's awesome that we have interactions across corporations here within the group, so it's very much appreciated. All right, I think we've reached the top of the half hour, folks, so let's meet next week; hopefully we'll bring some more updates and move forward with more benchmarks. Looking forward to it. Thank you, everyone, have a good one. Thank you. Bye.