From YouTube: ROS 2 Hardware Acceleration Working Group - meeting #21
Description
21st meeting of the ROS 2 Hardware Acceleration Working Group (HAWG, https://github.com/ros-acceleration).
The ROS 2 Hardware Acceleration Working Group is an open and community-driven robotics group that drives the creation, maintenance and testing of hardware acceleration kernels for optimized ROS 2 interactions over different compute substrates, including FPGAs and GPUs.
For more, including source code, check https://github.com/ros-acceleration. Minutes of the meeting are available at https://docs.google.com/document/d/185Cy1xjpAOgJygEOnlf5OCgOQTywmF0qgSpS3GiW16Q/edit?usp=sharing
For commercial support on hardware acceleration topics in robotics, refer to https://accelerationrobotics.com.
A: All right, welcome everyone to the 21st ROS 2 Hardware Acceleration Working Group meeting. I will start sharing my screen with the minutes; I pasted the link in the chat, and now I'm sharing my screen. If you can hear some piano in the background, that's not my music.
A: That's my neighbor playing the piano, very nice I must say. All right, so we've got a few agenda items today; however, some people reported last-minute conflicts, so I think we'll cut it short today and just have a shorter discussion than usual. I wanted to report on a couple of admin aspects on our side.
A: First off, concerning the RobotPerf project: based on the recent attendance and the recent review we've been having internally, I believe we are going to take a step back in terms of the periodicity of those meetings and go back to monthly, also to try to save everyone's time and be more effective in terms of reporting progress. So the next meeting after this one will be the 23rd of May, and Pablo, I believe you are handling the organization of everything.
A: So if you can take note of this and confirm that this is fine for you, I'd appreciate that. Are you okay with this?

Pablo: Yeah, I'm okay.

A: Okay, great, got it. Thank you. So the 23rd is going to be the next one; that's going to be the 22nd meeting, and after that one we'll get into the end of June for the next one. So the last Tuesday of the last week of June, actually, same time; nothing's going to change, and I'll take the action on my side to get in touch with Open Robotics and report about the new periodicity. All right, so yeah, that's one item I wanted to report on. If anyone has any concerns or any need to meet on demand, feel free to reach out to me and we'll find a way to tackle your needs. No issues at all.
A: The second bit that I wanted to report, and I'm really happy about this, is that the internal tests that we've been doing of this RobotPerf first release are actually pretty nice. Both the results, as well as the material that's coming out of our work, are very promising.
A: Still, the current schedule and plan of record is to target the next meeting, so by the next meeting I think we will be in a good place to put out the first release of RobotPerf, with a bunch of official benchmarks as well as results for various hardware solutions. So yeah, just reporting about that. All of the CI/CD resources will also be disclosed within this first release, so that everyone can have a look and maybe reproduce or comment on them.
A: So, having touched on these two admin aspects, I wanted to have a quick discussion on the new benchmarks. I know the Harvard folks have been working on some of them, but Jason couldn't make it today.
A: So we'll postpone that for next time. I did want to go real quick over two updates that essentially my team and Martiño here have been working on. The first one is, and you can go yourself to the link, there's a new, very simple benchmark concerning resize operations that is essentially inspired by a1, which does both rectify and resize, and a2, which does rectify; so a5, resize, is not much different from what exists in those first two that I mentioned.
A: Martiño also contributed the analyze logic, with its corresponding launch file, for a2, which is very appreciated. So folks are welcome to go have a look at the code, and if any concerns should be raised, feel free to open a ticket and report as usual.
A: Another thing worked on recently by our team at Acceleration Robotics: we noticed how, essentially and purposely, we have been copying source code from benchmark to benchmark, and this certainly is not the best software engineering practice.
A: So we decided to spend some cycles refactoring the source code that the analyze scripts are using, and I believe Martiño has been working on it for a while. So we were hoping to have an open session and an open discussion about that, essentially discuss it jointly and try to see whether this has promise, or what feedback we can provide at this stage.
A: So, Martiño, would you like to take the screen and maybe walk us through your work so far? We will try our best to give you feedback.
B: Perfect. So, as Víctor was saying, we were previously basically using the same code in every analysis launch file, for every benchmark. So what we tried to do with this refactor is to create... well, as you can see here, we have this code for the analysis file for a1, and if we switch to another benchmark, we basically see the same code, but with some tweaks here and there to adapt to the specific benchmark. So what we tried to do with this refactoring is to create another ROS package, which is called benchmark utilities, and here we actually create the benchmark analyzer class.
B: We can then use it in every analysis file, for every benchmark; this is basically more or less the code that we had in all of the previous benchmark analysis launch files. So with this refactoring, the analysis files are much cleaner. Here we can see, for instance, the a1 package: we only have these lines of code. Here we import the so-called benchmark analyzer, we tell it the name of the benchmark we are trying to analyze, and then we have a series of configurations regarding the traces that we previously generated with LTTng. And then finally, once we configure it, for instance, here we can see that we are checking for this target chain, the traces we want to see, and so on.
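A rough sketch of the refactored analysis just described follows. The meeting only names a benchmark utilities package and a benchmark analyzer class; every signature, method, and tracepoint string below is an assumption for illustration, not the actual RobotPerf code.

```python
# Hypothetical sketch of the refactored per-benchmark analysis file.
# The benchmark analyzer class is named in the discussion; all signatures
# and tracepoint names here are assumptions.

class BenchmarkAnalyzer:
    """Shared analysis logic factored out of the per-benchmark launch files."""

    def __init__(self, benchmark_name):
        self.benchmark_name = benchmark_name
        self.target_chain = []  # ordered LTTng tracepoint names to check for

    def set_target_chain(self, tracepoints):
        # Configure which tracepoints form the chain the analysis inspects.
        self.target_chain = list(tracepoints)

    def summary(self):
        return f"{self.benchmark_name}: {len(self.target_chain)} tracepoints"


# With the shared class, each benchmark's analysis file shrinks to a few lines:
analyzer = BenchmarkAnalyzer("a5_resize")
analyzer.set_target_chain([
    "ros2_image_pipeline:image_proc_resize_cb_init",  # assumed tracepoint name
    "ros2_image_pipeline:image_proc_resize_cb_fini",  # assumed tracepoint name
])
print(analyzer.summary())
```

The point of the design is that the per-benchmark file only declares what differs (benchmark name and target chain), while all shared analysis logic lives in one package.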
B: So this is how it works now, and what I wanted to discuss is the following, which is basically improving this part here. I believe it's now cleaner than before, but I don't think it's ideal; it's still pretty cumbersome for me.
B: So what I was proposing on the pull request was to somehow improve on this refactoring and make this section here cleaner. Previously we had something like this.
A: Sorry, any chance you can get a bit closer to that section of the code?
B: Yeah. So this is basically a dictionary where we specify a different configuration for each trace entry. For instance, we have the name; we have the color that we want to display, using the Bokeh library; we have the layer; and so on. So my idea here would be... well, right now we are forcing the user to specify a lot of things, and I think this can get repetitive across benchmarks. For instance, here we are using the rectify node and the resize node, but then we have some other tests, some other benchmarks, that also use these nodes. So I think we can avoid this repetition here.
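The per-trace dictionary being described might look roughly like this. The fields mirror what is mentioned in the meeting (name, Bokeh color, layer), but the exact key names and tracepoint strings are assumptions:

```python
# Sketch of the repetitive per-trace configuration the refactor targets.
# Every benchmark reusing the rectify node currently repeats entries like
# these; key names and tracepoint strings are illustrative assumptions.

rectify_traces = [
    {
        "name": "ros2_image_pipeline:image_proc_rectify_cb_init",
        "color": "blue",      # color used when plotting with Bokeh
        "layer": "userland",  # layer the tracepoint belongs to
    },
    {
        "name": "ros2_image_pipeline:image_proc_rectify_cb_fini",
        "color": "blue",
        "layer": "userland",
    },
]

print(len(rectify_traces))
```

Any benchmark that also exercises the rectify node would have to repeat this block verbatim, which is the duplication being discussed.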
B
So
as
I
was
saying,
my
proposal
on
on
this
PR
would
be
to
perhaps
create
some
kind
of
class
which
in
this
case,
could
be
called
Rectify
image
trace,
and
here
we
would
specify.
B
Benchmark
so,
for
instance,
the
the
Rectify
Trace
would
be
this
Trace
here
the
call
Guitar
then
the
rectified
column.
He
needs
the
Rectify
any
the
refini
and
then
the
directive
I
call
that
Fini
and
the
covalent.
So
that
would
be
the
same
for
independently
of
but
Benchmark
we
are.
We
are
trying
to
measure,
and
then
this
also,
this
would
also
help
us
to
to
create
in
the
in
the
sorry,
I
forgot
the
name
Benchmark
analyzer
class.
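The proposal could be sketched as a small class per instrumented node, so that any benchmark reusing the node gets its traces without repeating the configuration. The class name comes from the discussion; the tracepoint names and structure are assumptions:

```python
# Hypothetical per-node trace class, as proposed in the discussion.
# Tracepoint names below are assumptions for illustration.

class RectifyImageTrace:
    """Bundles the rectify node's tracepoints, reusable across benchmarks."""

    PROVIDER = "ros2_image_pipeline"  # assumed LTTng provider name

    def __init__(self, color="blue", layer="userland"):
        self.color = color  # Bokeh plot color for this node's traces
        self.layer = layer

    def tracepoints(self):
        # Callback entry/exit wrapping the init/fini pair, in chain order.
        return [
            f"{self.PROVIDER}:image_proc_rectify_cb_init",
            f"{self.PROVIDER}:image_proc_rectify_init",
            f"{self.PROVIDER}:image_proc_rectify_fini",
            f"{self.PROVIDER}:image_proc_rectify_cb_fini",
        ]


trace = RectifyImageTrace()
print(trace.tracepoints())
```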
B
It
would
also
allow
us
to
to
create
more
easily
plots
for
for
our
images,
because
right
now
we
are
also
hard-coding
pretty
much
the
results
and
it's
kind
of
difficult
to.
B
Let
me
find
the
function
properly.
We
could
do
that
yeah.
So,
for
instance,
here
we
we
have
a
series
of
hard-coded.
Let's
call
them
functions
where
we,
where
we
measure
the
time
between
different
traces.
So,
for
instance,
here
we
are
measuring
the
callbacks
of
the
of
the
Rectify
traces.
B: So this part here right now is, I would say, a bit difficult to generalize. That's what my proposal would try to address.
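Generalizing the hard-coded "time between traces" measurements might look roughly like the following; the event format and function are assumptions, a minimal sketch rather than the actual analysis code:

```python
# Sketch: pair each callback-init event with the next callback-fini event
# and report the elapsed time, instead of hard-coding one measurement per
# benchmark. Event format (timestamp_ns, tracepoint_name) is an assumption.

def callback_durations(events, init_name, fini_name):
    """Return durations (ns) between matching init/fini trace events."""
    durations, pending = [], None
    for timestamp, name in events:
        if name == init_name:
            pending = timestamp
        elif name == fini_name and pending is not None:
            durations.append(timestamp - pending)
            pending = None
    return durations


# Two rectify callback executions from a hypothetical trace:
events = [
    (100, "rectify_cb_init"),
    (350, "rectify_cb_fini"),
    (1100, "rectify_cb_init"),
    (1400, "rectify_cb_fini"),
]
print(callback_durations(events, "rectify_cb_init", "rectify_cb_fini"))
# → [250, 300]
```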
A: Yeah, right. I think, nevertheless, this is rather specific for plotting purposes, isn't it?
B: I would say it's both for plotting and also for configuring the traces that we are trying to measure.
A: Yeah, but this specific code is actually just picking and building up data structures that are then used for plotting, I believe.
A: So other than that, other than plotting itself, I think it's not that useful for the rest of the aspects, so yeah, I'm not so sure how much we want to abstract it away.
B: So, as I was saying here: for instance, for the rectify node I would create some class, and I would encapsulate there the information that would be configured for the tracing and for the plotting.
A: Yeah, I'm not sure about that either. I think we will be benchmarking plenty of nodes over time, and while I do appreciate the effort of trying to create classes and simplify things, I'm not really sure if this is a direction we want to go. So I appreciate that you brought this up for discussion; nevertheless, I think it's a great piece of refactoring.
A: What you've done in terms of simplifying how data can be added for the tracer so that it can be plotted, I like that significantly. So yeah, great contribution, great effort, and thank you, Martiño. I'll have a closer look at this and I think we can get it in as soon as possible. I think for now just leave it as it is.
A: Yeah, I think this is definitely a direction we want to move forward with and consider, especially as we have more and more benchmarks, so yeah, very important. Awesome, thank you. Cool, so, all right, I think that clears the second agenda item that I had prepared, and with that, that's pretty much it. Any additional topic anyone would like to bring in today?
A: Otherwise, I think we can adjourn for today. Again, reminding everyone: the next meeting will be the 23rd. I think Pablo will coordinate and make the announcement as appropriate, and by that time you should expect us to be releasing the first output of the RobotPerf effort; from then on we should be more active in making releases. Yeah, thank you everyone for participating, thank you for showing up, and I'll see you at the end of the month, I guess. Thank you, everyone.