From YouTube: ROS 2 Hardware Acceleration Working Group - meeting #20
Description
20th meeting of the ROS 2 Hardware Acceleration Working Group (HAWG, https://github.com/ros-acceleration).
The ROS 2 Hardware Acceleration Working Group is an open and community-driven robotics group that drives the creation, maintenance and testing of hardware acceleration kernels for optimized ROS 2 interactions over different compute substrates, including FPGAs and GPUs.
For more, including source code, check https://github.com/ros-acceleration. Minutes of the meeting are available at https://docs.google.com/document/d/185Cy1xjpAOgJygEOnlf5OCgOQTywmF0qgSpS3GiW16Q/edit?usp=sharing
For commercial support on hardware acceleration topics in robotics, refer to https://accelerationrobotics.com.
A: All right, welcome everyone to the 20th ROS 2 Hardware Acceleration Working Group meeting. As always, the minutes will be available at this link, which I'm just sharing again to make sure everyone keeps it in mind.
A: The agenda today will essentially turn around the contributions Jason and the team at Harvard sent over, so I wanted to touch base on that and hopefully merge them. I did merge one of them, but I just wanted to make sure we could have a short discussion around some changes I saw in the a4 benchmark, Jason, and then we can wrap the meeting by defining some next actions.
A: Okay, so you can see the source code directly in here. a3 has been a total success; thank you, Jason, for that. I did have a quick look at it and, to be frank, I have no strong comments regarding it. I just made a small change concerning the instructions in the YAML file, which isn't included in here.
A: It's in a different branch, but in a nutshell it removed something that wasn't necessary, because it was already defined in the package.xml file. With that I'm pretty happy. I tested both, and in principle the results are pretty solid.
A: I think I commented on a3 concerning the results I was getting, which are, I think, slightly better. Concerning the max latency, yours is actually quite a bit higher. Maybe we can have a quick look at that, but I was impressed.
A: I was impressed by some of the results you were reporting in the pull request. So this is actually upstream right now; if we go to benchmarks/perception/a3, we should have it in here.
A: So yeah, this number came a bit as a surprise to me. Did you run it with any particular configuration on your machine, Jason?
A: Well, so this is, as of now, now that I have validated it, being integrated into the CI/CD pipeline, so you can expect more results across the various hardware platforms we have enabled. Yeah, I got something slightly different just running it directly on my machine: I got a max of 34 milliseconds, which is a significantly smaller spike. Meanwhile, your results were better overall; it's just that the maximum value you were getting was significantly higher.
A: You can see that there is, yeah, quite an outlier, and that's also why I was arguing in past meetings that I disagree with this approach of discarding max and mean values for the sake of variability. I think that if your pipeline depends on getting the data right, you're certainly a bit compromised, so that's why it's so relevant.
A: That's what I was asking. But I can see that you ran it through more than one thousand groups of messages throughout the pipeline, so maybe, I don't know. I did try it over the complete recording, and in my case it was...
A: And if you're stressing your processor artificially with the stress command, which it sounds like you are not doing on purpose. So either that, or, you know, maybe you're crypto mining (I guess not), or maybe you would stop it. I think we should be fine. Let's just run this on the CI/CD infrastructure, and then let's just have values for various other machines over a period of time. Yeah.
A: We'll get that over.
B: Sorry, yeah, what hardware are you going to be running it on?
A: In principle, a portfolio of them; we will be using a combination of Intel and AMD machines. The Intel workstation is going to be this one: an i7, a fairly basic part at 3.7 GHz, nothing extremely fancy, with average DDR memory; it has eight gigabytes of DDR, something like that. So nothing too fancy, to be frank. We'll also be running on embedded, including on some of the Kria and Jetson platforms, as well as on some other platforms.
A: Also on ARM64; I do plan to run it there. I got an ask to run it on a MacBook M1 and M2. I don't think we're doing that; yeah, I don't think we're doing that.
A: Technically, the code itself does not need to be changed. What you need to do is essentially ensure that you have the right accelerators and then use them as components. So technically you could find a way without modifying the code. The simplest way, though, would be to change the launch file and, instead of using the components that you are launching in here... let's just take, sorry, just take the trace script. So right in here you're launching some components, right? Yes, image_proc, right.
A: You either need to implement it yourself, or you need to leverage it, provide it, and package it properly. For many of those primitives it is available, so you can easily find it, as can anyone else; that's definitely something possible. So, I wanted to touch base, since we have limited time. Again, kudos for a3: a total milestone, and I'm really happy about this.
A: So, first of all, my expectation was, just for clarity, to see modifications only pertaining to a4 itself, right? But you're modifying a bunch of files here concerning a3 and a1. So can you elaborate a bit: why are you, for example, in this case modifying a1?
B: Yes, let me open up my pull request real quick and look at the files. Give me one moment.
B: Oh okay, so for the image input component .cpp file, I wanted to dynamically pass in the topic name instead of just "input". So let me see if I can pull it up.
A: Okay, but you can remap that at runtime, using launch files and parameters; you don't need to modify it in here.
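For the record, this is the kind of runtime remapping being suggested; a sketch, assuming the component subscribes on a topic literally named `input` (the package, plugin, and topic names below are illustrative, not the benchmark's actual ones):

```shell
# Remap the hard-coded "input" topic at launch time instead of
# editing the component's C++ source (names are illustrative):
ros2 run my_benchmark_pkg image_input --ros-args \
  -r input:=/camera/image_raw

# Equivalent remap when loading it as a composable node into a
# running component container:
ros2 component load /benchmark_container my_benchmark_pkg \
  my_benchmark_pkg::ImageInputComponent -r input:=/camera/image_raw
```

A Python or XML launch file accepts the same rule through its `remappings` argument, so none of the C++ needs to change.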
B: Okay, I can't remember exactly what I did. Let me look into it, and then I'll make a comment in the pull request. I can't remember.
A: That's okay. I just wanted to let you know I tested the benchmark and it looks sweet; fantastic regarding the contribution, thumbs up for it. Only, if possible, I would rather have clarity on some of these matters, like why... I can guess that these are new dependencies you required in your dev environment, and I'm okay with this.
A: No issues with those. The thing that surprises me a bit more is the changes on a1 and the changes on a3; I would not expect so many changes, only three.
B: I think for a3, that's just... I didn't change anything from a3; it's just that it wasn't already in the main repo. At the time of this pull request it wasn't merged yet. So I can create a new pull request.
A: Right, so yeah, whatever you prefer. You can rebase your pull request and modify that, or you can close it and open a new pull request; that's totally okay. But yeah, the a3 changes, I think, should go away, and then the change in a1, I don't think... I mean, you can elaborate on it; we can also discuss it in the next meeting.
B: Ah, remappings, okay, okay. I might not have thought about that. Let me take a look at what I did in a1, and then I'll make a comment in the pull request, and then, yeah, I'll try to rebase my branch so that it doesn't show the changes of a3, because there shouldn't be any changes in a3, and then make any actual changes there. Yep.
A: I mean, we definitely want pull requests which are focused, self-contained contributions, so yeah. No, that makes sense; let's go with that. Other than that, as I said, I just tested this, and in principle I'll run in parallel with this branch.
A: So if you want, just leave this branch alive so that, for the sake of parallelizing work, we can also continue moving forward. Feel free to close the pull request, maybe, but open a new one once you're ready with a new branch. That way we can run in parallel, and by running in parallel I mean we will start integrating a4 and a3 into the build farm, so that we can start throwing out data and results with all of the platforms we're maintaining. Does that make sense?
B: Yeah, that sounds great. I had a quick question, if that's okay, about quality of service. The point cloud that's being produced... it only works when I choose the quality of service to be best effort.
A: What do you mean? Can you elaborate? What do you mean by it's only working: what exactly is your experience, what is the result you're getting? Is it that you don't get the whole frame rate expected, or that you don't get data propagated in the computational graph at all?
B: Yeah, when I try to visualize it in RViz, there are some points where you'll see the images are updating, but then the point cloud doesn't update for a few seconds.
A: I see. So which DDS implementation are you using and testing with, by the way?
A: Fast DDS, okay. So one aspect that I can recommend is: try things out with Cyclone DDS. There are notable performance issues with Fast DDS, especially on embedded, and I'm being really nice by saying it this way. By default in our build farm we launch things with Cyclone DDS, because comparing things with Fast DDS is often really hard, because it just breaks; that's as far as I can say it. I know it's the default one, and many people are unhappy about this.
A: Many people in industry are using Cyclone DDS for obvious reasons, so yeah, try that out, and maybe feel free to report in the ticket directly, or open a new issue and share your experiences. My guess is that things will improve just by switching this. Do you need a pointer on how to do this?
A: Let me get back into my screen sharing. So if you go into any of the actual benchmarks that I produced, you'll see that in the README, or in the benchmark.yaml file, there is essentially, prepended to the actual invocation of the trace, the middleware layer, like Fast DDS. You can change it to RTI, if that's something you have available; it should actually be available from the repos, an evaluation version of it at least. And then you may want to do exactly the same when you use ros2 bag to play the data.
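The switch being described is the standard `RMW_IMPLEMENTATION` environment variable; a sketch, assuming a Cyclone DDS RMW package is installed, and with illustrative launch file and bag names:

```shell
# Select Cyclone DDS instead of the default Fast DDS
# (on Debian-based setups the package is typically
# ros-<distro>-rmw-cyclonedds-cpp):
export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp

# Launch the benchmark under the new middleware
# (launch file name is illustrative):
ros2 launch a3_benchmark trace.launch.py

# Export the same variable in the shell replaying the recording,
# so both ends of the graph use the same RMW:
ros2 bag play ./perception_recording
```
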
A: It's a no-brainer; otherwise, there's plenty of documentation online, so it shouldn't be tough for you to figure out. But yeah, I'm interested in what you get out of this. So if it's not too much of an issue, I would appreciate it if, once you test this out, you report it in an issue, and then let's see what comes out of it.
A: My guess, based on past experiences again, is that you'll get much more reliable communication by simply changing the DDS implementation, which you shouldn't expect, right, but that's how it is. And then, if you still observe the same behavior, feel free to compare the behavior between reliable and best effort across the matrix that appears using those two DDS implementations: two DDS implementations times two quality-of-service configurations. Also, just keep in mind that best effort and reliable are just, let's say, names for a global set of QoS parameters. You can further fine-tune the QoS parameters for your use case, if that's something required as well, and again that's something we can look into. But I think for now we're okay in here. Okay, awesome.
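A quick way to run that reliable-versus-best-effort comparison from the command line; a sketch with an illustrative topic name:

```shell
# Subscribe with each reliability setting and see which one
# actually receives data (topic name is illustrative):
ros2 topic echo /points --qos-reliability best_effort --no-arr
ros2 topic echo /points --qos-reliability reliable --no-arr

# Comparing the measured rate under each setting, and across
# RMW_IMPLEMENTATION values, exposes the stalls discussed above:
ros2 topic hz /points
```
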
A: Good. So maybe, yeah, I guess I'll just note down, if that's okay with you, that you will look into a4 again and clean it up. Does that make sense?
A: Then Jason will report on his experimentation concerning DDS implementations, and it's actually going to be a great contribution community-wise, because this is a widely argued topic. You know, DDS vendors tend to throw out benchmarks from time to time, and then, you know, on paper everyone looks super pretty, but then experimentally things tell a different story. So I think this would be interesting input for us down the road. Yeah, good, awesome. So let's do that. More things:
A: So again, the integration into the pipeline, sorry, into the build farm, is almost ready. So you should expect very soon to get data into most of the benchmarks, in principle.
A: We'll aim for these first four perception benchmarks. I know Raisa and some of you guys are also working independently on some other benchmarks, so feel free to just throw me a line, or speak out right now, once you're ready to show something; happy to give you guys time to showcase it, as Jason has been doing.
A: One other aspect, concerning maybe a bit of additional dissemination: we spoke last time about maybe making a first version of RobotPerf, maybe a 0.1 version, that packs together a minimal set of benchmarks that are known and tested on a variety of initial platforms, and somehow showcase it, advertise it, and ask for additional contributions.
A: Is there anyone who opposes us going for that with these first four benchmarks, or is there something else we should be picking? I'm bringing this up because I know in the last meeting, Jason, you brought up a nice list, a nice Excel sheet, of potential other packages and functions we may want to benchmark.
B: Yeah, so let me share my screen.
B: Yeah, so I made a list of some potential benchmarks that we could do next. I don't know if we want to complete these by the first version of RobotPerf that we release, or if this is something we want to do more long term, but I was trying to find some of those GPU-accelerated packages and make a list of them.
B: So yeah, I don't know if we want to shoot for implementing a few of these before the first RobotPerf release, or if we want to just do this down the line; I'm open to either. And I also would love to get feedback about which of these packages would be best to benchmark next, or whether we want to pick and choose any of them: if they're all good, or if there are only a few that sound reasonable. It'd be great to get some feedback.
A: Yep, yep, no, this makes total sense. So right now, besides contributing to the build farm results that will come out, our team is also working, as part of our ROBOTCORE Perception product, on essentially streamlining some of the resources we're getting so that they are API-compatible. This means it'll be straightforward to swap in GPU-accelerated nodes, even if they run on NVIDIA hardware, because right now the GPU implementations available from NVIDIA are not API-compatible.
A: So you often need to make modifications beyond just simply changing the component in the launch file, in some cases.
A: So this is a bit of an inconvenience, so we will pick up part of the burden of eventually benchmarking the same nodes, like the disparity node and the rectify node, as examples, on both CPU and GPU, GPU-enabled by NVIDIA. That's something we will definitely be reporting on, because otherwise it would be unfair to report that NVIDIA platforms do X when it's only on CPU; you definitely want to leverage the GPU, right? So that part, we will pick up.
A: So you can count on us for taking on that heavy lifting. The other side is: we obviously need to enlarge, over time, the number of benchmarks we cover, and maybe start thinking about going beyond perception purely. I think there are still quite a few primitives in perception that we may want to have a look at. Just looking at your list right here, I see you've put together essentially the Nav2 stack and also some of the ros2_control components.
A: Yeah, to be frank, I think the Nav2 selection you have made, the packages in particular, is pretty fair. There's also a series of integration tests within the Nav2 stack that are kind of low-hanging fruit, if we want to grab something that already does something reasonably elaborate that we can surround with instrumentation and then make measurements out of. So that's also a good direction to take on it.
B: Where are those located? Do you know?
A: I can help search for that real quick.
A: Yeah, I just don't have them handy. So here's some planner and smoother benchmarking; I think the planner ones were distinct. There's lots of very interesting stuff in here.
A: The point is that they implement in here a rather, I would say, rudimentary way of taking measurements and doing timestamps, which is not real-time safe and, frankly speaking, uses rclpy, which is definitely not recommended for performance benchmarking purposes. You can tell this from the fact that if you use the ros2 tooling, which is coded with rclpy, and you, for example, collect the frame rate, you will get a very different number than if you measure it using rclcpp.
A: So what we are doing is actually collecting things at the C++ level, with...
A: But besides that, there's quite a lot of useful stuff in here. We just need to kind of discard the processing of the data and the timestamping, but other than that, the overall bringup is usable, like this, down to the launch file.
A: The overall bringup is very usable and should get you up to speed with something that does something that makes sense and is reproducible, because it's an integration test, and at the same time something where we can just place C++ tracepoints and then just go ahead and benchmark it.
A: Sure, but I can also add it to the minutes.
A: There it goes. I'll also drop it in the chat, so you guys will see it there. Okay, I think we're running out of time, so just wrapping up in here and summarizing: Jason will look into a4 and clean it up so that we can integrate it. Jason will also report on his DDS experimentation. Víctor and team will continue working on the build farm integration and hopefully bring results, and then I will probably also assign to myself starting work on a preliminary...
A: ...version for RobotPerf's first release, if that's okay. And then what I'll do is: Jason and the team at Harvard and some of us, maybe we can make a smaller group to iterate faster on that and sort of prepare everything for launch. So I'll probably reach out offline; I'll disclose it, of course, in the working group, but I just want to make sure that we iterate fast in there. So as soon as I have something ready, I'll just reach out, if that's okay, right?
A: Where? Well, I mean, I guess just the usual robotics channels, nothing too fancy; I don't have in mind any conference or anything like that. If that's something you guys want to plan around, I'm okay with it. I know we need to write a paper around RobotPerf; there's an ongoing draft right now, but to be frank, I would rather wait until we have maybe a few more benchmarks before writing the RobotPerf paper and going a bit more scientific.
A: I think this right now is just, I guess, a shout-out to the community, mostly the ROS community, saying: hey guys, here's a first release. You know, it does already show some capabilities, and it's tested across various hardware solutions, which gives you that vendor-neutral approach we want to highlight. It's also tested on various setups and environments, it's reproducible, and it's ready to start growing, so check it out and consider contributing. I think that's the message.
A: If everyone agrees, that's what I think we should convey. But definitely, I think if we keep this up and keep the rhythm, I'm sure that within, yeah, a few months we should be in a very good place, Jason, to write that paper. I know that, from your perspective, it's also something you're interested in; I think everyone's interested in that. So definitely.
A: Awesome, all right guys, any questions, final comments? Otherwise I think we can adjourn for today and continue next week. Thank you... nope, okay. So maybe a final comment from my side: have a look at Jason's list of possible benchmarks. If anyone is curious about giving one of them a try, again, let's coordinate over our GitHub, so feel free to open an issue, open a ticket, and just mention that you're working on something, so that we don't step on each other's toes. And yeah.
A: Just follow Jason's lead; so far he's been producing, I think, pretty high-quality benchmarks. Maybe one thing we can touch on in the next agenda, or next week: Jason, I think you have been trying things out also with other benchmarking frameworks. Did you manage to get any results using ros2_benchmark?
A: I'm also playing with it in the background; again, lots of issues reproducing what they claim. But I think it would be interesting to also share some notes on that, because I think there's definitely some material we can leverage and mainstream into our work, right? Yeah.
A: Very good. All right guys, so we'll meet next week. Thanks everyone for your time, and have a good week. Thank you.