From YouTube: ROS 2 Hardware Acceleration Working Group - meeting #22
Description
22nd meeting of the ROS 2 Hardware Acceleration Working Group (HAWG, https://github.com/ros-acceleration).
The ROS 2 Hardware Acceleration Working Group is an open and community-driven robotics group that drives the creation, maintenance and testing of hardware acceleration kernels for optimized ROS 2 interactions over different compute substrates, including FPGAs and GPUs.
For more, including source code, check https://github.com/ros-acceleration. Minutes of the meeting are available at https://docs.google.com/document/d/185Cy1xjpAOgJygEOnlf5OCgOQTywmF0qgSpS3GiW16Q/edit?usp=sharing
For commercial support on hardware acceleration topics in robotics, refer to https://accelerationrobotics.com.
A
Welcome everyone to the 22nd ROS 2 Hardware Acceleration Working Group meeting. Today we'll mostly be discussing RobotPerf updates across contributions from various members of the working group. As always, the meeting will be recorded and the minutes are available in the same place as always; I'll just share the minutes in the chat so that everyone can access them and take a look. We'll be filling them in on the go this time, because I didn't have much time to prepare them. Apologies for that.

A
Let's start with new business concerning the project itself. In past meetings we mentioned the objective of getting a first preliminary release of RobotPerf out approximately around these dates. I'm happy to report that there have been very good advances on our side, which gets us pretty much ready to make a tech release. However, we don't have the marketing content associated with it yet, so I believe we're going to push.
A
I believe we're going to push the alpha release we discussed by a few more days, and I'll coordinate with a few of you, especially Jason and BJ; happy to coordinate with you guys on how we can orchestrate that together. The material is just coming along; it's a bit beyond us, because we're requesting external help to put together all this fancy-looking content.
A
So we're just waiting for that and, again, let's agree on that offline. What I can certainly report on is the technical progress, which again is pretty much finalized; a bit of additional refinement is required on my side and my team's side, but we're pretty much getting there. The first thing I wanted to share with everyone is the updates in the CI/CD infrastructure.

A
Allow me to share my screen real fast. Long story short, we pretty much just got there, and this is a sneak peek of my endeavors as of this afternoon, trying to fix the CI pipeline. As you can see, there are plenty of targets in terms of hardware: we've got targets from NVIDIA, from AMD, from Intel, from Qualcomm.

A
There are a few from Microchip, so there are, you know, all colors in here, which is pretty exciting for us. And, of course, the results for each one of these targets from a computational perspective are getting reported directly in the way that we disclosed, aligning with the format and with the corresponding guidelines that we marked within the documents for the specification, which are right here in the repo, as you know and as we've been discussing for a while.
A
Thank you, everyone who contributed to that. So yeah, pretty much, that is done and the results are coming along; again, just a few final bits for me to fix to get all those results. For now we are reporting, as we discussed, results and benchmarks on perception, and particularly for the existing benchmarks that have been accepted within the current repository at this stage.

A
There's still margin for accepting a few more benchmarks, for those of you working on some of those, so I'm encouraging people working on benchmarks to maybe make a final push so that we can include as many as possible. There are also some people working on some that hopefully will make it, so crossing fingers for that to happen. Again, the cut will happen anytime in the coming few days, so just stay tuned if you're interested. That kind of kills the update on the CI.
A
Concerning code refactoring: this is one of the things my team has been working on, and credit goes to both Martino, who is with us here today, and Alejandra, who worked together with him on this. As you know, each benchmark has two sides to it. Let me just click a different one, this one, for example: there is a tracing effort that does profiling and obtains the corresponding information on the computational graph, and then there's an analysis which happens a posteriori.

A
The interest behind this, as opposed to some other benchmarking efforts, such as NVIDIA's ros2_benchmark, is the fact that we don't artificially extend the computational graph. The way NVIDIA and others are approaching benchmarking is that they create a new node that subscribes to the topics they want to capture. That artificially distorts the actual computational graph, because you're measuring something that is not realistic; it's not the real computational graph.

A
What we are doing is this: we don't create any external or additional computational graph element, and we take measurements by using profiling. That, of course, is a bit more cumbersome and requires additional tooling and two phases (the profiling or tracing, and then the analysis), and that's why we separated it in two. At the same time, the complexity comes with advantages. Again, we can profile real, realistic robotic systems and then, a posteriori or in the future, analyze them appropriately.
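As a concrete illustration of this tracing-based approach, the sketch below shows what an LTTng-UST user-space trace point of this kind can look like. The provider and event names are hypothetical, not RobotPerf's actual trace points; the point is that events are emitted in-process by the instrumented node, so no extra nodes or subscriptions are added to the ROS 2 graph:

```cpp
// robotperf_tracepoints.h: hypothetical LTTng-UST provider (illustrative
// names). A pair of events brackets the operation under measurement; the
// a-posteriori analysis pairs them up to compute per-message latency.
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER robotperf

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "robotperf_tracepoints.h"

#if !defined(ROBOTPERF_TRACEPOINTS_H_) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define ROBOTPERF_TRACEPOINTS_H_

#include <stdint.h>
#include <lttng/tracepoint.h>

TRACEPOINT_EVENT(
  robotperf,                // provider
  rectify_cb_init,          // event emitted when the callback starts
  TP_ARGS(const void *, msg_arg, uint32_t, seq_arg),
  TP_FIELDS(
    ctf_integer_hex(uintptr_t, msg, (uintptr_t)msg_arg)
    ctf_integer(uint32_t, seq, seq_arg)
  )
)

TRACEPOINT_EVENT(
  robotperf,
  rectify_cb_fini,          // event emitted when the callback ends
  TP_ARGS(const void *, msg_arg, uint32_t, seq_arg),
  TP_FIELDS(
    ctf_integer_hex(uintptr_t, msg, (uintptr_t)msg_arg)
    ctf_integer(uint32_t, seq, seq_arg)
  )
)

#endif  // ROBOTPERF_TRACEPOINTS_H_

#include <lttng/tracepoint-event.h>
```

Inside the node's callback the instrumentation is then a single call on each side of the operation, for example `tracepoint(robotperf, rectify_cb_init, msg.get(), seq);` before the kernel runs and the matching `_fini` call after it, which is what keeps the measured graph identical to the deployed one.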
A
We can also get much more realistic information about it. The reason why I'm saying all of this is because we noticed, based on feedback, that people were struggling to come up with their own benchmarks, because producing their own analysis scripts was very cumbersome; I think, Jason, you experienced this yourself and suffered a bit with it.

A
Now, the only thing that you do is add targets: for each one of these trace points you just add the name and then additional metadata on how you want it plotted, and that's it. After adding the targets, there's nothing else; you just invoke the corresponding metric you want to analyze on that graph and, boom, it just happens. So again, credit to my teammates, who did, I think, amazing work. Right now the repo is completely transformed to comply with this philosophy.
A
So right now all of the analysis scripts comply with this. I'm encouraging folks trying things out on their own to pay attention to this and essentially to try to align with it if possible as well. You can also do it the old-fashioned way; it's just much more cumbersome for you. Using this also probably means that we will maintain all the underlying abstractions for you, which I think comes in really handy.

A
For those of you interested in exactly what that abstraction means, I encourage you to check this package (sorry, just admitting another person into the meeting), this benchmark utilities package. I won't get into the details because we only have 30 minutes today, but in here is all of the logic that you would love, which makes your life a bit easier. Again, this is already decently big, so us maintaining it for you should help out; count on that down the road.
A
So that's one thing down, or one more thing down. I wanted to then get into a bit of an update on new benchmarks, and I know that at least Martino and Alejandra can give us a bit of an update on what they are working on.

A
I also want to touch on some of the past action items that I think you took. How about we hear first what Martino and Alex have been working on, and then maybe, Jason, you can comment on it, because Martino has been looking at your particular endeavor in benchmarking and he's been building on top of yours in a different pull request.

A
So maybe we hear first from Martino with the time we have, and then we can think a bit about what your positions are, and then, with the time left, we give the room to Alex and then to everyone else who wants to share something. Does that make sense, folks? Okay, I see people nodding in here, great; let's do it then.

A
Martino, would you like to take over, maybe share your screen, and just spend five minutes describing essentially where we are with A4, I believe?
B
Sure. Okay, let me share my screen; apologies, the connection is not the best. Let me know if you can see my screen. Okay, perfect. So yes, as Victor mentioned, Jason, I was working on top of your pull request, just to make things faster, because we wanted A4 to be merged to main as soon as possible. So I just took over your work, and basically everything I'm doing is merging in the latest changes we've made to main.
B
One of those changes is, as Victor mentioned, the refactoring of the analysis launch file, and then just a few details on the readme file. Also, here I have your node, yeah: basically I wanted to put some trace points here in the benchmark node, because currently we are not instrumenting that node. That's the only thing left that I have to do, and then basically the A4 benchmark will be complete.
A
Just for further clarity, and correct me if I'm wrong, Martino: the rationale here for instrumenting the node being benchmarked is that we may want to compare this implementation across accelerators, Jason, yeah. And for that to happen, you need to isolate the specific function call where the actual disparity computation is happening, so that you can compare apples with apples.

A
From a profiling perspective, you want to differentiate essentially the computation that corresponds to the ROS 2 graph data flow from the computation that focuses on that particular perception operation.
A
Okay, awesome, so then we are pretty much aligned there. Jason, are you okay with that? Your pull request is reflected there, I believe, Jason; your commits are there, yeah. The intention is that once Martino wraps this up, adding the corresponding trace points to image_pipeline, he'll report back in the issue and I think we will make a final review validating it. Are you fine if we proceed with this other pull request, which again contains your commits and preserves your authorship?
B
Yeah, because we have a fork under the ros-acceleration organization, so we can work from there. And actually, for instance, on the A1, well, on the rest of the benchmarks, we are also using those trace points in the image processing package.
A
We know that source code really well, and I'm pretty sure that if we pushed that upstream it would be accepted. For now we will just contribute it to a fork we maintain, but that should be a no-brainer, no issue at all. And again, that is just for consistency with the rest of the benchmarks; it's not necessary. I just want to highlight this aspect.

A
It's really not mandatory for us to conduct the benchmark in the way we want to; it's really not. We're just refining it, because internally we are also comparing, from a pure perception-computation perspective, various computational resources and platforms.
D
Gotcha, okay, so I understand the trace points. Now I'm wondering: adding those trace points obviously isn't going to change the reliability of how often the point cloud is getting published? I was having some trouble with the point cloud occasionally freezing, as if no messages are being passed.
A
I would assume not, not at all. However, have we experienced that so far? Or, Martino, have you done any experimenting so far with what Jason is describing, like freezes?
B
I tried to replicate this error, the issue on the point cloud topic, but I was not able to; I got continuous information from that node, so I could not replicate your problem there.
D
Oh sorry, all right, sorry, I missed that; I just heard the end of it. Yeah, I think I do; it's pretty powerful. That's interesting, I'm not sure.
B
But you mentioned something about some Docker replication; I'm not sure if that's related.
D
No, I was having Docker issues that I don't think were related to this specific issue. And I tried different middlewares; I think previously I was using, like, Fast DDS, and so I used Cyclone DDS. That didn't seem to change anything.
A
I mean, yeah, I think getting that confirmation essentially helps out moving forward. For sanity, and for your peace of mind, Jason, what I can propose is this: once Martino gives the general okay to the code changes, I can commit to take a few minutes and push this into the build farm, and then we'll get the results of this particular benchmark on a number of embedded boards and workstations. We'll get it across AMD, NVIDIA and Intel, kind of embedded edge devices as well as workstations, and then we can see whether we can reproduce this isolated issue you were experiencing and conclude based on that.

A
I think this is a general policy we need to set up before formally accepting a benchmark into the source code: we run it on the CI and just observe that it makes sense.
D
It works, that's awesome. Martino, do you visualize it using, like, RViz, and you see the point cloud, or are you inspecting it in some other way?
D
Okay, yeah, I think that's the same for my workstation. I think the issue that I'm referring to is when I try to visualize it with RViz 2: when I'm looking at the point cloud, the point cloud doesn't always render, it just freezes sometimes. Okay, so it would be interesting if you could try that out, yeah.
A
Sorry folks, I wasn't aware of that piece of information, but that makes sense, guys. Because what you're doing when you subscribe with RViz, what's happening from a computational graph perspective, is that you are artificially creating a node that's subscribing to that topic. So you no longer have only one subscriber; you have multiple subscribers, and each one of them is receiving the data flow from a point cloud, which is quite a bit of data.

A
That is the same thing I was describing about artificially creating nodes for measurement purposes, which, you know, serves as a kind of isolated test, but it's really not realistic and it doesn't help in robotic applications: you need to measure things in the realistic data flow for it to be useful. So there's already one subscriber, and that's why we are using this output node here.
B
This is something that needed to be upgraded. Let me see it here.
A
I can see it over there, yeah: you get one more node to the right side of it, and I'm assuming that's what I was expecting, yeah, one subscriber.
A
So maybe, Jason, for you too: just give it a try. Fork Martino's fork, give it a try, relaunch the analysis, and just let us know; feel free to comment. For now, if it's okay, unless anyone complains and shouts at me right now, I will go ahead

A
and close the initial pull request that was opened by Jason. We'll continue the discussion in Martino's pull request, which preserves the authorship and remains consistent with what Jason was originally doing. I think that wraps the discussion. Martino, Jason, anything else to add?
A
Awesome, okay, cool. So, real quick, so that we give Alejandra the time she needs to explain her contributions: Jason, if there's any chance, I know you have an open issue concerning the depth image proc, which I know was aiming in a somewhat similar direction. I don't know what state of development that is in, but given that we plan to cut this first perception benchmark, it'd be awesome.

A
My understanding is that this was a slightly different benchmark you were working on, but maybe this is the one we just discussed?

A
Okay, so there's already a pull request opened; that's my bad. I just didn't go through it in enough detail, it seems. Okay, so no worries, and then we'll get this included.

A
I don't know why I mentioned it here; it says A6, yeah, my bad. Okay, so we'll wrap this A4, and let's then discuss a possible A6 with Alejandra, if we have margin for it. The floor is yours.
C
Okay, so I just wanted to introduce to you a little bit the benchmark I'm working on. It's number A6, and it's going to be about the rectify node from the perception stack. However, it's slightly different from A2, which is also based on rectify, because it will add the throughput metric, which is something different; it's the first time we introduce this metric in this repository. So there's a little bit of work behind it, mostly on other packages, that I will introduce now.
C
Let me share my screen very quickly. So I'm working on, well, I did a fork of two of the repositories. One of them is the benchmarks one, robotperf/benchmarks, of course, and here is where I'm updating all of the progress. I will go into it more technically now, but please stop me if you have any questions. And then there's another fork of the ros-acceleration image_pipeline, because, as Martino said, we are introducing new trace points, and so we needed to do some small updates on this. So, very briefly, I will directly show you my screen; I think it's easier. Compared to the other benchmarks that we have been talking about, we need to actually add two specific pieces of information, which are the sizes of the messages; for that I'm using the serialization from rclcpp.
C
Here's how I'm doing it. And why do I need it? Because I'm going to compute two different metrics for throughput: one of them will be bytes per second, and the other one will be the number of messages per second. We're trying to, yeah, categorize or characterize the data flow in this way.
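A minimal sketch of what this serialization-based measurement can look like with rclcpp, together with the two derived figures (bytes per second and messages per second). The class and function names here are assumptions for illustration, not necessarily Alejandra's actual code:

```cpp
#include <chrono>
#include <cstddef>

#include "rclcpp/rclcpp.hpp"
#include "rclcpp/serialization.hpp"
#include "sensor_msgs/msg/image.hpp"

// Size of a message once serialized by rclcpp (i.e. the CDR payload).
static size_t serialized_size(const sensor_msgs::msg::Image & msg)
{
  static rclcpp::Serialization<sensor_msgs::msg::Image> serializer;
  rclcpp::SerializedMessage serialized;
  serializer.serialize_message(&msg, &serialized);
  return serialized.size();
}

// Accumulates counts per message and derives the two throughput metrics.
struct ThroughputCounter
{
  using Clock = std::chrono::steady_clock;
  Clock::time_point start = Clock::now();
  size_t bytes = 0;
  size_t messages = 0;

  void record(const sensor_msgs::msg::Image & msg)
  {
    bytes += serialized_size(msg);
    ++messages;
  }

  double bytes_per_second() const { return bytes / elapsed_s(); }
  double messages_per_second() const { return messages / elapsed_s(); }

private:
  double elapsed_s() const
  {
    return std::chrono::duration<double>(Clock::now() - start).count();
  }
};
```

In the RobotPerf flow the sizes would be recorded through trace points and aggregated a posteriori, consistent with the trace-then-analyze split described earlier, but the size acquisition itself is the same idea.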
C
So I obviously had to make some small changes in this one. Once I'm done with this, I will try to make it also compatible with the other benchmarks, but that's a second priority. And in order to be able to use this, there are a few changes I also had to make in the tracing and analysis code, mainly because, up to now, although Martino worked here on the refactoring, we only had the analysis for latency. So I'm currently adding new methods to make it work with throughput.
C
Everything will be updated and correctly documented, but just so you know. Similarly, the equivalent of this benchmark in the image_pipeline also had to be modified in order to be able to save all of the information in these other files, for example rectify.cpp, which includes all of these trace points.
C
So that's the current work. I'm quite advanced; I only need to finish the analysis part. The traces, or trace points, are correctly stored and we can recover them.
A
That is fantastic, and kudos to Alex and her work spearheading essentially this new metric. As she was pointing out (you may want to stop sharing, maybe, Alex, or you can keep it up), one thing that I think is worth noting is that she's doing amazing work extending it to a new type of metric.

A
So far we were mostly measuring latency because, as you know, in robotics real time is king, and for that we need to always account for the maximum latency that a particular operation takes, so that we can comply with the deadlines real time specifies. However, it does definitely appear that throughput is a metric that people are reporting and caring about, so we just thought that it made sense to try to accommodate that and kind of jump into it as well.

A
We strongly believe that it is definitely of interest. Alex, just one question from my side, if I may.
A
I was just picking at the way you implemented the acquisition of the bytes that are to be accounted for for each one of the messages: you capture the bytes that you obtain from the image itself, from the image object, and then from the info object, correct? And I saw that you were tapping into the ROS 2 abstractions to obtain the serialized message and then obtain the number of bytes.

A
Just a quick question on that. I do appreciate that effort, but is there any reason why you didn't, similarly to what you did with the image, just invoke a function or a method to obtain the bytes? Why didn't you just obtain the bytes from the info message? Is there a reason why you tapped into the ROS 2 core layers and serialization?
C
Let me share my screen; actually, in both cases I'm doing exactly the same. Let me go back to this. The truth is that, at the beginning, we were having some issues, as you know, with LTTng in order to store the information. So here you can see that for both of them there's this method called size that we can use. We could actually also use capacity instead of size; they are very similar, but for throughput it's the real one, the size, that we care about, instead of the capacity. However, in order to get it, we kind of needed to create this serialized message, to convert it to a byte array. So that's why we're doing it, and we're doing it for both of them.
A
But sorry, sorry if I'm a bit slow here: this is fine, right? And, I presume, this is operating at which layer? rclcpp, yes, okay. So that is okay, and it gives us the data structure at the ROS client library level, which is above DDS, and that's what we want. So this is correct, I think. But just curious: why don't you just invoke size on the data of the image, or sorry, on the image message, and that's it?
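The exact snippet pasted into the chat shortly after this isn't captured in the recording, but the direct approach being suggested would presumably look something like the following, summing the message's variable-length fields instead of serializing it (the field choice here is a guess for illustration):

```cpp
#include "sensor_msgs/msg/image.hpp"

// Approximate message size taken straight from the fields, dominated by
// the pixel buffer. This ignores CDR padding, alignment and string length
// headers, so it can differ slightly from the serialized size.
static size_t approximate_image_size(const sensor_msgs::msg::Image & msg)
{
  return msg.data.size()                    // pixel payload
    + msg.encoding.size()                   // encoding string
    + msg.header.frame_id.size()            // frame id string
    + sizeof(msg.height) + sizeof(msg.width)
    + sizeof(msg.step) + sizeof(msg.is_bigendian)
    + sizeof(msg.header.stamp);             // timestamp
}
```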
C
You mean directly on this? I can try it again. It's just because, at the beginning, since we were doing the LTTng part, it was necessary to work this way in order to try to get it back from rclpy, since they both speak the same language, RCL, let's say. But I can research and see if it also works with a simpler approach, of course.
A
I mean, don't get me wrong, I think this is great, and we can definitely find use cases wherein this will come in handy, but I'm just curious. The way I would have approached it is by doing this; I just pasted it in the chat, and I was just curious whether there were any limitations, or whether you could invoke that directly on the object itself. So yeah, maybe just give it a try; we definitely don't want your effort discarded, so feel free to use whichever one gets to the same place.

A
My only concern is that I think the information we're getting out of rclcpp is the serialization, and I think when you are serializing the data you are somehow cooking it so that it passes to the layer below. However, I think that when it comes to messages like this, with intra-process communication between those two processes, or intra-node communication, you're not getting below rclcpp; you're staying at that level, at the RCL level, just passing pointers between nodes.

A
So I'm concerned that if we are just obtaining the serialized data, we are actually obtaining the size it would use if it were pushed below, to DDS, to the lower layers. I'm actually really interested to see what the result is if you try this out: if the sizes are the same, then no worries; if they are different, I think that is definitely something to consider, okay?
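The pointer-passing behavior described here is what rclcpp's intra-process mode does when a publisher hands over ownership of a message. A minimal sketch, with node and topic names made up for illustration:

```cpp
#include <memory>
#include <utility>

#include "rclcpp/rclcpp.hpp"
#include "sensor_msgs/msg/image.hpp"

// With intra-process communication enabled, publishing a unique_ptr lets
// rclcpp move the message directly to same-process subscribers, without
// serializing it or handing it down to the DDS layer.
class IntraProcessDemo : public rclcpp::Node
{
public:
  IntraProcessDemo()
  : rclcpp::Node("intra_process_demo",
      rclcpp::NodeOptions().use_intra_process_comms(true))
  {
    pub_ = create_publisher<sensor_msgs::msg::Image>("image", 10);
  }

  void publish_once()
  {
    auto msg = std::make_unique<sensor_msgs::msg::Image>();
    // ... fill in the image fields ...
    pub_->publish(std::move(msg));  // ownership moves; no serialization
  }

private:
  rclcpp::Publisher<sensor_msgs::msg::Image>::SharedPtr pub_;
};
```

Under this reading, a serialized-size measurement reflects what would cross the DDS boundary, which is not necessarily what moved between intra-process endpoints; comparing the two figures, as suggested above, settles it.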
C
Okay, I will consider it. Just one thing: if I'm not mistaken, the information that is exchanged between nodes is serialized so that, like, if it weren't serialized, I don't think that both Python and C++ could work with the same information. That's why I think it serializes. However, maybe I'm mistaken, or maybe not, but maybe it's a different size; we'll just discover that when I take it back and try.
A
I think you're right, I think you're right. I think probably what it's doing is this: at the rclcpp level it's serializing it and pushing it to RCL, and then at the RCL level it's passing pointers, because it's already C-centric. I think you're right, yeah.
C
But maybe, like, if the other method still gets the same information, I mean, in this case, since it's C++, it would be even better, so I'm very open to trying the other option.
A
This makes sense; good point, definitely. So yeah, let's discuss this offline, but I'm curious about this, and, yeah, at least I would learn something, I guess. So, awesome. All right, folks, I think we ran out of time; this was an exciting discussion. Jason, I know BJ had to run, but is there anything we should consider before we wrap today from your side? I know we discussed that you guys were doing some experimenting with alternatives; anything to share in that regard?
D
Not just yet, but maybe soon we'll have some updates; we'll reach out.
A
Okay, good, awesome, fantastic. So, guys, go for it.

A
No, no, no worries. Okay, so thanks everyone, and we'll meet, I believe, next month, end of June, for the next working group meeting. Hopefully I will have exciting results by then. Thank you; Jason, ciao; bye-bye, everyone.