From YouTube: ROS 2 Hardware Acceleration Working Group - meeting #19
Description
19th meeting of the ROS 2 Hardware Acceleration Working Group (HAWG, https://github.com/ros-acceleration).
The ROS 2 Hardware Acceleration Working Group is an open and community-driven robotics group that drives the creation, maintenance and testing of hardware acceleration kernels for optimized ROS 2 interactions over different compute substrates, including FPGAs and GPUs.
For more, including source code, check https://github.com/ros-acceleration. Minutes of the meeting are available at https://docs.google.com/document/d/185Cy1xjpAOgJygEOnlf5OCgOQTywmF0qgSpS3GiW16Q/edit?usp=sharing
For commercial support on hardware acceleration topics in robotics, refer to https://accelerationrobotics.com.
A
Alright, welcome everyone to the 19th Hardware Acceleration Working Group meeting. As always, let me start by sharing the minutes on the screen — there they go — and pasting them in the chat.
A
Very good. Okay, so welcome again, everyone. Today we have a small agenda — only 30 minutes for discussion. We will have a longer meeting next time, and probably a guest coming to speak about the use of hardware acceleration in the context of DDS, so look forward to that; definitely worth coming. For what concerns this April's meeting, I just wanted to touch base on, essentially, RobotPerf. There are various minor updates on other projects, but I think at this point they're not big enough — at least not enough changes worth mentioning.
A
So let's focus the discussion on RobotPerf. As you know, RobotPerf is the reference robotics benchmarking suite that we are pushing forward as part of the ROS 2 Hardware Acceleration Working Group, and there have been some, I think, pretty decent contributions coming from Jason and the team from Harvard — thank you, guys. I spent some time looking at the source code, especially of a3, and I think it's outstanding. I struggled, though, to reproduce it a bit, because I was missing the dataset. So let's have a discussion around that, and then I'll volunteer to take on an action on my own to wrap that up and get it in — I think it's been a long time coming, and it's about time we begin. That way it may also give you a bit of additional bandwidth, Jason, to finalize a4; I saw the pull request and went through it, and I think it's looking fantastic. I know there are a few more things pending on your end, but once that is out, it should be good to go.
B
Yeah, that sounds great, and I saw your comment about the data.
B
So previously, I tried pushing the ros bags and they were too large, because I recorded maybe a minute of data. But I did try recording maybe two or three seconds very quickly, and then I pushed that data for both the a3 and the a4 benchmarks — and both of those ros bags were too large, even with maybe two or three seconds. I can't remember what the push command gave me. I'll look into that.
A
Let's do one thing, just for the sake of not getting distracted here. I was going to volunteer, if everyone agrees: why don't I contribute a virtual data room in Google Drive that we can all find, and for now we can use that — if everyone's okay with it, unless someone has a better choice or suggestion. I think Google Drive is something pretty much everyone knows.
A
So what I'll do, if it's okay, is just create a folder that I'll call robotperf. I think we called it — what, rosbags? Should we now take the chance and rename it? Let's just call it robotperf rosbags.
A
Right, and I'm just going to create this and share it with everyone.
A
And I'm going to copy the link and just share it here in the chat. So can you guys confirm whether you can enter that empty folder right now?
A
And so, to overcome the limitations of GitHub, we're going to temporarily use Drive.
A
Okay, so I would suggest that we move forward with this. I'll just investigate and then figure out a way to programmatically automate this. But, frankly speaking, for what concerns benchmarking and testing:
A
Typically, you would have all of these files in your root file system, and then you would volume-mount them into the corresponding Docker containers that are used to run the benchmark jobs. So just having them locally, in whatever folder in your root file system, should be fine — and that's actually what other frameworks are doing as well; it's what NVIDIA is doing as well. So I think it's fair that we follow this approach for now, just for the sake of reaching some consensus here.
A
What I'm going to do is reproduce the exact setup — I'm actually going to do it right away. I'm going to download — is there an option to download this? Yeah, I've never done this, so I'm just going to fetch this and upload it to the folder as-is right now, if that's okay, and that way we keep the data structure that we have right now in the repository. Okay, so essentially 'perception' should be an existing folder, and then within perception we have the image folders. And so, Jason:
A
Can you please take the action of uploading the a3 folder, and then in there just dump your minute of recording — don't be shy about it. So yeah, I'll just say: Jason.
A
Right, and I think that's going to get us moving forward. And then, once you do that, Jason, if you can ping me back in the issue, I'll take the action of then reproducing it again locally — because I was launching the simulation and playing around with it (I like the work that you've been putting together, by the way), and that just worked fine. I just didn't want to create a new dataset because, again, results are not going to be reproducible that way. So just push that up to the shared Google Drive and then I'll wrap it in. And then I just wanted to touch base real quick on the issue you were experiencing analyzing the data.
A
Can you please walk us through it, if you can — do you have it handy? If not, you can just use the images. I'll just give you the screen. Yeah.
B
Yeah — let me pull that up.
B
There we go. So once I run the benchmark — the trace launch file — I can go into the directory where the data is stored and run the babeltrace command, and I'll see all this data, right? And I have the image input for both images, and then a tracepoint for the image output. So this is kind of like a chain that I'm following, yeah. So let's put this, and then if I go to my —
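The chain-following described above — scanning the trace output for the two image-input tracepoints followed by the disparity-output tracepoint — can be sketched roughly as follows. The event names and the shape of the event stream are assumptions for illustration only; they are not the actual RobotPerf tracepoint names:

```python
# Hypothetical tracepoint chain for the stereo-disparity benchmark;
# the names below are illustrative, not RobotPerf's real identifiers.
CHAIN = [
    "robotperf:image_input_cb_init",       # first image arrives
    "robotperf:image_input_cb_init",       # second image arrives
    "robotperf:disparity_output_cb_fini",  # disparity map published
]

def count_chain_matches(events, chain=CHAIN):
    """Count how many times `chain` occurs, in order, in a stream of
    event names (e.g. taken from `babeltrace2 <trace-dir>` output)."""
    matches, idx = 0, 0
    for name in events:
        if name == chain[idx]:
            idx += 1
            if idx == len(chain):  # full chain matched
                matches += 1
                idx = 0
        # events not matching the next expected tracepoint are skipped
    return matches
```

Each complete match corresponds to one message making it end-to-end through the pipeline.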
A
Can you — just for us to work together with you — can you please go back to babeltrace? By the way, babeltrace gives you a much better output — colored — and it's generally a much better tool, so my tip is to use babeltrace2. But that's okay for now. So can we — yeah, if you're grepping you're not going to see it, but can you show us, or highlight with the cursor, the start of the chain and where it finishes, just for the sake of making it clear?
C
Also input — okay, so I think this is a good place to start. Actually, one second — output. All right, this might be a good place to start, and then —
A
I just want to see the sequence, so don't worry too much, if you like.
B
We start with — so we have two images, right? It's a stereo image. So we have the tracepoint for the first image, which is these two, and then the tracepoint for the second image, which is this and this, right? And then, once we come up with the disparity map and produce it, my tracepoint for the output disparity map is this right here.
A
Yeah, yeah, makes sense. Okay, and then if we go into the graph — that makes total sense. Because I think you have the graphs handy somewhere in here?
A
Yeah, so — and here we are. The tracepoints are living inside of the image input component — there's one over there, right — and since that's produced twice, you've got that twice. And then, finally, the output one is in this disparity output component, correct? Yes, yes — and that's where you're getting both. And so, okay, now — if we go back into your Python launch file and we see the trace chain — the target chain, sorry — you're defining the specific tracepoints that need to be hit.
A
Okay — for the first image, the same for the second image, and then you're getting — okay. So that's what you're searching for. And now, how are you calling this in this ambiguous manner? So, if you go below, the target chain is ambiguous. What that gives you, essentially, is a way — when you plot this out — to distinguish the first image from the second image. So typically, what you want to do is name them differently.
A
You want to name the second one differently. Oh, so you just put a two at the end and that's it? Yeah, for both, yeah. This is just for depiction purposes; for what concerns following the tracepoints, the one that is relevant is the first one. Okay — and now, with this in mind, you should be able at least to execute what corresponds. If you go down there — down, down — you should have the logic now, right? You're loading —
A
Yeah, and then over there you're checking whether you get messages or not. So can we maybe stop there — just comment out everything after that and just print it out. Yeah, you can comment out everything below — yeah, exactly.
A
And just at that point, just print the sets.
A
If these things become cumbersome, we can certainly extend the ROS 2 CLI tooling so that it happens automatically. Have you built the workspace?
A
Can you maybe, real quick, build it again?
B
Okay — so I saw there is this ros2_benchmark package, and it's being called inside of the analysis script. I think that's it — I was messing around with that, and that's why I was getting some errors right now. And this is located in the other folder — this is the package we're referring to, right? Gotcha. And then, to source or to build this package —
B
Do we always have to build this package before running the analysis file, or how does that work? I wasn't very sure.
A
ros2benchmark is this thing, and it's a series of CLI tooling to simplify the use of RobotPerf via the CLI — so you do `ros2 benchmark`, and then it has a series of commands that become handy. It's not strictly necessary for running the benchmarks — it should not be — but it's just — I mean, the package itself is ros2benchmark, all together; ros2_benchmark, with the underscore, is what you had, I think —
A
I think that's a different thing — isn't that the NVIDIA one? Yeah, that's the NVIDIA one; I think that's a different thing. But you might have mistyped it, or maybe you were doing some tests. So yeah.
B
Oh yeah, yeah, sorry — okay, that's probably why it was erroring out, but —
A
If you go back into your launch file, I'm just interested in seeing how many sets you find out of that.
B
It terminated anyway, so yeah.
A
You should source — you need to source /opt — yeah.
A
So you have encountered 1457 matches of that chain. That's right. And what that means is that you've got plenty of results — you should get results. You were pretty much saying, "yeah, you know, I'm not really getting what I was expecting" — and actually the final demonstration, or evidence, of why you indeed are getting those results is the last block itself.
A
It says chain index five, and the last tracepoint is the finish of the output image. So you are getting matches — in fact, more than one thousand.
A
So with this, you should be able to then get statistics out of them, and that's what the code commented out under what we just printed does. And so, at some point in one of those procedures or functions, you're getting some issues — because, as you're massaging the data, you're getting some sort of conflict. This is just going to give you an informal overall timeline.
B
Okay, so this is the latency, right? Yeah — but —
A
That is not something — I mean, show me the code. Can you go a bit below? Are you taking one sample, or are you — yeah, here you're plotting. Okay, you're plotting the pipeline of the last message — so number 1457 in this case — which is okay. And yeah, indeed, that's the total time, taking the start from the first tracepoint, indeed, yeah.
A
I think, technically, though, you don't want to measure up until the latest tracepoint; I think you want to measure up until the one previous to the latest — just technically speaking. Technically speaking, the message has already arrived in the process at the init. Do you need to wait until the finish? Because, technically, what the output image component does is just subscribe to the image, then put the tracepoint and finish the callback.
A
So you don't need to wait until that — but yeah, it's kind of like, what is it, a microsecond below it, something like that. So yeah — overall, 23 milliseconds sounds like what it's taking. And now, with this, you can do plenty of things: you can average it, you can — and that's up to you.
B
Yeah, so then I can just — I can go through, I guess, this list, get the average latency, and then report that number.
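A minimal sketch of that computation — assuming each matched chain is available as an ordered list of `(event_name, timestamp_ns)` pairs; that data shape is an assumption for illustration, not the analysis script's actual structures:

```python
def chain_latencies_ms(chains):
    """End-to-end latency per message, in milliseconds: the time between
    the first and last tracepoint of each matched chain."""
    return [(chain[-1][1] - chain[0][1]) / 1e6 for chain in chains]

def average_latency_ms(chains):
    """Mean latency across all matched chains."""
    latencies = chain_latencies_ms(chains)
    return sum(latencies) / len(latencies)
```

Applied to all 1457 matches, this yields one latency per message rather than just the timeline of the last one.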
A
So let's just note that, as opposed to some others, we should be mindful about robotics and about what we are measuring here. Robotics is the scenario where we care a lot about real time, and in real time we care a lot about the maximum values.
A
So if you discard the min and max values, as some out there are doing for variability purposes, what you're doing is hacking the measurements, and you're really not reporting appropriately — you're just favoring certain architectures. And in this case, we don't want to favor any architecture: we want to be fair and vendor-agnostic; that's a baseline of RobotPerf. And so we want to report, as far as possible, in a systematic and automatic manner, via the build farm that we have put together. So, you know, this is awesome.
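A sketch of reporting that keeps the extremes rather than trimming them — the exact statistics the project reports are up to the group; this only illustrates the point that the maximum must survive aggregation:

```python
import statistics

def latency_report(latencies_ms):
    """Summary statistics over per-message latencies (milliseconds).
    Min and max are kept deliberately: for real-time robotics the
    worst case matters, so outliers must not be discarded."""
    ordered = sorted(latencies_ms)
    return {
        "min_ms": ordered[0],
        "mean_ms": statistics.fmean(ordered),
        "max_ms": ordered[-1],  # worst case, never dropped
        "p99_ms": ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))],
    }
```

A single 100 ms outlier in an otherwise fast run shows up in `max_ms` here, whereas a trimmed mean would hide it.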
A
So please push the data; I'll reproduce it once manually on my machine, and then I'll get this into the build farm, and it'll start reporting on various platforms.
B
Okay, cool. And so, the value in the YAML file for the specific hardware — should I be running it on my hardware? I have a workstation. Should I report that number, or, once you run it, is it going to run on the pieces of hardware on your side?
A
I think it's totally okay if you run it and report on your own hardware. I think we shouldn't limit ourselves to the specific instances that we are maintaining — because we're maintaining them for now, and we hope to maintain them down the road indefinitely, of course — but I think everyone should have the liberty to at least be able to publish and contribute their own results. That's what this community is for.
A
So, absolutely — you know, that's awesome. And then, based on all of this data, which is YAML-based, we can definitely augment this with, you know, additional CLI tooling — like the ros2benchmark package that we were discussing before — and then you can grab all of those YAMLs and massage the data, and then, of course, yes, you can make averages and report based on that. But again, that's just for reporting and marketing purposes.
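Grabbing those YAMLs and massaging the data could look roughly like this — note that the `latency_ms` field and the one-result-per-file layout are assumptions for illustration, not RobotPerf's actual results schema:

```python
import glob

def load_results(pattern="results/*.yaml"):
    """Parse every results file matching `pattern`. PyYAML is imported
    lazily so the aggregation helper below carries no dependency on it."""
    import yaml
    parsed = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            parsed.append(yaml.safe_load(f))
    return parsed

def aggregate_results(results):
    """Aggregate a list of parsed result dicts into summary numbers."""
    values = [r["latency_ms"] for r in results]  # assumed field name
    return {
        "runs": len(values),
        "mean_ms": sum(values) / len(values),
        "max_ms": max(values),
    }
```

Keeping aggregation separate from file loading makes it easy to reuse the same summary code on results fetched from the build farm later.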
A
This is fantastic, man — I'm happy that we've ironed out that issue. Okay, and I think Joe did an awesome job reviewing the pull request, to be honest, so I think it's pretty much done. So yeah, just wait until you push that last bit, and then, with that, I think you can get it in. And here, by the way, Jason, for this new benchmark — I think this is looking really fantastic, so yeah, very excited to get that a4 also in.
A
Maybe back to the minutes, since we have had a long discussion on this tech issue. So, if it's okay, Jason, I'll take on the action myself: once you push that, I'll try to wrap it up and merge in a3, and I'll let you continue with a4, which is the other pull request in here.
A
One piece of news that I wanted to share: I think we should start thinking, as a group, as a community, about releasing 0.1 of RobotPerf — kind of like making a first release. I know it's going to be very perception-focused, but maybe we can pack the four benchmarks —
A
— that we have right now and kind of ship them in a first, like, beta release — well, rather an alpha, or 0.1, release; non-official, at least for now — to start getting feedback and so on and so forth. What do you guys think about this?
B
Go ahead — should we aim to have, like, a few more benchmarks, or do you think four is okay?
A
I mean, four is the starting point — the more the merrier, I think. Or maybe I should say: the better the benchmarks, the merrier. But I think, for now, having four is a starting point. It's going to be just the first unofficial release.
A
This is just the first release: getting the code out there, doing some marketing around it with some banners, giving credit back to the community contributions, and, at the same time, also publishing data and results for various hardware platforms. I think we should be hoping for four to six baseline hardware configurations, including GPUs and FPGAs — and CPUs, of course. I think that's a great starting point, and we can collect community input based on that and then continue working on more.
A
Under the hood, we're working on many more, and, as I advanced in the last meeting, we will be contributing quite a few more in the coming few months. So it's just a matter of time before they land in public.
A
— benchmarks, but yeah. I would essentially cheer for us to get a first release. But if you, Jason, believe that you're going to contribute three more in the coming few weeks, then I don't mind waiting. But yeah, it sounds like — I think we're starting to get a handle on it. I think there are also enough recordings right now, with descriptions and walkthroughs on how to navigate around the issues, that people can follow as well. So yeah, I'll link to that. Okay.
A
So, unless anyone complains about it, I would say that maybe we can agree on, by next month, trying to have — at least source-code-wise — the first version of what we will release as 0.1, and try to maybe land it by then. It will be very close to the date of the new ROS release, so let's not necessarily push ourselves too much, but yeah.
A
Let's see if we can make it by Iron, and, if not, you know, we can launch it in June, and that's okay. Awesome, all right. So I think that touches on some of the topics that I wanted to discuss real briefly. And yeah, concerning future actions, this is on me — definitely, I need to find some time to look at the actions, which I didn't manage to do; sorry about that, guys.
A
I'll try my best. Anything else anyone wants to bring to the group or share?
B
I was actually working on making a plan of some benchmarks — some packages that we can benchmark soon. I made a spreadsheet here; let me share it in the chat.
B
It just gives us an idea of some of the packages within —
A
This is awesome, yeah — this is fantastic. I like this very much; great work. Yeah, big plus one to this. Do you mind if I paste this in the minutes? Yeah.
B
And I'm trying to find some of the packages that have, like, a GPU implementation and a CPU implementation. Most of them have been done by NVIDIA, but I'm looking — I'm not sure what other packages exist that have also been implemented to work on different types of hardware. So I'm on the lookout.
A
I mean, the reality is that right now, unfortunately, there are very few hardware-acceleration implementations of robotics algorithms. Most of them come from NVIDIA, as far as I know, and there are a few that come from AMD and others, but they're very, very limited right now. So I think relying on, and building upon, what NVIDIA is releasing is a fair approach. I, frankly speaking, have been reviewing the ros2_benchmark approach, and I think there are quite a few things we should be reusing. Do you have any updates?
B
Yeah — we've reached out to them; we're still waiting on a response, but we're hoping to meet with them and chat.
A
Okay, okay — looking forward to hearing what comes out of that. Again, it would be lovely to push together — I mean, the approaches are very, very similar.
A
Both of them comply very nicely with REP 2014. The biggest difference is the fact that they are collecting the benchmarking data with the standard C++ library, at the user-space level, whereas we are relying on proper profiling and instrumentation based on, essentially, well-known tools for doing profiling and instrumentation, such as LTTng and ros2_tracing in this case.
A
And the advantage of that — besides the fact that it's meant for real-time measurements — is that everything is fully instrumented already with the same tooling. Which means that, down the road, as we as architects are, you know, visualizing our computational graphs and computational flows, and trying to decide what's going to get pushed to which hardware device — like which FPGA, which GPU — you're going to have everything in the same language, and not things in, you know, JSON, and then XML, and then CTF. And this is especially troublesome — and I know what I'm speaking about, because I'm working with many of these hardware vendors — because some vendors output things in this flavor and this format, some others in this other format, without timestamps, and then things become very messy. And so, as architects —
A
We should go for things that simplify our life, and CTF is a standard — the Common Trace Format — that's not cooked up by us, but essentially built by people that I think are reliable. So yeah, that's kind of like the thing we can — or should — be trying to agree on. But aside from that, I think the approaches are very similar. So yeah, let us know what comes out of that. Okay, awesome — so, guys and girls, thank you —
A
— so much for attending. Again, the expectation for next month should be that we will be ready — if not ready, then very, very soon — to release that first 0.1 release. And thank you, everyone contributing; thank you, everyone reviewing and supporting the work. Chat with you next week, if you're available; if not, the next official meeting is next month. Thank you, everyone.