From YouTube: ROS 2 Hardware Acceleration Working Group - meeting #15
Description
15th meeting of the ROS 2 Hardware Acceleration Working Group (HAWG, https://github.com/ros-acceleration).
We discussed the Robotics MCU (robo-v-mcu) project, RobotPerf benchmarking updates and REP-2014 contributions.
For more including source code, check https://github.com/ros-acceleration. Minutes of the meeting available at https://docs.google.com/document/d/185Cy1xjpAOgJygEOnlf5OCgOQTywmF0qgSpS3GiW16Q
00:00 Welcome
03:05 Updates on Robotics MCU project (robo-v-mcu)
10:45 Updates on RobotPerf, robotics computing benchmarking suite
32:45 REP-2014 review
For more including commercial support on hardware acceleration topics in robotics, refer to https://accelerationrobotics.com.
A
All right, welcome everyone to the 15th ROS 2 Hardware Acceleration Working Group meeting. Today we have a bunch of topics to cover, so let's get into it. As always, I'll start by sharing the minutes in the chat.

A
So let me push them right in here for all of you, and then let me share my screen.

A
Okay, so again, the 15th meeting of the working group. To get started, I wanted to share a couple of updates with everyone. The first one is that the working group is actually growing significantly, and we are also increasing the meeting periodicity. Previously, if you remember, we were meeting once a month, and we will continue doing so, typically at the end of each month. But now, with the advent and the increase in popularity of some of the working group's projects,

A
we are also moving to weekly meetings, especially for the RobotPerf project, so I'm very happy to share that with everyone. Last week we had the first RobotPerf weekly meeting, with pretty decent attendance for being the first one. The recording and the material, as always, are available; you can check them out in this same resource. We're going to be using the same resources as the working group, just to make sure we leverage them and keep everything usable and accessible.
A
So again, the resources are in here; check them out. So that's the first bit of excitement. The second one is that today, concurrently as part of this meeting, we are also holding the first meeting of the Architectures and Processors working group of the Spanish semiconductor industry association. We're doing lots of stuff there that relates to essentially building up and bringing up both soft cores and hard cores.

A
So we thought it would be a good opportunity to bring those folks, which we are part of within the Spanish domain, into this international community we've been leading for quite a few months now — I would say officially more than a year — so they can take part in it, contribute, collaborate, and maybe also bring in their two cents.

A
That's about it. With this, I'd like to welcome everyone to the meeting. If you're new to the working group, you can check out the attendance in the LinkedIn event. We will not be recording every single participant, because there are already more than 30 people joining today, so to save my time and the time of the people helping me administer this working group, you can just refer to the LinkedIn event; that's going to be faster. Let's start with the progress review on various of the projects.
A
The first one I'd like to start with is the Robotics MCU, or robo-v-mcu, project, which is led together with PlanV. We have Mustafa with us today, who can give us an update. Mustafa, the stage is yours.
B
Can you see it? No? Sorry.

B
Awesome, okay, thank you. It's the first time I'm doing this presentation on this Ubuntu computer, so it was kind of problematic. So, we have added the Ethernet MAC to the general structure, with the uDMA part just right here; that's mostly the only modification we have done to the general system, except that we had to create a 125 MHz RGMII clock, so we also modified the clock generator. Other than that, we did not modify any of the other projects, so that it can go smoothly.
B
So the inside of our module looks like this. We have this register interface and two clock-domain-crossing controllers for the Ethernet IP's interface, and then the buffers. These buffers and the controllers are also required to switch between the regular uDMA interface and the AXI-Stream interface.

B
Controlling the module is pretty much the same as for the other peripherals in the system, just like UART, SPI, or I2C; you have those peripherals already. We kept the same register structure. Actually, we just tied up the connections and added some more status information.

B
You can either poll the receive status, or use the receive interrupt, which is a much better way. After getting the receive interrupt, you have to switch the receive address so as not to lose the next data.
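The receive-address switch described here is a classic double-buffering ("ping-pong") scheme. A minimal sketch of the idea, with invented register and buffer names (this is not the actual robo-v-mcu driver):

```python
# Hypothetical sketch of the double-buffered receive scheme described above:
# two buffers alternate, and on each receive interrupt the driver points the
# peripheral at the *other* buffer before processing the current one, so the
# next incoming frame is never lost. All names are made up for illustration.

class EthernetRx:
    def __init__(self):
        self.buffers = [bytearray(1518), bytearray(1518)]  # two MTU-sized buffers
        self.active = 0          # index the MAC is currently writing into
        self.rx_addr_reg = 0     # stands in for the peripheral's RX address register

    def on_receive_interrupt(self):
        done = self.active
        # Switch the receive address FIRST, so the MAC writes the next
        # frame into the spare buffer while software processes this one.
        self.active ^= 1
        self.rx_addr_reg = self.active
        return self.buffers[done]  # frame ready for the software stack

rx = EthernetRx()
frame = rx.on_receive_interrupt()  # the MAC now fills buffer 1 while we work
```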
B
So, as the conclusion: on the RTL side we are mostly complete, and we are having it tested. For the software driver, we just began to work on it. We will be glad to receive feedback on the RTL.

B
I'm very sorry — it's like our hands are full right now. So if we are required to add more functionality, or to do something differently, we'd be glad to know it immediately, so that we can change it fast, and we need clarification of the API requirements.

B
And since we are starting to build a driver, it might be the right time to tighten the cooperation with Acceleration Robotics, so that they can provide the necessary information about the API requirements.
A
That's awesome, thank you so much. So maybe a couple of questions here to follow up on that conclusion. Can you share with us the pointer to the RTL, so that we can have a look and, if appropriate, provide feedback? That's the first one. And then I had another question regarding the testing that you're currently performing: are you using any sort of Verilog testbench environment, such as cocotb or related tools?
B
Yeah, we are just controlling the modules via software.
A
Yeah, so if you're testing with the same tooling that's provided in the verilog-ethernet repository, then you're definitely using cocotb, and possibly also leveraging Icarus Verilog, which is the simulator used there.
B
No, I'm sorry, we are not using cocotb. I'm using the same environment as with the core-v-mcu.
B
So it's already integrated; it can already be controlled via software. So what I actually do is skip the booting stage: just before the booting stage, I write my code and test it, so that I don't have to wait for the boot during the simulation.
A
Okay, got it, got it; that makes sense, I guess. So yeah, if you can provide in the minutes — I'll just put a placeholder in here — a URL for the RTL.

A
That would be fantastic, in case it has changed or anything. And then, from our side — at least from the Acceleration Robotics side — we will definitely follow up and try to provide feedback, as well as things so that we can meet in the middle. But yeah, fantastic progress, Mustafa; very, very exciting, this is super cool, very much looking forward.
A
I think I did sync with you guys, especially with Max, on the hardware that you're using for the setup, so we should be in the same position; having the same tools and the same things, I should be able to reproduce it pretty quickly. So yeah, looking forward to that. Thank you. So if you can drop that link in the minutes, I'd appreciate it; otherwise I'll chase you to try to get it there.
A
Okay, sure, no problem. All right, so anything else to comment on the Robotics MCU project, folks? Any questions we may have right now? Otherwise we can jump into RobotPerf.
A
None? All right, thanks a lot, man — yeah, that was fantastic. All right, so maybe back into RobotPerf. That's the second topic of the agenda today, and again I'm sharing the minutes; I know some of you guys just joined, so there you go with the minutes again.
A
The first item is a hands-on session on essentially how to build and launch a first benchmark. I was planning to do it today; however, in last week's meeting of the working group we actually recorded it.

A
So what I've done is just place a link in the minutes that points to last week's recording. So if anyone's interested in a deep dive, source-code-wise, on how you can build in a reproducible manner, as well as create your own new benchmarks, please refer to that particular pointer.
A
That way we optimize time. The second topic I wanted to touch on is the benchmark specification. We requested feedback last month on the actual specification of how new benchmarks should be created; again, this is hosted within the benchmarks repo of the organization. I believe Matthew — I don't know if Matthew joined us today. Jason, do you know if he's with us?
C
He's not with us, but I can speak on what we did.

A
Go for it.

C
Oh yeah, so there should be a pull request coming soon, but basically we organized the definitions of what each part of a benchmark means, because a lot of the definitions and the words we're throwing out can get pretty complicated. So we took a stab at: what is a benchmark? What is a benchmark suite?

C
What's a workload? What's a reference implementation? So we clearly defined all of those things, and soon, when we get that pull request in, if we can get community feedback on it, that would be really awesome, because all these definitions are not set in stone. We just wanted to place a starting point that we can discuss, and change the definitions around if we want to.
A
Awesome, awesome. So, looking forward to getting that pull request. And I know that Matthew reached out asking for privileges; I think we gave him what he needed. However, for sending in a pull request you don't need any privileges. So yes, just feel free to shoot, and that'd be awesome. All right, so back into, I guess, sharing, and going into some more topics that we will discuss today.
A
So I wanted to touch on two more things, at least benchmarking-wise. The first one: I wanted to report on some work that I did recently on creating benchmark CLI tooling, an extension of the ROS 2 CLI tools, which I'll show you in just a bit. And then — we also have with us today Michael Carroll, from Intrinsic now. Hi, Michael. So Michael has been spearheading what I think is a fantastic initiative on ROS 2 profiling; I'm actually pretty excited about it as well. So maybe we can touch on that in a bit, Michael, and try to align on some of the aspects where I think we could benefit from each other.
A
I won't touch again too much on the structure of the benchmarks today; that's recorded and was shown last week, so you can go and check that out. What I'll show you, and what we are doing right now, is getting into the dev containers. So we're using the same repo; the repo itself has the containers prepared, so that we can reproduce things systematically, regardless of which operating system you use.

A
If you use VS Code, you can just open things up the way I just did. Technically speaking, what's happening is that you're actually entering and developing from the container. So the container — the Docker container, the virtualization itself — is sharing volumes and links with the host system, so that you can reproduce the same experience that you have on your host system. That way we get a kind of Linux baseline setup, with a series of things installed, that allows us to do the benchmarking.
A
So what I wanted to show you is — let's just build the workspace so that it gets out of the way right away. One second; so right now we're building it, and then we're going to see how we're able to visualize this new contribution from this week, which includes a series of tools that we will be incrementally maturing and extending to manage, and essentially play around with, the benchmarks.
A
So yeah, that's it. Now we can source this as an overlay workspace, and finally we can see how, next to the typical tools that we get in the default ROS 2 CLI installation, we now have a new one, benchmark, which reads as "various performance benchmarking related subcommands". And so if we run ros2 benchmark, we now have two verbs: one that allows us to list the benchmarks and one that allows us to update them.

A
So the first one, ros2 benchmark list, actually goes recursively through each one of the ROS 2 packages and searches for benchmarks. Each one of these benchmarks is defined by the corresponding benchmark.yaml. So, technically speaking, each benchmark is a ROS 2 package.

A
The only difference is that there is this extra file, the benchmark.yaml file, which complies with the specification that Matthew and his team are reviewing right now. And this is a super simple YAML file, which just specifies a series of aspects that are useful for reproduction and launching, with the reporting aspects in mind.
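Purely as an illustration of the kind of per-benchmark metadata a benchmark.yaml could carry — the field names below are invented; the authoritative schema is the one under review in the RobotPerf benchmarks repository — such a spec could be consumed and sanity-checked like this:

```python
# Hypothetical example: field names are invented for illustration only;
# the real schema lives in the benchmarks repo specification under review.

SAMPLE = {
    "id": "a1_perception_2nodes",        # hypothetical benchmark id
    "name": "Perception, 2-node graph",  # human-readable name
    "description": "Rectify + resize image pipeline",
    "reproduction": "ros2 launch a1_perception_2nodes trace.launch.py",
    "metric": "latency",                 # what the benchmark reports
}

REQUIRED = {"id", "name", "description", "reproduction", "metric"}

def validate(spec: dict) -> list:
    """Return a list of problems; an empty list means the spec looks sane."""
    problems = sorted(f"missing field: {f}" for f in REQUIRED - spec.keys())
    if "metric" in spec and spec["metric"] not in {"latency", "throughput", "power"}:
        problems.append(f"unknown metric: {spec['metric']}")
    return problems

assert validate(SAMPLE) == []   # the sample passes the toy check
```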
A
So hopefully this is also something we can touch on with Michael now in the discussion. So, the tooling that we've created — the list verb and the update verb — helps you navigate around benchmarks in that sense. So once we know which benchmarks are available right now, built in this overlay workspace, we can pick, for example, the second one, run ros2 launch — that's going to launch the computational graph — and then we throw data into the corresponding inputs.
A
It's going to start registering and tracing data that we can later post-process with the corresponding analyze launch file. That's as easy, or as complex, as it gets. So yeah, more verbs will start showing up in this CLI tooling. The update verb right now automatically updates the READMEs of each benchmark, as well as the top-level README, and this is going to get connected to a CI job within the GitHub repository that updates this every time we change anything; over time we'll be adding more and more functionality.
A
For now, this is what I prototyped and delivered, and there are a few next aspects that will come down the road, including prototyping the CI job that I just described. So with that, maybe we can spend some time discussing ROS 2 profiling in alignment with this. Michael, I think maybe you can say a few words about the effort you're leading and where you guys are heading.
A
Yeah, yeah, sorry — yes, I can answer that, Jason. Nevertheless, that's kind of not something I had thought about; it is something that's pretty standard in VS Code. So the best resource you'll find out there: if you type "dev containers VS Code" and read through the documentation, that's the best resource. But nevertheless I'll take note of this and see what I can do; it's probably hard to make it better than the official documentation, but yeah.
D
Oh yeah. So, Victor, we kind of interacted a little bit on Discourse earlier this week. So this ROS 2 profiling tool kind of grew out of the desire to be able to inspect, you know, an entire running ROS graph and be able to derive a set of metrics or analytics from that, which I think probably has a lot of overlap with the same kind of metrics

D
that you'd want to get out of a benchmark, and then be able to do analysis, and eventually write kind of unit tests and assertions against the performance of a system. And so some of the core tenets of what we were trying to do: one was to be minimally invasive, so no substantial amount of recompilation or modifications to your existing ROS 2 code base. So that's kind of how I arrived at ros2_tracing, which obviously, in REP-2012 — or 2014 —

D
you kind of arrived at the same conclusion with, right? So ros2_tracing is more or less universal at this point; hopefully, by the time Iron comes around, we're going to have it enabled by default in the Debian packages. And it's using LTTng, which is kind of an agreed-upon standard way of doing tracing.
D
So what this essentially does is build a framework on top of that for doing the types of analysis that you would want from the ros2_tracing data. And so the big things are being able to look at latency and jitter on topics. So if you have publishers and subscribers in your graph, you want to know who's publishing, who's subscribing, how frequently they are sending — whether at the rate they claim they should be — and then also being able to trace events all the way through a graph, which the ros2_tracing trace tools

D
don't analyze right now. So Christophe and some of his colleagues have done a whole bunch of work on being able to trace, like, message flow through the graph, and use Eclipse's Trace Compass to be able to analyze that; and basically I'm just kind of surfacing the same data, the same information, in a way that you can make CI-like tests against. So I think, yeah, I think there's a lot of overlap with maybe the analysis that I'm doing.
D
Maybe you could just leverage that as part of the benchmarking tool. But yeah, really I'm trying to stay with using things that are available in the ROS 2 core — nothing super crazy or out there — and kind of surface the type of metrics that we'd be interested in. Cool.
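To make the latency-and-jitter idea concrete — this is a generic illustration with invented timestamps, not the profiling tool's actual code — given paired publish/receive times per topic, one can compute both like this:

```python
# Generic illustration (invented data): compute per-topic latency and jitter
# from paired publish/receive timestamps, e.g. as recovered from
# ros2_tracing events. All timestamps are in nanoseconds.
from statistics import mean, pstdev

# (publish_time_ns, receive_time_ns) pairs for one topic, ~10 Hz publisher
samples = [(0, 1_200_000), (100_000_000, 101_150_000),
           (200_000_000, 201_400_000), (300_000_000, 301_100_000)]

latencies_ms = [(rx - tx) / 1e6 for tx, rx in samples]
# Jitter here is taken as the standard deviation of latency; other
# definitions (e.g. inter-arrival variation, RFC 3550 style) also exist.
jitter_ms = pstdev(latencies_ms)

# Inter-arrival periods on the receive side, to check the claimed rate
periods_ms = [(b[1] - a[1]) / 1e6 for a, b in zip(samples, samples[1:])]

print(f"mean latency: {mean(latencies_ms):.3f} ms, jitter: {jitter_ms:.3f} ms")
print(f"mean period:  {mean(periods_ms):.2f} ms")
```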
A
Cool. Thank you for explaining all that. Yeah, as you said, we are building up on the same baselines.

A
We are both leveraging ros2_tracing, which is the right thing to do, in my humble opinion — though others may disagree. Just to be clear for everyone: ros2_tracing is not specifically tied to any particular tracer; you can use LTTng or any others.
A
So those of you who are Windows users can actually use other tracers; ros2_tracing is prepared for that, actually, and it was validated in that context. And I love the fact that you're looking into graph tracing. The way it's been done right now, for us in the group, has been literally by leveraging directly the information model that Christophe built initially as part of his ros2_tracing work.

A
So he built this information model, and then you can query that information model to try to extract the flow of the data. My understanding — and please correct me on this, Michael — from what I've read, I feel that what you're planning to do is push that into the graph, right? Is that correct?
A
In the sense that — so Christophe has some sort of representation of the timings that's stored locally, kind of in a particular memory section. I was under the impression that what you were going to do was push that data back into the computational graph — into the ROS 2 graph — so that you can query it. But maybe I just got this from reading your comments.
D
Yeah, so not necessarily pushing it back into the ROS graph, but at least having an API that closely matches what you would expect from the ROS graph. So I guess I can actually share — I've got just a little notebook here. I was not prepared to present, but I've got something. So this is basically using the recorded events that Christophe had set up; there are all these various trace points:
D
you know, when the publishers and subscriptions are initialized, when publications and subscriptions actually happen, when timer callbacks happen. And so from that, I build this graph data structure from those events; it basically gives you a different way of introspecting the events that have happened. So one way of looking at it is as an event sequence through the graph. And here you can see this is using the reference system — like the Autoware reference system — so it's got a bunch of publishers and subscriptions. And what's happening here is: if we want to figure out why a certain callback is firing at a given time — that's kind of the end point — what this lets us do is trace all the way back through the graph and see what sequence of events allowed us to arrive there, and it was from a timer, you know, four nodes back. So this kind of information would be really helpful, especially if you have long processing chains, like you would see in an image pipeline, or if you have complex data dependencies: it would let you sort of trace and see what the cycle time between, say, a perception input and a control output would be, because you get this chain of events for every time that that callback fires, or every time a particular trace point is reached.
D
You could do this kind of analysis and say: this control output is attached to this perception input, and it should happen every 100 milliseconds — and what we found is that it's happening, you know, not every 100 milliseconds, or whatever.
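A toy version of that backwards walk — with invented event data and node names, not the notebook's actual structures — might look like:

```python
# Toy illustration (invented data): build a graph from trace-like events and
# walk backwards from a callback to find the chain of causes, ending at the
# timer that started it all — the "it was from a timer, four nodes back" idea.
from collections import defaultdict

# Each event: (kind, source, target). A 'pub' edge says "source's output
# feeds target's callback"; a 'timer' event marks a chain's origin.
events = [
    ("timer", None, "sensor_node"),
    ("pub", "sensor_node", "filter_node"),
    ("pub", "filter_node", "fusion_node"),
    ("pub", "fusion_node", "control_node"),
]

causes = defaultdict(list)   # target -> upstream sources
timer_driven = set()         # nodes fired directly by a timer
for kind, src, dst in events:
    if kind == "timer":
        timer_driven.add(dst)
    else:
        causes[dst].append(src)

def trace_back(node):
    """Walk upstream until we hit a timer-driven node; return the chain."""
    chain = [node]
    while node not in timer_driven and causes[node]:
        node = causes[node][0]   # toy: follow the first upstream edge
        chain.append(node)
    return chain

chain = trace_back("control_node")
print(" <- ".join(chain))
# -> control_node <- fusion_node <- filter_node <- sensor_node
```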
A
And so, just for me to confirm this: you're moving from Christophe's information model and creating your own graph-like structure. Is that correct? Correct — that's pretty awesome, yeah, and that's going to be very useful down the road. So, just to give you back maybe some feedback:

A
so far, most of the analysis we've been doing on graphs has been reasonably simple and has been done manually, directly from the trace files, by leveraging the Babeltrace Python API, right — which is very cumbersome, very, very cumbersome, but at the same time gives you a huge amount of granularity and data.
A
So if you want to do fancy stuff, especially related to hardware acceleration, which is what we're doing here, you often need that level of detail — especially because, often, we are not just tracing things on the CPU side: we're also mixing things up with tracers that run in the accelerators. Often each vendor has specific tracers and profilers, and so we need to mix it all together at the CTF level, the Common Trace Format level.

A
So that's why, in most cases, even though it's very low level, I'm often just handling things at the CTF level with Babeltrace, directly digesting the LTTng trace files. That said, as we evolve into tracing maybe more complex graphs, I think we would leverage much of what you're doing right now. So it sounds to me like we will connect quite nicely. Very exciting, man; thank you so much for sharing this.
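As a generic illustration of that mixing step — in practice this is done over real CTF traces via the Babeltrace Python bindings; the event tuples below are invented — interleaving a CPU-side stream and an accelerator-side stream by timestamp can be as simple as:

```python
# Invented data for illustration; a real pipeline would iterate CTF events
# through the Babeltrace Python bindings rather than plain lists.
from heapq import merge

# (timestamp_ns, source, event_name) — already time-sorted within each stream
cpu_events = [(10, "cpu", "callback_start"), (40, "cpu", "callback_end")]
accel_events = [(15, "fpga", "kernel_start"), (35, "fpga", "kernel_end")]

# Interleave both streams into a single timeline ordered by timestamp,
# which is effectively what merging traces at the CTF level buys you.
timeline = list(merge(cpu_events, accel_events))
for ts, src, name in timeline:
    print(f"{ts:>4} ns  [{src}]  {name}")
```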
D
Yeah, happy to, you know, de-duplicate and not reinvent the wheel wherever possible, right. And so the idea is primarily that this could connect to a system under test without modifying the system under test, because that's a pretty big component of these safety standards: the system as it goes into the field needs to be the system that you test. So in that way, you know, this isn't modifying the system under test.

D
And then it allows you to make sure that the things you say in your safety case are actually being met as part of the process. So I think it's very, very similar to benchmarking, just with a slightly different end goal, right. So I think there's a lot of overlap; I guess you could call it a benchmark, right — it's all about where exactly you want to draw the lines, yeah.
A
I mean, definition-wise there are some bits between profiling and benchmarking, but they are very, very close. I mean, we are using profiling and tracing to produce our benchmarks. So, technically speaking, a benchmark has more of that comparison side of things, but yeah, we're very close, and we won't diverge much. No, this is fantastic.
A
So what I'll do is try to follow your progress, Michael, and maybe I'll ping you from time to time so that we can sync. Yeah, I would love to — not right now, immediately, because we're running out of time — but at some point I'd love to pick your brain on how to do the data collection and then the analysis, so that we can maybe, you know, get the beginning and the end aligned.

A
That way we can build up on each other. Right now, again, we're doing things very raw, in the sense that we're just grabbing the CTF trace data and going with that — with the Babeltrace Python APIs — and again, we're not reinventing the wheel: we're just leveraging launch files which use the Babeltrace Python API. But still, I do see the value of maybe leveraging that graph-like understanding you're getting, yeah.
D
So the other part of this — it's actually kind of similar to your command-line tooling as well — is that it's kind of a three-phase thing. So there's a launch: basically, you can give it an established launch file without modifying it. There's a configure that tells it which tracing things you want to enable, which instrumentation points you want to turn on, and those two things will write to a particular place on disk. And then there is a convert that will kind of pre-process it.

D
Like you said, the CTF is kind of brutal to iterate through; it's much better to put it in an intermediate format — and there's some other discussion on ros2_tracing about potentially picking a better intermediate format. And then there's a final stage which basically lets you take a converted data log and run it against a Python unit test.
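That last stage — asserting on a converted log — could look along these lines; this is purely an illustrative sketch with invented timestamps, not the tool's actual test harness:

```python
# Illustrative sketch only: a unittest-style performance assertion over a
# converted event log, in the spirit of the final stage described above.
# The timestamps are invented; a real test would load the converted trace.
import unittest

class ControlPeriodTest(unittest.TestCase):
    # Hypothetical control-callback firing times, in milliseconds
    CALLBACK_TIMES_MS = [0, 101, 199, 302, 400]
    EXPECTED_PERIOD_MS = 100
    TOLERANCE_MS = 5

    def test_control_runs_every_100ms(self):
        # Assert each inter-callback period is within tolerance of 100 ms
        periods = [b - a for a, b in zip(self.CALLBACK_TIMES_MS,
                                         self.CALLBACK_TIMES_MS[1:])]
        for p in periods:
            self.assertAlmostEqual(p, self.EXPECTED_PERIOD_MS,
                                   delta=self.TOLERANCE_MS)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ControlPeriodTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```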
A
Awesome, once again — this is fantastic; we're going to meet in the middle, having heard you. And again, for the reasons that I explained — because we are often dealing with very low-level interfaces and accelerators — we need to deal with things at the CTF level. But then, I think, down the road, as we start exploring more and more complex use cases, definitely we want to build upon your stuff, so we'll be paying close attention. Cool, thanks.
D
Thanks for having me — anytime.

A
So yeah, we've pretty much reached the hour-and-a-half mark, folks; I'm sorry we dragged it a bit longer today, but it was definitely worth it. I just wanted to very quickly give a heads-up about two last bits that I think are worth you knowing. The first one — and it was mentioned by Michael also — is REP-2014. So in the past we asked for feedback regarding REP-2014; I will just do it once again: please, folks, do comment.
A
Do provide feedback; try to contribute. We're very open to having new authors come join us in the writing. I'm very "contaminated" — every time I read it again I get more contaminated — and I've spent so much time looking at it that right now I think I need some extra feedback.

A
I wanted to say kudos to Rifa, who is doing fantastic work reviewing this and is working on this branch, so the outlook on that is very, very positive. I just wanted to send word to those of you who want to contribute: maybe consider converting things into something that saves time for us all — myself, yourself,
A
and also for me as a reviewer, and sometimes contributor. So one thing you can do — and I often encounter people who are not so familiar with this tool — GitHub gives you a great way to actually make suggestions. So if you are going through that text — and I'll just demonstrate it right now — and you encounter one bit that you think is worth modifying — let's just say this line — you can literally press here and then make your suggestion right here.

A
For example, let's just put an extra "t" in here, right. And so this is going to come out as a suggestion, and if you launch it, it's going to land for everyone, and we can accept, reject, or comment on the suggestion. I know this sounds silly, but given the complexity of this piece of text and the number of people involved, this is actually the best way to move things forward. So I'll say, just for anyone interested in getting familiar with these aspects of benchmarking and hardware acceleration:
A
please go through the document and try to be constructive by sending comments in this manner. I spoke with Rifa about this approach; hopefully, Rifa, you can send in some reviews this way, and that way we can get them in really fast.
A
Okay. So, besides that, and encouraging everyone again to please send some additional feedback — just some last words from my side regarding the group growing. So, as I reported, the group continues to have quite a bit of, let's say, attendance.

A
We are 30 people today, and again, we're also increasing the periodicity, so that made us reach out to the TSC to ask whether it made sense to make this an official working group, because it's right now possibly the most popular working group in terms of attendance and contributions. The TSC reacted — at least the Intrinsic folks; Katherine Scott, a colleague of Michael's, reacted very positively. They suggested applying to the TSC, and that's what we did — what they suggested.
A
We made a presentation for the TSC, and we're still pending on knowing the results; I guess Kat and everyone will inform us. Regardless of that result, we'll continue with our work, meetings, and so on and so forth, but anyhow, hoping for good news on that side. Yeah, not much more to share from my side. Questions, comments, anything from you folks?
A
I know we're 10 minutes over time, so possibly people want to get out and prepare dinner — in my case, at least. Nothing else? All right, folks, so we'll close it here. Thanks, everyone; thanks, Mustafa; thanks, Michael, for joining and reporting; thanks also to Jason and everyone from the team at Harvard for supporting with RobotPerf. We'll continue next week with the weekly RobotPerf meeting, and next month with a review meeting of the working group's progress. Thanks, everyone; have a great one.