From YouTube: ROS 2 Hardware Acceleration Working Group - meeting #17
Description
17th meeting of the ROS 2 Hardware Acceleration Working Group (HAWG, https://github.com/ros-acceleration).
The ROS 2 Hardware Acceleration Working Group is an open and community-driven robotics group that drives the creation, maintenance and testing of hardware acceleration kernels for optimized ROS 2 interactions over different compute substrates, including FPGAs and GPUs.
For more, including source code, check https://github.com/ros-acceleration. Minutes of the meeting are available at https://docs.google.com/document/d/185Cy1xjpAOgJygEOnlf5OCgOQTywmF0qgSpS3GiW16Q/edit?usp=sharing
For commercial support on hardware acceleration topics in robotics, refer to https://accelerationrobotics.com.
A: All right, welcome everyone to the fifth RobotPerf meeting of the ROS 2 Hardware Acceleration Working Group. As always, I'll share in the chat a link to the minutes, and then we'll get started. So, here we go: I took the liberty of preparing some minutes prior to the meeting. There was some admin stuff which I had to do, and I just did it about a minute ago, which was publishing the recording of the prior RobotPerf meeting, so I'm just going to put a link in there to where I pasted it.
A: Okay, let me do this differently... video here, there we go. So that's one aspect done. I'll share my screen here so that you guys can follow my progress. As I was saying, the admin stuff that was pending on my side regarding publishing the previous RobotPerf meeting has been done, and you can access the recording over there.
A: We had some action items that we recorded last time, so let's dive into those and then try to discuss some new topics which I got. Jason, I did get your email; I didn't prepare much, to be frank with you, but we can probably do a hacking session and see how far we get. Also, I would love to get everyone's input, so feel free to interrupt me, interrupt anyone, and share any input you may have about the work.
A
So
far,
so
in
the
last
meeting
we
did
record
some
ongoing
progress
at
what's
happening.
Concerning
The
Continuous
integration
contributions.
We
were
making
to
make
sure
that
for
every
Benchmark
we
could
have
pull
requests
periodically
happening
for
the
various
Hardware
platform
that
we've
been
working
on
last
time.
Last
meeting
I
shared
how
essentially
the
overall
implementation
was
done
and
I
depicted
and
showed
you
how
the
pipeline
looked
like
I'm
happy
to
report
right.
A: ...that after a final push on our side, pull requests like this are now automatically being opened. This happens directly from the CI: it builds the corresponding kernel, it launches the benchmark, it then goes into an analyze phase, and the results that come out of that analyze phase get digested and wrapped into a pull request that contributes to and enhances the RobotPerf benchmarks repository. As for the changes that actually get contributed here: they seem like a lot, but that is just the way the YAML library reorganizes the content. Once that is done, the part in green is going to be just the final bits, which I'm highlighting right here.
A
So
this
this
will
be
the
the
actual
contribution.
I
haven't
put
time
into
reorganizing
how
a
yaml
damp
sort
of
like
dance
things.
I
guess
this
could
be
done
if
we
wanted
to
optimize
a
bit
better.
The
first
pull
request,
but
Frankly
Speaking
I
think
it's.
It's
not
so
much
worth
it
the
effort
because
it
should
be
machine
readable
regardless
so
yeah
details
or
better
Set.
Contributions
like
this
is
what
you
should
expect:
a
new
entry
with
a
new
result
detailing
the
category.
A: This particular hardware corresponds, again, with the edge/workstation category; in the future we may consider data center and even cloud categories. There's the data source, which corresponds with the repository of raw data that we've been discussing in past meetings; the hardware that has been used; a timestamp; and then the value. The value corresponds with the maximum value observed in this particular latency benchmark. However, just for the sake of completeness, we are also dumping as a note other values, including the mean and the RMS, as well as other values which are relevant to take note of. And that's pretty much it.
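As an aside on why those diffs look bigger than the contribution: re-serializing the document with a YAML library re-sorts the keys. A minimal sketch of the idea, with hypothetical field names and paths rather than the exact RobotPerf schema:

```python
# Minimal sketch of a CI step appending a benchmark result to a results
# YAML file. Field names, values and paths are illustrative assumptions,
# not the exact RobotPerf schema.
import yaml

RESULTS_FILE = "benchmark.yaml"  # hypothetical path inside the forked repo

new_entry = {
    "result": {
        "category": "workstation",          # edge/workstation today; datacenter/cloud later
        "datasource": "perception/image",   # hypothetical pointer into the raw-data repo
        "hardware": "example-device",       # hardware the benchmark ran on
        "timestamp": "2022-08-02 10:00:00", # when the run happened
        "value": 42.3,                      # maximum latency observed (ms)
        "note": "mean 20.1 ms, rms 22.4 ms",  # extra stats kept for completeness
    }
}

with open(RESULTS_FILE) as f:
    doc = yaml.safe_load(f) or {}

doc.setdefault("results", []).append(new_entry)

# dump() re-serializes the whole document and sorts keys by default, which
# is why the pull-request diff looks much larger than the single new entry.
with open(RESULTS_FILE, "w") as f:
    yaml.dump(doc, f, sort_keys=True)
```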
A: That happens automatically now. The pipelines and information related to this pull request are also reported in the body of the pull request: as you can see here, essentially the corresponding job and pipeline URLs are reported, for the sake of debugging and introspecting this while enhancing or maintaining the pipeline. The pipeline looks like this in a simplified example. Right now our actual pipeline is significantly larger, because there are many more phases and many, many more hardware instances, but this is just a depiction for the sake of discussing it within the working group context. The job itself looks like this and, as we've been discussing over the past few weeks, it builds upon prior jobs that produce a series of artifacts. These artifacts get dumped initially with the benchmarking...
A: ...into the trace folder. Within this trace folder, automatically as part of the benchmarking, the computational graph is sampled and generated in this form; we discussed this last week, just for the sake of completeness and clarity. Then, based on all of this information, we produce a series of digested outputs. That includes, first, the plots, which I think we iterated on in past sessions as well. Second, it makes a fork...
A: It makes a fork of the benchmarks repo, and here is actually where the modification that then goes into the pull request happens. This might be useful in some cases when we want to debug exactly how those results were created: these artifacts are kept in the job for a certain amount of time, so for those cases where we may want to have discussions around certain results, we can go back into that fork and dive deeper into it, if required.
A: This particular repo was just an attempt and a test, so I closed it immediately afterwards and the branch was removed. I don't see a reason for more of these immediately, but I guess as soon as we start having more and more benchmarks, you can expect more and more of these results to start coming up. We want to work a bit more...
A
The
hardware
when
I
first
run,
as
I
said
we
already
have
at
least
various
workstation
devices
connected
to
it
and
and
GPU
and
fpga
SOC
instances
we're
just
trying
to
pack
an
initial
set
and
then
throw
our
first
form
of
pull
request
with
with
those
should
be
happening
essentially
soon
and
yeah
that
that
wraps,
essentially
the
effort
on
the
CI
side
of
things
I'm
happy
that
this
pretty
much
came
to
an
end,
a
successful
end
and
right
now
in
principle,
we
are
fully
capable
of
getting
automated
results
in
the
development
of
acceleration.
A: ...kernels, and hardware acceleration should be simplified. There are still some things that might be worth considering, such as the fact that whenever we accept a pull request, we may want to automatically trigger a CI job within this repository that then regenerates the READMEs. We discussed this in the past as well; it is not implemented yet, and it's certainly something worth taking note of, so maybe I'll just add that to the future actions.
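Such a job could be as simple as rebuilding a results table from the YAML on every merge. A minimal sketch, with hypothetical paths and fields, not the actual RobotPerf tooling:

```python
# Sketch of a post-merge CI step that regenerates a README table from the
# results YAML. Paths and field names are hypothetical assumptions.
import yaml


def results_to_markdown(results_file: str) -> str:
    """Render every result entry as a row of a Markdown table."""
    with open(results_file) as f:
        doc = yaml.safe_load(f) or {}
    rows = [
        "| Hardware | Category | Value | Timestamp |",
        "|---|---|---|---|",
    ]
    for entry in doc.get("results", []):
        r = entry["result"]
        rows.append(
            f"| {r['hardware']} | {r['category']} | {r['value']} | {r['timestamp']} |"
        )
    return "\n".join(rows)


if __name__ == "__main__":
    table = results_to_markdown("benchmark.yaml")   # hypothetical input
    with open("README.md", "w") as f:               # regenerated on every merge
        f.write("# Benchmark results\n\n" + table + "\n")
```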
A: Right, so that's that. I also wanted to have a quick look at the outstanding contributions on GitHub. First off is the GitHub issue template. Thank you, Jason, for contributing that, awesome. I think there's a couple of things we may want to... oh, I see, I see you've modified this.

B: There were some files that shouldn't have been part of this pull request; I just changed them back to what they were originally, yeah.

A: Okay, yeah, this seems to be not harmful at all, so I'm okay going forward with this. Anybody, any comments on this template?
A
I
know
we
discussed
it
in
the
last
whole
group
working
knitting
and
there
were
some
comments
concerning
why
we
asked
this
or
that
in
a
nutshell,
this
is
just
an
initial
template
anyone's
welcome
to
submit
a
pull
request
against
it,
and
there
are
good
reasons
why
we
may
want
to
obtain
data
concerning
people
participating
for
the
sake
of
us
having
Clarity
on
who
can
contribute
to
what
and
then
also
on
grouping
us
together
up
on
common
fronts
and
interests.
A
So
anyone
any
comments,
otherwise,
I'm
gonna
accept
this
and
we'll
get
it
in.
B: Okay, awesome, thanks, Victor. I also want to encourage anyone else on the call who might be thinking of making a benchmark: it would be really awesome to go and fill that template out, now that we have it, just to see what kind of contributions we might be looking at.
A: That seems to be working right now. So yeah, it looks pretty good, awesome, thanks a lot, and team at Harvard, super cool. Okay, so that's one down, and the other one is the a3 benchmark. I think you also contributed recently to this, yeah?
B: Yeah, Joe did a great job going through my code and making some comments, so I made some changes recently. But I did have a few questions that I think would be worth discussing in the group.

A: Sure.

B: Some more general questions before I make the changes. Let's see.
B: Okay, so the first thing I want to discuss is including, within each benchmark, the simulations that create the rosbags. For the benchmark that I created, I have this simulation with a robot driving around, but Joe was recommending abstracting that out of the specific benchmark, so that we could reuse it in later benchmarks. So, should we move those simulations to the rosbags repo?
A: I mean, sorry for my weird face, I was confused. Maybe you can explain a bit better what the rationale is behind not having the simulation with the benchmarks? I guess I have a firm opinion in favor of doing that, at least for now, and I did not hear any counterarguments against it. So can you elaborate? I'm sure there are some good arguments; I just don't know them. Can you share those?
B: Yeah, of course. So currently I have, let's say, a simulation with a robot with two cameras that I can drive around to collect images. Suppose we then make another benchmark that, instead of disparity maps, creates point clouds, and we want to collect some other sort of data; or we want to add another benchmark that is very similar to the stereo image benchmark that I have right now, but maybe with another sensor on the robot. For us not to have to generate the whole thing again and put all that simulation code in the new benchmark, maybe we can move them all somewhere, like a repo for all the simulations that create the rosbags that we're saving. I don't know, but that's what he was recommending. Maybe.
A
Guess
what
I'm
hearing
and
everyone
please
feel
free
to
jump
in
and
share
your
your
abuse,
but
what
I'm
hearing
doesn't
sound
very
convincing
to
be
frank
with
you
and,
and
my
argument
in
favor
will
be
robotics?
A
Is
the
art
of
system
integration,
the
the
the
one
of
the
things
that
you
need
to
consider
when
you're
building
a
robot
is
that
you
spend
a
huge
amount
of
time
on
making
things
to
work
just
once
and
the
same
happens
even
when
you
use
simulation
so
the
more
we
can
automate
the
more
we
can
facilitate
the
more
we
can
simplify
the
life
of
a
developer,
aiming
to
reproduce
to
reproduce
our
Benchmark.
So
the
key
word
is
reproducibility,
the
more
we
can
do
this.
A
In
my
humble
opinion,
the
the
better
work
we're
doing
for
robo
turf
and
Frankly
Speaking.
It
sounds
like
we'd,
be
just
saving
a
few
kilobytes,
if
not
just
bytes,
of
work,
because
the
technically
simulation
gets
triggered
from
a
world
which
is
an
XML
file
which
is
extremely
extremely
small
and
and
then
just
a
couple
of
launch
files.
A
So
I
think
it's
really
worth
the
the
space
and
the
the
safe
and
the
win
for
the
developer
is
going
to
be
very
significant,
possibly
in
the
order
of
hours,
depending
on
the
skill
set
and
depending
on
the
familiarity
of
certain
things.
A
So
I
would
I
would
argue
that
unless
there's
a
good,
strong
argument
against
you
know
adding
just
a
word
file
and
a
launch
file,
I
would
say:
let's
just
do
it,
even
even
if
you
just
want
to
have
the
same
word,
just
capturing
slightly
different
data
with
a
different
perspective
or
more
cameras.
Whatever
I
would
argue
that
there's
still
value
in
in
just
dumping
those
files
again
and
then
changing
whatever
plugins
you're
using
I
that
that's
my
gut
feeling
and
Frankly
Speaking.
A
Every
single,
let's
say
change,
possibly,
should
trigger
a
discussion
whether
that's
a
new
benchmark,
because
if
you're
measuring
something
different,
then
that's
likely
a
different
Benchmark,
maybe
you're
just
trying
to
get
something
like
a
different
metric.
Maybe
you're
not
measuring
latency,
you're
measuring
throughput
or
something.
A: You can, or you can just copy and paste the files and make your modifications. Yeah, up to you, Jason. I personally have a preference, which is to be very verbose, so I'd rather reproduce the same files again and facilitate things. Ideally, in my view (let me share my screen here), we can agree on a convention for this: if every repo that packages a simulation as part of it, every benchmark, sorry, had something like a simulation.launch.py, I think that would be awesome. That's, in my humble opinion, worth us getting together on, because then it's the same flow for every single benchmark that packages a simulation. And then, of course, there's going to be some variability...
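A minimal sketch of what a simulation.launch.py under such a convention could look like, with a hypothetical package and world name (the real layout may differ):

```python
# simulation.launch.py -- sketch of a per-benchmark entry point under the
# proposed convention. Package and world names are hypothetical.
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import ExecuteProcess


def generate_launch_description():
    world = os.path.join(
        get_package_share_directory("stereo_benchmark"),  # hypothetical package
        "worlds",
        "benchmark.world",  # the small XML world file shipped with the benchmark
    )
    return LaunchDescription([
        # Start the simulator with the benchmark's world. A real benchmark
        # would also spawn the robot and begin recording the rosbag here.
        ExecuteProcess(cmd=["gazebo", "--verbose", world], output="screen"),
    ])
```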
A
If
you
Benchmark,
based
on
simulation
because
the
data
you
might
be
getting
might
be
slightly
different
than
the
one
coming
from
the
rosback.
That's
that's.
The
whole
thing
about
reproducibility
also,
but
regardless
we
may
want
to.
You
know
extend
the
Benchmark
duration
for
whatever
reasons,
and
we
don't
want
to
reproduce
over
and
over
the
roseback,
which
is
typically
what
I'm
doing
so
I'm
grabbing
small
bags
and
then
I'm
replaying
them
in
in
a
loop.
A
That's
some
somehow
very
that's
not
really
realistic.
To
be
frank
in,
in
some
cases,.
A: There's nothing wrong with it! I'm just saying that, depending on what you're trying to measure, it might not be as informative as in other contexts. If you're just measuring latency, for example, and you just want to see the computational graph, then that might be totally okay. But if you're measuring, I don't know, the throughput of compressed images, and the images always stay within the same region of the map...
A: ...then it's not the same as when you suddenly change the region: the coloring starts changing, the position is obviously going to be different, and the size of the image is going to change. That's just a silly example, but that's what I meant: it really depends on what you're trying to measure. For the sake of RobotPerf, I do agree that we need to get together on a series of rosbags as data and benchmark based on those. But then, for doing real experimentation and translating into the real world, you may want to test things out in a wider range of scenarios, and for that, simulation is just fantastic.
A: And then, Jason, I guess I have some homework to do reviewing the pull request, that's on me, but I wanted to touch base real quick on what you brought up. You requested a walkthrough of the analysis.
B: We can do it in another session, maybe; I can pull up the code, exactly what I'm looking at in my benchmark, and we can look at it in more detail. I was reading through your analysis launch file and trying to replicate it for the benchmark that I created, but I was having trouble with the logic of how you gather all the data that's being produced. Maybe if you could quickly go through this file and what's happening in it, I think that would be really helpful.
A: Sure, I can do that in five minutes; I don't need more time. Does anyone have any other, more urgent topic we should be discussing, or can we dive into this? Okay. So, the key thing in here (it's a pretty lengthy file, with lots of functions for usability purposes) is that we are defining a target chain of events.
A: Okay, and these are the ones that we identify from the CTF trace file. You can introspect this by using Babeltrace, which I think, Jason, you have been doing so far: you can get clear-text dumps of the CTF files via Babeltrace 2. Both Babeltrace and Babeltrace 2 work well, you'll find; there are just some functionality differences in terms of the implementations. Babeltrace 2 tends to work a bit better, and the C APIs of Babeltrace 2 are better for modern CTF tracing. I've done lots of hacking with CTF lately, so I'm happy to discuss the ups and downs of each one. But in a nutshell, the definition of the target chain is pretty straightforward, and this is derived from your source code. So, essentially, some of these events correspond with rmw and rcl, some others with the user-space-level application, and how we map that is defined in here.
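For reference, getting such a clear-text dump can also be scripted. A minimal sketch using the Babeltrace 2 Python bindings, assuming the CTF trace sits in a local ./trace directory:

```python
# Minimal sketch: print event names and timestamps from a CTF trace using
# the Babeltrace 2 Python bindings. The ./trace path is an assumption.
import bt2

# Iterates over every message in the trace collection found under the path.
for msg in bt2.TraceCollectionMessageIterator("./trace"):
    if type(msg) is bt2._EventMessageConst:
        ns = msg.default_clock_snapshot.ns_from_origin  # ns since clock origin
        print(f"{ns} {msg.event.name}")
```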
A: So for every single one, we say where it belongs; there are some that correspond with the rclcpp layer. And just for clarity, let me show you. Here's the image. Again: rmw, rcl and rclcpp are some of the layers of ROS 2, and typically the trace points that you put in your benchmark live in the user-land space. We're just setting up kind of a new category of benchmark, just to depict from where to where we are measuring; but that sort of trace logic needs to be set by you, along with the corresponding layers.

B: Here's the list of layers: where do you get that information from? Is that from...

A: That comes from the trace points: you should know what each trace point corresponds with. Some of the trace points are introduced within the ROS 2 rclcpp layer, and that's why we assign those to it. If your trace point is within the user space, within user land, then it's userland. We're just flagging a specific subset of the user-land trace points as benchmark, the third one and the last one, so that we kind of depict where the measurement starts and where it ends.
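Condensed, the structure being described looks roughly like the sketch below. The ros2: trace points are standard ros2_tracing events; the robotperf-style names and the exact lists are illustrative assumptions, not the actual analysis file:

```python
# Illustrative target chain for a latency benchmark: the ordered trace
# points one message is expected to traverse, plus the layer each belongs
# to. The ros2: events come from ros2_tracing; the robotperf-style names
# below are hypothetical.
target_chain = [
    "ros2:callback_start",             # rclcpp instrumentation
    "robotperf:image_input_cb_init",   # hypothetical userland trace point
    "robotperf:image_output_cb_fini",  # hypothetical userland trace point
    "ros2:callback_end",               # rclcpp instrumentation
]

# Logical association of each event with a layer; "benchmark" flags the
# trace points that mark where the measurement starts and ends.
target_chain_layer = [
    "rclcpp",
    "benchmark",
    "benchmark",
    "rclcpp",
]
```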
A: So that's a logical association you're doing there. The rest of the definitions in here are just information for plotting, which you may not necessarily need, depending on what you're doing; since I'm automatically generating trace plots like this one, I need to say what I want where, with what color, and so on and so forth. But again, you may not need much of that. The remaining thing is this general processing: we use this function here, which essentially grabs this trace chain, goes into the complete trace file, and finds the events in the right order in a non-concurrent manner. So this function assumes that there is no concurrent chain of these happening in parallel; it assumes that they happen serially.
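Under that serial assumption, the matching logic boils down to something like this simplified sketch (not the actual function):

```python
# Simplified sketch of serial target-chain matching: walk the time-ordered
# event stream and advance through the chain one expected event at a time.
# Assumes chains never overlap in time.
def find_chains(events, target_chain):
    """events: time-ordered list of (timestamp, event_name) tuples."""
    chains = []    # completed chains (lists of timestamps)
    current = []   # timestamps matched so far for the in-flight chain
    for ts, name in events:
        if name == target_chain[len(current)]:
            current.append(ts)
            if len(current) == len(target_chain):
                chains.append(current)  # chain complete: record and reset
                current = []
        elif name == target_chain[0]:
            current = [ts]  # chain restarted before completing
    return chains
```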
A: There's another function that implements an alternative for concurrent processing, up to a certain degree, and that's defined above, but I guess you can take a look at that yourself. Then there's just some minor data digestion in here, and each section corresponds with what you're trying to do: you can draw the trace points, you can compute statistics out of them, you can make bar plots, and so on and so forth. The rest of it is kind of self-explanatory.
A: You can go to the corresponding function definition and just check it and go through it. The key thing to understand, in my humble opinion, is that it has all been coded in a somewhat trace-agnostic manner, via the definition of this target chain. So, provided that you supply the right target chain (and again, the target chain can be as complex or as simple as you want)...
B: Yeah, no, this makes a lot more sense. I think I realized what my issue is: I defined the target chain, but I didn't make any modifications to the lists under it.
A: Yeah, yeah, provided, of course, that you supply proper CTF data. There's also a debug argument in these functions which gives you a lot of information about what's going on, and by lots of information, I mean really lots of information. So you can see here the debugging output of how the target trace is being built up, what's expected next, and so on and so forth. You can enable that and debug with it, and it comes in handy when you're having issues. But my recommendation overall (you know, we're out of time)...
A
You
know
you
grew
up
like
maybe
the
first
and
the
last
or
or
just
a
couple
of
them
I
mean
clearly
you
you'll
have
this
one
and
and
likely
this
one
as
well,
so
so
just
grab
a
couple
and
go
for
those.
B: Got it, yeah. Okay, I think this makes a lot more sense. Cool, yeah, thanks; I really appreciate the picture.
A: No worries, no worries. So yeah, let me know. I'll try to have a look soon into a3, though hearing that you're going to be working on the analyze script, I might just hold off a bit, Jason, until that is done, and then I can have another look at it. But the pull request looks really fantastic; I think we're pretty much there to get it accepted and continue building benchmarks. Awesome. All right, everyone, any final comments?
A
No
none,
okay
juice
does
a
heads
up
as
I
advance
from
our
side
will
probably
just
start
making
more
and
more
benchmarks
public.
A
couple
of
guys
in
my
team
are
going
to
start
working
on
it,
yeah,
possibly
close
to
full
time,
but
it's
going
to
take
still
a
few
weeks
because
they
are
finishing
something
else.
So
yeah
stay
tuned
to
the
rest
of
you
guys.
I
would
just
encourage
you
to
try
to
build
a
simple
minimalistic.
A
Benchmark
I
build
just
as
a
as
a
suggestion.
I
built
this
Rectify
as
an
example
for
the
last
for
the
first
actually
robot
curve
session
we
had
so
I
would
really
encourage
people
to
think
about
maybe
an
A4
which
is
resize,
which
is
the
other
one
that's
used
within
the
first
one.
A
It's
really
really
simple:
it's
a
great
exercise
to
get
yourself
comfortable
with
it
and
then,
if
anyone
needs
suggestions
or
ideas
on
which
other
benchmarks
they
can
getting
for
perception
or
any
other
stack
in
Ross
I'm
happy
to
give
suggestions.
So
just
ping
me
and
I'll
I'll
provide
them
very
good,
awesome
everyone.
Thank
you!
So
much
see
you
next.
Actually.
Just
sorry,
one
more
thing,
I
think:
next
week
we
are
possibly
gonna
skip
it
for
everyone's
peace
of
mind,
because
I
think
many
people
will
be
on
vacation.
A
So
yeah
many
places.
It's
it's
national
holiday
next
Tuesday.
So
if
you
agree,
let's
postpone
next
week
and
we
meet
in
two
weeks,
I
think
that's
that's
yeah!
It's
a
good
idea,
all
right!
So
I'll
cancel
next
week
and
then
we
will
meet
in
two
weeks.
Very
good
awesome.