From YouTube: RobotPerf subproject - meeting#3
Description
Weekly meeting of the RobotPerf (https://robotperf.org/) subproject of the ROS 2 Hardware Acceleration Working Group (https://github.com/ros-acceleration).
RobotPerf is an open reference benchmarking suite used to evaluate robotics computing performance fairly, with ROS 2 as its common baseline, so that robotic architects can make informed decisions about the hardware and software components of their robotic systems. The group meets weekly in this room and discusses robotics computing architectures and benchmarking across various compute substrates.
Minutes of the meetings are available at https://docs.google.com/document/d/185Cy1xjpAOgJygEOnlf5OCgOQTywmF0qgSpS3GiW16Q.
A: All right, welcome everyone to the third RobotPerf meeting. Let me start by sharing the minutes in the chat, as always, and with that I'll share my screen and we can start the discussion.
A: So today's meeting, as I was saying, is going to be 30 minutes, so let's try to get the best out of it. Previous business that we took on last week: I took two actions on continuing to contribute to the ROS 2 benchmarking tooling that we are pushing forward and creating in this regard.

A: Essentially, we did follow up on this one in alignment with the progress we've been making on the CI pipeline, but so far no new needs have been identified. We're trying not to over-develop the tooling, but only to develop tools that help maintain the CI infrastructure. So for now we can probably close that action item, and once we identify a new need for maintaining the CI pipeline or the repository, we'll pick it back up. For now, after further review, we believe the tools out there should be enough.
A: If anyone has any needs, just let us know and we'll see if we can allocate time. We've also been putting time into further maturing the CI pipeline. The objective, as I described previously, was and is to automatically push the results for selected, if not all, benchmarks to the repository by filling in the benchmark.yaml file of each benchmark. This should happen automatically from an external build farm which will run CI/CD infrastructure for various boards.
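For readers following along, the per-benchmark metadata file described above can be pictured roughly as follows. This is only an illustrative sketch: the benchmark.yaml filename comes from the meeting, but every field name and value below is hypothetical, not the actual RobotPerf schema.

```yaml
# Hypothetical benchmark.yaml sketch; only the filename is from the
# meeting, all field names and values are illustrative placeholders.
name: a1_perception_2nodes
description: Two-node perception pipeline benchmark
results:
  # The external build farm's CI/CD run would append one entry like
  # this per target board it executes the benchmark on.
  - hardware: placeholder-board-name
    metric: latency_ms
    value: 0.0                      # placeholder value
    timestamp: "1970-01-01T00:00:00Z"
```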
A: For now, during this week, we've pretty much defined which targets we are initially going to be running on. We will not be running on all of the targets we have initially, but we'll be bringing them in over time. For now we have selected AMD FPGA SoCs and the NVIDIA GPU SoC, the Orin (AGX Orin), so we'll be running on that, and possibly also on two solutions from AMD's embedded portfolio. With those three targets we'll kick it off, and soon after we'll be adding more AMD, NVIDIA and likely also Intel targets.

A: So that's kind of the schedule and agenda. I don't want to give...
B: One of the questions I have around this is about these platforms. How do we determine which of these platforms? Obviously there's a general consensus on which platforms, the Jetsons and so forth, but which specific AMD platforms and so on? How?
A: Good question. So we're having conversations with some of these companies, some of these silicon vendors, and asking them directly which devices are of most interest to them. In some cases we get direct feedback; in some others we need to dig a bit deeper, and in cases where we don't get direct feedback, what we do is investigate what's being used in the ROS ecosystem and pick the low-hanging fruit.
A: Of course, there are many possible targets, and we'll be incrementing this over time; we'll be happy to include others as well, and we'll dive into this in future sessions. But for now, as we incrementally dive more and more into the RobotPerf project, we want to be conservative with three initial embedded targets. All of them are going to be FPGA SoCs or GPU SoCs, and then down the road we'll start adding PCIe GPUs and PCIe FPGAs.

A: We think that's the responsible approach. Then, if future projects allow for it, we may also look into cloud instances, like big FPGA or GPU instances, and for that we'll probably have to reach out to some of the cloud providers. But step by step for now. PJ, the short answer: we're having conversations with them and picking what's of most interest to them and what's being used in the ROS community.
A: If you guys have any feedback on that, happy to take it. As I was saying, I don't think it will be ready for next week, but we're shooting for maybe two weeks from now to have an initial preview of how data gets pushed into the repo. So, for everyone to keep in mind: if you're working on benchmarks, just know that in a few weeks they should be able to run on hardware directly.

A: So pretty good news for those of you starting to cook your own benchmarks. That's a check on our side. We'll nevertheless keep it listed as a follow-up task, so we'll continue working on it and continue reporting. Now, I took a couple of items. Jason, I think you contributed them at the end of the meeting; you left a comment there regarding updating the nomenclature, so I did review that one.
A: So I'm going to check it off, if you're fine with that; thank you for that contribution. I think there was a pending leftover from the initial pull request, so feel free to let me know how you'd like to proceed with that. One thing I wanted to clarify with you: there's this other pending pull request, which I guess is now superseded by #5?
C: Yeah, I think that's best. I think we should go ahead and close that, because I made all the changes we spoke about last time in the new pull request, #5. And then I'm going to make another pull request for us to discuss the rest of the nomenclature that we didn't come to a conclusion about last time.
A: So that's settled already. Again, thanks for the contribution, and very nice to have you guys in.

A: Can you comment briefly, since you have the mic? I think you listed another couple of items. I also wanted to touch briefly on REP 2014, but before doing so I just wanted to make sure whether you have anything else.
C: Yeah, so I'm currently working on the stereo image processing benchmark, and I'll share my progress on it. I do have some questions that I think it would be helpful to get answers for, and some advice about how I should proceed, so I'll go ahead and share my screen.
C: So I'm working on the stereo image processing benchmark, and in order to do that I set up a Gazebo simulation of a little robot that has cameras on it.

C: The idea behind this is that I need to simulate this robot in order to collect some rosbags of the image topics, so that I can pass them into the stereo image pipeline. Currently I have this little robot that just runs around this environment, and it has two cameras on it, and if you go into RViz you can see what data is coming out of the cameras. But the problem that I faced was that the stereo image processing pipeline reads from the left image_raw and right image_raw topics, and the sensors I had on my robot in Gazebo were publishing to a different topic.
C: So then I found a different plugin, the multicamera plugin in Gazebo, and I tried to add that to my robot, so that, as I read, it would publish to the correct topics, which is what's required when you're running the...
A: So, just for us to understand properly: the initial approach you took with the two cameras, was that publishing appropriately, just on different topics?
C: Well, that's the issue: it was publishing on the same topic. All of it was just going into the same camera/image_raw topic. So that's why I thought, okay, let me try to maybe...

C: I guess my question here is: should I have two separate cameras and then figure out a way for them to publish to different topics, and would that be good enough for the stereo image package? Or does it have to be a specific camera sensor that I add onto the robot?
D: So I think there are some stereo camera plugins available in Gazebo; we have used them. So ping me, probably tomorrow or the day after, and we can get on a quick call; this should be an easy fix, it shouldn't take long. I can help you with getting the simulation up and running with whatever camera you want.
A: Jason, you pretty much got it. There's a functionality in ROS 2, topic remapping, which allows you, at runtime, to essentially remap things so that you don't get collisions in terms of the camera naming, so that should be pretty straightforward. You can probably also use some tags directly in the world file you were defining, so that you can publish directly to different topics; that's probably pretty low-hanging fruit. And then in the launch file directly...
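The remapping mechanism described above can be sketched as a ROS 2 Python launch file. This is a minimal sketch, not runnable without a ROS 2 installation (it assumes the standard launch and launch_ros packages), and the package, executable, and topic names are hypothetical placeholders for whatever the simulated cameras actually provide:

```python
# Minimal ROS 2 launch sketch of per-node topic remapping.
# Requires a ROS 2 install; 'camera_driver'/'camera_node' are
# hypothetical names, used here only to illustrate the mechanism.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    # The same camera node is launched twice; each instance's
    # 'image_raw' output is remapped to the topic the stereo
    # pipeline expects, avoiding the collision described above.
    return LaunchDescription([
        Node(
            package='camera_driver',      # hypothetical package
            executable='camera_node',     # hypothetical executable
            name='left_camera',
            remappings=[('image_raw', 'left/image_raw')],
        ),
        Node(
            package='camera_driver',
            executable='camera_node',
            name='right_camera',
            remappings=[('image_raw', 'right/image_raw')],
        ),
    ])
```

For the Gazebo plugin itself, the equivalent remapping can typically be expressed in the model or world file through the plugin's `<ros><remapping>image_raw:=left/image_raw</remapping></ros>` block, which is the "tags directly in the world file" route mentioned above.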
A: You can also customize that further, so it sounds like you're pretty much there with what you have, just a tiny bit left. Can you describe the problem you encountered with remapping?
C: Yeah, well, two things. I think you touched on exactly what my problem was. When I had the two cameras on the robot there was a topic collision: they were both publishing to the same exact topic. So then I tried to remap, and I specified the exact topics that I want each camera to publish to, but then when...

C: It's either something I'm doing with the remapping that's incorrect, or how I'm launching the nodes, yeah.
A: I just did a quick search, and this should be a pretty common topic; nevertheless, feel free to work it out with Pratik, if he volunteers to.

A: Yeah, and if it still becomes a struggle, feel free, guys, to jointly open up an issue, a ticket, in the repo, and then we can tackle it together and maybe dedicate some further time on the next call. But this is definitely something we can take offline, I think. Just so that we know, Jason, can you give a pointer to which branch you are currently working on for this?
C: Not yet.

A: You haven't, yeah.
A: Yeah, that's because you may want some inspiration on how to structure the source code so that both the simulation as well as the bag files, the rosbag files, are somehow consistent, and I would recommend that, because typically for the benchmark we rely, as you know, on the dataset, on the rosbag. However, it's always handy to have the simulation right there with your code, so that you can trigger it anytime and generate new data if you want to. A good pointer for that is probably the perception two-nodes package, which is somewhat well tested and well known, and contains pretty much everything you need to bring it up. A subset of this is what we brought into the first benchmark in RobotPerf, just as an FYI.
A: Yep. So essentially, here it is. What we have here are the world files, then we have the launch files, and then the corresponding launch files that spawn from those. So, for example, if we were to check out this launch file, everyone can see how we're literally pulling from one particular model (in this case we're using this one) and then launching the corresponding nodes.
A: Well, these are the parameters, and whatever is coming up in here is just the simulation, but we can extend this launch file with much more logic if appropriate. Typically, though, just to keep things separated, you want to prepare a launch file that does the simulation, and then separately you want to launch your nodes for doing the benchmarking or whatever. That's pretty much what's done here: you've got the simulation launch file, and then whatever other launch files for doing tracing.
C: Got it. And the simulation part of the benchmark should go within the benchmark that we submit to RobotPerf, right? Like you have right now with the a1 perception two-nodes benchmark: within the benchmark that I'm going to submit, should I also include the simulation launch file? Do you think that's a good method?
A: I think it's up to you. Right now, sorry, right now we don't have the actual simulation in there. Just to keep things minimalistic for the first example, I removed everything that was unnecessary except the rosbag, which was pushed as a dataset to a different repo. But it could be added very easily, and again, the file structure is this one, and the source code comes from this other repo, this other package.

A: Technically speaking, it's pretty much the same. Since you're developing something new, if I were in your shoes I would probably just push everything into the same package. Just don't go crazy; don't keep two of them, the one you would submit and the one you would use for development. I would just develop the one, with the simulation in it.
C: Right, and then, yes. So if we're going to include the simulation inside the benchmark that we're going to submit, but we also want to save the rosbags to the rosbag repo...
A: I guess that's pretty easy. For what concerns the benchmark, you would just submit a pull request adding a new benchmark, I guess a3 in this case, stereo or whatever you name it. And then for the corresponding rosbag, once you have produced it and you're comfortable with that piece of rosbag being representative, you can just submit a new pull request here with the corresponding bag. Okay?

A: Just for clarity of mind: we're splitting things, and that's how we agreed it, so I would say let's keep it that way. Awesome, cool. So it sounds like you're making good progress, so cheers on that, and cool, keep it up.
A: Thank you. Cool, okay, so I'll check this one off, and, Jason, since you plan to continue on this, that's how we'll proceed; I've noted you down here. Cool, so I'm just going to keep in mind to mark these things that way. Awesome.
A: I'm just going to push this into the next topic. What I can do, with the 10 minutes we have left, is maybe bring a topic to the group. Right now, I did go through the pending comments of REP 2014. The document, in my humble opinion, is pretty consistent, but, as I said in the past, I'm also very biased, so I'm very happy to receive input right now.

A: The only unresolved comment, as far as I can tell, is this one. There's one which pretty much hints that this developer doesn't believe we need a motivation section. Just to get everyone on the same page: the motivation section currently describes why benchmarking is important and its value for stakeholders, introduces terms such as the differences between tracing and benchmarking, and then covers the various approaches to performance benchmarking, including a review of the state of the art and the prior work.

A: And finally, what the current state of the industry is, and how, essentially, this complements what exists out there. I personally believe that this is actually a very needed section, especially given all of the things that it packs.

A: So I do disagree with this comment, but I just wanted to bring it up to the group to see whether someone considers it unnecessary and thinks maybe we should jump straight into the metrics. So what do you guys think?
C: I can see how maybe it seems a little broad, but whenever I'm reading something I always like to see that motivation; it always helps ground what I'm reading. So yeah, I definitely agree; I don't think that's something to drop.
A: And it's important also to put it in the right context: most people reading this typically come from robotics, so these are roboticists who may or may not know about systems architecture and benchmarking. And similarly, if whoever is reading this comes from architecture and actually wants to understand the context, I think this is going to help with the right nomenclature.

A: So maybe, I was actually thinking, one thing we can emphasize at some point, which is what I was planning, is that a reference implementation of this REP is actually being formalized in RobotPerf. That's one of the ideas I wanted to bring up to the group, if that makes sense. That said, I don't feel we should fully couple these two, because that would just contaminate things.

A: Okay, anybody else? No? Okay, so I'll just resolve that concern, then, and with this I think the document is pretty much addressing all aspects of it. So I'll leave it as a to-do to add a small reference to RobotPerf's ongoing efforts; I may do it, or, Jason, if you want to contribute that, feel free to do it. And with that I think we've progressed with REP 2014, so once we get it in I'll ping the folks here again. I think it's time for moving forward with this; it's been quite a while since we started with this initiative, so I think it's starting to be ready to be accepted. Okay, so that's done. Any other topics we have or we'd like to discuss?
C: Nothing from my side.
A: Yeah, sure. Thank you, Pratik. Okay, if there's nothing else, maybe as an FYI: we'll do a hacking session to try to push forward the CI infrastructure, so that the data starts coming out. Maybe I can pick everyone's mind here.
A: I was thinking about whether it makes sense to extend the benchmark.yaml format to have a flag on whether a given benchmark should run in the CI or not, so that the CI infrastructure reads all the benchmark.yaml files and then, based on what's written there, decides whether or not to run each one in the CI.
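The CI-side selection sketched above could look something like this. Everything here is an assumption: the `ci` key, its run-by-default behavior, and the deliberately simplified flat key-value parsing (used to keep the sketch dependency-free) are illustrative, not the actual RobotPerf format:

```python
from pathlib import Path


def parse_flat_yaml(text):
    """Parse a flat 'key: value' subset of YAML (illustrative only)."""
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#') or ':' not in line:
            continue
        key, _, value = line.partition(':')
        entries[key.strip()] = value.strip()
    return entries


def benchmarks_to_run(root):
    """Yield names of benchmark dirs whose benchmark.yaml opts into CI.

    A missing 'ci' flag defaults to running, matching the
    run-everything default discussed for the initial rollout.
    """
    for meta in sorted(Path(root).glob('*/benchmark.yaml')):
        if parse_flat_yaml(meta.read_text()).get('ci', 'true') != 'false':
            yield meta.parent.name
```

The build farm would then iterate over `benchmarks_to_run(...)` and dispatch only the opted-in benchmarks to each board.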
A: I was wondering whether this makes sense. I think for now we're going to default to every single repo being run and every single benchmark being run, and then results will be reported. I think for now this is an acceptable assumption, but as we grow, if we grow to many, many benchmarks, then things may start taking hours.
C: Sounds good, oh yeah.
C: Another thing I kind of want to bring up, a thought I had, is to create a sign-up sheet for people who want to contribute a benchmark. But what kind of format would be best? Would it be okay to, you know, create a Google Sheet that people can go on and specify which benchmark they're...
A: I'm down for anything, to be frank. If you want to take the lead on that, Jason, feel free to put it together. My feeling right now is this: we created a Google Sheet for bringing awareness and getting people to participate, and so far the responses have been limited, I guess, to the folks that are already participating. That's how I would put it. So I guess there's still work to do on dissemination that we will need to do ourselves.

A: So I would just keep things simple, in the sense that my gut feeling tells me: if you want to contribute, why don't you just open up an issue, a GitHub issue, and just say, hey, is there someone working on this, or would this benchmark be interesting?

A: That way we keep things concentrated in the repo, because if we start spreading too far, with too many documents and too many resources, I think it's going to be hard to maintain down the road. And, as we said and discussed, I think with you guys, with PJ and you: as things evolve and as people get more and more involved,

A: we should set up levels of commitment and maintainership in the repo based on contributions, which will give more or fewer privileges, and thereby the people who hold certain privileges should also be involved in somehow steering and recommending which benchmarks should be prioritized. But yeah, that's my gut feeling, Jason; give it a thought and feel free to amend it. My two cents would be: you yourself can probably open a ticket and say, "I'm working on the stereo image benchmark."
C: That sounds good. And it might be a good idea to also add something to the README, like a "how to contribute" section, and just specify: create an issue and then state what you're working on.
A: That's actually a good idea. So maybe, as part of that, since you are doing it yourself, one thing you could do: there's something called GitHub issue templates, which gives you this possibility. It's a super simple document, and the moment you start creating an issue, it gives you options.

A: Maybe one of those options is "I want to create a new benchmark", and so when you create a new issue it gives you kind of a pre-filled template, which people can then use to bring up their queries in that regard. So feel free to look into that; maybe that's something you can contribute as well. Yeah.
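The issue-template mechanism mentioned above is configured on GitHub as an issue form: a YAML file placed under `.github/ISSUE_TEMPLATE/` in the repository. A minimal sketch; the filename, labels, and fields below are hypothetical, not an existing RobotPerf template:

```yaml
# .github/ISSUE_TEMPLATE/new_benchmark.yml (hypothetical sketch)
name: New benchmark
description: Propose a new benchmark or claim one you plan to work on
labels: ["benchmark"]
body:
  - type: input
    id: benchmark-name
    attributes:
      label: Benchmark name
      placeholder: e.g. stereo image processing
    validations:
      required: true
  - type: textarea
    id: details
    attributes:
      label: What will the benchmark measure, and on which targets?
```

With a file like this in place, the "New issue" page offers "New benchmark" as one of the choices and pre-fills the form fields.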
A: Yeah, cool, all right. So we reached the top of the hour, folks; I need to take off to the next meeting. But thank you, everyone, and we'll meet next week. Thanks a lot, everyone.