From YouTube: ROS 2 Security Working Group (08 Nov 2022)
A
Okay, thanks. So hi everybody, thanks for having me here. I'm Michael Jeronimo from Open Robotics. I've been with Open Robotics for a couple of years now; before that I spent over 23 years at Intel, and a couple of companies before that. I'm the technical lead from Open Robotics on Space ROS, and this is a presentation that Geoffrey Biggs and I did at ROSCon, with a few extra slides because we have a little more time.
A
So, robots are everywhere on Earth, and robots are also used in space: there are rovers and manipulators, and even little free-flying helicopters that have been used on Mars. There's quite a variety of robots that are in use and will be in use, from manipulators and humanoid robots like Robonaut 2 and Robonaut 5, to rovers like the VIPER rover, on top of robotic spacecraft and landers. So there's quite an interesting and long roadmap for space robotics coming up, and there's an increasing amount of software on these systems. From the Lunokhod of 1970, which was the first rover deployed by the Soviet Union, up to the rovers on Mars today, there's been quite a large increase in software and complexity, which is driving demand for reuse. You're already starting to see this reuse in space robotics and space systems: there are frameworks like F Prime and the core Flight System.

A
These are frameworks for communication and for flight systems that have been flown many times, and they have kind of a plug-in architecture that people can write components for. There are also mission control systems like Yamcs and Open MCT, and they're starting to use open source as well. The previous ones that I mentioned are open source, and they're pulling in other open source libraries too, such as AprilTag, which was used on the Perseverance rover.

A
It's similar: they're looking at reuse and cost savings. Those are kind of the driving factors for these systems.
A
ROS itself has actually been to space a couple of times, in Robonaut 2 and Astrobee. These were limited circumstances, and they made safety cases for that. On the right you can see the Astrobee free flyer, which can host science experiments on the ISS, and there was also Robonaut 2. So a little bit of ROS has already come into space in those limited situations, and coming up is NASA's VIPER, which is a mission to the lunar south pole in 2024.

A
It's prospecting for resources, looking for water in particular, and performing science experiments as well. In this system, ROS 2 will be used in the ground software, and it's also making heavy use of Gazebo, so there's simulation of the lunar surface. There are a lot of tests run against this system, and it's also using Yamcs and bridging from cFS to ROS.
A
So what Space ROS is all about is an open source space robotics framework for developing flight-quality robotics and autonomous space systems; that's what we're striving to achieve. The goal is to ease the adoption of the ROS framework into space robotics systems, and the way we want to do that is to have it be certification-ready: in other words, provide the software along with artifacts that are aligned with aerospace standards, so that they can be used in a certification. And also, which is very important, bring the benefits of ROS to space robotics in terms of open source and open community.

A
There is some open source already. For example, NASA will open source things like cFS, but it's not the kind of vibrant ecosystem that ROS has around its packages. So, in summary: a space-certifiable and reusable robotics framework that would support flight software standards like DO-178C, which is an aerospace standard. NASA also has its own internal documents and standards, like NPR 7150, which is kind of a high-level, almost meta-level set of requirements. And we want to provide artifacts that allow projects to gain a head start in their certification efforts.

A
We want to be aligned with NASA so it can eventually be adopted for the highest levels of missions, and then enable rapid development of capabilities and facilitate reuse. And, as I mentioned, open community: being able to leverage ROS 2 and the expertise with ROS 2 in universities and all of the vibrant ecosystem, and not just do prototyping but have a path towards flight software.

A
Space ROS started probably almost a couple of years ago now, between Blue Origin and NASA, and Blue Origin brought in Open Robotics about a year and a half ago. Our role has been growing over time, and this year we've contributed quite a bit. So what exactly is Space ROS?
A
All of those are kind of the heartbeat of the project. We've also selected a subset of packages from ROS 2 to focus on, as far as what's in the core of Space ROS. We've created several Docker images: the core, MoveIt 2 built on Space ROS, and some sample applications built on top of that. We're also targeting some embedded systems, in particular RTEMS, so we're just now working on a project to bring up the core on RTEMS.

A
But that's the basic foundation: basically the typical things that we would do at Open Robotics. The central areas, which Jeff and I presented at ROSCon, were really focused on the tools and processes. We've been focused quite heavily on requirements tools in particular, and on traceability and analysis of requirements.
A
We wanted it to be open source, because our high-level objective is to have this be open. So there are a couple of tools that I'll talk about: there's Doorstop, which is an open source requirements management tool, and NASA has a tool called FRET which can be used to analyze requirements.

A
We've also focused on code analysis tools quite heavily over the last year. We've augmented the standard ROS code analysis tools with a couple more from NASA: one is called IKOS and another is called Cobra, and I'll talk about those as well. The code analysis tools generate output in a standard file format, SARIF, the Static Analysis Results Interchange Format.

A
We are then working on a dashboard to be able to visualize these issues, navigate through them, interface to systems for dispositioning, and things like that. So we're looking at the whole development workflow: using the dashboard, interfacing with and addressing issues, and then how that works with the ROS quality levels.
A
We're looking at defining a higher quality level that comprehends requirements, since the ROS quality levels really don't have much to say about requirements yet. We're also currently working on adding an MC/DC testing tool: modified condition/decision coverage, which is called out by DO-178C as a desirable test to run on the code.
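Since MC/DC comes up a few times here, a rough, generic illustration may help (this is a sketch of the coverage criterion itself, not of Space ROS tooling): MC/DC asks that each boolean condition in a decision be shown to independently affect the outcome, i.e. there is a pair of test cases that differ only in that condition and flip the decision's result. A brute-force check over a small, made-up decision:

```python
from itertools import product

def decision(a, b, c):
    # Example decision with three conditions (invented for illustration).
    return a and (b or c)

def mcdc_independence_pairs(fn, n):
    """For each condition index, find test-vector pairs that differ only in
    that condition and flip the decision outcome (MC/DC independence)."""
    pairs = {i: [] for i in range(n)}
    for v in product([False, True], repeat=n):
        for i in range(n):
            w = list(v)
            w[i] = not w[i]
            w = tuple(w)
            # v < w keeps only one orientation of each pair.
            if fn(*v) != fn(*w) and v < w:
                pairs[i].append((v, w))
    return pairs

pairs = mcdc_independence_pairs(decision, 3)
# MC/DC for this decision is achievable only if every condition has at
# least one independence pair.
assert all(len(pairs[i]) >= 1 for i in range(3))
```

A real MC/DC tool instruments the compiled code and checks which of these pairs the test suite actually exercises; this sketch only shows what "independent effect" means.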
So we've been focused a lot on the tools and the processes this year, but Space ROS will also consist of space-specific functionality.
A
There are some other things that we've been doing, such as adding an eventing and telemetry subsystem, which Geoffrey Biggs is working on. William Woodall has been working on using a PMR allocator, a C++ allocator, and ensuring that the core of ROS 2 properly uses the allocator that you provide in all cases. And we're adding sample applications.

A
I probably don't need to go into detail on this part; this is just listing the stuff that we've been doing in the foundation. We set up a GitHub organization, automated builds, and Docker images; you can get the binaries. We have SARIF output in addition to the normal JUnit output, which allows you to interface into the Jenkins CI.

A
We have SARIF output, as I mentioned. We are going with the AUTOSAR C++14 standard, so we've added support in the Cobra tool to analyze about half of the AUTOSAR rule set; I'll talk about that in a bit. We're working towards, as I mentioned, more of this continuous qualification idea that could generate the output reports and artifacts to support certification and have them ready, and we're working on RTEMS. Another thing is looking at using Earthly to provide a uniform environment between CI and local developer environments.
A
So we've been running these analyzers across the core of the code, starting to address the issues identified by the static analyzers, and also upstreaming these changes into the core of ROS 2. We want to avoid forking as much as possible, so any issues that we can upstream, we are upstreaming. We're currently defining the process and associated tools, using tools like FRET to analyze the requirements for consistency and conflicts, and integrating the MC/DC tool. And for the ETS, the Eventing and Telemetry Subsystem:

A
We defined the requirements first for that subsystem, then ran the requirements check on it, and kind of ran through the whole process on it. I'm also working on backfilling requirements for an existing package, starting with something easy like rcutils, and documenting the process, because eventually we'll want requirements for all the core Space ROS packages, not just any new ones that we add. So we'll be looking at backfilling requirements, so that we can eventually have traceability through the different levels, and then updating the quality levels.
A
One of the things about requirements management in aerospace is that it would typically be managed through a proprietary tool, and the process would be according to standards like DO-178C. The requirements need to be complete (you don't want software without requirements) and highly detailed, and there would typically be multiple levels, from very high-level L0-type requirements way down to detailed, maybe L5-level requirements, and being able to trace through them is essential.

A
So we also want to be able to go from requirements all the way down to the implementation, and then maybe even a link to the code or a test that verifies that, and then back. These requirements are used to support the certification process.
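The top-to-bottom trace described above can be sketched generically. The requirement IDs, the link structure, and the test name below are all invented for illustration; Doorstop stores comparable data as one YAML file per item, with parent links recorded in a `links` list:

```python
# Hypothetical requirement items: ID -> (parent ID, linked verification test).
requirements = {
    "L0-001": {"parent": None, "test": None},      # high-level need
    "L2-014": {"parent": "L0-001", "test": None},  # derived requirement
    "L5-103": {"parent": "L2-014", "test": "test_frame_rate"},  # detailed
}

def trace_up(req_id):
    """Walk parent links from a detailed requirement up to the top level."""
    chain = []
    while req_id is not None:
        chain.append(req_id)
        req_id = requirements[req_id]["parent"]
    return chain

# A certification artifact needs exactly this kind of chain: detailed
# requirement -> intermediate levels -> top-level need, plus the test link.
assert trace_up("L5-103") == ["L5-103", "L2-014", "L0-001"]
```

The reverse direction (which detailed requirements and tests hang off a given L0 item) falls out of the same data by inverting the parent map.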
A
However, on the open source side, as I'm sure you know, requirements are typically non-existent or very lightly managed; if they are there, they may have gotten a bit out of date. And we're trying to avoid heavy processes: we don't want to discourage contributions and make the bar too high. So what we're trying to do is balance these competing, or conflicting, sides: the strong processes of aerospace and the open source community.

A
What we settled on is a tool called Doorstop. It's open source, and it's a very straightforward, command-line-oriented requirements management tool. You can create so-called documents, add requirements to them, make links between them, and see the hierarchy of requirements. The requirements are stored in YAML files, a separate YAML file for each requirement, or you can also use Markdown, which is pretty similar: human readable, easy to parse, and you can write in natural language.

A
You can use some restricted language like EARS, but the main point is that it's open source and freely available; we can modify it if we like, and it really meets a lot of our needs. By checking the requirements into git, you get version history, you can have discussions about them, and then you can use a tool for analysis, traceability, and walking through the requirements.
A
But in order to analyze requirements with FRET, they have to be expressed in a language called FRETish, which is just a kind of rigorous layout, or syntax, for requirements; it's not that difficult to convert to and ramp up on. Once you have the requirements in FRETish, you can then analyze them and check them for consistency and so on. It's freely available, again. The learning curve is probably a bit steeper than Doorstop's, in order to learn FRETish, but it's a really good adjunct to Doorstop.

A
So we're looking at using both of these tools. An interesting thing about FRET is that there's an associated tool where you can actually generate a safety monitor from requirements. This safety monitor could be used during verification, in a V&V process: you select the requirements from which you want to generate a monitor, and then that monitor can run as your system is running, to verify that the requirements are being met. So it's an interesting idea, and that's possible once you get the requirements into FRET as well.
A
So this is kind of what we're thinking: requirements coming in, using Doorstop and git to capture them. Right now we're looking at how we can sync between FRET and Doorstop, so that you would do the validation in FRET. On the file formats: we've just now implemented an import in FRET, although we're really driving towards just having a common file format, so you could have multiple tools working on the requirements: for traceability, for validation, perhaps visualization, and other needs.

A
You could have a common format; they're not too far off, but that's what we're working towards. And then with Doorstop you can trace all the way into source or tests, as I said, or perhaps even generated safety monitors, as evidence of satisfying a requirement, and then, of course, be able to generate reports.

A
I think that basically covered all of this. This is the summary: Doorstop for the requirements, traceability, and generation of artifacts; FRET used for analysis and consistency checks; having the single source of truth in git; and being able to trace all the way down to the implementation and tests. Before I veer into static analysis, any questions on that so far? I was going to shift gears a little bit, from requirements to static analysis.
A
Okay, now feel free to jump in if you have any questions. We've also spent a lot of effort on the static analysis side this year, in addition to requirements. The intention is, again, to provide evidence into the certification process about code quality, supporting verification. ROS already had several analyzers and code formatters.

A
From Blue Origin and NASA there were requests to add IKOS and Cobra (I'll talk about those in a sec), but we're also adding MC/DC testing and code coverage. The analyzers all generate SARIF, though they don't do this directly: I think none of the tools natively supports SARIF yet, except for Cobra, but I think this is the format that static analyzers will hopefully output directly in the future. So we're basically capturing the output from the tools and then doing the SARIF generation ourselves.
A
We've also removed some redundancy in the output, because you can imagine running a static analyzer on multiple source files that themselves include the same header file; it's quite easy to get a lot of redundancy from running these tools. So what we do is take all of the static analyzer output and create a bundle: when you run the full test pass over the source code, we have a bundle of all of the output plus some meta-information, basically in an archive format.

A
We have the original SARIF files, and then we post-process them to remove redundancy, and the post-processed SARIF files are in the bundle as well. These results are then made available to our dashboard, which is currently a Visual Studio Code plugin that allows you to view the output. We're trying to keep down the noise as much as we can. We'd also like to remove what I was calling semantic equivalence: multiple tools reporting the same issue, but maybe with different verbiage. We can identify that and improve further, cutting down on the noise in the output.
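A minimal sketch of the kind of de-duplication described above. The SARIF field names come from the SARIF 2.1.0 schema, but the dedup key (rule ID, file, line) is an illustrative choice, not necessarily the one the Space ROS post-processing uses:

```python
# Two runs reporting the same finding, e.g. the same header analyzed twice.
sarif = {
    "version": "2.1.0",
    "runs": [
        {"tool": {"driver": {"name": "toolA"}},
         "results": [
             {"ruleId": "unused-var",
              "locations": [{"physicalLocation": {
                  "artifactLocation": {"uri": "pkg/foo.hpp"},
                  "region": {"startLine": 12}}}]},
         ]},
        {"tool": {"driver": {"name": "toolB"}},
         "results": [
             {"ruleId": "unused-var",
              "locations": [{"physicalLocation": {
                  "artifactLocation": {"uri": "pkg/foo.hpp"},
                  "region": {"startLine": 12}}}]},
         ]},
    ],
}

def dedupe(doc):
    """Keep one result per (ruleId, file, line) across all runs."""
    seen, unique = set(), []
    for run in doc["runs"]:
        for r in run["results"]:
            loc = r["locations"][0]["physicalLocation"]
            key = (r["ruleId"],
                   loc["artifactLocation"]["uri"],
                   loc["region"]["startLine"])
            if key not in seen:
                seen.add(key)
                unique.append(r)
    return unique

assert len(dedupe(sarif)) == 1
```

The harder "semantic equivalence" case, where different tools use different rule IDs and wording for the same underlying issue, cannot be caught by an exact key like this; it needs a mapping between the tools' rule sets.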
A
Another thing we're looking to do in the future: we've enabled a lot of these analyzers, and we really want to find the right mix of analyzers, to balance the signal-to-noise ratio and get the best results without the burden of sorting through all of the output. So that's in process as well; over time we'll learn which ones generate the highest quality output and what's the right set in order to capture the best results. So, I mentioned IKOS a couple of times.

A
I've got links at the bottom that you can look into, but IKOS is a static analysis framework based on the theory of abstract interpretation, so it is an acceptable tool for DO-333.

A
It is a framework. Let me move on here: it basically uses an LLVM front end and ends up creating an intermediate representation that is useful for static analyzers. There are analyzers already implemented, and you can also use it as a framework and add additional analyses if you like. So it's a very powerful one, with very deep analysis.
A
It's extensible, and it's in sync with the standards. On the other hand, there's another one called Cobra, which is entirely different. It doesn't work by doing a full parse of the source code; it only does a tokenization of the source code, and then it has rules across that token set. It works well, and it scales up very well to really large code bases: it can, for example, take in the whole Linux kernel, and then you can run rules or queries against that entire set of token streams. It's less powerful because of that (it uses regular expressions rather than a full parse), but it can do a lot of code very quickly, and you can write your own checkers. It comes with a lot of different rule sets, like the JPL rule set, the Common Weakness Enumeration rule set, and the Power of 10 rule set, and we have added AUTOSAR C++14, though because of the nature of the tool, some of the rules in that set are not automatically verifiable.

A
It also has an interactive mode, by the way, that we don't use. There's an interactive query engine, so you can just bring in a whole bunch of code, perform queries against it, and do searches through the code for certain patterns that are not desirable.
A
So
for
these
codes,
we've
written
an
Ahmed
front
end
like
you
have
in
Ross
for
all
of
the
other
tools.
You
know
there
might
be
like
comment
Clank
tidy,
for
example.
So
it's
there's
a
uniform,
pretty
uniform
interface
with
these
tools,
so
we
have
almond
Cobra
and
almond
icos.
Here's
an
example
of
Allman
Cobra
command
line.
So
you
have.
The
typical
include
directories
and
exclude
directories
of
all
the
oment
tools
have
that
X
unit
output
file,
which
they
all
have
that
and
now
they
all
have
the
serif
file
output
as
well.
A
And
serif,
by
the
way,
this
is
kind
of
the
the
framework
of
the
serif
schema.
There's
information
about
The
Tool
version
number,
the
the
schema
used
for
the
serif
and
and
the
real
Crux
of
it
is
there
there
are
runs,
so
there
can
be
one
or
more
runs
for
our
analysis,
analysis
sessions,
so
to
speak,
so
you've
got
the
tool
info
and
the
rules
that
fired
for
that
tool.
So
it's
empty.
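A minimal SARIF log with exactly the structure just described: top-level version and schema, then a run containing the tool driver, the rules that fired, and the results. The tool name, rule ID, and message here are only illustrative placeholders:

```python
import json

# Skeleton of a SARIF 2.1.0 log, trimmed to the fields mentioned above.
log = {
    "version": "2.1.0",
    "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
    "runs": [{
        "tool": {"driver": {
            "name": "ament_cobra",  # analyzer that produced this run
            "rules": [{"id": "P10-R1",
                       "shortDescription":
                           {"text": "Avoid complex flow constructs"}}],
        }},
        "results": [{
            "ruleId": "P10-R1",
            "level": "warning",
            "message": {"text": "goto used"},
            "locations": [{"physicalLocation": {
                "artifactLocation": {"uri": "src/node.cpp"},
                "region": {"startLine": 42}}}],
        }],
    }],
}

text = json.dumps(log, indent=2)
# Round-trips as ordinary JSON; viewers key off runs/tool/results.
assert json.loads(text)["runs"][0]["tool"]["driver"]["name"] == "ament_cobra"
```

Each additional analyzer pass simply appends another entry to `runs`, which is what makes the bundling approach described earlier straightforward.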
A
This is an example of Visual Studio Code with the SARIF plugin. This is how you would view the results. As I mentioned previously, we create an archive of all of these results, and now you can navigate through them. I've highlighted one here, and down in the lower right you can see the details about the rule: its description and the rule identifier. Over on the left (it's kind of hard to see) it's highlighting the line that the result refers to, basically just allowing you to navigate the code through all of these issues. It's fairly convenient. What we're adding to this plugin are charts and graphs; at the request of Blue Origin, who wanted to see more of a project-management-level interface, things like burn-down of issues over time.

A
Where are these issues coming from? Which source files? Which tools are reporting them? All of that kind of thing. So we have multiple charts that we've added to the Visual Studio Code plugin.

A
We have a GitHub repository for all of the Space ROS work, and this plugin is there, along with all of the other code.
A
I won't go into that one, but on to the space-specific functionality. That's basically what we're doing on the tools and process side: a lot of requirements, a lot of static analysis, and visualization of the results. On the space-specific side, a really cool and important one is the cFS-to-ROS 2 bridge. I mentioned that cFS is quite readily used on the flight system, so we want an open source bridge to that stack.

A
I mentioned the Eventing and Telemetry Subsystem and the memory allocators that we're working on. We also have navigation and manipulation demo apps: we have a Mars rover here, as well as the Canadarm, and we're currently adding MoveIt 2 to the Canadarm. Previously it was just built on Space ROS and you could drive the mechanisms, but we didn't have the motion planning; now that's being integrated this week.
A
So, in summary, what we're looking at is integrating open source tools and processes to improve code quality, requirements analysis, workflow, and quality levels. We're doing all this in the context of Space ROS, but we've certainly recognized that it could be useful, the tools and processes in particular, to other domains. I present this quite a bit, at ROSCon and elsewhere, and so we would welcome contributions and input.

A
Your thoughts, perhaps from the security perspective, of course, and tools that could or should be integrated. We're also looking to collaborate on some of this, given that it's all open source. Yeah, so I think that's the story. Any questions at that point?
C
Right, well, thanks Michael for a great presentation. That's really, really interesting!
A
Know
the
right
or
not,
yeah,
that's
that's
an
excellent
question.
Yeah
honestly,
we've
been
focused
on
the
Ross
code
and
kind
of
the
Ross
quality
levels,
but
there
are
many
system
dependencies
and
how
are
those
integrated
and
certified?
It
is
a
very
large
network
of
dependencies.
A
I would suppose, and you know, I haven't thought about this much actually, I've been so focused on the ROS code, but I think there are systems that have flown, like cFS, that are based on an operating system abstraction layer: inside cFS there's another library, OSAL, that abstracts the OS, and that has been certified and flown. So I would imagine the lowest levels of ROS would have to be ported to the target system, like we're working on RTEMS, for example. ROS works with Linux and Windows and so on, but there could be an abstraction, or a mapping, to something like OSAL that could encapsulate the lower levels of the operating system, or to something that's already been certified.

A
I think that's how I would think about it, or approach it, but honestly I haven't mapped out the dependencies all the way down. At least on Linux, I think you'd have to get it on top of something like that.
C
Right
and
the
question
that
someone
had
followed
to
to
the
previous
one
is
as
part
of
the
certification
are
you
asked
to
provide
a
software
bill
of
material
or
ultimate
and
as
s-bone.
A
My honest answer is I don't know; we haven't gone through the certification process. I've been working with them on what kinds of things, what kinds of evidence, trying to drive to the details. I will tell you this: NASA has another tool that's in development, that's not open source yet, that is meant to help in the auditing process. It's more checklist-based, and they're working on it. I'm very interested in that, because I think that would be another interesting, good tool to integrate, something to support.
A
So,
if
you
could
say
customize
that
for
your
particular
standard
and
you're
walking
through
the
checklist
but
I
don't
know
specifically
if
they
have
a
bomb
of
all
of
the
all
of
the
you
know,
binaries
or
or
you
know,
elements
that
that
comprise
the
entire
system.
I,
don't
know,
you
know
a
lot
of
these
standards
from
what
I've
seen
so
far
very
high
level
kind
of
meta
level
requirements
and
not
very
prescriptive
about
what
you
must
do.
You
know
it's,
it's
so
I
think
you're
developing
a
case
for
the
the
Auditors.
A
You
know
for
the
certification
and
evidence
that
you
provide.
You
know
it.
It
seems
like
to
me
I'm
a
little
bit
naive
because
I
haven't
gone
through
the
process,
but
it
seems
that
the
the
better
evidence
you
provide
could
only
help.
D
Sure. Could you give a little more detail on the work with the PMR allocator, and in particular, are there strategies for validating that the allocator is in fact being used in all the places where we'd like it to be used?
A
I can try. Let me see if I have... one second here.
A
Yeah, William is working on that one. He's found occurrences: he had a couple of issues on GitHub where he's ensuring that the allocator is used everywhere, so he found some occurrences and has made some fixes for that. What we're trying to do with the allocator is that Space ROS applications could make use of a user-supplied allocator, and it would be consistently used in all cases, and we're providing a sample application and a sample allocator that demonstrates the use of an allocator, and then he's documenting that.

A
So the issues were that he found some occurrences where it wasn't being properly used. Let's see what else here... messages: there is currently no way to specify the PMR allocator if messages contain strings or vectors, currently no way to use the allocator there, so there's an issue entered for that. And I guess there's an issue with wrapping the C allocator as a C++ allocator, and a couple more issues.

A
A lot of what William is doing here for the rest of the year is cleaning all of this up, documenting the use of that type of allocator, and providing the example. I can provide you these issues if you like, but that's about how deep I go; William has been doing all this work.
A
Yeah, the methodology he was using: he was running this with a package that would check for dynamic memory allocations, like malloc and free and so on. There's a package in ROS, memory_tools, I believe it is, and William was running memory_tools as he was running this, and that was helping him to discover allocations that were undesirable or that weren't using the custom allocator.
D
Thanks
and
if
I
could
go
one
more
follow-on
question
we've
heard
that
are
there
particular
rmw
implementations
that
you're
targeting
or
is
that
meant
to
be
up
to
the
the
user
who's
deploying
a
system
to
pick
an
rmw
implementation,
yeah.
A
The latter. It would be up to the application; we wouldn't necessarily dictate. And I think having these tools and processes means they could be applied in different settings. There may already be certified middleware, or you could even write one; cFS could be wrapped as middleware as well, or you could bridge to it. There are different architectures that you could envision, but I think the latter: we're not dictating the RMW.
B
What are the requirements that you have to fulfill in terms of security for certification? I wasn't able to find a link that I was sure was it (feel free to share a link, if that's an easy way to explain it), but I'm guessing that by integrating all these tools and fixing issues, you're sort of aiming for a certain standard, or following a set of rules for what you prioritize and what kinds of issues matter, yeah.
A
We
don't
end
up
like
low-level
requirements
for
it,
but
there
are
like
documents
like
NASA's
7150,
I
think
that
you're
into
security
requirements,
the
projects
show
like,
for
example,
Century
Security
requirements,
there's
a
separate
document
that
I
haven't
delved
into
yet
so,
for
example,
7150
does
call
out
the
security
requirements
of
MPD
2810
right.
So
that's
that's
another
thing
that
you
could
look
into
so
sorry,
I
can't
give
we
don't
have
the
all
of
the
requirements
mapped
out.
A
That's
what
I'm
kind
of
working
on
is
what
is
NPR
7150
say
and
what
are
the
can
we
make
those
requirements
explicit
in
a
tool
like
doorstop
and
then
how
do
they
map
down
through
the
rest
of
the
requirements
down
eventually
to
packages?
A
That's
the
type
of
thing
that
we're
going
for
we've
been
working
on
kind
of
the
foundation
over
the
last
year
of
how
would
one
do
that,
and
now
we're
actually
kind
of
starting
to
do
that,
which
is
to
digest
these
documents
and
these
requirements,
and
there
is
like
a
hierarchy
of
documents,
for
example
from
NASA.
A
You
know
they
have
the
high
level
requirements
and
they
have
specific
requirements
here.
That
then
break
down
into
other
levels
and
really
how
much
we
want
to
be
aligned
with
this
type
of
thing.
We
don't
necessarily
want
to
reproduce
all
of
it,
but
we
have
to
find
what's
the
right
level
that
we
have
captured
requirements
right
for
space
Ross
itself,
so
that
we
can
be
used
by
a
A
system
that
is
Guided
by
this
document
hierarchy.
A
Hopefully
that
makes
sense
so
we're
trying
to
find
our
way
to
find
the
right
level
to
support
a
NASA
project
like
this
without
you
know,
redundantly
doing
extra
work
on
our
side.
So
it's
a
long-winded
answer,
but
there
is
a
document
that
talks
about
security
requirements.
So
if
you
look
for
NASA
7150
and
you
can
take
it
from
there.
B
Then
it's
kind
of
related
to
a
previous
question
and
it
has
to
do
with
sort
of
your
process
when
it
comes
to
making
use
of
the
results
that
you
get
through
these
tools
that
you
showed
before.
What
does
your
security
analysis
process?
Look
like
how
do
you
prioritize
and
triage
the
issues
to
be
fixed?
Let's
say.
A
How
do
we
prototype
the
issues
to
be
fixed,
I,
didn't
quite
capture
the
gist
of
your
questions,
so,
if
we
have
so,
can
you
repeat
that
sorry.
B
Yeah
for
sure,
so
how
does
your
triashing
of
these
issues
work?
Do
you
prioritize
just
basal
and
severity
what's
being
shown
as
more
critical,
maybe
should
be
fixed
first
or
do
you
to
some
packages?
That's
if
any
issue
is
found
in
those,
then
it
absolutely
needs
to
be.
B
C
A
So
so
far
we
have
the
way
we've
done.
It
is
pretty
straightforward,
so
William
took
say,
clang
tidy
first
and
he
was
looking
at
the
output
of
just
clang
tidy
but
you're
absolutely
right.
We
have
thought
a
little
bit
about
given
that's
a
problem
with
all
the
analyzers.
When
you
add
in
more
analyzers
you
introduce
redundancy
and
kind
of
noise
and
things
like
that,
and
that
was
really
what
I
was
focused
on-
is
their
higher
quality
from
one
versus
the
other.
A
Or
is
there
one
that,
like
icos,
has
been
used
by
NASA
before
and
that's
a
really
important
one?
Maybe
we
find
that
CPP
check
is
just
redundant
and-
and
you
know
nowadays
it's
slow,
so
maybe
we
drop
that
so
we're
kind
of
in
the
process
of
of
dialing
that
in
and
and
what
you
say
is
like
is
you
know
really
important.
There
could
be
categories
of
issues
that
are
more
important
than
others.
That
kind
of
gets
to
the
to
the
analysis
of
all
that
output.
A
I've talked to people at NASA that have used static analyzers, and they have done things that are really simplistic on a project, like just looking at the top 20 issues right now, because there's so much information that's generated. So even on a lot of these projects, I don't think they're sorting through and prioritizing. But I think that's a really good thing to be thinking about: is there, you know, a quality of analyzer — maybe one has really good output, and so that's prioritized — or certain types of issues that are prioritized?
A
You know, something discovered by a security check, or a memory-related issue.
A
I think that's something that would be helpful for us in that dashboard: to not just display all the stuff, but to be able to sort in different ways. We can currently sort by — or filter, rather, would be a better term — filter by analyzer; we can do that. We can't really filter yet by type of issue, but I think it's a great idea.
A
Yeah, interesting — issue types. We thought about kind of aggregating issues, of saying, you know, all of these are in a certain class, so you have a hierarchy. That's one way to address the volume of issues: gather all of the memory-related issues together, for example. And if you could start thinking about them in terms of categories that you define, you could then assign priorities to those categories, and that would be a pretty straightforward thing to do.
A
You know, but you'd have to go through the issue types and classify them. We thought about doing that kind of on the fly, by being able to tag them: for example, tag an issue — create tags, associate tags with issues, and then filter on tags. That could go towards what you're talking about.
A
Definitely. And I think that is an excellent idea. You know, we're getting training right now on kind of the safety culture and that type of thing, and we're reading a book, and it does mention that security now is kind of an intrinsic part of a safe system — it's hard to extract anymore. So I have been focused on requirements in terms of quality levels, but developing a threat model could certainly be something that is done for a higher-quality package.
B
Okay, so I'm actually curious: when you do make a fix, you mentioned that you propose it upstream as well, so it's available for the whole community?
A
It is. So when we make a fix, it's just like any other: there's, you know, a fix upstream, and there's not a notification per se.
A
Is that what you're asking about? There's just the normal process on ROS 2: William would submit a PR against a particular package where he found an issue. And sometimes it was things like — you know, you're creating noise because sometimes the analyzers are turned off by comments, so it was about making sure we can control the analyzers and not generate spurious issues because of a particular format, things like that.
A
There were all kinds of issues that he was addressing, both important and kind of trivial, just to increase the quality of output. But there are no special notifications or anything; it's just the normal process. We've been trying to upstream the code as much as possible, but you can see that over time, eventually, there could be—
A
You know, conditionalization. This could be generalized too — alongside Space ROS there could be more of a Safe ROS, and at some point there might need to be some code divergence, and you'd want to be as careful as possible with that. So you might have compile-time conditionalization, or maybe there's runtime configuration — you know, there's a lot that we could do before having to fork a module.
A
We want to avoid that, because in William's experience with Apex.AI, for example, it can be very costly to maintain a fork, so we're trying to avoid it as much as possible.
A
If I understood properly — I mean, we maintain a lot of the core ROS packages already, and if we find an issue through a static analyzer, it's just a matter of addressing the fix in the normal ROS 2 workflow, submitting that PR, and integrating it. We're tracking Humble right now with Space ROS, so as fixes are submitted into the Humble branch, you know, then we'll pick them up.
C
The artifacts generated by all those — are they publicly available, is that correct?
A
So what happens is — right now we have the Space ROS Docker image, and all of the analyzers are available in that Docker image. If you run, like, a colcon test, it'll run all of the analyzers, and we also have, you know, the ability to create the archive from that. Once you run all the analyzers, the output is in the normal place in the build directory, like the XML files — you know, the JUnit XML — and alongside those there's also SARIF.
A
We can run another process that then aggregates all of those and creates the archive for the dashboard. So it's available during the normal test cycle: you do a build, but you can also do a test, and the test phase is where it runs all the analyzers. But no, we don't have the artifacts posted.
A
Yet — that's where we want to go: to continuous integration where the artifacts are already available. They're not posted as of yet, but they can easily be generated using the Docker image. We're also integrating Earthly — Earthly, if you're not familiar with it, is kind of a combination of Docker and make, in a way: you can have different targets. So with Earthly you could do a build, which would execute all the different Docker commands; you could run the test, which would execute those; you could run the packaging phase, which would create the archive. So it's very easy to do yourself, and we do want to get to the point where all of this is just available after a normal nightly build. Earthly actually helps us do that, because it creates all of these different steps, and those steps will be identical whether it's on the development machine or the CI machine.
C
However, you may want to consider not making them public — because what if these tools were to pick up an actual, proper security issue?
A
An excellent point — I hadn't gotten there yet, with considering the security vulnerability reporting process and that type of thing. That's an excellent point, thank you.
C
I do have a last small question: if we were to add our own tooling to this framework — our own static analyzer, etc. — where do we start?
A
Certainly — it doesn't yet. The way you would do that is to look at one of the ament tools, like ament_clang_tidy: how it interfaces to an external tool like clang-tidy, and then how it generates the SARIF output. There are several examples in the ament_lint repository, on the Space ROS branch, of how to implement these tools. There's no documentation yet that explains how to do it for somebody else.
A
We've added a couple ourselves, but that's how it would be done, and I could certainly provide the information. Actually, that's a good idea for an article. We have a Space ROS website, and we have a Space ROS documentation site that's coming up, so we could write an article on how to add another analyzer and the steps to go through. I will make a note of that — I think that's a really good idea, if somebody wanted to add another one.
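[Editor's note: the shape of such a wrapper — run an external analyzer, then emit its findings as SARIF — might look roughly like this. The function name, finding format, and rule ids below are hypothetical; the actual ament_lint interface on the Space ROS branch differs in detail.]

```python
# Hypothetical outline of a lint-wrapper step in the spirit of the
# ament tools discussed above: convert an external analyzer's findings
# into a minimal SARIF 2.1.0 document. All names here are invented.
import json

def findings_to_sarif(tool_name, findings):
    """Wrap (path, line, message, rule) tuples in a minimal SARIF log."""
    return {
        "version": "2.1.0",
        "runs": [{
            "tool": {"driver": {"name": tool_name}},
            "results": [{
                "ruleId": rule,
                "message": {"text": message},
                "locations": [{"physicalLocation": {
                    "artifactLocation": {"uri": path},
                    "region": {"startLine": line},
                }}],
            } for path, line, message, rule in findings],
        }],
    }

# Example: pretend the external tool reported one finding.
sarif = findings_to_sarif(
    "my_analyzer", [("src/node.cpp", 42, "possible leak", "MEM01")])
print(json.dumps(sarif)[:40])  # serializes like any other SARIF log
```

A real wrapper would also invoke the analyzer (e.g. via `subprocess`) and parse its native output into those tuples; the SARIF side is the part the dashboard consumes.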
B
All right, let me add another quick question: do you plan to continue using the same tooling for code analysis as you have so far, IKOS and Cobra? Have you found that it works well in combination?
A
Yeah — so we want to, but we haven't; we need to make more use of IKOS. It's a very sophisticated tool, and we haven't yet. Cobra works quite well for what it does — it's great. You know, it's fast, it has a lot of different rule sets, and you can write custom rules once you learn how to do that. That's a nice tool.
A
IKOS is a very sophisticated tool, and it works basically by starting at the main function and doing kind of a deep parse, looking at the whole program. So instead of looking at individual source files, it's really oriented towards the entire program, and we need to get to the point where we have these full examples running — like I mentioned, the manipulator example that uses MoveIt, and things like that.
A
So once we have a full stack, we can make better use of IKOS by pointing it at — you know, in this case we'll have to do multiple nodes, but basically you want to point it at a main, and then it can analyze the code all the way through the dependencies. So I found Cobra to be very flexible and very easy to use, and IKOS very sophisticated, but we haven't made extensive use of it yet — we haven't realized its potential yet, let me put it that way.
B
Great — I think we're a bit over time already. If anyone has final questions or comments — or, Michael, do you have any requests for the group, anything that you hope we can help contribute to the project?
A
I think if you have any particular analyzers that you're looking at, that would be interesting to know — you know, which ones and why. And hopefully, in line with our open-source intentions, open-source analyzers that you think would be good to add.
A
That would be great information. And other points, like the one you mentioned that we hadn't gotten to yet — security vulnerabilities, and being careful not to just put information out there about a vulnerability; I think that's a very good point. So any of that, or if there are security requirements — those are kind of the areas I'm thinking about: the analyzers and the quality levels. What should we do, if anything, about security? If we were to define a quality level zero for ROS, what would it say about security?
B
Ah, so you're referring to setting some kind of security standard for those packages, yeah?
A
There's nothing really said, is there — or that I recall? You know, to achieve the highest quality level, is there anything security-related that should be done for a package? Is there an analysis that should be performed, or should there be consideration of a threat model for that package — anything that should be introduced?
A
You know, we can imagine that at the highest quality level there should be requirements specified for the package, right — something like that. The requirements could be analyzed to make sure that they're consistent and non-conflicting and all of that, so I can imagine statements at the highest quality level that are requirements-related.
B
Thank you so much, Michael. If anyone wants to contact you, we'll continue that over email and follow up on this — yeah, feel—