Description
Introduction to ClairCore and ClairV4 Release Update
Louis DeLosSantos (Red Hat)
OpenShift Commons Briefing 2020-01-16
B: It is a redesign of a current application called Clair, and we're going to go over a little bit of the features; it's basically an upgrade from the current Clair application. This is what we'll be discussing: the ClairCore architecture, its features, the timeline that we're looking at, and any contributions.
B: So this is the ClairCore architecture. We split the functionality of Clair into two Go packages, called Libindex and Libvuln. Libindex is responsible for concurrently scanning container layers and obtaining lists of artifacts. Right now Clair is heavily focused on Linux packages, but as time moves on we're going to be focusing more on software packages and other extensible sources. Libvuln is responsible for actually taking the results of an index and matching them against a database of vulnerabilities.
B
Something
in
the
background
that
happens
in
live
bone
also
is
just
keeping
the
vulnerability
database
up-to-date,
but
we
don't
go
into
that
too
much.
It's
mostly
focused
around
just
the
use
cases
so
to
obtain
this
functionality,
we
have
a
data
flow
which
exists
of
you.
Providing
us
a
manifest.
Manifest
is
really
just
an
outline
of
where
we
can
go
and
grab
the
layers
of
your
container.
We
scan
the
contents.
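As a rough sketch of that data flow, the manifest below is hypothetical (the type and field names are illustrative, not ClairCore's exact API), but it shows the shape of what a client hands over: an image digest plus, per layer, a content digest and a URI the indexer can fetch the layer tarball from.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Layer points at one container layer: its content digest and a URI
// where the layer tarball can be fetched from.
type Layer struct {
	Digest string `json:"digest"`
	URI    string `json:"uri"`
}

// Manifest is the outline handed to the indexer: the image digest
// plus the list of layers to go fetch and scan.
type Manifest struct {
	Digest string  `json:"digest"`
	Layers []Layer `json:"layers"`
}

// buildManifest assembles an example manifest; the digests and the
// registry URI here are placeholders, not real values.
func buildManifest() Manifest {
	return Manifest{
		Digest: "sha256:aaaa",
		Layers: []Layer{{
			Digest: "sha256:bbbb",
			URI:    "https://registry.example.com/v2/app/blobs/sha256:bbbb",
		}},
	}
}

func main() {
	b, err := json.MarshalIndent(buildManifest(), "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```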
B: So let's go into some of the features that ClairCore can offer, and some of the design points that led us through this new project. One thing we noticed is that if you look at the workloads of indexing versus actually matching vulnerabilities, they have very different performance characteristics, so it made logical sense to split those functionalities. That also means you can asymmetrically scale, depending on whether you are an upload-heavy or a request-heavy application.
B
Upload
being
you
know,
the
workers
which,
on
tar
layers,
do
a
lot
of
heavy
lifting
on
the
file
system
versus
the
read
heavy,
which
would
be
you
just
have
a
ton
of
people
who
are
requesting
vulnerability,
insights
based
on
previous
indexes.
This
also
allows
operations
engineers
to
distribute
the
application
over
a
network,
or
they
can
run
the
libraries
together
in
a
single
process.
So
it's
just
a
doula,
more
flexibility.
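A purely illustrative deployment sketch of that flexibility (these keys are made up for the example, not an actual Clair or ClairCore configuration file) might look like:

```yaml
# Upload-heavy shop: scale the indexer side, which untars layers
# and does the filesystem-heavy work.
indexer:
  replicas: 6

# Read-heavy side stays small: it only serves vulnerability
# reports from previously computed indexes.
matcher:
  replicas: 2

# A request-heavy shop would flip the ratio; a small install could
# instead run both libraries together in a single process.
```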
B
Test
ability
has
been
increased
because
now
what
we
can
do
is
we
can
test
those
two
functionalities
in
isolation.
We
can
test
our
indexing
works
correctly.
We
can
also
test
vulnerability,
matching
works
correctly
by
mocking
our
index
reports
that
was
not
previously
available
to
us.
One
of
the
definitely
one
of
the
design
decisions
was
to
increase
just
overall
testability
content.
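To illustrate what testing the matcher in isolation can look like, here is a toy sketch. The types and the name-only matching are simplifications invented for this example, not ClairCore's real model, which also considers versions, distributions, and architectures; the point is only that a hand-built index report can drive the match logic with no scanning involved.

```go
package main

import "fmt"

// Package is one artifact discovered during indexing.
type Package struct {
	Name    string
	Version string
}

// IndexReport is a mocked-up stand-in for the report the indexing
// library produces: just the packages found in a manifest.
type IndexReport struct {
	ManifestDigest string
	Packages       []Package
}

// match pairs indexed packages against a toy vulnerability map
// keyed by package name, returning name -> advisory IDs.
func match(ir IndexReport, vulnDB map[string][]string) map[string][]string {
	out := map[string][]string{}
	for _, p := range ir.Packages {
		if vs, ok := vulnDB[p.Name]; ok {
			out[p.Name] = vs
		}
	}
	return out
}

func main() {
	// A mocked index report: no layers were actually scanned.
	ir := IndexReport{
		ManifestDigest: "sha256:feed",
		Packages: []Package{
			{Name: "openssl", Version: "1.0.1"},
			{Name: "bash", Version: "5.0"},
		},
	}
	db := map[string][]string{"openssl": {"CVE-2014-0160"}}
	fmt.Println(match(ir, db)) // only openssl matches the toy DB
}
```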
B: Content addressability is a concept where a unique identifier, normally a SHA-256 hash, uniquely identifies the contents of either a tarball, a layer, or a manifest. In our case, Clair v2 had a concept of layer content addressability, but never at the manifest level. So we've made content addressability a first-class citizen: it's in our data model, and it's core to the way ClairCore works. That helps us, on the next slide, with a simplified data model. Now that we focus on content addressability, the data model actually became a lot slimmer, because we can use those hashes as primary keys.
B
The
data
model
now
also
has
first-class
support
for
stores
packages,
which
is
a
little
bit
of
a
linux
package.
Standing
nice
knowledge,
but
basically,
if
you
have
a
binary
package,
often
you'll
want
to
know
the
source
that
was
used
to
compile
that
therapy
to
didn't.
Have
this.
We
built
this
in
declare
core.
Also,
we
have
package
architecture
support
in
the
data
model.
Right
now,
which
means
we
can
filter
vulnerabilities
based
on
system
architecture,
the
vulnerability
data.
Consistency
basically
involves
us,
creating
data
consistency,
business
logic
in
our
vulnerability
database
application
code.
B
In
short,
that
just
means
that
when
we
go
out
to
a
vulnerability
source
on
day
one
we
go
and
we
index
everything
we
go
out
on
day
two.
If
things
have
been
removed
or
if
they
have
been
changed,
we
now
account
for
that
in
Prior
versions
of
Claire.
We
would
just
continually
store
the
different
elements
and
the
advisories
that
came
so
now.
We
actually
have
consistency
in
the
vulnerability
database
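The day-one versus day-two reconciliation described above boils down to a set difference between the advisories seen last time and the advisories seen now. A minimal sketch (the advisory IDs are made up, and the real business logic lives in the vulnerability-database code, not in a helper like this):

```go
package main

import "fmt"

// diff compares the previously stored advisory IDs with the set just
// fetched, returning which IDs appeared and which disappeared, so the
// database can be reconciled instead of only ever growing.
func diff(prev, cur []string) (added, removed []string) {
	prevSet := map[string]bool{}
	for _, id := range prev {
		prevSet[id] = true
	}
	curSet := map[string]bool{}
	for _, id := range cur {
		curSet[id] = true
		if !prevSet[id] {
			added = append(added, id)
		}
	}
	for _, id := range prev {
		if !curSet[id] {
			removed = append(removed, id)
		}
	}
	return added, removed
}

func main() {
	dayOne := []string{"RHSA-2020:0001", "RHSA-2020:0002"}
	dayTwo := []string{"RHSA-2020:0002", "RHSA-2020:0003"}
	added, removed := diff(dayOne, dayTwo)
	fmt.Println("added:", added, "removed:", removed)
}
```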
B: Standards-first. This is an effort of ours to basically always choose a standard type of advisory database to work with. Right now, OVAL is something that, you know, we utilize quite a bit. I think there might be other standards emerging, and ones that we would like to keep abreast of. We would like to always use a standard over, you know, custom HTML sites that have been scraped, or GitHub repositories that have been scraped. I think it helps bolster the standards, and it helps us by just having structured data. Extensible use cases: this is kind of something I'm really excited about.
B
You
know
the
models
that
we're
working
with
in
the
and
the
fact
that
we
split
you
know.
Indexing
containers.
Content
from
vulnerabilities
scanning
also
means
that
we
can
use
that
indexing
for
different
objectives.
Right
we
can
index
I.
Think
Hank
had
a
really
good
idea
about.
You
know
finding
private
keys
that
you
might
not
actually
want
in
the
container.
We
might
want
to
look
for
the
characteristics
of
those
files
report
that
in
the
index
report
and
actually
show
that
as
a
vulnerable
object.
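To make that private-key idea concrete, here is a toy sketch of the kind of file-characteristic check an extra indexer could run over a layer's files. The function and the PEM-header matching are inventions for this example, not anything that exists in ClairCore.

```go
package main

import (
	"fmt"
	"strings"
)

// findPrivateKeys scans file contents (path -> bytes as string) for
// PEM private-key markers and returns the offending paths, which an
// indexer could then surface in its index report.
func findPrivateKeys(files map[string]string) []string {
	var hits []string
	for path, body := range files {
		if strings.Contains(body, "-----BEGIN RSA PRIVATE KEY-----") ||
			strings.Contains(body, "-----BEGIN PRIVATE KEY-----") {
			hits = append(hits, path)
		}
	}
	return hits
}

func main() {
	// A pretend layer: one leaked key file, one ordinary binary.
	layer := map[string]string{
		"/etc/ssl/server.key": "-----BEGIN RSA PRIVATE KEY-----\nMIIB\n-----END RSA PRIVATE KEY-----\n",
		"/usr/bin/app":        "\x7fELF",
	}
	fmt.Println(findPrivateKeys(layer)) // prints [/etc/ssl/server.key]
}
```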
B: The tech preview is basically where a lot of our efforts to get to v1 are really going to show, because Quay is basically going to be the load testing of the application. We're rapidly developing toward v1; I'm pretty excited to get it into Quay and actually, you know, get some volume onto the services themselves. And then here are some areas where we could use some help; we're kind of looking for community contributions.
B: Anyone who is on software packaging teams (rpm, deb, all of those ecosystems), even just for knowledge share, would be fantastic. And then, being an internal Red Hat team, I think it would be a good goal for us to try to bridge the gap between our team and our internal set of teams, you know, to have a quick feedback loop around the way we package our containers, OVAL definitions, those types of categories. So yeah, I would love to see some more people reaching out.
B: So that's the overview of ClairCore. Clair v4 will basically be an implementation of ClairCore; that's why we kind of focused on ClairCore so heavily. And that will, again, be slated for late January 2020 for a tech preview. So here are some links: the ClairCore repository, the Quay project, my email address, and my coworker Henry's email address; he's the core maintainer for Clair right now. And then there's a link to Project Quay.
A: Awesome, so thanks, thanks Louis, that's really amazing to hear about the re-architecting and everything, so I'm really appreciative of you guys taking the time to share this with us. The only question that I really have is for people who are using the current Clair architecture: are there any migration issues, or anything they should be aware of, or a heads-up, if someone has already started and is currently using Clair, with this new re-architecting? Yeah.
B
I
think
we
have
to
kind
of
spend
a
little
bit
more
time
to
figure
out
a
clean
migration
path.
It
is
a
new
database
schema,
so
it's
not
exactly
compatible
with
previous
versions
of
Claire,
but
this
is
something
that
we
will
have
to
tackle
with
Quetta
io.
So
if
solution
will
be
around
the
corner,
yeah.
A
You
said
that
it
was
going
to
be
around
the
end
of
January
or
most
of
that,
we're
mid-january
now,
so
that
should
be
in
not-too-distant
future
too,
so
we'll
have
both
Louis
and
Hank
Henry
back
on
again,
not
too,
and
not
too
far
far
out
from
here.
Once
we
get
some
feedback
on
how
that
migration
went
and
see
what
the
effect
is
of
having
this
new
architecture,
hopefully
the
scaling
and
everything
will
be
wonderful.
So
thanks
again
thank.