From YouTube: Clair Comm Dev 2021 03 23
Description
Clair community development meetings.
Follow us at http://github.com/quay/clair
A: It's March 23rd, and this is the Clair community development meeting. I'm going to share my screen now to show the current agenda.

A: Yep, all right, cool. So, a couple of participants today: myself as always, Hank, probably as always, and then we have Ivan and Arun. Ivan, you want to introduce yourself real quick?

B: Yeah, sure. My name is Ivan. I'm a Quay/Clair support engineer from India, currently in Ireland; hopefully that will change sometime in the future. Thanks for inviting me to this meeting.

C: Hello guys, my name is Arun, based out of Bangalore, India. I'm mainly working on the Red Hat devtools analytics team, where we're building a software composition analysis platform. Mostly you'll see me working on various integrations, like integrating our SCA platform with tools such as the VS Code extension, IntelliJ, and Clair. That's what I do.
A: Very cool. So, a couple of things on the agenda today: I'm going to go over the enrichment specification, the move to non-blocking Clair initialization, and better integration testing, and then there's a question I wanted to bring Ivan in on, because I think it's more of an operations concern, so I want to get his opinion; that's one of the reasons I wanted to pull him in. Then Hank's going to go over some stuff around the changes to the notifier and our JSON work.

A: Ivan wants to talk a little bit about CentOS, and then Arun's going to introduce the remote matching concept that he added to Clair. So, first things first: the enrichment specification. We have this repository, the Clair enrichment spec under the quay org, and this is where we're housing the specification for enrichments. In the initial design of Clair, we spent more time making sure the matching was accurate, and we made a somewhat conscious decision to ignore metadata.
A
So
we
want
to
hit
those
those
two
concepts
we
want
to
bring
those
back
into
the
equation,
so
we've
been
working
on
this
specification
called
the
claire
enrichment
specification
and
I'm
not
going
to
dig
through
the
entire
thing,
but
I'm
going
to
try
to
give
you
a
quick
overview
of
basically
how
this
is
going
to
work.
So
on
the
last
step
here
we're
going
to
add
an
enrichment
field
into
the
vulnerability
report.
A
This
is
basically
the
schema
you
get
that
expresses
which
packages
were
found
in
the
container
and
what
vulnerabilities
affect
them.
Now
there
will
be
an
enrichments
field
with
a
string
and
an
array
of
raw
json
objects.
So
when
your
client
tooling
goes
and
deserializes
this,
it
can
handle
arbitrary
schemas
right
because
we're
just
giving
it
a
json
raw
message.
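As a rough sketch of what a client might do with that, assuming a hypothetical report shape (the spec itself is authoritative, and the exact key format is defined there), the enrichments field can be deserialized while treating each payload as an opaque JSON object:

```python
import json

# A hypothetical vulnerability report carrying an "enrichments" field:
# a map from a MIME-type-style key to an array of raw JSON objects.
# The key and payload shown here are invented for illustration.
report = json.loads("""
{
  "manifest_hash": "sha256:deadbeef",
  "vulnerabilities": {"cve-1": {"name": "CVE-2021-0001"}},
  "enrichments": {
    "message/vnd.example.map.vulnerability; enricher=example schema=https://example.com/schema.json": [
      {"cve-1": [{"source": "example", "score": 7.5}]}
    ]
  }
}
""")

# The client does not need to know the payload schema up front: it can
# keep each entry opaque and inspect the key string for a schema hint.
for key, payloads in report.get("enrichments", {}).items():
    print(key, len(payloads))
```

The point of the design is visible here: the report parses fine even if the client has never seen this enrichment type before.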
A: So if I'm a client looking at the vulnerability report, I'll look at the enrichments field, look at the key string, and the string will give me, if it can, a hint about whether schema data is available somewhere. This gives us a kind of in-between: Clair doesn't really have to care about the schema of the metadata, which I know, Hank, was one of your big goals, but it can still inform the client exactly how to interpret this metadata.

A: The next obvious thing your brains are probably doing is: okay, how does that key work? Inside this topic, the MIME type usage, this defines exactly what that key, the one the client will find on the enrichment data map, looks like.

A: I won't dig too much into it, because it's all here and you can read the specification, but it provides a MIME type. Well, I guess it is a MIME type; we follow the same structure.

A: It provides a MIME type, which expresses the container format (I'll get into that in a second); the enricher, which is the updater inside of Clair that actually put this data into the database; and a schema. Now, this is the important part: when the client is looking through the enrichment data, if it finds a schema, that's great. It can go right out to the internet, grab the JSON schema file, and understand the metadata.
A: We have a mechanism in here for when the schema is not hosted on the net and there is no place to retrieve it from. You will be able to place a well-defined type in there, which really just means that you've created a type, you've documented it in our documentation, and the client can go to the claircore docs and actually look up that type to understand how it works. And then there's the case where the enricher places no schema data whatsoever.

A: Now you might be wondering about this portion right here. This is a concept we have where we defined container MIME types, and what this allows is basically for you to wrap the embedded data with an associated vulnerability ID. We do this because, most of the time when you're looking at enrichment data, you want to map it to a vulnerability.

A: The rest of the spec is mostly just plumbing; it basically piggybacks on all the already-implemented updater work, our updater content business logic and whatnot. We already do this with vulnerability data, so now we're just extending the concept to enrichment data.
A: We called it enrichment because we're enriching the vulnerability report with auxiliary data. So that's the specification. Inside here I started working on the implementation, so this is the nitty gritty; it will probably get parsed out into actual tickets in our ticketing system, and it can be read over to understand literally the code changes we're going to make to make this specification work. This is still in a little bit of review; I think we're getting it into a pretty good place. But I do suggest that anyone who's interested in the current state, where we have missing severities for particular vulnerabilities, and in why that is and how we're going to fix it, will probably want to go take a look at this Clair enrichment spec.

A: This is slated for Quay 3.6; I'm assuming it'll be upstream before then, but yeah. So that's the enrichment spec. Just keep it in mind: it's currently being developed, and its whole purpose is to bring metadata back into Clair's results. Cool, so let me get back to the agenda.
A
So
the
move
to
non-blocking
claire
initialization,
currently
in
claire
v4,
the
stable
release,
the
the
releases
we
ship
with
quay
the
matcher
completely
blocks
until
everything
is
updated,
or
at
least
we
run
a
full
update
interval.
So
obviously
there's
some
downsides
to
that.
Right,
like
we
blocked
everything
we
didn't
even
return
like
health
checks,
the
entire
thing
locked
up,
so
we
just
changed
that
concept.
A
I
think,
after
spinning
around
with
it
and
some
group
discussion,
we're
moving
to
the
stance
where
claire
does
not
block,
but
claire
will
return
a
particular
http
code
if
it's
not
initialized
so
the
history
here
and
this
and
the
way
we
even
got
here-
is
that
there's
been
a
lot
of
tickets
around
bad
responses
in
claire
v2
responses
where
the
database
isn't
initialized.
So
it
says:
hey,
everything's,
okay,
but
it
wasn't.
There
was
just
no
data
in
the
database
yet
so
that
was
one
of
the
first
tickets.
A
We
got
and
yvonne
I'm
sure
you're
aware
of
that
issue
too,
because
it's
just
an
ongoing.
It
was
an
ongoing
thing
about
how
to
actually
do
this
correctly.
So,
where
we
landed,
was
it
won't
block,
so
you
can
actually
use
the
service
and
we're
going
to
return
you
an
http
code
that
a
client
can
basically
ask
for
vulnerability
reports
until
they
get
a
200.
A
Now,
if
your
client,
you
know,
doesn't
really
care
that
much
about
accuracy,
they
can
take
with
it,
they
can
take
what
they
get,
because,
while
we're
updating
will
still
return
vulnerability,
reports
they're
just
not
complete
data.
So
now
you
have
the
you
have
the
option.
You
can
either
take
what
you
get
real
fast,
knowing
that
you
might
request
it
a
little
bit
later
or
you
can
have
the
client
sit
there
and
wait
for
a
200
and
then
return
the
vulnerability
report.
So
that's
kind
of
how
we're
approaching
that
problem.
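That wait-for-200 behavior can be sketched as a small client-side polling loop. Nothing here is Clair's real client API; `fetch()` just stands in for an HTTP GET of the vulnerability report endpoint, and the 503 status is an assumed "not initialized yet" signal:

```python
import time

def poll_until_ready(fetch, retries=10, delay=1.0):
    """Call fetch() until it reports HTTP 200, returning the body.

    fetch() is expected to return a (status_code, body) pair; any
    non-200 status is treated as "not initialized yet".
    """
    last_status = None
    for _ in range(retries):
        status, body = fetch()
        last_status = status
        if status == 200:
            return body
        time.sleep(delay)
    raise TimeoutError(f"service never became ready: last status {last_status}")

# Simulated endpoint: answers 503 twice while updaters run,
# then a complete vulnerability report.
responses = iter([(503, None), (503, None), (200, {"vulnerabilities": {}})])
report = poll_until_ready(lambda: next(responses), retries=5, delay=0)
```

A client that prefers speed over completeness would instead accept the first body it gets, 200 or not, exactly as described above.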
A
All
in
the
effort
of
informing
the
client
that
hey
you
know,
these
results
might
not
be
completely
correct.
We
haven't
finished
initializing
the
database
yet
so
hank
you
worked
a
little
bit
on
that.
Did
I
get
that
mostly
correct.
D
So
that's
that's
not
correct,
but
we
will
I'd
start
serving
api
requests
right
away,
we'll
just
like
sort
of
swallow,
your
request
and
return
a
non
200
return
and
then
I
think,
as
part
of
this
effort
before
the
next
release,
we'll
probably
add
an
explicit
readiness
probe
at
some
point.
A
D
A
B
D
The
upside
is
any
sort
of
monitoring
system.
That's
trying
to
care
about
whether
the
api
port
is
like
up
and
accepting
traffic
can
now
do
that,
because
we'll
start
serving
traffic
immediately
and
not
wait
for
an
entire
updater
loop
to
run.
A
Yeah,
I
think
I
think
the
premise
really
came
from
the
fact
that,
right
now
we
have
the
tng
operator
correct
me
if
I'm
wrong
hank,
but
tng
operator
now
like
just
blocks
until
claire
is
available
and
claire
doesn't
become
available
until
we
run
what
could
be
a
rather
long
update
interval,
so
we
wanted
to
skate
around
that
problem,
but
also
be
able
to
tell
clients
like
hey
these.
Were
this
request
isn't
valid?
Yet
we
haven't
completely
initialized
everything,
especially.
A
D
D
The
http
server
until
everything
was
good,
yeah,
this
yeah.
This
makes
it
so
it
actually.
You
can
do
at
least
some
useful
work
with
it
as
soon
as
possible.
B: Yeah, the TNG operator actually blocks, because the validation will not go through until Clair responds with something. That's why it's blocking the whole deployment. That can be circumvented by just saying: hey, don't validate. And I'm also thinking that maybe health checks should not be connected to...

D: I mean, I think that's just sort of an artifact of the way the actual Kubernetes manifest is written right now, because we do serve a health check, in a different way, that comes up immediately.
D: More so that... I don't know, not to talk for him, but I told Alec what the differences were, and he said: yeah, I want to look at serving API responses.

B: I mean, I'm thinking that if we have a health check at port 8089 for overall Clair health, we can actually implement something similar to what Quay does: Quay returns a JSON that says these components are alive, these components are not alive.
B
Registry
that
is
hooked
up
to
claire
can
actually
know
when
to
start
sending
data
across
and
that
wouldn't
block
the
validator
either
because
yeah
we
would
see
that
claire
is
up.
That's
fine,
so
quake
can
continue
bootstrapping,
but
we
won't
send
any
manifest
to
it
until
the
health
point
says
hey.
We
are
now
available
because
the
database
is
updated.
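A per-component health document of the kind Ivan is describing from Quay might look something like this. The component names and JSON shape are invented for illustration; this is not an existing Clair endpoint:

```python
# Hypothetical per-component health payload, loosely modeled on the
# style of JSON a registry health endpoint might return. All names
# here are made up for the sketch.
health = {
    "services": {
        "indexer": True,
        "matcher": True,
        "updaters": False,   # still running the first update loop
        "notifier": True,
    }
}

def overall_ok(doc):
    # The whole service is "ready" only when every component reports True.
    return all(doc["services"].values())
```

With a payload like this, a registry could bootstrap as soon as the port is up but hold manifests back until `updaters` flips to true, exactly the behavior discussed above.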
D: Yeah, I definitely want to get it to be sort of like the Kubernetes API health endpoints, which do something like that, but we just haven't gotten around to it or needed it. Yeah.
A
Yeah
we
have
all
the
plumbing
set
up
to
basically
inject.
You
know
a
health
check
of
arbitrary
complexity.
We
just
haven't
been.
You
know
we
just
haven't
gotten
around
to
actually
writing
that
health
check
of
arbitrary
complexity.
So
it's
all
there
it
can
be
applied,
but
I'm
not
exactly
I'm
not
sure.
If
the
concept
we
have
of
a
non-blocking
start
and
your
health
check
concerns
are,
you
know,
mutually
exclusive.
I
think
they
basically
live
together.
B
A
A
I
mean,
I
think
we
all
have.
We
definitely
have
the
plans
to
get
there.
I
do
know
quays
quays
health
check
is
actually
pretty
nice,
so
I
think
we
we
do
use
that
as
inspiration
but
yeah.
I
totally
agree
with
you.
It'll
get
granular
as
we
move
on
cool.
A
Well,
that's
just
a
heads
up
to
you
know
watch
that
because
it
does
change
the
behavior
of
claire
a
little
bit
if
you
have
any
kind
of
like
mechanisms
that
are
sitting
there,
assuming
that
it
will
block
it
won't
no
longer
so,
you'll
have
to
actually
check
okay,
better
integration
testing.
So
right
now
we
have
a
pretty
poor
testing
story.
We
don't
really
we
build
via
tests,
but
we
don't
really
do
much
in
terms
of
verifying
and
especially
comparing
to
previous
builds.
A
So
I
was
thinking
about
ways
to
actually
attack
this
problem
and
what
first
comes
to
mind.
The
simplest
thing
is
that
we
have
a
local
development
environment
and
github
actions
allows
for
docker
compose.
I
think
I
mean
I
have
to
confirm
that,
but
it's
possible
that
we
just
run
the
local
development
environment
inside
github
actions
and
then
basically
have
some
kind
of
comparison,
logic,
hank.
We
talked
a
little
bit
about
creating
a
testing
harness
around
this
right.
A
So
that
would
basically
just
be
like
an
executable
that
knows
that
the
local
development
environment
is
up
right.
D: What does it look like? What are the actual tests we want to run? Because right now it's pretty fuzzy as to what a successful run looks like. It's very easy for a human to classify whether this is okay or not.
A: So one of the things I was thinking about is that somewhere we cache... so, the actual testing system, right? Let's say we started it tomorrow; it has no data. Just run with me here, conceptualize this with me. Let's say we started tomorrow: it generates vulnerability reports and index reports.

A: It caches those reports somewhere. Now, the next time we run it, it checks against the last build to confirm that things look the same. So the onus is on us to make sure that first run is correct, or at least as correct as can be; there could still be bugs, but we'd have to identify those in another way, pure comparison isn't going to be enough. I'm trying to scope this small to begin with.
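A minimal version of that cache-and-compare harness could look like the sketch below, with invented field names. Byte equality between runs would be too strict, so each report is first reduced to a normalized view before diffing:

```python
def normalize(report):
    """Reduce a vulnerability report to the fields worth comparing.

    Ordering and other run-to-run noise are dropped so two runs over
    the same image compare equal. Field names are illustrative.
    """
    vulns = report.get("vulnerabilities", {})
    return sorted((v["name"], v.get("package", "")) for v in vulns.values())

def compare_runs(previous, current):
    # Report what vanished since the cached run and what is new.
    prev, cur = set(normalize(previous)), set(normalize(current))
    return {"missing": prev - cur, "new": cur - prev}

cached = {"vulnerabilities": {"a": {"name": "CVE-2021-1", "package": "openssl"}}}
latest = {"vulnerabilities": {"b": {"name": "CVE-2021-2", "package": "zlib"}}}
diff = compare_runs(cached, latest)
```

A CI job would fail (or flag for human review) whenever `missing` or `new` is non-empty and unexplained.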
D: I don't think we'll be able to just byte-compare things, absolutely, so we'll have to write whatever that equality function is. But that sounds good; I think we can have that run. We might want to split it in half: one part runs against one set of images, a set of containers that we pushed up to our own Quay repository, to handle regressions, and then some that pull against live containers, to handle changes that are actually happening.
A: Live containers to evaluate any new bugs, basically, yeah. I get what you're saying, because we'll know exactly the differences in the managed containers, but then whenever we go with live containers there's a little bit of concern. It is what it is; if we pick containers that don't shuffle their tags around too much, or that are pretty dependable, it should work fine.
A: Okay, so I think with this strategy there's a lot of room for testing Clair, especially because Clair's a little complex to test, since it does all that deferment of work. That becomes a pain because as soon as you make one index you're caching, so it gets a little bit...
D: ...have something that's just authentication. Good, but then we need to be able to specify multiple ones, and multiple ones means with different power levels, and I don't know. I think if we do this, we should just shove it on the debugging, on the introspection port, to start with.
A: Okay, all right, yeah. We're splintering a little bit, but I'll make that another topic soon, which is: how do we start busting the indexer's caches so we can actually do things repetitively? Because it's just a pain in the ass to test right now without that. I just dump the database and start the database, dump the database and start the database; sometimes I'll run truncate commands and queries that I then lose, because I don't keep them around.
A: Cool. So now, a question that I wanted to pull Ivan in for as well, and a question that came up on a PR for me and Hank, where we were both like, I don't know: should Clair fail startup if introspection fails?

A: So if we can't connect to sinks for event data, or for some reason we can't set up Prometheus, which is probably not going to happen since it's all local, but whatever, it might happen, should Clair fail totally, or should it continue running with no metrics? I'm not exactly sure, because running in a production environment without metrics...
B: I think that the primary reason someone runs Clair is to get their containers scanned, and those scans should be complete, or at least as complete as they can be.

B: So if introspection fails in the sense that sources are not available and updaters cannot function, then Clair should definitely report this in one way or another. I'm not so sure about Prometheus. I mean, we haven't had any questions about the Prometheus metrics exposed by Clair yet, because the thing is rather new, so there is no data that I can share.
D: For context, when we talk about introspection we're talking about the second HTTP server that Clair spins up, which serves profiling information, health checks, and metrics.

D: Yeah, I guess my thinking is, if you're actually paying attention to the health checks, you'll be unable to read them, and the container will get pulled down on its own, though.
A: Yeah, but you do make a good point. One: yes, if you can't get the health checks, the system's going to pull it down anyway. But two: there's not a clear boolean like "introspection is on or off," because we can configure aspects of introspection.
D
Yeah,
I
mean
for
I
guess
for
historical
curiosity.
The
reason
why
I
like
implemented
it
as
everything
keeps
chugging
along
if
this
doesn't
come
up
is
because
I
was
like
running
a
bunch
of
these
locally
and
was
too
lazy
to
change
like
change
two
port
numbers
in
a
bunch
of
configs
and
didn't
care,
so
they
just
like
tried
to
try
to
open
the
socket
and
failed
and
kept
running,
because
that
was
easy.
B: I mean, this whole discussion about introspection connects to the second point, the discussion we had about non-blocking Clair initialization.

B: So if the health checker is returning a JSON, or something similar, an object that says these components of Clair are functioning and these components are not yet functioning, and we go towards that approach...
D: Yeah, that's fair, yeah. I guess this is sort of a question of how much misconfiguration we want to tolerate, and I guess, if you frame it like that, my answer is "less."
A: I think that's a good point too. I've made a note here that an emphasis on a good health check is going to clear the fog on a lot of these questions we have, because it will provide some granular detail about what's actually working and what's not. At that point we can make a more educated decision about whether the client pulls it down for that reason; it maybe moves the responsibility for what's acceptable or not off of us.
B: In my opinion, we could tolerate Prometheus going down. I don't see it as a highly critical component; metrics can be restored at any point.

B: Well, it depends on how frequently you actually check the health check.

B: And it also has a threshold. So if the threshold is, for example, five continuous errors, and you have a flaky instance or a flaky service that is going up and down constantly, then you might miss things. I don't know.
D: Oh sure. So at one point, a week ago, two weeks ago now, I was working on the notifier. We think we need some structural changes to the notifier, because the way it works now is that it takes one update, processes everything in one node's memory, and then sends it off to be delivered.
D: ...the way updates are structured, when they show up for the first time they might be quite large, and they can show up at any time; it's not like we can just whitelist new ones. So we need to split that into, I don't know, a checkpointing work model that gets spread across everything. I started on the design of that a little bit. But before that I did some efficiency work, which included reworking how we handle JSON.
A: So when I'm working in the HTTP layer now, are there changes I need to consider? Do I have to use the codec package that you added?

A: Cool, so basically just look at the functions in the codec package. That's all I need to care about when I'm just, you know, munging JSON?

D: Yeah, when you're reading and writing JSON, just use those. The PR that pulled them in changed all the handlers to use those packages, so just read a handler and make it look like that.
B: Yeah, so this is quite a huge issue for a bunch of our clients who are still using CentOS images. And not only is it a problem for images they are building on CentOS, there are also a bunch of other images, like open source projects, that used and are using CentOS as base images.

B: ...Clair v4 scans differently, it uses different sources, and I understand that because of that the functionality of the new Clair is different from the old Clair. But I really think we should do something about enabling CentOS scanning, at least for as long as CentOS is alive, which it still is. And if we say that CentOS cannot be scanned, then we should move it to the completely unsupported list of operating systems.

B: Currently, the problem is that when you push an image that is based on CentOS, either 7 or 8, to Quay, and it's being scanned, the result Clair sends back is "passed" and doesn't show any vulnerabilities. We had a case where a client pushed the same image to quay.io and to their local Quay; quay.io returned a bunch of vulnerabilities while their local Quay did not, and there was a question of why these things are so different.
A: Yes. My thoughts on this right now are that I would love to support CentOS; I think whether we can do that reliably needs a research spike. I know that in Clair v2 there were quite a few issues with package alignment around matching.

A: I personally have not done any research into that, so I need to do that research, or talk to an individual who knows about CentOS packaging versus RHEL and whether they are completely compatible. That is: if I'm searching through an RPM database on CentOS, will those package names and versions match up directly with vulnerabilities in the RHEL ecosystem?
D: ...I only focused on Red Hat, I think, so CentOS was completely out of my scope.

A: Yeah, that's totally cool, but we do have...

D: As far as I know, that's not the case. Given how CentOS, classic CentOS I guess, 8 is a downstream of RHEL, it's usually the same, but not always, which is part of the ambiguity that we wanted to avoid by using security databases provided by the distribution publishers. And I don't think this is going to be solved; I think it's only going to get worse with Stream, where that's now an upstream of RHEL.

D: So the RHEL data is even less relevant to the CentOS packages.
A: Yeah. But now, to your other point as well: I do think we need some kind of mechanism that says "this container is not supported." Especially with Quay doing that, we're going to need to take a look at where the appropriate place is to put that business logic.
A: So that's interesting, yeah. We can definitely play around with that. I mean, it would be a tiny Quay PR that's just: hey, if you don't see a distribution, say that this container is not supported. The only thing is, does it run into issues where the user knows it's, say, a Fedora container, but we just didn't identify it correctly, like there's an os-release file missing? Or is that okay?
A: Okay, okay, so that would be feature parity. Yeah, that's a good point. At bare minimum we could take a look at that and maybe approach the problem with just a quick Quay PR, but we'll have to do a little bit more research on the state of CentOS, yeah.
B: I shared a link to Aqua Security's Trivy, which is used by Harbor. I mean, Harbor uses it now, because Clair is being deprecated by Harbor. It does support CentOS completely, and it also supports distroless containers. So we might check that out as well, because we've had questions about distroless containers.
A: Yeah, I think... I mean, it's a small blip, but distroless is on our radar. It got brought up, and I did do some early analysis and it seemed possible. I don't think it's a big hurdle; we'll just have to take another look at it. But I do agree, I think that's a hot topic, and I don't think it's really that hard for us to support at this time, so we'll put that on the radar.
A: Okay, I'll type up some notes. Arun, do you want to go over the remote matcher? Ivan, are you good with all that?
C: So basically, Clair mainly consists of two parts: libindex and libvuln. Libindex is responsible for extracting the packages and versions from the container layers, and it produces the index report. The index report is fed into libvuln, which basically consists of two major parts, the matcher and the updater. The updater fetches advisories from various publicly known sources and populates the database, and the matcher matches the index report against the database and produces a vulnerability report.
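That two-stage flow can be sketched as a toy data pipeline. None of these function names are Clair's real APIs; they just mirror the libindex/libvuln split described here:

```python
# Toy sketch of the flow described above: an "indexer" extracts
# (package, version) pairs from layers into an index report; an
# "updater" flattens advisories from public sources into a database;
# a "matcher" joins the report against that database. Illustrative only.
def index(layers):
    # Collect package/version pairs found across all container layers.
    return {"packages": [pkg for layer in layers for pkg in layer]}

def update(sources):
    # Merge advisories from each source into one lookup table.
    db = {}
    for advisories in sources:
        db.update(advisories)
    return db

def match(index_report, db):
    # Produce a vulnerability report from the join.
    found = {pkg: db[pkg] for pkg in index_report["packages"] if pkg in db}
    return {"vulnerabilities": found}

layers = [[("openssl", "1.1.1")], [("zlib", "1.2.11")]]
db = update([{("openssl", "1.1.1"): ["CVE-2021-0001"]}])
report = match(index(layers), db)
```

The remote matcher discussed next swaps out only the `match` step; indexing stays local.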
C: It bypasses the actual matcher and updater from libvuln. Well, it doesn't actually replace them; it's kind of an add-on to the existing matcher infrastructure, yeah.
C: So basically, the purpose of the remote matcher is to talk to an external service where you can get the vulnerability matching done. The main purpose of this is to leverage, for example, a security vendor API, where you may not get the complete database to populate into your local database, but you can make use of the security vendor's APIs to do the matching. You can also use it for use cases where your org has a set of allowlisted packages and you want to check the container against that allowlist, because you don't want to ship anything which is not in the allowlist. You can make use of the remote matcher for all these kinds of use cases. Okay, so why do we want to do this, specifically for the work we are trying to do? Before getting into the nitty gritty...
C: I just want to give some introduction to our platform, which we have in Red Hat as part of the devtools team. The team name is Red Hat CodeReady Dependency Analytics. We build a software composition analysis platform focusing mainly on security analysis, dependency analysis, and license analysis. It's a hosted platform; it is hosted on OpenShift (OSD), and it exposes a set of RESTful endpoints to perform all the analyses listed over here.
C: We also have various integrations in place, like VS Code and IntelliJ, where you get an in-IDE security analysis experience, so you can do all the security analysis without leaving your IDE. And currently we are focusing on integrating our platform with Clair, so that you can do the same with container scanning as well.
C: Right now our platform supports four ecosystems: Python, Node, Maven, and Go. We support vulnerability analysis for all these repositories, and the main point here is that our vulnerability data partner is Snyk. Most of you folks will already have heard about Snyk; they provide a very reliable and good vulnerability database.
C: Okay, so the main reason for building this remote matcher implementation is that our security data partner is not allowing us to share the database with Clair. They want the data to be served through our layer, through our hosted layer. That's why we built this remote matcher concept, with the help of Louis.
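In other words, a remote matcher trades the local database join for a network call: ship the package list to the vendor-backed hosted API and translate the response into vulnerability report entries. The payload and response shapes below are invented for this sketch; `post()` stands in for the actual HTTP call, and the real hosted API will differ:

```python
def remote_match(index_report, post):
    """Match packages via an external service instead of a local db.

    post(payload) stands in for an HTTP call to a hosted vendor API
    (for example, a Snyk-backed endpoint); the payload and response
    shapes are invented for this sketch.
    """
    payload = {"packages": index_report["packages"]}
    response = post(payload)  # e.g. requests.post(url, json=payload).json()
    return {
        "vulnerabilities": {
            hit["package"]: hit["advisories"] for hit in response["hits"]
        }
    }

# Fake vendor service for demonstration: flags only lodash.
def fake_post(payload):
    hits = [{"package": p, "advisories": ["SNYK-XYZ"]}
            for p in payload["packages"] if p == "lodash"]
    return {"hits": hits}

report = remote_match({"packages": ["lodash", "left-pad"]}, fake_post)
```

The same shape covers the allowlist use case mentioned earlier: the remote service would return hits for any package not on the org's approved list.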
C: Yeah, so the next one. If you are a VS Code fan and you want to see what we are really doing, you can just download this extension and give it a try; it's gotten pretty good downloads. So this is what we are finally trying to realize: as I said, we want to make use of the hosted API, which we are exposing, from Clair.
C: So basically, we want to propagate all this information to the OpenShift dev console through various layers: from the remote matcher it goes to Clair, from Clair it will go to Quay, and from Quay, via the CSO (Container Security Operator) through a CRD, it will reach the OpenShift dev console, where developers can see all the vulnerabilities associated with their container image deployed into the cluster.
Very
cool,
so
will
we
eventually
see
language
support
in
claire
would
extend
it
to
that
supported
language
support
list
that
you
had
is
that
the
overall
goal,
because
right
now,
I
know
that
you
have
something
in
flight
for
java,
but
is
your
expectation
to
continue
the
same
path
to
get
python
node,
you
know
go
into
the
remote
matching
facilities
in
claire.
C: Yes, yep, definitely, Louis. As you said, currently the Maven support is in flight; basically, the indexer part of that work is done and it's kind of in the testing phase. Anyway, the remote matcher out of the box supports all four ecosystems mentioned over here; the only thing is that we need to take care of the indexing part.
A: Very cool, yeah. I mean, that'll be a great addition. There are a few caveats with remote matching and notifications: I don't think we have a great way to bridge them together, because notifications require us to understand when databases have been updated, and because you have a remote database here, we don't have that concept. But we might want to spend some time in the future circling around whether we can get that data.
A: Somehow, maybe, we can bridge the systems to work together. Obviously, when we first designed notifications we didn't have remote security databases in mind; we assumed that we would always be holding the data and would understand when the updaters go and grab new vulnerability databases. But given enough brainpower, we might be able to bridge the notification system into the remote matching concept.
C: ...note that currently the integration work which I did only supports the connected environment; for the air-gapped environment we still don't have a working solution. Right now we are focusing on that as well.
C: No, actually, the contract is something like: we can serve the data through our layer. We can't deliver the data as a whole, but we can serve partial data through our layer. So that's the contract. Probably we can think of having some component which can go into the disconnected environment and act like a remote matcher.
A: Cool, so we're about at the end of the agenda. Brad, I see that you have joined, if you guys don't mind waiting another couple of minutes. Brad, do you want to introduce yourself?
E: Good morning, yeah, hi. I don't know if my video is working... there we go. So yeah, I'm Brad from the AWS ECR team. For our image scanning solution we're currently using Clair v2, and we are looking at migrating to Clair v4. Just trying to get a feel for the roadmap, what's going on, and what's coming down the pipeline.
A: Very cool, yeah. We recorded this session, so you'll be able to play back anything; we're actually at the end of it now.
A: But did you have anything you wanted to bring up specifically, or will we just wait till the next agenda? I am curious about poking you, because I don't know much about how AWS is using it on your back end, and I think there are some really good conversation points around there, especially with Clair v4. I don't have anything particular, but I'm curious: just with your experience so far, do you have any comments or concerns?
A: Awesome. Well, it's great to meet you; I look forward to hearing more from you. So yeah, we'll wrap this one up. I'll drop the video in the agenda so you can catch up on anything that you missed. I appreciate it; great presentations by everyone.