From YouTube: Clair Comm Dev 2021 04 06
Description
Clair community development meetings.
Follow us at http://github.com/quay/clair
A
Everyone, this is the Clair development community meeting for April 6th. We have a couple of things on the agenda and a few presenters. I'm going to open this up by going over some of the tickets that were brought up in our last community meeting.
A
So we have this filter now; it's actually public. If you were to go to our community meeting agenda and scroll up a little bit: we're going to monitor action items in our upstream issues.redhat.com, so when things come up during the community meeting, we want to track them. These are also really great tickets for anyone who is watching and wants to begin contributing to Clair.
A
If you don't really know where to start, come to this link here and take a look at these issues; they're great places to jump in. So, we had a couple of things come up last meeting. We wanted to look into distro support in CentOS. There hasn't been much movement on that; our team at Red Hat is currently a little busy, since we're packaging up the Quay 3.5 release.
A
Linking vulnerability data with confidence between the RHEL databases and the CentOS package databases in a CentOS image: I think we need to either do a little bit of research on that, or just reach out to the CentOS teams on how to get more information and support. Distroless containers: Hank and I threw this around in the Clair collaboration chat, with a couple of people from StackRox as well.
A
It seems feasible. I made a comment in this ticket that if distroless containers are actually append-only and stay append-only, we could possibly support it today, with the caveat that if they do start actually mutating any of the per-package databases, we'll probably miss some things without a re-architecture.
A
You know, without a look at how to support that better. This is an interesting ticket that Yvonne brought up from customer engineering, and this will be prioritized. This is basically having the Quay UI inform the client when a container being scanned isn't supported by Clair.
A
It's just a usability thing, but I think it's important, right? I think, after we finish up the Quay 3.5 release, this is going to become prioritized, so clients will be able to understand whether Clair is saying, "Hey, I don't know what this image is," or whether it's saying, "Your image is fine; it's been scanned and nothing was present in it." Integration testing:
A
I came up against a wall with this, because of the way we're doing Clair initialization. We moved it to non-blocking, so Clair doesn't wait for all the data to be there. That actually makes this testing a little bit harder.
A
So I'm going to punt on this for just a little bit, until Hank and I collaborate on yet another brainstorming session on how initialization should work, and then implement granular health checks. It's still on the back burner. It's also a very good community ticket, just because it's not extremely difficult to dive into and can be sectioned off from the rest of the code pretty easily. So, yeah, those are the tickets that are in play as far as action items from the community meetings.
A
Let's start looking at agenda items. Hank, it looks like you have a couple of items here to kick off with, so if you want to, just go ahead and take over.
B
So, I don't know if anyone's actually tried to run Clair, but if they have, they'll hit some rate limits when trying to fetch the Red Hat databases. We've talked to the IT department internally, and they're unwilling or unable to change the way that works, so we're just going to implement a rate limiter globally. The plan is to have Clair just do a rate-limited 10 requests per second to a given host.
B
That's just what it's going to be. It'll slow the Red Hat requester down; just about everything else should be unaffected, because we're not making as many requests, or rather we're making a smaller, bounded set of requests. Basically, the only change is the per-host limit. Everything will still run in parallel, and when the Red Hat updaters start running, they'll just sort of sit in the pipeline until they get their token to go do their request. And then...
A
Yeah, with that rate limiting, where do we expect to actually block? Where are we sitting while we're being rate limited, right on the `Client.Do` call?
B
Yeah, and all the cancellation, everything works exactly the same.
B
No, there's no magic. The rate limiter is in Clair. It's not wired up to anything yet because of the... well, I'll get to it in my next items, because you need to go do some wiring in Claircore, but...
B
The rate limiter is there. It's very easy: it just uses a token bucket, polls it, and waits until it's able to get a token. The context cancellation plumbs through everything, so it should be pretty simple. Okay, the configuration rework. One component of this is: if we're going to use a rate limiter...
B
Obviously we need to make sure everything is using the rate limiter, as in, it's using the configured client that's going to honor it. So part of the Claircore changes for this is that I went back through, and previously we were only calling configuration methods if the user had passed some sort of configuration object in; now it gets called unconditionally if it exists, and the configuration mechanism just gets a no-op bit for that.
B
So there is a PR against Claircore that does all this. All of the updaters work like this now and support being called like that, so that work is done. I ran it for a couple of hours yesterday, just sort of sitting there; I modified the testing binary to just have the default transport return errors.
A
Okay, we can follow that same pattern, like the no-op thing I was doing in the metrics, or the enrichment specification: embed a no-op thing. If you wanted to follow the same pattern, it might just be nice, because then it just looks like: okay, if you need to do something that's more under the covers that you don't care about, just embed this thing.
A
Yeah, no, I think I'm in agreement with that. So let me make a note; let's actually make that a note for this.
A
So, would you say: make the Configure method mandatory for all of those updaters, scanners, and factories?
A
Yeah, so if they don't use it right now, will we panic? Because the default... the non-default... well, the default HTTP client is being used.
B
No, I mean, messing with the package-level defaults... I don't know if we want to do that, but I...
A
Oh, so that's how you did it for testing: you basically poisoned the default, and then, if it was used, it panicked. Gotcha, okay, gotcha, yeah. That's why, when we were talking in Slack, I was asking, are you gonna upstream that? But I guess you wouldn't, right, because you are poisoning the default. Probably not, yeah? Okay.
B
Yeah. And then, as part of this work, digging into this, it sort of also bled into some work with the air-gap stuff.
B
So I ended up doing some reworking of the RHEL updater and scanner, so that might need a little more scrutiny from people who care about it, which I guess is all of us, but yeah. The idea is to cut down on the number of side channels the RHEL updater was doing. It was spawning things; it was doing a little bit of code-smell stuff.
A
Yeah, I mean, I align with the interest in SQLite, mostly for an offline indexer, you know, no database required. So I'm not opposed to at least exploring opportunities to bring SQLite in. We'll just have to weigh the options, because right now even our scratch binaries are built with cgo disabled, you know.
A
Yeah, I mean, that might be where we want to start. I looked at that; I didn't really see how to initialize a database with it, so I think you'd still need the SQLite tooling on the node, or inside the container, because it won't create the file type for you. I think; I haven't looked that hard.
B
Yeah, I don't know, I'm not sure. My concern is mostly... I think, if we were to... The way the offline export works right now is that it just dumps everything it finds to a file every time. I think, if we had SQLite, you could tell it to look at an old version, and then it could use the fingerprinting mechanism instead of just starting from a blank slate every single time, which might be of interest.
A
Cool, all right. So I have a couple more things, just informational. Database metrics are now in, actually, this 4.1 alpha build, which I'll touch upon a little bit more in just a second, but yeah. We are now basically exporting all database query durations and counts, so you can see their rates. And if you are interested in finding out a little bit more about how we expose those: again, we keep most of our open designs in GitHub discussions, in the design tab.
A
So, here's our Clair and Claircore metrics pass one, and basically, if you come down to here...
A
The general API and general database parts of the specification have been implemented, so at this time you can basically go to Prometheus and explore metrics, anything starting with claircore. I do have to make a small amendment: this only covers database metrics in Claircore.
A
I just need to amend this a bit and add the fact that there are database metrics which start with just clair. They follow the same naming convention, though: it's claircore, then the store name, which happens to be, like, indexer, notifier, or the vulnstore, where we keep the vulnerabilities; then the database function; and then a name as a moniker, saying, okay, totals. This is a Prometheus...
A
...more of a Prometheus idiom: total is for counters, duration is for histograms, with the unit that is being measured. So if you're running Clair and you have been wondering how we are doing as far as query optimization, you can now actually pick those details out. So, if you're watching this or currently at the meeting and actually want to start looking at this stuff, you can definitely start submitting tickets about it.
A
You know: query lengths, durations. We'll take a look at them and just make sure that we're doing a good job optimizing SQL on our end. I'll be doing that work myself too; obviously, there are just a couple of things going on at the same time, so I haven't been able to sit with that yet. But it's a nice community ask, because performance affects everyone involved with the application. So, those are in. The enrichment work is about to kick off, so, you know what, let me show another link. There is github.com...
A
...the enrichment spec, and this is our specification for enrichment. If you watched the previous meetings, then you know enrichment is our way of bringing back NVD metadata and other types of auxiliary information to the vulnerability report. So, for instance, if we wanted to bring in Red Hat grading scores, we can now do that and place it into the vulnerability report, and then clients can use that extra metadata, basically, to supplement the data that's there.
A
But if you look at this link with the markdown files, there are two of interest. There's the specification, which goes over how we're actually going to implement this...
A
...this new metadata we're calling enrichment, the enrichment specification; and then, more recently, there are the implementation details, with the nitty-gritty of how this is going to happen. So this week I'm going to start on this work, and you can track it at issues.redhat.com.
A
I will send a link to that in a bit; I also think it's on the agenda. But we'll be tracking the work for Clair enrichments, and that'll be kicking off this week. So, yeah, if you're interested in that, if you are missing severity details and it's really affecting your use cases with Clair, just watch that work happening on issues.redhat.com. At the end of this meeting I will actually put a link there.
A
If you are interested, you can track it, and then you'll know as soon as we get auxiliary data back into the vulnerability report, and you'll be able to use it once again. And then, this completed as of yesterday: there's a 4.1 alpha build. 4.1 is going to be a pretty big release; I mean, we're going to have a ton of reliability fixes, plus this enrichment metadata coming out. So we split the release a little bit.
A
Four
one
now
has
four
one
alpha:
one
is
released
upstream,
you
can
go
and
grab
it
from
the
docker
repositories.
All
that
information
is
at
the
repository
declare
repository,
but
it
has
particular
releases
as
far
as
we
no
longer
block
in
clair
to
come
up
and
running,
we
don't
wait
for
initialized
data.
A
Notifier
is
more
efficient.
Now
this
actually
touches
upon
some
of
the
things
you
brought
up
beyond
so
we've
kind
of
got
all
those
changes
into
there.
There's
also
what
yan
is
going
to
cover
next,
so
I'll
just
wait
and
defer
to
him
to
actually
say
it's
now
in
the
401
alpha
build
and
just
a
couple.
Other
reliability
bug
fixes
dock
changes.
So
if
you
are
interested
in
that,
you
can
actually
just
look
at
our
change
log
on
the
claire
repository
in
the
clear
core
repository
so
yeah.
C
Okay, cool. So, as part of the release that Louis was talking about just a little while ago, I also implemented a change that's related to the OVAL data published by Red Hat. If you don't know what OVAL is, you can check out their site; I won't go into depth. I'll just say that it's an open standard describing how to track vulnerabilities, and Red Hat is producing a stream of data that conforms to the standard. By the way, all of the things you see here...
C
All of the web pages I'm visiting are also in the agenda document, so you can go over them if you are interested. So now, specifically to Red Hat: well, this is a simple directory which looks like this. You have information for RHEL 5 through 8, and then, if you go deeper into the hierarchy, you can see there are streams for specific products, so, for example, Ansible 2.9.
C
So, this is basically the definition of one vulnerability in one stream. There are a lot of them, and they basically look like this one.
C
What is important here is that, if you take a look at the class, you can see that the class is "patch", and that means a fix has already been released. It's also visible here in the ID that there is a Red Hat Security Advisory tied to this vulnerability.
C
If a record in OVAL doesn't have this affected CPE list, then Clair doesn't care about it. That's important information for what I'm about to say; but for this vulnerability, we have the affected CPE list.
C
Apart from those vulnerabilities of class "patch", we also have vulnerabilities with class "vulnerability", and there are two types of them. The first one is an unfixed vulnerability, and that basically means a security problem that has been identified but has not been fixed yet. As you can see, there is no security advisory related to this, and again, in the ID we don't see RHSA; we see just CVE. The other type...
C
This is actually a very strange item, as it is basically just a confirmation that a given vulnerability doesn't affect this package. You can see that there are some criteria for the vulnerability to match, and an unaffected vulnerability has its criteria structured in such a way that they will never evaluate to true. You can see that the package being installed and the same package not being installed must both be true at the same time, and, as you can surely understand, that will never happen.
C
So, for us to be prepared for this change, we need to do two things, basically.
C
The intent of those changes is to take those unaffected vulnerabilities and discard them as soon as we encounter them, because we really have no way of using them; they don't bring us any value from the point of view of security vulnerability scanning. As soon as we encounter them, we just discard them and we go on; we don't even create the entry in the vuln store. On the other hand, with unfixed vulnerabilities...
A
I mean, I think I understand what you did and why you did it. So basically, the straightforward answer is just that those unaffected vulnerabilities are a pointer for something on the Red Hat side, but they don't really mean anything to Clair. We would actually accidentally match them, right?
C
Yes, that's right. I mean, we would process them, and in the end, when the criteria are processed, we would find out that they actually do not evaluate to true; but that would be a lot of time spent on nothing, basically.
A
Yeah, totally makes sense. All right, cool. Yeah, I mean, the change makes sense and I'm for it. It does beg the question about the usability of those things in the vulnerability database, but, you know, I think it just might not be usable for us, while being usable for Red Hat somewhere, you know.
A
Yeah, it's quite the OVAL hack; it's like, yeah, just make an impossible condition. But it touches upon the fact that, I think at some point, in Clair v5 or something, we should start considering those OVAL conditional trees.
A
Yeah, yeah. It's just a little bit of a different architecture, because right now, you know, we decompose the report into streams, so we wouldn't be able to do that anymore; to consider those trees, you need the full report. You need to know what the distribution is, what's actually installed; you need more than just, "Hey, this is a package record; does it match this?"
B
Right, but only for some types of those trees, which are not common, so, like...
B
There are OVAL criteria like "file exists" or "file contains", but those don't seem to get used very much by distributions.
A
Yeah, I think, if I'm not mistaken, the Ubuntu databases might actually be like: it's running this distribution, and this package is installed, and it has version X, or something like that, and then you have to compile that all together. Which, yeah, I mean, if we support it, we should probably support... we'd have to basically grok through the databases we know, at least to get an idea of what conditions are in those trees, and then it'd just be...
A
You know, like an evaluation period: when someone wants to onboard a new distribution or security database, we'll just have to ask them, "Hey, what are the possible conditions? Do we support that?"
A
Okay, yeah. I mean, it's a little ways off, but it's nice to keep it in focus, because, I think, you know, it's also interesting: Clair has to play nice in a world where it's not just OVAL, either. That's kind of, I think, a reason why we didn't immediately take on that effort; it's like, yeah, we support OVAL, but we want to support other things too. So sinking a whole bunch of time into OVAL didn't make sense right now, but eventually it will.
B
Eventually, we want to do... sorry, just rambling. I also want to eventually do actual CPE matching, for real.
A
Yeah, we should have a sync-up; we should have a sync-up with StackRox, because they currently do that. So it might be that we can just kind of take how they're doing it and massage it into Clair v4, yeah. I mean, when it comes up, every time, everyone cringes in the room, so I don't know how well it works, yeah; but at least they have something working. I don't think, you know, we...
A
All right, cool. So, Diane Mueller added an item here for a deep dive on indexing. This is a talk I was planning on doing at some point, which looks like it'll be either April 19th or 26th; we'll probably put it on a new OpenShift Commons. It does a deep dive on the indexer: it shows how it's basically implemented as a finite state machine, how content addressability works, and basically its functionality; and we'll try to go as low-level as we can get and try to explain the data model.
A
Everything like that, because it can just help you learn the application a little bit more. So, that's all for the agenda items. Let's go back to those issues for a bit, because there are a couple of things I want to ask about.
C
Oh, by the way, Louis, you're not sharing the screen.
A
So, if we go back to these issues: I am kind of optimistic about at least taking a crack at this. Correct me if I'm wrong, Hank, but I see this as: if these containers are append-only, we can support this today, and we can support it better in the future, when we do model out deletions on the file system. But I think we can support it today, and if we can, I think we can do it pretty easily, and it just checks a box. I don't know if you have opinions there.
B
I mean, no, I don't think so, because we already know that people do dumb things, like install packages and then uninstall packages in the same container ancestry, or in different layers of the same container. And with distroless, we will never see the removals, because of the way it works, which is basically that every package gets its own file, its own Debian-database-formatted file, and we'll never see the deletions.
B
I mean, I think most of the build tools that do this just copy everything in, and...
B
We could throw it in, if we're willing to field some support tickets, yeah. We bake that in, but yeah.
A
Theoretically, in our vulnerability report we show the package databases where the packages were found, so we could in some way document: hey, if we found, you know, Debian or RPM packages outside of, or even in, the directory where distroless containers' package databases are found, consider those results beta. You know? Yeah. So, all right, let's spin on it; we'll talk about it in chat a little bit more. I mean, I think we're kind of busy, but yeah.
B
I mean, I think, right now, yeah, for a proof of concept of this, it would be, I think, pretty easy. Doing it correctly would take more engineering work; and if we want to just do a proof of concept, we need, like, a document, we need the documentation in line, like, good to go, so that we're not sort of dealing with that.
A
Do we want a UI element that says this, or anything like that? But that's cool. This one I kind of want to take on, because sometimes I like digging into Quay code; it's a nice little break from what we do on an everyday basis. I'm just waiting for it to get prioritized; I don't think anyone wants to do a merge into Quay until 3.5 is done, but it seems pretty easy. Integration testing I'm just going to pause for now, yeah.
A
So we should just consider, you know, it would be a nice re-look at that. But yeah, I mean, I would say at least start to PoC some of the SQLite stuff that you have in mind, if you have the free time, maybe hack on it; because, yeah, I mean, I just have a lot of optimistic ideas around SQLite and Clair becoming a little bit more embeddable and locally run.
A
Cool. So you're spending, you know, the majority of your time working on the rate-limiting stuff, right? The rate limiting and the configuration.
A
Yeah, I mean, I'd like to talk to someone who works around those teams to see; even if we just have, like, a brainstorming session and then come out of it saying, "Hey, no, it's not gonna work," you know, I'd be happier with that. But I just need... I need to find the contacts; I haven't.
A
It's hard for us to get those contacts; I mean, they're basically under our umbrella, so, cool. Is there anything else you're thinking about? The notifier work you were talking about, making it fully discussed: is that what we're kind of going with?
B
Okay, all right, yeah. Okay, that's probably a better way to start: making sure that configurations are honored everywhere, and then seeing if the combination of that and not eating all the RAM in the universe makes it work. Yeah, okay, that's where I'll start on that, then.
A
And then this lock work needs to go in, just because it's going to get rid of all the locking connections, right, or the connections used for locks. So, yeah. I actually spent a little bit of time just confirming; like, I spent just hours testing it, basically, because I got paranoid, because it's a big change and it reworks our locks. But it seems pretty iron-proof; I wrote tests that basically just spawned a bunch of random goroutines, with a random count, and just ran that thing.
A
For, like, hours. I didn't get any data races, didn't get anything unable to lock, no deadlocks. So I think it's good. I looked over the code a couple of times, but yeah. Now that I feel a little bit more confident with it and caught up with it, I'll probably start trying to implement that stuff. That should help with database connections, at least.
A
Yeah, yeah. So, what was I gonna say? Yeah: if we do change the notifier to do more of this stuff, I think we need a design doc, just because that can get a little hairy, and things are getting complex enough that the need to correctly identify how things integrate is becoming, like, paramount at this point. So we could definitely, like, collab on that, even if you want to take charge of it.
A
If we do get to that point. But yeah, I think we're both in agreement that, technically, that service should work such that, if you spawn more services or nodes, they can each parallelize the work of one notification: building one update operation's diff, basically, into a notification.
A
Cool. Yan, thanks for presenting; that was great. It's good to get that information out to Samantha.