Quay v3.0 Release Update and Road Map
Dirk Herrmann (Red Hat)
OpenShift Commons Briefing
hosted by Diane Mueller
@openshiftcommon
https://groups.google.com/forum/#!forum/quay-sig
All right, well, welcome everybody to another OpenShift Commons briefing. This time we have Dirk Herrmann, who is the product manager for Quay, and who's going to give us an update on Quay v3: what's in it, what's new, and what's coming down the road. We're going to use the same format that we usually do: we're going to let Dirk give his whole presentation (there are no live demos), and then we're going to have live Q&A afterwards; there are a few people monitoring the chat.

Very cool that so many of you guys are attending this session.
My name is Dirk, I joined Red Hat ten years ago, and I did a couple of different things: I'm the guy behind the Red Hat Container Catalog and the Container Health Index, and last year I had the great opportunity to take over the responsibility for Quay, Red Hat Quay and Quay.io, which I think is one of the best roles. I am a Red Hatter, I can actually say that. In this presentation I will talk a little bit about Red Hat Quay, how we approach the product development, what's new in Red Hat Quay v3, which we released already on May 1st, what's coming out soon, and what's coming up in the long-term planning; so I will talk a little bit about the roadmap of Quay. So let me start with explaining what Quay is: Quay, effectively, is a registry.
So it's a registry, and if you're doing containers and Kubernetes, you definitely need a container registry. The challenge I have on the registry side: the registry is really not as sexy as many other things in the stack. You just need it, but it's not really sexy. On the other hand, we on the Quay side are pretty proud, because we believe the registry is much more than just storing plain binary blobs. The use cases we are satisfying for our customers go far beyond just storing the blobs.

So we are talking about much more sophisticated use cases, featuring content ingress points, content federation or distribution. Keep in mind that the registry is the only endpoint Kubernetes directly interacts with; Kubernetes doesn't care about the content inside an image, it only pulls the image as an application specification.

This makes the registry pretty important in Kubernetes, especially in the security context, and this also includes topics such as being the single source of truth for all the relevant metadata, which then is supposed to be used on the platform itself; and there are a couple of other things which are pretty relevant in the context of an enterprise registry. From a Red Hat standpoint, we effectively have two different offerings.
B
We
have
Greta
Quay,
which
is
the
product
Li,
the
private
registry
product
and
the
FD
hosted
server
is
called
Cueto
and
one
of
the
great
advantages
and
one
of
the
key
differentiators
of
quai
is
we.
Typically,
we
have
an
release,
process
or
model
in
place,
which
means
that
we
develop
a
feature.
Then
we
push
it
into
quello
or
we
make
it
available
to
selected
customers
and
then
be
after
this
has
been
stabilized.
B
We
make
it
globally
available
and
after
this
has
been
stabilized
again,
then
we
finally
ship
the
product,
which
really
means
that
we
have
a
couple
of
hardening
and
really
large
scales
guesting
long
before
we
ship
the
products,
especially
since
we
know
that
a
couple
of
our
customers
are
running
away
at
a
massive
scale.
So
it's
not
about
a
single
cluster
deployment,
which
was
a
few
hundred
images
in
there.
We
are
talking
about.
We are talking about large-scale deployments, and there are a couple of features which are needed in order to satisfy the needs of the customers we are serving with our product, and this is much more than just the plain registry itself. On the registry side, it's also the core technology: of course it has to be fully featured, with high-availability capabilities.

We are pretty proud that we not only support several standards and specifications, but that we also provide long-term protocol support, which means you can push images using a Docker client which already runs the V2 version and then finally pull them from an old V1 client. This is one of the unique features of Quay. We have an additional feature called the application registry, which allows you to store application content in the registry as well, and of course we as Red Hat provide enterprise-grade support for the product, and of course this also includes that we ship regular updates. We have a dedicated lifecycle of two years with an n-1 policy, and we plan to ship minor releases every three months. But there are several surrounding topics which really are quite important, and those are the key elements of Quay, and some of them are really unique to Red Hat Quay. Let's talk a little bit about those topics; security is obviously one of them.
We have built-in vulnerability scanning powered by Clair. Quay features advanced logging and auditing capabilities, and we have a great notification feature which can be used to do alerting and even trigger actions on various other endpoints using webhooks. Of course, Quay has a couple of features in the area of content distribution: geo-replication is one of the most well-known features, and there is the repository mirroring feature, which I will talk about in a minute, and we also plan to improve the support for air-gapped or disconnected environments pretty soon as well. Quay features a couple of access control features: this includes multiple authentication providers, and it includes a fine-grained role-based access control model built into Quay, with support for organizations, teams, and users within those teams. This is pretty powerful, especially since Quay is typically used in large deployments and large organizations, where you have many, many different teams and users within those teams, with different permission levels and so on.
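To make the organizations/teams/users model concrete, here is a tiny sketch of a role-based check. This is purely illustrative: the role names and the data layout are assumptions, not Quay's actual data model or API.

```python
# Illustrative RBAC sketch: organizations contain teams, each team has a
# role, and users inherit permissions through team membership.
ROLE_ACTIONS = {
    "admin": {"pull", "push", "configure"},
    "creator": {"pull", "push"},
    "member": {"pull"},
}

ORGS = {
    "engineering": {
        "teams": {
            "release": {"role": "creator", "users": {"alice", "bob"}},
            "readers": {"role": "member", "users": {"carol"}},
        }
    }
}

def allowed(org_name, user, action):
    """True if the user gains the action via any team role in the org."""
    for team in ORGS[org_name]["teams"].values():
        if user in team["users"] and action in ROLE_ACTIONS[team["role"]]:
            return True
    return False
```

With this layout, `alice` can push through the `release` team, while `carol`, a member of `readers` only, is limited to pulls.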
I guess I already mentioned that scalability is one of our differentiators. Before we release, we do this massive-scale testing at Quay.io, which is probably one of the biggest registries out there. It's much, much bigger than many other registries, probably 20 times bigger than comparable registries, for example, and this huge-scale deployment comes with a couple of other things, and those are again features of Quay. One of them is really a pretty unique feature, which is the real-time, zero-downtime garbage collection, which runs in the background and automatically cleans up unused blobs, and another feature in this area is the automated squashing, which also supports the scalability of the registry itself.
B
Other
web
hooks
is
one
of
the
automation
feature
which
are
really
pretty
powerful
on
the
quayside,
and
then
there
are
a
couple
of
features
which
allows
you
to
better
integrate
quaid
into
your
existing
landscape.
This
is
the
default
use
case,
so
registry
is
typically
not
a
standalone
or
isolated
thing.
It's
typically
directly
embedded
into
your
the
icd
pipeline
and
corresponding
workflows,
and
this
requires
that
you
have
an
extensible
api
which
is
a
default
feature
of
quade.
You
probably
want
to
have
something
like.
B
Oh,
oh,
it's
integration
and
you
definitely
want
to
use
something
like
that
hooks
or
the
robotic
Cowen's
we
already
have
in
quake
since
a
couple
of
years
million
times.
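As a sketch of how such webhook-driven automation might look on the receiving end, here is a minimal consumer. The payload fields (`event`, `repository`, `tags`) are assumptions for illustration, not Quay's actual notification schema.

```python
import json

def handle_notification(raw_body: str) -> str:
    """Decide what to do with an incoming registry webhook payload.

    Hypothetical payload shape, e.g.:
    {"event": "repo_push", "repository": "acme/app", "tags": ["v1"]}
    """
    event = json.loads(raw_body)
    if event.get("event") == "repo_push":
        tags = ",".join(event.get("tags", []))
        return f"trigger CI for {event['repository']} tags {tags}"
    return "ignored"
```

A real consumer would also authenticate the sender and map events onto CI jobs or chat alerts.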
So all those features are really what we define as, and why we call it, an enterprise registry, which obviously goes far beyond a plain registry that stores a couple of blobs and images. And if we look a little bit deeper into how we approach the Quay development model: this is a slide
I used at the last Summit, where we presented together with one of our customers we are working closely with, and where I tried to explain how we develop the product. This relates especially to the mid- and far-term future we will talk about in a minute. So obviously we as Red Hat are the vendor: Quay came to us as part of the CoreOS acquisition, and now it's a Red Hat product. So we are the vendor, of course, but this doesn't mean that there is nothing else we need to consider.

We have our own roadmap, we have our own planning, but of course a couple of other things influence everything we are doing on a daily basis, including our vision and roadmap on the Quay side. One of them is standards. I just picked one we've used pretty frequently in the past, which is NIST SP 800-190, a special publication from NIST. This is the container application security guide, written by three different people, and I typically use this document as a very good example to explain the importance of the registry.
This special publication talks about five major risk areas: the image, the registry, the orchestrator, the container host operating system, and the running container. Those are the five major risk areas, and there is a couple of risk and remediation guidance in this document, and I typically use it to explain how the registry fits into the broader context. In the Red Hat specific context, Quay fits into a broader picture together with OpenShift, RHEL CoreOS and other products we as Red Hat own, and other technologies our customers are typically using, including partner offerings.
I also use, to explain how Quay fits into a much broader landscape, a customer-specific view which not only shows Red Hat products but also third-party partner offerings and, of course, your own customizations and custom products or technologies you're using. So this is one example; there is a Commons briefing out there, so you can watch it online.
B
There's
a
video
recording
of
the
Commons
briefing
I
did
last
year
with
the
city
of
twistlock
on
this
particular
topic,
and
there
are
a
couple
of
YouTube
videos
out
there
on
new
steak
and
other
channels
where
I
talk
with
the
CTO
of
twistlock
again
and
the
other
guy
from
NIST
who
wrote
this
special
publication.
So
this
is
one
area
we
are
working
on
another
area
which
might
be
worse
to
mention
@q
Khanna
you.
B
We
recently
had
a
couple
of
meetings
with
several
vendors,
where
I
have
to
say
one
of
the
guys
anyways
one
of
the
guys
in
the
church
was
one
of
the
cofounders
of
Quay,
who
has
been
really
driving
the
whole
conversation
to
define
the
future
vision
and
direction
of
several
standards
and
protocols
around
registries.
This
is
this.
Really.
It
probably
makes
us
unique:
we
have
the
expertise,
we
are
driving
the
long
term
vision
and
direction,
and
this
is
pretty
very
powerful.
Another
big
area
is
partners,
so
we
are
working
closely
with
several
partners
out
there.
B
So
it's
not
an
exception.
It's
really
the
reality
and
that's
why
we
are
working
with
so
many
partners
pretty
closely,
and
there
will
be
probably
a
couple
of
news
coming
out
pretty
soon
on
this
one
but
of
course
the
10th
row
of
everything
we
are
doing
our
customers
yeah.
So
we
have
thousands
of
customers.
We
are
working
very
closely
with
and
I
just
picked.
two slides I used at the Red Hat Summit, where I talked with BP about the collaborative approach we took with this particular customer. We started with a customer survey, basically reaching out to several customers to get additional input and feedback on a very complex topic we wanted to work on a little bit; then we attended a roundtable in London, went back to this customer and did a two-day workshop with them, and then we started with a POC and expanded step by step.
B
This
collaboration
as
well
and
this
customer
really
demonstrated
okay.
What's
the
value
coming
out
from
this
collaboration
for
the
customer
and
of
course
we
have
all
view,
what's
the
value
coming
out
for
us
there,
and
so
we
have
a
couple
of
customers.
We
are
working
very
closely
with
and
one
of
the
goals
in
the
fully
next
six
to
nine
months,
as
we
need
expenses
and
two
scalars
even
broader
and
work
with
more
customers
and
the
broader
community
to
get
more
feedback,
more
inputs
and
more
collaboration
and
contributions.
B
All
of
them-
and
this
points
me
to
the
community
aspect
so
Quay
as
of
today-
it's
currently
probably
the
only
redhead
product
which
is
not
open
sourced.
Yet
so,
as
you
can
imagine,
we
have
an
one
half
percent
open
source
commitment,
so
we
will
open
source,
we
will
open
source
Quay,
and
so
we
are
effectively
already
working
on
it
and
then
I'm
quite
glad
that
Dianne
volunteered
to
help
us
exactly
Drive
in
this
particular
project,
as
you
probably
can
imagine
for
us,
it's
not
sufficient
to
just
throw
this
whole
school
over
a
fence
yeah.
B
So
this
is
not
the
idea
that
we
just
open
a
github
repository
and
say:
hey.
You
have
access
to
the
source
code,
it's
no
open
source
and
we
are
done.
That's
definitely
not
the
ideas
of
the
ideas
to
really
build
a
sustainable
community
with
our
customers,
with
the
open
source
community
risk
partners
to
really
build
something
meaningful
out
of
it,
and
we
are.
We
believe
we
are
in
a
very
well
in
a
very
good
position
to
do
that.
We
have
to,
of
course,
the
egg,
the
open
source
expertise.
B
We
have
a
couple
of
other
open
source
technologies
and
product
at
wet
head.
We
are
deeply
integrating
with
yeah,
so
we
have
everything
we
need.
We
just
need
to
execute
it
on
the
PM
side.
I
have
course
I
need
to
balance
all
the
lip
that
we
still
develop
the
product
they
still
develop,
features
to
bring
additional
value
stores
customers.
So
this
is
running
in
parallel
and
there
will
be
a
couple
of
great
news
coming
out
soon
and
effectively
via,
as
I
said,
the
only
brokenness
and
we
just
started
the
execution
a
couple
of
weeks
back.
B
Let
me
quickly
talk
about
what
Cuevas
we
because
this
has
been
explicitly
called
out
in
the
invite
and
agenda,
so
we
released
queries
we
on
May
1st,
already
the
week
before,
summit
and
effectively.
This
has
been
the
first
major
release
on
the
under
the
red
umbrella
after
the
cross
acquisition,
and
we
introduced
a
couple
of
features,
there
was
a
one
of
them.
Is
we
introduced
multi
arch
support,
so
the
full
support
for
the
docker
registry,
API
version
2
schema
2,
which
allows
you
to
store
multiple
images
for
different
architectures
was
in
the
single.
B
It
was
in
the
same
repository
and
at
the
same
time,
it
also
allows
you
to
store
Microsoft
Windows
images
in
way
as
the
registry,
which
is
a
very
high
demand,
feature
for
many
customers
have
asked
us
for,
and
we
have
just
shifted
and
the
origin
it's
worth
to
mention
it.
We
already
shipped
a
couple
of
updates
this
features
based
on
customer
feedback
and
input,
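To illustrate what schema 2 "manifest list" support means, here is a simplified sketch: one tag points at several per-architecture manifests, and a client picks the digest matching its platform. The digests are fake placeholders; the media type string is the real one from the Docker distribution spec, but the rest is trimmed down for illustration.

```python
# Simplified manifest list: one tag, several platform-specific digests.
manifest_list = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
    "manifests": [
        {"digest": "sha256:aaa", "platform": {"architecture": "amd64", "os": "linux"}},
        {"digest": "sha256:bbb", "platform": {"architecture": "ppc64le", "os": "linux"}},
        {"digest": "sha256:ccc", "platform": {"architecture": "amd64", "os": "windows"}},
    ],
}

def resolve(mlist, architecture, os_name):
    """Return the digest a client with this platform would pull, or None."""
    for m in mlist["manifests"]:
        p = m["platform"]
        if p["architecture"] == architecture and p["os"] == os_name:
            return m["digest"]
    return None
```

This is also how Windows images end up side by side with Linux images under the same tag: they are just another `platform` entry.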
And we recorded a demo. We typically do demos recorded by the engineers who develop those features, and then we push them to YouTube, and in the future we will do a better job of exposing this, or making some noise around those videos: where you can find them, where you can watch them, and how you can better stay up to date with all the great stuff we are doing and pushing out to the world.
This is one of the biggest features, and it probably, correct me if I'm wrong, took much longer than originally expected, simply because we realized: OK, this is really one of the biggest changes we ever made there. And again, we need to ensure that whatever we do scales in the large-scale deployments we have in many customer environments.
We are considering moving to the UBI base image we introduced at Summit, in the future as well, and the advantage of using the UBI base image is primarily that we would inherit all the existing certification and support items from Red Hat Enterprise Linux. At the same time, we shipped a couple of security updates and bug fixes together with v3.0, and of course it is an ongoing task to ship a newer version of the images each time a critical or important CVE comes out which impacts our images.
This is tracked as part of the Container Health Index, and I think we are doing a great job there on keeping our images up to date. Upgrading from 2.9 to v3 is kind of a seamless thing.
So basically there are only two small downtimes required: to change the Quay configuration and to add the new config options we didn't have before. We basically offer two different upgrade modes. The first one is a complete, all-at-once upgrade, which means you bring down the cluster, you run the whole upgrade, including the database schema migration, and then you bring it back up. The alternative method many customers are using is to run the update in the background, which means you bring the new Quay v3 up, you can already leverage a couple of the performance and other improvements we did with Quay v3, and then the data migration runs in the background.
You can monitor it at any time, and once it has completed, you shut down the old version and then finally start using the new version, including the V2 schema 2 capabilities. Again, there is a demo out there and there's documentation out there, so I think it's pretty well done by engineering, such that our customers really understand how to do the upgrade, how to move to the newest version, and how to stay up to date with all the updates we will ship in the meantime. You can try it out at any time.
Together with Quay v3, we introduced a self-service evaluation workflow, so you can go to any of the product pages we have for Quay, including the one on the customer portal at access.redhat.com, and then basically click on the "request an evaluation" button.
B
You
have
to
accept
our
terms
and
services,
and
then
you
get
an
email,
basically
with
a
couple
of
follow
up
instruction,
how
to
pull
the
images
and
then
point
us
to
the
documentation
how
to
use
them
yeah.
So
it's
a
self
service
locally.
So
you
don't
need
us.
You
don't
need
to
talk
to
us
if
you
want
to
play
around.
B
There
are
only
a
very
few
small
differences
between
whetted
quays
or
the
private
works
we
product
in
querido,
though
some
of
them
ours.
You
can't
use
JIRA
application.
Obviously
at
Quail
you
can't
content
all
education
providers
yeah,
but
but
it's
mostly
it
yeah.
So
epically.
If
you
are
using
Twitter
or
if
you
like
it,
then
you
will
definitely
like
even
more
the
unfriend
product,
because
we're
the
Quay
and
Cueto
are
sharing
the
same
codebase.
So
you
have
Paisley,
have
many
different
options,
but
again,
of
course,
we
would
appreciate
it.
That all already happened in May. Let's quickly talk about the future of Quay. In the near future, within the next two months I would say, we will ship the new version of Quay, more or less, as Quay v3.1, and one of the key features we will introduce with v3.1 is repository mirroring.
Whether it's another vendor's registry you just want to sync content from, or whatever else: there's another use case, which is probably at least at the same level of importance, which allows you to mirror a subset of the entire registry content to distributed deployments, and you can use filters to sync only a subset of a repository, using a regular expression on tags or tag ranges or whatever else you want to filter for.
to
explain
the
difference
between
repository,
mirroring
and
geo
replication.
B
Giro
application
again
is
a
feature
which
already
exists,
so
junior
replication
allows
you
to
to
have
a
single
globally
distributed
way,
instance
to
serve
container
images,
which
means
the
binary
blocks
from
localized
stores.
So
it
replicates
the
storage
on
the
knees,
but
it's
one
big
quail
registry
you're
using
a
shared
database,
all
the
users
organizations
the
arbic
permissions,
everything
else.
The
configuration
is
the
same.
So
it's
one
big
registry
and
only
the
Bible
final
gloves-
are
really
accessed
from
localized
storage.
B
In
contrast
to
repository
mirroring,
if
proposals
for
a
meringue
is
used,
then
you
have
two
distinct
weather
suits,
there's
a
with
its
own
permission,
independence
and
highly
you
can
do
whatever
you
want
on
both
sides.
It's
not
related
to
each
other
yeah
so,
and
repository
meringue
as
I
as
I
showed
on
this
first
slide
has
two
major
use
case.
B
What
directly
coming
out
of
the
building
and
then
pushed
into
the
registry-
and
this
allows
customers
to
have
a
Content
ingress
point
and
many
customers
I've
been
working
with,
especially
in
Europe,
but
also
in
non-american,
several
regulated
industries.
They
want
to
have
something
like
this,
such
as
one
Content
English
point
into
the
whole
customer
environment
and
starting
from
there
they
want
to
distribute
or
federated
content
into
the
different
other
data,
centers
subsidiary's
member
firms
and
whatever
else
they
have
was
in
their
global,
distributed
globally
distributed
environment.
This is one of the most important customer demands we have heard in the past, and it also supports, to a certain degree at least, disconnected clusters. Some customers are using Quay as the only endpoint which is connected to the Internet, and the cluster itself is disconnected. Keep in mind that this, as of today, doesn't work out of the box with OpenShift 4, but we are working hard to bring it into future versions of OpenShift 4, to allow this disconnected setup many customers are asking for today.
B
We
are
working
hard
on
improving
the
overall,
a
gift
and
disconnected
experience
in
future
releases
of
Quay
as
well,
and
one
of
the
future
items
we
also
discussing
is
how
to
get
content,
which
is
either
in
operator
after
I/o,
and
therefore
also
shown
on
the
openshift
embedded
operator
hub,
using
the
console
how
to
get
this
into
your
customer
environment,
which
would
also
allow
you
to
bring
your
own
operator
and
stuff
with
this.
As
it
is
another
future
feature
we
are
working
on.
B
A
repository
mirroring
feature
will
effectively
introduce
three
different
modes
of
a
repository.
The
first
would
the
first
mode
is
what
you
already
use
today.
If
you
are
using
Quay,
which
means
repository
learning
is
disabled
and
you
define.
Why
are
the
Arabic
permission
which
user
is
allowed
to
push
in
which
user
is
allowed
to
pull?
And
so
this
is
what
you
manually
fine
on
a
repository
level
and
you
can
use
the
fine
quinella
or
that
permission
to
do
so.
B
Once
we
introduce
the
repository
miranne
feature,
you
can
switch
the
repository
into
the
mirroring
mode,
and
if
you
switch
it
into
the
mirroring
modes,
then
users
are
no
longer
allowed
to
push
simply
to
avoid
that
there
was
a
push
which
then
conflict
with
the
content
which
is
supposed
to
be
mirrored
from
this
external
registry.
We
don't
know
anything
other
than
the
URL
and
what
images
are
there?
Also?
B
The
user
pushes
effectively
disabled
but
of
course,
pools
are
still
allowed
and
are
controlled
by
ID,
corresponding
object
permission
there
as
effectively
the
only
the
only
push
or
the
only
item
which
is
allowed
to
push
is
the
repository
marine
worker
in
the
backend
as
well.
It
might
be
worse
to
mention
that
we
made
the
decision
to
not
in
reinvent
the
wheel.
B
We
are
using
an
existing
and
battle-tested
technology,
which
is
discovery
of
project
which
is
part
of
rel
and
also
part
of
our
gifts,
and
this
allow
us
really
to
implement
this
feature
much
much
faster
than
ours.
So
this
is
pretty
powerful
and
we
are
glad
that
we
leverage
this
technology
in
Quay
or
product
and
it's
kind
of
a
site
effect.
But
it's
pretty
important
for
some
of
our
customers.
We
will
even
edit
search
for
positive
aura
mode.
B
Many
customers
have
asked
us
for
which
is
the
read-only
or
archived
mode
which
effectively
allows
us
to
put
an
end
to
a
repository
in
a
read-only
mode,
which
means
all
the
users
can
still
push
pool.
But
nobody
can
push
anymore,
and
this
is
really
an
interesting
use
case
and
several
customers
have
requirement
or
regulation
which
requires
that
they
keep
all
the
images
which
are
no
longer
you
just
for
governance
or
compliance
reasons
for
a
certain
amount
of
time.
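The three repository modes described above (normal, mirror, read-only) boil down to a simple push-admission rule. The sketch below is illustrative only; the worker identity name is made up, and this is not Quay's implementation.

```python
MIRROR_WORKER = "quay-mirror-worker"  # hypothetical internal identity

def push_allowed(mode, user, has_push_permission):
    """Admission rule for a push attempt, per repository mode."""
    if mode == "readonly":
        return False                  # archived: nobody pushes, pulls still work
    if mode == "mirror":
        return user == MIRROR_WORKER  # only the mirroring worker may write
    return has_push_permission        # normal mode: plain RBAC decision
```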
Another big item, one of the focus areas for us this year, is that we need a better and deeper integration into Kubernetes as a platform, and from a Red Hat standpoint this means OpenShift as the Kubernetes container platform product. There are basically two different use cases we are focusing on right now.
B
We
have
been
working
on
automating
this
deployment
if
quai
runs
on
kubernetes
or
OpenShift,
and
so
the
the
cool
thing
here
is.
We
got
a
contribution
from
our
redhead
field
organizations
or
from
our
container
and
cat
form
as
a
source.
Community
of
practice
was
in
redhead
and
those
guys
helped
us
to
build
an
initial
operator
which
we
call
the
quays
setup
operator.
B
The
decree
setup
operator
is
supposed
to
help
with
the
initial
deployment
of
both
Quay
and
Claire
and,
at
the
same
time,
of
course,
drastically
simplifies
only
the
installation
or
deployment,
but
also
the
updating
day
to
operations
all
the
time.
Now
as
a
this
is
the
key
focus
of
operator,
and
we
as
whetted
as
you
probably
know,
we
look
at
operators
as
the
most
strategic
technology
in
the
common
air
space
and
that's
why
everything
is
based
on
operators
already
on
the
open,
shipped
side,
and
we
are
working
towards
this
strategy.
Also
on
the
on
the
quayside.
B
Also,
the
set
up
operator
not
only
deploys
this
office
betweens
deploys
to
containers,
but
it
also
does
a
couple
of
relevant
openshift
configuration
such
as,
where
all
its
secrets,
exchange
and
stuff
it
is
yeah
and
also
it
uses
these
certificate
management
capabilities
was
in
orbit.
You
have
to
do
a
couple
of
additional
magic,
so
this
really
is
supposed
to
address
the
customers
we
have,
and
it's
might
be
worse
to
mention
it.
The
majority
of
quai
customers
are
open
with
customers
at
the
same
time.
That's why this has been a pretty important feature for us, and we will ship it as tech preview initially, together with v3.1, because we want more feedback and more input from customers, and we want more stabilization time to really ensure that it's battle tested before we finally offer this operator as a GA feature in future versions of Quay. Another item we will introduce with the next release is the support for OpenShift Container Storage.
As I just said, many Quay customers are OpenShift customers at the same time, and we've talked to several customers who are using OCS today and who are interested in Quay; most of those customers are using OpenShift Container Storage, and they basically asked us: what do we do with the storage backend of Quay? We would like to leverage the existing storage technology we are using today, which is Red Hat OpenShift Container Storage. So we worked with our storage PM and the corresponding engineering teams.
B
How
can
we
make
this
happen,
and
then
we
luckily
did
another
acquisition
last
year,
which
is
the
Nuba
acquisition,
and
this
brought
us
a
technology
which
is
pretty
interesting
for
us
on
the
quayside,
because
it
helps
us
really
to
look
at
the
underlying
storage
technology,
and
actually
we
don't
need
to
deal
with,
because
basically,
new
app
provides
us
this
we
interface,
we
can
connect
against
and
then
whatever
else
is
used
under
knees.
We
don't
care
about
and
we
are
totally
working
on
especially
bu
sites
whistles.
B
Let's
say
the
commercial
side
of
the
of
the
things
over.
How
can
we
get
this
into
OCS?
Video
can
be
duly
the
bundling
of
disputes
and
stuff.
It
is
but
effectively
what
we
will
do
is
we
will
use
the
Nuba
interface
as
part
to
connect
against,
and
we
will
add
this
as
an
additional
storage
provider
in
the
quake
config.
B
You
I
that
you
can
select
nuba
on
a
USB
interface
connect
again,
so
this
will
allow
OpenShift
container
storage,
three
customers
to
leverage
the
existing
OCS
deployments
on
their
opposite
sweet
clusters
to
run
away
on
top
of
it
as
well.
Open
ships.
Containers
watch
for
doesn't
exist
yet,
but
will
be
there
pretty
soon
and
once
it
will
be
there,
we
will
hopefully
be
able
to
support
OCS
for
from
day
one
effectively
using
the
same
technology
so
again
connecting
to
the
Nuba
driver.
As
we
do
for
OCS
3.
B
We
will
ship
it
as
a
tech
preview
feature
again,
because
this
hasn't
been
stabilized
and
pedal
pedal
tested
before
and
again.
So
the
idea
is
to
ship
it
as
a
ga
feature
pretty
soon
as
well.
So
that's
the
near-term
future
I
already
talked
a
little
bit
about
the
let's
say
some
of
the
key
items
which
are
target
for
the
next
we'd
months.
Let's
say
this
way
default
home
features
we
want
to
do
is
repository,
mirroring
I
talked
about
a
set
of
operator.
B
We
are
working
with
with
another
database
when
or
we
have
a
joint
offering
with
them
to
allow
AJ
setups
using
their
database
offerings
if
it
runs
on
kubernetes
and
I
only
mentioned
a
couple
of
documentation
enhancements.
We
are
actively
working
on
in
the
midterm
future.
One
of
the
key
things
I
already
called
it
out
is:
we
will
open
source
quake
and
so
I
already
mentioned
explained
it.
We
are
already
starting
to
execute
against
this,
so
this
is
this.
B
What
it
effectively
does
is
its
fetches
the
one
availability
data
from
Claire,
and
then
it
attaches
it
as
a
pod
annotation
on
kubernetes,
and
this
is
the
prerequisite
to
do
a
couple
of
things
on
top
of
it
on
the
platform
side.
So
once
we
have
the
data
in
kubernetes,
then
we
can
leverage
it
in
OpenShift
who
visualize
it
as
inside
the
openshift
console.
So
it
can
go
through
your
project
view
and
then
you
can
click
on.
Then you see the vulnerability information for your pod, or we can trigger notifications or alerting on the Kubernetes side or in the OpenShift console to alert developers who are using an image as part of their pod which is now affected by a vulnerability Clair has detected. So effectively there are a couple of great use cases coming out of this, which we hopefully will get integrated in future releases of both Quay and OpenShift. So this is a collaborative effort.
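The flow just described (fetch vulnerability data from Clair, attach it as a pod annotation, act on it from the platform) can be sketched as follows. The annotation key and payload layout are assumptions for illustration, not the operator's actual schema.

```python
import json

ANNOTATION = "secscan.example.com/vulnerabilities"  # hypothetical key

def annotate_pod(pod, vulnerabilities):
    """Attach Clair-style findings to a pod manifest as a JSON annotation."""
    pod.setdefault("metadata", {}).setdefault("annotations", {})[ANNOTATION] = (
        json.dumps(vulnerabilities)
    )
    return pod

def worst_severity(pod):
    """What a console or alerting rule might surface for this pod."""
    order = ["Low", "Medium", "High", "Critical"]
    vulns = json.loads(pod["metadata"]["annotations"][ANNOTATION])
    return max((v["severity"] for v in vulns), key=order.index, default=None)
```

An alerting rule could then watch pods whose worst severity crosses a threshold and notify the owning developers.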
B
Another
thing
is
the
deeper
integration,
so
we
only
have
an
early
prototype.
We
again
developed
together
with
the
community
of
practice
and
the
customer
we
are
working
closely
with,
which
is
the
open
trips
integration
operator,
what
it
does
it
emulates
to
a
certain
degree,
the
existing
capabilities
of
the
open
to
internal
registries,
or
if
you
create
a
new
project
within
a
group
using
the
console
or
the
command
line,
then
it
automatically
creates
the
corresponding
organization
on
the
quayside.
B
It
creates
the
teams,
it
associates
users,
it
creates
robot
accounts
and
it
exchanges
robot
accounts
token,
as
a
winning
secret
on
the
openshift
side.
So
there
are
a
couple
of
great
features
in
this
as
part
of
this
operators.
As
I
said,
this
is
an
early
prototype.
So
probably
we
won't
ship
it
with
3.1,
but
yeah.
We
will
hopefully
ship
it
in
one
of
their
future
releases,
and
this
finally
allows
you,
if
you
are
using
those
open
ships
and
quay
side
by
side,
you
have
a
much
much
more
seamless
user
experience
than
you
have
today.
B
I
already
mentioned
even
a
little
bit
data
and
acting
on
this
data
in
an
open
truth,
we
have
a
couple
of
items
on
the
repository
marine
side.
We
would
like
to
add
over
time,
and
we
are
actively
working
on
clear
version
three
years
or
if
you're
familar
class,
all
you
have
is
all
the
open
source
it.
So
it's
a
github
repository,
so
you
can
speed
it
clearly,
three
years
unlocked
if
development,
so
we
need
to
finish
the
ice
cream
version
v3
and
then
do
the
integration
into
Quay.
B
Do
all
the
stabilization
and
hardening
and
large-scale
testing
at
coiler
all
before
we
ship
it
as
part
of
the
of
the
product
and
as
I
already
mentioned,
we
are
focusing
a
little
bit
on
the
agate
or
disconnected
environments
as
well,
and
in
the
long
run
there
are
a
couple
of
other
items.
We
are
working
on,
just
to
name
a
few
of
them
and
the
we
considered
to
will
redesign
for
the
existing
build
automation.
B
We
are
actively
working
not
only
on
our
send,
but
also
with
several
other
partners
and
windows
on
scanning
enhancements
and
policy
management
enforcement
stuff,
and
we
are
working
on
have
it
having
a
better
content
support
and
again.
This
is
currently
an
active
discussion
which
happens
in
the
broader
community,
with
different
windows
on
upcoming
protocols
and
specifications
such
as
the
OCI
distribution,
spec
and
stuff.
B
We also have a couple of ideas on extending the pruning and garbage collection implementation as it is today. So there are a couple of great features coming out later this year and early next year, and I hope that you will join us on this journey and see all the great stuff that is coming out. I can probably skip this slide; just one short slide on Clair v3, which I've mentioned a little bit already. So let me focus on one of the key aspects of Clair.
B
In parallel, we are working on better Red Hat content coverage. I mentioned at the beginning that I'm the guy behind the Red Hat Container Catalog and the Container Health Index, so I know the differences between the Container Health Index and many other scanners out there, including Clair. The good news here is that on the Red Hat factory side, we already moved the backend that is used to calculate the Container Health Index grades shown in the Container Catalog onto Clair. So we are already using Clair internally.
B
Unfortunately, it's in a shape we can't ship as it is, so basically we need to refactor and extend it a little bit. But once we have completed this work, we will hopefully not only have broader coverage of Red Hat products in Clair, we will also hopefully add the Container Health Index grade and bring it into Quay. And, as I mentioned, a couple of other partner collaborations and partner integrations are planned here as well in future versions of Clair, not the initial version we will ship. This brings me to the last slide.
B
So, as I said at the beginning, we just kicked off the public effort to open source Quay. One of the first steps we took was scheduling this particular session, this webinar. At the same time, we created a couple of things, including a Google Group, to kick off hopefully fruitful discussions with you, our customers, and the interested communities about Quay and the future of Quay. So if you haven't joined us yet, please do so soon.
A
So there have been a lot of questions, and Joey's been doing a great job answering them. Maybe what we might do is unmute Joey, if that's all right with him, and perhaps he can pick a few that might be relevant to the general group. I know the last question that just came in had to do with Clair: the folks from Cisco were saying Clair is good as a scanner and for reporting, but is there any integration with OpenShift to reject a bad image on the way in?
B
This is what I call policy management and enforcement, but again, this is more an OpenShift feature and less a Quay and Clair feature. But yes, this is a whole topic we've been working on. That's why we did the survey, that's why we did the workshops and worked with several customers on better understanding the, let's say, more holistic view of this concept, because it sounds a little bit easier than the reality is. But here it is.
B
The overall idea is that you can do policy management within OpenShift, where you basically define: only deploy an image if it's free of critical vulnerabilities, if it has been signed by three different members of my QE team, my ops team, or my InfoSec team, or only if it carries a certain attestation. Quay and Clair are supposed to be used as the single source of truth, the backend that stores the content and provides the metadata, and then the platform is supposed to manage the policies, including what happens
B
if the policy or the framework changes, how to deal with emergency cases, and then, finally, also to execute the policy enforcement, which means blocking deployments, or acting if something is already deployed but the policy now says it's no longer allowed to be used inside Kubernetes. So this is what we're actively working on.
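A deploy-time policy check of the kind described ("only deploy an image if it's free of critical vulnerabilities and carries enough signatures") could look roughly like this. The dict shapes and field names are hypothetical, for illustration only; this is not the actual OpenShift, Quay, or Clair implementation:

```python
def admit(image_meta, policy):
    """Decide whether an image may be deployed under a simple policy.

    Both dict shapes are assumptions for illustration:
      image_meta = {"critical_vulns": int, "signers": [team names]}
      policy     = {"block_critical": bool, "required_signers": int}
    Returns (allowed, reason).
    """
    # Rule 1: reject images carrying critical vulnerabilities.
    if policy.get("block_critical") and image_meta.get("critical_vulns", 0) > 0:
        return False, "image contains critical vulnerabilities"
    # Rule 2: require signatures from enough distinct signers.
    signers = image_meta.get("signers", [])
    required = policy.get("required_signers", 0)
    if len(signers) < required:
        return False, "image has %d of %d required signatures" % (len(signers), required)
    return True, "admitted"
```

In a real cluster this logic would sit in an admission-style hook on the platform side, with Quay and Clair serving only as the metadata backend, which matches the division of labor Dirk describes.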
B
Yes, that's also one big area where we are not only working on our engine in an isolated ivory tower, but actively working with several vendors on this holistic solution, because, as I said, the reality is that many customers are not only using Quay and Clair; it's a much more holistic picture than Quay and Clair on their own.
C
I can add a little bit more background to that. Our goal with the security annotation operator, which I mentioned is currently targeted for 3.2, is to provide seamless information about the security status of the images you have running in your cluster. The current plan is: if you are running OpenShift and you're pulling your images from a Quay that has been set up to talk to Clair,
C
then once the security annotation operator is ready and installed, you will automatically get, with no further configuration, the security status: the vulnerabilities found on the images running in the pods on your cluster. Once that information is present, we will then use it to set up notifications, blocking, alerting, dashboarding, things of that nature, which will allow you to build your tailored security environment based on the information seamlessly integrated from your Quay.
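As a rough illustration of what annotating running pods with their security status might look like, here is a sketch that turns a list of Clair-style findings into flat Kubernetes annotation values. The annotation key prefix and finding fields are assumptions for illustration, not the operator's real schema:

```python
SEVERITY_ORDER = ["Unknown", "Negligible", "Low", "Medium", "High", "Critical"]


def pod_security_annotations(findings):
    """Summarize scanner findings into annotation values for a pod.

    findings: list of dicts like {"Name": "CVE-...", "Severity": "High"},
    a Clair-style shape assumed here for illustration.
    """
    # Count findings per severity level.
    counts = {}
    for f in findings:
        sev = f.get("Severity", "Unknown")
        counts[sev] = counts.get(sev, 0) + 1
    # One annotation per severity, plus the highest severity seen.
    annotations = {
        "secscan.example.com/" + sev.lower(): str(n) for sev, n in counts.items()
    }
    if counts:
        annotations["secscan.example.com/highest"] = max(
            counts, key=lambda s: SEVERITY_ORDER.index(s) if s in SEVERITY_ORDER else -1
        )
    return annotations
```

Downstream tooling (alerting, dashboards, policy) could then read these per-pod annotations instead of querying the registry directly, which is the "seamless, no further configuration" experience described above.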
C
Yes and no, Chris. There are a few problems, in all honesty, with using an external pipeline to evaluate container scan results. The big one is, first of all, that container scanning can take time to perform. Generally speaking, if you're using Quay and Clair, those results should appear within 60 seconds, but they could take a few minutes to appear. So while you can schedule a notification from Quay to your external system and then label your container image accordingly, you're going to have to wait a little bit, maybe a minute or two, before that.
C
Information shows up. The bigger problem, and this is why we are building the security annotation operator, is that it is our belief, or rather my personal belief, that preventing pods from starting when vulnerabilities are found in them, as an automated process, is a recipe for disaster. In particular, imagine you have a service running in production and a new vulnerability is reported for that service.
C
You don't want to stop your Kubernetes cluster from scaling that service, because that doesn't provide any additional security and will break your production, in our opinion, or more accurately, my opinion. The correct solution is to alert your development and security teams. They can make a conscious decision as to whether you need to take your production service down, if it's that severe an issue, or, alternatively, whether you should leave the service running, knowing it is currently vulnerable, and make sure to get a fix out ASAP.
C
That's exactly where our focus is: adding more information and then allowing that information to drive things such as notifications, alerts, dashboarding and, ultimately, policy determination. We will provide the policy hook so that you ultimately decide; say, you want to blacklist any image from running that has a particular vulnerability in it.
C
Then you can do so, with the conscious understanding that if you do that and you have a service that's already in production (and we will provide the ability to find such services), that service will no longer be able to scale. You and your team have made that determination; we leave that to you, and whether you decide to do it or not is your decision.
B
That's the main reason why we did all the work with those customers: to really understand the end-to-end use cases, to really understand how it would fit into the existing, let's say, brownfield environments our customers have, and especially how it would fit into existing governance and process work. That's why I explicitly mentioned the NIST SP 800-190 scope, because one of the things I really like about this guide
B
is that it's pretty generic: it's product and vendor agnostic. It explicitly calls out the risks and the major risk areas and how to address them, but it leaves it entirely up to you how to specifically implement it with technologies and products. And one of the chapters inside, and that's why we spent so much time especially with the InfoSec and security departments of those customers, is really about the need to adapt and change the existing processes and governance workflows in your environment. You can't just apply existing models to an entirely different world.
B
Well, the majority of such a pipeline is automated, and then you need to address a couple of things and you run into issues: how do I combine, let's say, results coming out of entirely different tools? The simple questions of how do I deal with source code scanner results versus image scan results, how do I handle the various tests which are automatically executed, how can I avoid manual approval steps, and so on and so on. That's why we did this survey and those workshops, to get a much better understanding, and then we realized:
B
okay, the complexity of this solution is something where we need to invest more time in a really good design. As Joey just highlighted, there are so many different variants and corner cases we need to address that we really understood we needed to go back to the drawing board and come up with a solution which solves at least the majority of those issues. And this solution, to be fair, doesn't exist from any other vendor either; if anyone were telling you they have the answer to all those questions, they don't.
A
C
So we have this as a common request from our customers and users: the ability for InfoSec or security teams to get the kind of Clair reports that you get on a per-tag basis at, like, a global registry level. I'm going to add the caveat now that I speak for myself and not for Red Hat as an organization as a whole, and state that I fundamentally believe that such a feature is, long term, not a good idea, for a couple of reasons.
C
First, setting aside the technical reasons: retrieving the security information for a hundred thousand or one million tags is simply a report that would take hours to days to run, and therefore effectively does not work. But the other real problem is that it would be a very, very, very noisy report. So imagine you have a repository that your developers have been pushing tags into for the last, you know, one, two, three, five years, and for compliance reasons
C
you kept those images around for that duration of time. Now say there was a way to run an InfoSec report across the entire registry in less than, you know, a day, and you ran it across your entire registry. You'd be getting a report that indicates you have tags with XYZ vulnerability, but those tags haven't actually been deployed for years; you've kept the images around for compliance, or simply because no one has cleaned them up. So this report would be extremely noisy.
C
It is my opinion, and again I speak only for myself here, that the truly useful knowledge is not "which images in my registry contain these vulnerabilities", but rather "which vulnerabilities are actually running on my clusters". Because I believe so strongly in that, and because we believe that information is the most useful, that is why we're putting most of our development effort behind the security annotation operator. Furthermore, the security annotation operator also scales, because it will be operating inside of each cluster and pulling the information from Clair on a per-pod basis.
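Joey's scaling argument, reporting on what is actually running rather than on every tag ever pushed, can be sketched as follows. The function and field names are illustrative assumptions, not the operator's real interface:

```python
def cluster_vulnerability_report(pods, scan_lookup):
    """Report vulnerabilities only for images actually running in pods.

    pods:        list of dicts like {"name": ..., "image": ...}
    scan_lookup: callable mapping an image reference to its findings
                 (an assumed stand-in for a per-image Clair query).
    """
    # Deduplicate first: many pods typically share few distinct images,
    # so the number of scanner queries tracks cluster content, not tag count.
    running_images = {p["image"] for p in pods}
    return {image: scan_lookup(image) for image in sorted(running_images)}
```

A registry-wide report would instead iterate over every tag ever pushed, which is exactly the noisy, slow report the answer argues against.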
C
It can scale with the size of your clusters. I don't want to use the word "infinitely", because that's not reality, but to a very, very high degree, whereas a centralized report where you literally list this for every image ever pushed to a registry would, let's be honest, simply not scale.
B
To add something on top of it: what Joey just said is the main reason why we spend so much time especially with the InfoSec departments. We hear this requirement in nearly every customer environment we are working with, and basically, as Joey just did, we try to explain: okay, what is the real use case behind asking just for plain security vulnerability information?
B
The real use case is to have a better understanding of where vulnerabilities are really affecting a production or mission-critical workload, and this is the trickier question. And just to add something on top of it, it gets even more complicated if you add, for example, and this is a typical use case, a second view or a second scanner to the game, and then you always have overlapping or even conflicting vulnerability information.
B
That's why we are working so closely with the security vendors to deal with this situation: if those two scanners have independent policy management and policy enforcement methods, then in the worst case you're running something on your cluster and you have two different policy enforcement components. And by default, if you're using two scanners which are using different input data, you will always have false positives.
B
Always; there's no way to work around it. You will basically always end up with two different results, and then you have to reconcile them: one says yes and the other one says no, and then you have a conflict on your cluster. This is exactly what you are trying to avoid, and that's why it took longer for us to do this proper design. As Joey said, there are so many different corner cases, and basically we went back and forth.
B
We did a couple of iterations with some customers and their InfoSec departments to really define the long-term direction of not only the reporting but especially the acting on it, because what we always try to explain is that in the age of automation and automated pipelines, it's no longer needed that somebody, a human being, is looking at a dashboard and enjoying all the great graphics which show them: I have 159 security vulnerabilities. That's great! No!
B
We had to build an entirely new factory and build automation to keep our images up to date, and since we have more than 1,000 repositories in the Red Hat registry, we had to automate it. The good news here is that some of that automation is currently moving over to Quay, and some of the existing capabilities in it will allow us to bring some of those capabilities into the product in a future version. This is what Joey said: all the different things are related to each other.
B
So it's not one particular feature we want to ship without understanding the impact on all the other things, which are probably even more relevant than the reporting. But we can do a follow-up session on this particular topic; again, we did a two-day workshop just on this topic, Joey and myself.
A
That sounds like a good topic for a future OpenShift Commons briefing as well. We're sort of at the top of the hour. There were a lot of questions, and Joey did a really great job answering them, and perhaps we'll turn those into an FAQ and add them to the Google Group at some point in the not-too-distant future.
A
B
I see one question I would like to answer, which is why Quay is a separate product, and we get this question pretty frequently, not only from customers but also from prospects. Immediately after the acquisition, we defined a long-term strategy, which we call Quay everywhere, which effectively means replacing all the different registry technologies.
B
The idea is that we will replace the internal registry of OpenShift as it is today with what we call Quay Core, which has exactly the same feature set as you are getting today with the internal registry. But given all the Quay customers we have, we still believe it's worth having Quay as a separate product, because not every OpenShift customer really needs Quay at its full scale. Not every customer really wants all those features, and that's why we keep it as a standalone product.
B
We will replace the internal registry with a small subset, but this won't include all the high-value features. Of course, many customers are asking for all of it, but we believe, and I as the PM strongly believe, that Quay is worth the money, because given the features we have, and especially the features which are coming up soon, I'm pretty confident that it's a good thing to purchase.
A
Well, let's hope so, and keep moving forward with the open sourcing as well. So everyone, thank you very much for joining today, and Dirk and Joey: I know Joey's in a difficult time zone, so I appreciate him, and a number of you are joining from different time zones, so we really appreciate it. I'll post this on the OpenShift Commons blog with the PDF of the slides and some other resource links, so look for that in the next two or three hours, and I'll also post it to the Google Group.
A
So if you join, please look there for announcements, and hopefully we'll set up a recurring Quay SIG meeting in the not-too-distant future so that we can continue these conversations. It's been really great feedback and great questions, so thank you, everybody, for your insights and your questions, and we'll talk to you all soon.