From YouTube: Plan your Red Hat Quay Deployment
Description
Detailed presentation introducing you to the Red Hat Quay architecture, as well as helping you plan your Quay deployment.
To learn more, visit: https://www.openshift.com/products/quay
Hi everyone, and welcome to this session on Red Hat Quay. In this presentation I will focus on all the aspects you might want to consider before you finally deploy and start to use Quay. Quay is a very powerful product: it provides a ton of features, and it also provides a lot of choices for different things.
There are questions you need to ask yourself and decisions you need to make in order to have a setup that is sustainable for many years and effectively gets the best out of Quay as a product. Before we jump into those different patterns and all the questions associated with them, let's start with a very high-level view of the Quay architecture.
So Quay is a containerized product, which means it can run on nearly any container infrastructure. It can run on a standalone host with a container runtime, but of course it runs better on an orchestration platform. It effectively consists of Quay as a containerized application; optionally you can add Clair, the vulnerability scanner, the mirroring worker for repository mirroring in Quay, and builders for the git build triggers Quay supports as well. Typically, in front of Quay and Clair you'll run a load balancer, because you typically run more than one pod of both, and then you have your clients, your customers, the UI, and API commands, which connect via the load balancer to Quay and Clair.
So this is a very high-level overview. We will dive a little bit deeper into all those details in a minute.
So let's start with a couple of questions our customers typically are asking us, or we are asking them: what's the best way to deploy Quay and then finally run and use it? One of the first questions is which infrastructure Quay is supposed to run on. Is it on-prem? Is it on public cloud? Obviously this has an impact on a couple of things, such as which storage back-end or database service I can use. Another important question is: should I use a distinct registry for each lifecycle environment, or should I start with a shared one which is used in both development and production? And then there are a couple of other scenarios which might have an impact on your overall design, such as disconnected or air-gapped environments, and whether Clair or the builders are supposed to be used or not.
Let me start with the infrastructure. Technically, Quay runs on any physical or virtual infrastructure, both on-prem and public cloud; it doesn't matter. It's a containerized application, it runs everywhere, and it also scales from a developer laptop where Quay is running all the way up to a very massive registry, as we are running it at Quay.io. So there is no difference, from a code perspective, between a very small setup on a developer laptop and a massive-scale setup on public cloud.
We recommend: if the infrastructure is public cloud, then do yourself a favor and use the public cloud services for the back-end services, such as database and storage. So if you run, for example, on AWS, you can use the AWS services for the load balancer, the storage, the database itself, and the Redis cache. We also added a recommendation on the virtual machines here: use two virtual machines, at least m3.large, though the m4.xlarge sizing is probably better. Nearly the same applies to all other infrastructures.
We just picked two of them; the other one is Azure, and again: use the public cloud services for database and storage if you run on public cloud. There is a very, very detailed overview of the different components, infrastructure, back-ends, etc. which we test against and which therefore are also supported as part of production support. All of those items are explicitly called out in the tested configuration matrix, which is in the Red Hat Customer Portal.
The other question is whether you should run Quay on a standalone host versus running it on OpenShift. It runs perfectly fine on a standalone host; we have many customers who are deploying Quay on standalone container hosts. It's a little bit more tricky to run Quay there as an HA setup with multiple hosts involved, because you need to take care, manually or with your own automation, of all the things Kubernetes offers us out of the box, and it gets a little bit more complicated.
If you look at all the recent changes, extensions, and features we added to Quay, they are primarily in the operator space, and obviously Kubernetes operators only work with Kubernetes, which means you can't use them on a standalone container host. It's also important to know that if you run Quay on a host, this host needs to be properly subscribed from a subscription perspective. So the recommended way is to run Quay on OpenShift. There are a couple of benefits coming out of that, and most of those benefits come directly out of the out-of-the-box capabilities of Kubernetes.
Kubernetes takes care of all the important aspects of running a containerized application, and you can leverage that. And OpenShift goes far beyond just being plain Kubernetes orchestration; that's why you have a couple of additional pieces, such as Operator Lifecycle Management, monitoring, dashboards, and all the great things OpenShift offers out of the box, and you can leverage them with Quay as well. So effectively, Quay runs everywhere, but it probably runs best on OpenShift, and this has a lot to do with everything we added, especially on the operator side.
As I mentioned, the Quay Operator itself ensures that you can seamlessly deploy Quay on OpenShift, and future versions of the operator will take care of the entire day-2 management and all the different maturity levels which are critical for operators. That is why we invented, or co-invented, operators initially, and now we are using them everywhere, especially on the platform layer.
If Quay and OpenShift are used together, there are, in short, a couple of benefits. Probably the most important one: it's very easy to deploy Quay on OpenShift, because this is, of course, the target platform we are developing against. This is something we are investing in heavily; it's the platform we know best, because we own it, and there are a couple of great benefits coming out of it. You can see them on the slide shown here.
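As a concrete illustration, an operator-driven deployment can look roughly like the following custom resource. This is a hedged sketch: the exact CRD name and fields depend on the operator version you run (earlier releases of the Quay Operator used a `QuayEcosystem` resource, newer ones a `QuayRegistry` resource), so treat the names below as assumptions to verify against your operator's documentation.

```yaml
# Sketch of a QuayRegistry custom resource for the Quay Operator.
# Field names vary by operator version; verify before use.
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay
spec:
  components:
    - kind: clair          # deploy the Clair vulnerability scanner
      managed: true
    - kind: mirror         # deploy the repository-mirroring worker
      managed: true
    - kind: objectstorage  # let the operator provision object storage
      managed: true
```

Applying a resource like this (for example with `oc apply -f quayregistry.yaml`) lets the operator create and manage the Quay, Clair, and mirroring pods for you.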
Let's have a look at the database back-end. The most critical back-end dependency for Quay is the database: all metadata is stored in the database. Only the physical binary blobs are stored in the storage back-end, but all the data which is shown in the console is stored in the database. That's why the database is really critical.
Since this is the most critical piece, we definitely recommend running the database in an HA mode. We recommend using PostgreSQL, simply because PostgreSQL is required by Clair. If you only run Quay without Clair, then you can also use other databases such as MySQL or MariaDB. And again, if you run on public cloud infrastructure, we recommend using the PostgreSQL service provided by your cloud provider.
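In Quay's `config.yaml`, the database is wired up through a single connection URI. A minimal, illustrative fragment (host, credentials, and database name below are placeholders, not real values) looks like this:

```yaml
# Fragment of Quay's config.yaml -- illustrative placeholder values.
# Points Quay at an external (ideally HA or cloud-managed) PostgreSQL.
DB_URI: postgresql://quayuser:quaypass@postgres.example.com:5432/quay
```

A MySQL-backed setup would use a `mysql+pymysql://` URI instead, but as noted above, PostgreSQL is required as soon as Clair is in the picture.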
The next prerequisite is the storage back-end; and by the way, "prerequisite" really means that it needs to exist before you start deploying Quay. As I already mentioned, all metadata is stored in the database; the storage is really there to store all the binary blobs Quay requires. Similar to the database, an HA setup is recommended for storage, and Quay has a hard requirement for object storage: for production setups we support neither local storage, nor NFS, nor any other local disk mounted into the container.
So there are a couple of choices. For on-prem storage types, we support several: we support Ceph RADOS Gateway, we support OpenStack Swift, and we fully support OpenShift Container Storage version 3. OpenShift Container Storage version 4 remains in Tech Preview, because the NooBaa part remains in Tech Preview as well, and we effectively inherit the support status of the new Multi-Cloud Object Gateway. On public cloud, nearly all public cloud storage back-ends are supported as well, such as AWS S3, Google Cloud Storage, or Azure Blob Storage.
It's important to call out here that only "hot" storage is supported as the storage back-end, as opposed to what are usually called cold or archive storage options.
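For reference, this is roughly what an S3-style object storage back-end looks like in Quay's `config.yaml`; the bucket name, endpoint, and credentials below are placeholders:

```yaml
# Fragment of Quay's config.yaml -- placeholder values.
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - S3Storage
    - host: s3.us-east-1.amazonaws.com
      s3_bucket: quay-datastore
      storage_path: /registry
      s3_access_key: <access-key>
      s3_secret_key: <secret-key>
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
```

Other drivers (for example `RadosGWStorage`, `SwiftStorage`, `GoogleCloudStorage`, `AzureStorage`) follow the same list-of-driver-plus-parameters shape; check the Quay configuration documentation for the exact parameter names of your back-end.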
The Redis cache is a third component which is kind of stateful, but it's less critical compared with the database and the storage back-end. It's primarily used to store the build logs and the UI tutorial state. So, assuming you've already watched the tutorial, only the builder logs are the piece that eventually matters in there. You need to make a decision whether Redis needs to be HA or not. Typically it's not run in an HA fashion, because the risk that Redis goes down is pretty low, and the impact associated with that is also low.
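The Redis wiring in `config.yaml` is correspondingly small; the hostname and port below are placeholders:

```yaml
# Fragment of Quay's config.yaml -- placeholder values.
BUILDLOGS_REDIS:        # live build logs for the build automation
  host: redis.example.com
  port: 6379
USER_EVENTS_REDIS:      # UI tutorial / user event stream
  host: redis.example.com
  port: 6379
```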
The next decision you need to make is whether you want to run dedicated registries, for example one for development and another one for production. I met a bunch of customers in the past, and most of them insisted on keeping those registries separated: "We want to ensure that production workloads are protected, and that's why we want to ensure that the content which is produced and used in the development environment is not exposed to production at all." However, if that's the goal or the requirement, you can easily achieve the same thing using organizations and repositories and the corresponding RBAC permissions.
So there isn't really any need to split, or to run two distinct registries, because you give up the advantages a registry brings out of the box, such as deduplication and compression. If the same image is used in dev and prod and you run two distinct registries, you really have a copy of the same binary blob in each storage back-end, so effectively you are nearly doubling the cost over time; and looking at the amount of binary data which is stored in a registry, this could become pretty expensive.
The same applies to "we want to separate the content": we want to clearly distinguish between the content which is sourced from an external source, such as the Red Hat Container Catalog or our suppliers, versus the content we produce internally; we don't want to mix them in the same tool. Again, this is something you can easily achieve with organizations, repositories, and RBAC permissions.
The same applies to the upgrade and update experience: you shouldn't really be concerned that an upgrade breaks the registry and a critical component isn't available anymore, which would potentially break workloads or even the cluster. This is something where we hope we are doing a great job of testing all the features before we ship them. In case you don't know the release and deployment model of Quay: we develop those features and test them intensively internally in our QE.
Then we push them to Quay.io and make them available to selected namespaces, and after things have stabilized, we open the door and make them available globally on Quay.io. After they have stabilized again, we finally build the product packages and images and ship them to our customers, so a feature should be stable by the time it arrives in your environment. So this shouldn't be the main reason to run two distinct registries, and the same applies to the next concern.
I met a couple of customers who, for whatever reason, wanted to run a registry within each of their data centers. The default use case is that Quay can quite easily serve content to multiple data centers, and an HA setup can stretch across data centers, simply because the HA is primarily achieved on the back-end side, and the database and storage typically run in more than one data center anyhow.
Just look at the use case of Quay.io, where we are serving billions of images to thousands of clients dispersed across the globe; if that is the use case, why shouldn't this apply to you as well? So this shouldn't be an issue. The other concern is scalability, and again, the same code base is used for the product and for Quay.io.
Where separate registries do make sense: if, for example, you really want to ensure that the Quay builders are only used and enabled in the development environment but not in production, things like that; and this also applies if you have different ownership, with different teams and different users supposed to act as the superuser of the registry, and so on. Because, obviously, if it's a registry-wide configuration which differs, then you can't use one shared registry across all those environments.
A quick word about disconnected and air-gapped environments: while Quay runs perfectly fine in an air-gapped environment, Clair does not. Clair needs to fetch the CVE and vulnerability metadata, and this requires that Clair is at least connected to the internet; as of today, you can use a proxy, so even that is manageable. It still means, though, that the clusters Quay is serving content to can themselves be disconnected; as long as Clair is connected, this is not an issue.
Future versions of Quay will bring a feature here; there is a slide from the Red Hat Quay roadmap effectively describing our future vision of enhanced support for air-gapped environments, where we will allow running both Quay and Clair entirely air-gapped, and then of course all the clusters Quay is serving content to can run air-gapped as well. But as of today, Clair needs to run in a connected mode; hopefully, with an upcoming release, we will get rid of this limitation.
From a network access or firewall perspective it's fairly easy: obviously, all your clients and all your clusters, all the nodes, need to be able to access the registry itself. This is typically port 443; we assume that you run only encrypted communication to the registry, so port 80 is typically not needed. Then there are two optional ports, for the config app and for the Prometheus endpoint, which typically are not exposed to the outside world, but maybe to a broader internal community of clients who need access to them.
All the other services, PostgreSQL, Redis, and Clair, are not supposed to be exposed to the outside world; those are services which, of course, need to be accessible by Quay, but not by the client. So all the connections happen between the client and Quay as the registry; only the storage back-end also needs to be accessible by the client.
That is, unless you enable the storage proxy option: if client access to the storage back-end is not feasible for everyone, then you can use the storage proxy to work around this, and the Quay container then serves the binary blob to the client itself.
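Proxying storage through Quay is a single feature flag in `config.yaml`:

```yaml
# Fragment of Quay's config.yaml.
# When enabled, clients pull blobs through Quay instead of being
# redirected to the object storage endpoint directly.
FEATURE_PROXY_STORAGE: true
```

Keep in mind this routes all blob traffic through the Quay pods, so expect correspondingly higher load on them.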
One of the last questions I had on my slide was the simple question of whether you want to use Clair and the Quay build automation at all. Those are optional components; you don't have to.
We of course strongly recommend using them, because we believe they are quite powerful and do a couple of great things for you. Just as a very brief introduction: Clair is the vulnerability scanning tool which has been developed for Quay, but it's also used by other Red Hat and third-party products; you might have seen announcements from last year where other vendors started to use Clair as their scanning back-end as well, and several third-party registries and tools are using it too.
It's a 100% upstream component, similar to Quay, it's pretty powerful, and we just introduced a new version of Clair with the latest Quay release, which adds support for scanning programming language packages, initially limited to Python. So you need to ask yourself whether you want to use it. We recommend it, because we believe that scanning at the registry level is the place where it makes the most sense and where it scales best. So that is our recommendation.
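Enabling Clair against a Quay instance comes down to two entries in `config.yaml`; the endpoint below is a placeholder, and note that newer Clair (v4) deployments use a differently named endpoint key, so check the documentation for your version:

```yaml
# Fragment of Quay's config.yaml -- placeholder endpoint.
FEATURE_SECURITY_SCANNER: true
SECURITY_SCANNER_ENDPOINT: http://clair.example.com:6060
```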
This does not automatically mean that you should not use any additional scanner or other security management tools; of course you can do that, and we strongly encourage you to do so, but you can still use Clair as a second view on the same things. Another thing which is optional are the build triggers. So what are they, effectively?
They mean that Quay can take care of automatically building images, triggered by actions which happen in any of the git tools we support, such as GitHub, Bitbucket, GitLab, and of course also custom git servers. As long as the Dockerfile is stored in the repository, we can automatically trigger a build, and the resulting image is pushed into Quay. We also just introduced a very powerful feature for better customization of the tagging.
So this is a feature you just need to decide on: whether you want to use it or not, you can make the decision, or change the decision, at any point in time. I just wanted to call it out, because in some environments this might have an impact on what the underlying infrastructure should look like. On to the deployment patterns, the second part of this presentation.
There are a couple of options and choices you have there. For example, we already briefly touched on the question of whether I should run it on a standalone host versus Kubernetes. What are my target destinations, how many are there, where are they, and what technology is used there? Should I use geo-replication, or is repository mirroring more what I want to use? What about sizing guidance? What about the subscriptions I need? And what about HA: how do I achieve HA? Let's go through those different points.
Let me start with the deployment examples. As I mentioned, Quay runs perfectly fine on a developer laptop; it runs in a data center; it also runs stretched across multiple data centers, so this is not an issue. It runs on effectively any infrastructure, as long as there is a container runtime.
The important point here is, again: Quay and Clair and the repository mirroring workers are all stateless containers. The critical components of a Quay deployment are still the database and the storage back-end, and how you achieve HA for those is entirely up to you; probably you won't change it just for container workloads.
Probably you will continue to use all the services you have used in the past to ensure HA for those mission-critical services, which are probably not only used by Quay anyway. The question of which environments or destination targets Quay can serve content to is fairly easy to answer: Quay can serve content to any OCI-compliant host. As long as the client speaks the standards and specifications, such as the Docker Registry API or the upcoming OCI distribution spec, you are totally fine. We even still have legacy spec support.
We deprecated Docker v1 push support, but as of today we still support v1 pulls in the Quay registry. This means you can serve content to any container host, to plain vanilla Kubernetes, to OpenShift clusters, and it doesn't matter if it's one client or hundreds of thousands, up to millions. It also doesn't matter whether the clients run in the same data center, a different data center, or even a different region.
Let's quickly have a look at the question of whether you want to use geo-replication or repository mirroring. Since Quay is the only registry out there which has both features, geo-replication and repository mirroring, many customers mix up a little bit what those features have been made for. They are different and complementary features; they do not conflict with each other.
Repository mirroring is about how you get content into the registry, and it has been intentionally designed so that it only allows explicitly whitelisted content, which means you need to explicitly select the external content you want to mirror into your registry. So, starting from the primary registry, the entry point into your environment, you effectively have two or maybe even more options.
If you also include the various combinations coming out of it: one option is that the primary registry uses geo-replication to ensure that if some clients are running, for example, in North America and other clients are running in EMEA, there is one large, single, globally distributed Quay deployment. The content, the configuration, the users, everything is the same in both North America and EMEA; the only difference is that clients in EMEA pull the binary blobs from nearby storage in EMEA, while the clients in North America pull the binary blobs from the North America storage.
That's the main purpose of geo-replication: it's supposed to speed up access to the blobs for the clients, and it uses asynchronous replication. So if the replication hasn't successfully completed yet, the fallback is still that a client goes over the ocean and fetches the blob from the other side of the world. Geo-replication means you run one large registry, and this is achieved by a shared database which is used on both sides.
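In configuration terms, geo-replication is a single registry whose `config.yaml` knows about multiple storage locations; the region names, endpoints, and buckets below are placeholders:

```yaml
# Fragment of Quay's config.yaml -- placeholder values.
FEATURE_STORAGE_REPLICATION: true
DISTRIBUTED_STORAGE_CONFIG:
  usstorage:
    - S3Storage
    - host: s3.us-east-1.amazonaws.com
      s3_bucket: quay-us
      storage_path: /registry
      s3_access_key: <access-key>
      s3_secret_key: <secret-key>
  eustorage:
    - S3Storage
    - host: s3.eu-west-1.amazonaws.com
      s3_bucket: quay-eu
      storage_path: /registry
      s3_access_key: <access-key>
      s3_secret_key: <secret-key>
DISTRIBUTED_STORAGE_PREFERENCE:
  - usstorage    # which location this region's pods serve from first
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
  - usstorage
  - eustorage    # new blobs are replicated to every listed location
```

Each region's Quay pods run with a `DISTRIBUTED_STORAGE_PREFERENCE` pointing at their local storage, while all of them share one database.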
And then you have the second scenario, where you clearly want to separate. Say in EMEA no software development happens and only production workloads are running: I don't need all the content I originally sourced into my primary registry, I only need the very specific subset which is required to run in my production cluster. Then a secondary registry makes sense, and the connection between the first and the second would be done via repository mirroring.
All the clients in EMEA in this example wouldn't even have access to the primary registry; they can only connect to the nearby secondary registry, which runs in the EMEA region. So basically you have two options, and if anything, you can even combine them: you can use geo-replication and repository mirroring side by side. There are plenty of customers who are doing this, because there might be globally dispersed setups where you, for example, want to run a geo-replication setup across both North America and EMEA.
But then you have other clusters, for example in APAC, and they use a smaller subset via a smaller Quay deployment in the APAC region to serve content there. For the APAC region they use repository mirroring, while for EMEA and North America they use geo-replication. There will be a more detailed recording explaining those features in further detail, including how to configure and use them; that's why I don't need to dive too deep into the details here.
It's also worth mentioning, as I already called out, that you can easily configure the clients so that you explicitly define which of those registries a client is allowed to talk to. And again, the client can be in an entirely air-gapped or disconnected environment.
To summarize the key differences between geo-replication and repository mirroring, there is a slide I've used in the past, and again, I will run a dedicated recording on those two features with a couple of sample use cases to explain it a little bit better.
Let's move on to the sizing recommendations, and this is a really tough question; we get a lot of sizing questions and they are really, really hard to answer. First of all, again: scalability is not the issue. There isn't any known limitation where Quay reaches its scalability limits, because we are running one of the biggest registries out there, Quay.io, and it's the same codebase we ship as the on-prem product; it's exactly the same thing.
There isn't a typical sizing recommendation, because it really depends on a multitude of factors: the number of users, the number of images, the number of concurrent pulls and pushes; all those data points have a significant impact on the performance requirements. There isn't even an easy sample. One thing which is important to understand is: since it's a containerized application, it's fairly easy to scale out Quay and Clair, but this will definitely cause more load on the back-end services.
So typically the performance bottleneck is not the Quay or Clair container, and also not the repository mirroring worker; it's really the back-end services. If you want to invest in something, it's probably the storage, the database, and the connection to those services from the Quay and Clair containers. Auto-scaling and the like is something you can manually configure today; we will add it as a future capability, probably handled by the Quay Operator. For Quay itself, the minimum requirement is something we can specify: the minimum requirement for Quay is four gigabytes of memory.
We recommend six, and at least two or more virtual or physical CPUs. Clair is a little bit more relaxed. Since Clair is the scanning engine, keep in mind from a data standpoint that we fetch all the security metadata from various sources; it's not limited to Red Hat content. We cover a long list of different operating systems, and we just added Python, which means there is a lot of security metadata which is fetched and stored in the Clair database. So at a minimum it's about 200 megabytes.
It will probably be even more, and the more images you have, the more vulnerability scan reports there are, and then of course the database quickly becomes bigger. Even for the storage, it really depends: how many images do you have? How many images are you sourcing, for example from Red Hat? How many images are you creating? How many of those images have shared layers? It also depends on the way you build images, and especially on how you build your binaries, because that determines whether the layers are really shared or not.
On this slide we just try to provide some guidance on typical sizings we have seen at our customers. The minimum setup of course works: you can run only one Quay container. But typically, mid- to large-size setups run three to five Quay containers on average, and then scale out to eight or ten containers when heavy load hits the registry. Clair, as I mentioned, requires a little bit less resources, so three to six containers are perfectly fine, and then there are the mirroring pods.
There is a dedicated slide on the mirroring sizing as well, and on how you can avoid having to run more mirroring pods. One of the most important recommendations there is that you don't run all the mirroring operations at the same time. So if you mirror, say, ten repositories every single day, then please spread them across the daily schedule so that not all ten are running at the same time.
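As a small illustration of that scheduling advice, the following sketch (a hypothetical helper, not part of Quay) spreads N daily mirror syncs evenly across a 24-hour window instead of firing them all at once:

```python
def staggered_sync_times(num_repos: int, window_hours: int = 24) -> list:
    """Spread mirror-sync start times evenly across a time window.

    Returns one "HH:MM" start time per mirrored repository so that
    daily syncs do not all hit the mirroring workers at the same time.
    """
    if num_repos <= 0:
        return []
    interval_min = window_hours * 60 / num_repos  # gap between syncs, in minutes
    times = []
    for i in range(num_repos):
        total = int(i * interval_min)
        times.append("{:02d}:{:02d}".format((total // 60) % 24, total % 60))
    return times

# Ten repositories mirrored daily: one sync starts every 2 hours 24 minutes.
print(staggered_sync_times(10))
```

Each repository's mirror configuration would then get one of these start times as its daily sync time.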
Beyond that, the database, as I said, is the most critical component, so we recommend using at least four to eight cores and between 6 and 32 gigabytes of RAM. That is a huge range, that's correct, but again, this is typically the most critical bottleneck. For storage, I already mentioned it: a registry tends to keep growing and never shrinks, and anything between 1 and 20 terabytes is perfectly normal there.
Redis, by the way, only becomes really critical if you're using the Quay build automation; if you're not using the build automation, it only stores the tutorial state, and then of course it can be sized very small. Usually the Redis cache runs somewhere on the database host, just as an additional workload on top of it. On the infrastructure side, we recommend that the infrastructure nodes, at the host level, have at least 4 to 6 cores and between 12 and 16 gigabytes of memory.
On the right side you see nearly the same sizing as we use on Quay.io; you can see the number of pods and also their sizing, except for the storage. Obviously it's not as big as you would probably imagine, and the main reason why we added this is to explain: it doesn't make sense for you to run more than, say, 15 Quay containers by default, because if we don't need more on the Quay.io side, then you probably don't need more Quay pods either.
On subscriptions: it's probably two deployments if you run two distinct database back-ends, because then it's no longer the same registry. As I said, besides the config file and the certificates, everything is stored in the database, which means that if you have two different database entry points, then you have two different data sets and two different registries, with different user configurations, different settings, whatever. So those are two deployments, and that requires two subscriptions. The exception here, effectively, is geo-replication.
Geo-replication still means it's one database because, as I mentioned earlier, a shared database is used on both sides; but you have two distinct storage back-ends which are then mirrored from one to the other, and that's why, as of today, it requires two different subscriptions. If you're replicating even further (you can also replicate to a third or fourth region if you want), then of course it counts per replica: if you run a geo-replication setup with three regions, then you need three subscriptions.
The size, though, doesn't matter; it has no impact, which also means it's the same price tag for a very small deployment and for a very large-scale deployment which spans multiple data centers or regions. There are also no further subscriptions or costs associated with the operators which run on OpenShift as the destination target, such as the Container Security Operator or the Quay Bridge Operator.
They run on every OpenShift cluster, and the only sense in which they require a Quay subscription is that you can only use the Container Security Operator if you are using Quay, and Quay requires the subscription; the Container Security Operator itself, or whatever you're running on OpenShift, does not require an additional subscription. So if you use one Quay deployment to serve content to 1,000 OpenShift clusters, it's perfectly fine to install 1,000 Container Security Operators and 1,000 Quay Bridge Operators on all those clusters without any additional cost.
It also doesn't matter, again, how big the sizing of the underlying infrastructure is, whether Quay runs on one standalone host or multiple of them, or whether it's in one data center or multiple ones. The requirement is: it's one shared database and one shared storage back-end, typically with a load balancer in front to ensure that it acts as one registry. It's worth calling out here that, as of today, we do not support read replicas for the database.
Neither are active-active setups currently supported; we are working on such things for future versions of Quay. And again, the number of destination targets doesn't matter; it really doesn't have an impact on the subscription. One of the last points is HA, and this is probably a slightly more complicated topic. As I already mentioned a couple of times, the pods, or the containers (Quay, Clair, the mirroring workers, and the builders), are stateless components, effectively.
Effectively, the stateful pieces are the storage, the database and the Redis cache. As I already mentioned, the Redis cache is stateful but less critical, so it doesn't strictly require high availability; you can of course run it in an HA fashion, but you don't have to. Storage and database are the most critical pieces. Then there are a couple of other things.
Of course, if we are running containers or pods, somebody has to take care that those are highly available as well, but this is handled automatically by Kubernetes or OpenShift.
In front of those different pods you have to run a load balancer, and of course this load balancer needs to be highly available as well. In future versions of the operator we will do a better job of managing all the different workloads, leveraging what already exists, such as the health checks or the Prometheus endpoint, and so on. The infrastructure HA itself is typically achieved by the Kubernetes platform.
So if a node goes down, Kubernetes or OpenShift automatically ensures that the workload is moved over to another node, and of course the same applies if an entire infrastructure or data center goes down. So there's no difference here compared to all the other workloads you are running somewhere. I have already mentioned all the points which are shown here.
It's worth mentioning that we have a dedicated guide which talks about the Quay HA (high availability) setup, and we will probably extend and improve this guide in the near future as well, to incorporate a couple of changes and a couple of things we want to get in there.
For the storage backend, again, the recommended way is OpenShift Container Storage (OCS), using the new Multi-Cloud Object Gateway, but also plain OCS.
Out of the box, OCS offers a couple of great capabilities that help us achieve HA for the storage: by default there are three replicas, which are created automatically, and node failover is handled automatically. There are a couple of features on the OCS side which ensure that the storage backend, which again is the second most critical backend for Quay, is always up, running and accessible by Quay. The same applies for Ceph.
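The three-replica default has a direct capacity consequence worth planning for: raw storage is divided by the replica factor. A small sketch of that arithmetic (the function name here is ours, not an OCS API):

```python
def usable_capacity(raw_capacity, replica_factor=3):
    """With N-way replication (OCS defaults to three replicas),
    usable capacity is the raw capacity divided by N."""
    if replica_factor < 1:
        raise ValueError("replica factor must be >= 1")
    return raw_capacity / replica_factor

print(usable_capacity(30))     # 30 TB raw with 3 replicas -> 10.0 TB usable
print(usable_capacity(30, 2))  # 2-way replication -> 15.0 TB usable
```

In other words, when sizing the registry storage for Quay images, budget roughly three times the expected image payload in raw disk.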
There are a couple of options for plain Ceph to achieve HA, and there is also a lot of documentation and guidance out there. On the database side, again the most critical piece, it's important to know that the Red Hat-provided database images for Postgres, MariaDB and MySQL are not supported for production workloads.
There are support limitations, which are linked from this slide, making clear that this is not a recommended way to run the database on OpenShift in an HA fashion. That's why the recommendation is typically to use what our customers already have: an HA database service for Postgres somewhere, operated by a professional DBA team, well maintained, and also used by many other applications and business-critical services.
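Whichever external database service is used, Quay's clients of that database should tolerate the brief connection drops an HA failover causes. A generic retry sketch, not Quay-specific and with names of our own choosing, might look like this:

```python
import time

def call_with_retry(fn, attempts=3, backoff=0.1):
    """Retry a flaky call, e.g. a database connection that drops briefly
    during an HA failover, before giving up."""
    last = None
    for i in range(attempts):
        try:
            return fn()
        except OSError as exc:
            last = exc
            time.sleep(backoff * (2 ** i))  # exponential backoff between tries
    raise last

# Simulated connection that fails once (as during a short failover),
# then succeeds:
state = {"calls": 0}
def connect():
    state["calls"] += 1
    if state["calls"] < 2:
        raise OSError("connection refused")
    return "connected"

print(call_with_retry(connect, backoff=0))  # connected
```

A managed or operator-run Postgres typically fails over in seconds, so a short retry window like this is usually enough to ride it out.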
If the customer is running in a public cloud, again, you can use a database service provided by the public cloud provider, and then the cloud provider takes care of its availability; or you can alternatively use a partner offering such as the Crunchy operator. For the Quay components themselves, all the pods and images, this is more or less automated with Kubernetes and OpenShift and the auto-healing capabilities in there.
We recommend running at least three pods for each Quay deployment in HA setups, and the Quay operator then monitors the health of those pods via their health checks and respins them if needed. Multi-site setups should effectively run pods on both sites, which means you have two distinct Quay clusters; but since you are still using the same database, it's one Quay deployment from a subscription perspective.
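The health checks mentioned above rest on Quay's instance health endpoint, which reports a per-service status map. A minimal sketch of evaluating such a payload follows; the JSON shape here is abridged and illustrative, so check the Quay API docs for the exact document your version returns:

```python
import json

def is_instance_healthy(payload):
    """True when every service in a Quay-style health payload reports healthy.

    Assumes a payload of the form {"data": {"services": {name: bool, ...}}};
    an empty or malformed payload counts as unhealthy.
    """
    services = payload.get("data", {}).get("services", {})
    return bool(services) and all(services.values())

# Abridged examples of the kind of document the endpoint returns:
healthy = json.loads(
    '{"data": {"services": {"database": true, "redis": true}}, "status_code": 200}'
)
degraded = json.loads(
    '{"data": {"services": {"database": false, "redis": true}}, "status_code": 503}'
)

print(is_instance_healthy(healthy))   # True
print(is_instance_healthy(degraded))  # False
```

This is the kind of signal both the operator (to respin a pod) and the load balancer (to stop routing to it) can act on.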
Coming back to the database operator: I briefly mentioned the Crunchy operator. This is one of the ecosystem partners we have been working with closely, and effectively, on this slide, Crunchy explains why you want to run an operator for stateful applications such as the database.
And there's another reason why we recommend partner offerings for the database operator: not only the HA capabilities, but also all the additional features those market leaders offer in this area, such as database backup, disaster recovery, failover and monitoring, all those great features which are not included in the database images we ship.
And with that, I'm done for this session recording. I hope you enjoyed it, I hope you learned a lot, and I hope that I answered all the questions you always wanted to ask. Many thanks for watching, take care.