Case Study: Managing Container Images with Quay at Cisco
Michael White: Solution Architect, Cisco
Nara Tadepalli: Senior IT Engineer, Cisco
Red Hat OpenShift Commons Gathering at Red Hat Summit OpenShift & Quay at Cisco
A
I'm going to bring the gentlemen from Cisco up and we're going to get started. I'm really pleased to have them come and present; they've presented before at other OpenShift gatherings. But this time it's a little special, because they're my wonderful case study that brings together some of the CoreOS wonderfulness and the OpenShift wonderfulness, and they're going to tell you all about it. So thank you again for coming. Thanks.
B
Let's see if I can make these go forward. So, since we're speed-dating, I'll make the introductions quick. My name is Mike White; I'm an architect at Cisco. We work in the IT group called Enterprise Platform Services, where we deliver platforms and middleware technologies for all of the developers at Cisco. With me today is Nara Tadepalli; he's a senior engineer, tech lead, and subject matter expert on our enterprise container hub, which we deploy using the CoreOS, now Red Hat, Quay registry service.
B
Cisco has been on a multi-year journey, and we've called it different things over the years: our global data center strategy, or our cloud native transformation. But what we're really trying to do is deliver services to our developers in a programmable way. So back in 2015, when Docker and Kubernetes were getting popular and OpenShift rolled out OpenShift 3, which included those technologies, we started evaluating technologies and seeing if we could do container as a service, going beyond just platform as a service: could we run whatever technologies?
B
We actually deployed Quay prior to OpenShift, in 2016. We saw that developers were really going to pick up this container thing at that time, and we felt that there was a need to centralize that function within Cisco and have a good place to maintain all of the images that were being created. Last year at this forum we shared with you the fact that we were in production with OpenShift 3.
B
We had a lot of large customers deploying in a cloud native fashion on this platform across multiple data centers, and as we go into the future we're building out OpenShift on top of OpenStack and looking forward to a lot of the Tectonic administrative features that will be brought into the product going forward. So with that I'll turn it over to Nara, and he'll share with you some of the considerations we had when looking at a container registry. Thanks.
C
Hi everyone. When we decided that we would deploy our enterprise container registry at Cisco, we came up with different requirements, some of which are highlighted here. The main requirements were: it should be on premise, for security reasons. We don't want our images going out to a public cloud, so it should be more secure and deployed in our own data centers. It should be highly available; we wanted to deploy this enterprise container registry in different data centers.
C
So
as
of
now
at
Cisco,
we
deployed
at
3
data
centers
and
also
the
images
getting
pushed
to
one
data
center
should
be
automatic
replicated
to
the
other
two
data
centers.
We
also,
even
though
we
give
our
developers
full
access
whatever
they
can
do.
We
wanted
to
restrict
where
the
images
they
can
pull
and
push
to.
So
as
of
now
at
Cisco,
we
allow
the
developers
only
to
pulled
and
push
the
images
from
our
enterprise
container
registry.
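One common way to enforce that kind of "enterprise registry only" policy on container hosts is via the container runtime's registry configuration; a minimal sketch of a legacy-format `registries.conf`, purely illustrative (the registry hostname below is made up, and this may not be the mechanism Cisco actually used):

```toml
# /etc/containers/registries.conf -- hypothetical example.
# Only the internal enterprise registry is searched...
[registries.search]
registries = ['containers.example.cisco.com']

# ...and pulls from the public registries are blocked outright.
[registries.block]
registries = ['docker.io', 'quay.io']
```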
C
It
should
also
have
a
good
integration
with
LDAP
it
should
have
so
when
our
image
is
pushed
to
enterprise
container
regice.
Three
Cisco
is
very
big
with
security,
so
an
image
should
be
automatically
scanned
for
vulnerabilities
and
report
back
any
critical
model
abilities.
We
have
at
that
image
and
also
it
should
have
good
UI
visibility,
and
it
should
have
good
vendor
support.
C
So
keeping
these
requirements
in
mind,
we
evaluated
different
products,
some
of
the
products
we
evaluated
word
docker
trusted
registry,
OpenShift,
internal
registry,
artifactory
and
quake
docker
trusted
registry
was
just
evolving
at
that
time
and
didn't
have
all
the
features
which
we
wanted
from
our
previous
requirements.
So
we
didn't
go
with
dr.
terrazas
registry
open
chip.
Internal
registry
was
just
for
internal
Red
Hat,
and
it
was
not
useful
for
us
because
we
have
other
products
working
for
this
artifactory.
C
As of now at Cisco we use Artifactory for our CI/CD work, but it didn't have the UI features we were looking for, and it had no vulnerability scanning. Keeping all these requirements in mind, only Quay fit our requirements, so we went ahead and deployed Quay in our Cisco data centers. This is our current deployment of Quay: we deployed Quay in three data centers, with data center one being the primary data center and data centers two and three the secondary data centers.
C
Each data center has a load balancer in front of it, and each data center can be individually accessed with its load balancer URL. All three load balancers connect to a global site load balancer, with data center one being the primary. We deployed many components in it: we have Quay, we have build servers, we have Clair for security scanning, we are using Postgres for the database, and we are using Ceph for storage. As of now the database is only active-passive; the primary database is in data center one.
C
After we deployed Quay we had a lot of success stories, and these are a few of them I wanted to share with you, from the administrator side. We had good support on architectural guidance: when we wanted to deploy Quay across multiple data centers, we got good support from the Quay team. And whenever we had issues with bugs or anything, we could directly send an email to Quay, and they were very quick to fix the bugs.
B
These first three bullets are really important to us. We've been really, really pleased with the relationship with Quay and CoreOS so far, and I'm sure it'll continue with Red Hat. We've got a good partnership here, with them being able to give us guidance on how to set this up and how to scale it.
C
I think it's just great: they give so many updates and regular, frequent upgrades, so we upgrade Quay every two to three weeks at Cisco. And the vulnerability scanning is very good. Whenever we push an image, it tells us where the image is vulnerable, at which layer of the image, and also whether there is a fix available for that vulnerability. And we are using the APIs; Quay comes with a lot of APIs.
C
In our automation we have CI/CD pipeline work going on, and in that we use these APIs to automate most of our work. Okay, so this is from the developer perspective. As of now at Cisco, we have more than 5,000 users on our enterprise container hub, 420 organizations have been created, and we have close to 10,000 repositories in total, with each repository having multiple tags. As of now the volume at Cisco is around 12 terabytes, so we have around 12 terabytes of images deployed on the enterprise container hub.
B
The success there comes from the ease of use. I didn't think it was going to be all that useful; I'm always on a command line doing docker build, docker tag, docker push. But I think this just gives developers a really good way to start simply and get going, and so that's been a pleasant surprise.
C
Also, we got good feedback on the UI. Even if you're not familiar with any of the content of the registry, you can just log into the Quay UI and it's very useful: it tells you where the tags are and how to do things, and all of that is very easily done on the UI side. We also use a lot of the notifications: whenever you have pushed an image or something like that, it can notify you.
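Quay can deliver those notifications as webhook POSTs, among other methods. A minimal receiver-side sketch, assuming a payload with `repository` and `updated_tags` fields in the shape of Quay's repository-push notification (the exact field names are an assumption here; check the schema of your Quay version):

```python
import json

def summarize_push_event(payload: dict) -> str:
    """Turn a Quay-style repo-push webhook payload into a log line.

    Assumes the payload carries 'repository' and 'updated_tags' keys,
    as in Quay's repository-push notification.
    """
    repo = payload.get("repository", "<unknown repo>")
    tags = payload.get("updated_tags", [])
    return f"push to {repo}: tags {', '.join(tags) or '<none>'}"

# Example payload shaped like a Quay repo-push notification body.
event = json.loads('{"repository": "myorg/myapp", "updated_tags": ["v1.2"]}')
print(summarize_push_event(event))  # push to myorg/myapp: tags v1.2
```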
B
You've got to have a place to store them, and Quay is our spot for doing that. So let me dive down into that in a little more detail. One of the things we're trying to accomplish with this programmable cloud native model is to speed delivery of applications and functions to production. In the old days we had a lot of bureaucracy and process and gates in the way of developers deploying their code, and we're trying to stay out of that production flow.
B
You
know
from
from
dev
through
stage
and
into
production,
but
we're
trying
to
maintain
our
security
and
our
standards
by
giving
visibility
back
with
frequent
reporting
and
auditing,
we
can.
We
can
catch
vulnerabilities
and
get
them
remediated
much
more
more
quickly,
so
pretty
standard
flow
here
developers
are
checking
their
source
code
in
to
get
using
Jenkins
to
to
build
the
container
image
itself
and
compile
the
code,
and,
at
that
point
in
time,
we're
running
different
application
scanning
tools
to
determine.
You
know
whether
the
code
is
of
good
quality
and
has
all
its
test
cases
etc.
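The docker-CLI side of that build stage is the standard build/tag/push trio; a minimal sketch of what a Jenkins job might run (the registry hostname and image names below are made up for illustration):

```shell
#!/bin/sh
set -eu

# Hypothetical names, for illustration only.
REGISTRY=containers.example.cisco.com
IMAGE=myorg/myapp
TAG=1.0.0

docker build -t "$IMAGE:$TAG" .                    # build from the checked-in Dockerfile
docker tag "$IMAGE:$TAG" "$REGISTRY/$IMAGE:$TAG"   # retag for the enterprise registry
docker push "$REGISTRY/$IMAGE:$TAG"                # push; Clair scans the image on push
```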
B
Now,
as
Nora
mentioned,
our
enterprise
container
registry
is
the
only
pipeline
into
our
open
shipped
platform.
You
can't
pull
direct
from
dock.
You
can't
deploy
from
your
your
local
laptop
you've,
got
to
pull
from
the
enterprise
container
registry
and
run
it
in
OpenShift.
Now
we
don't
have
any
blocks
between
there,
so
you
could
technically
go
to
docker
hub,
pull
down
an
image
and
push
it
right
in
here
and
push
it
into
production,
which
is
a
little
bit
scary,
but
we
can
audit
frequently
and
report
back
and
and
find
out.
B
What's
going
on
in
those
images
and
get
the
fixes
in
about
six
months
ago,
I
don't
know
if,
if
you
guys
remember,
there
was
a
stressful
nur
ability
and
our
InfoSec
department
was
really
concerned
about
it
and
they
wanted
to
know.
Do
we
have
that
vulnerability
running
in
any
production
systems
and
for
us
an
open
shift?
It
was
relatively
easy
to
answer
that
question.
We
went
into
open
shift.
God
all
pods
got
the
SHA
ID
of
all
the
images
that
were
running
and
through
those
api's
that
quai
provides.
B
We
were
able
to
interrogate
the
vulnerability
database
and
find
out
if
that
vulnerability
existed
in
any
of
our
containers
and
then
take
it
back
to
the
developers
to
remediate.
Luckily,
we
didn't
find
it,
but
it
was
cool
that
we
could
pull
the
running
images
out
of
the
system
with
kubernetes
api
x'
and
then
use
quays
api's
to
check
and
see.
If
that
particular
vulnerability
existed.
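That audit boils down to two steps: collect the digests of every running image (e.g. from `oc get pods -o json`, reading each container's image ID), then ask Quay's security-scan API about each one. A minimal sketch of the cross-referencing logic, with the Quay call injected as a helper so the logic stands alone (the helper and the endpoint it stands in for are illustrative, not an exact API reference):

```python
from typing import Callable

def find_affected_images(
    running_digests: set[str],
    fetch_quay_vulns: Callable[[str], list[str]],
    cve_id: str,
) -> list[str]:
    """Return the running image digests whose Quay scan lists cve_id.

    fetch_quay_vulns(digest) stands in for a call against Quay's
    per-manifest security-scan API; injecting it keeps this testable.
    """
    return sorted(
        digest
        for digest in running_digests
        if cve_id in fetch_quay_vulns(digest)
    )

# Toy data: two running images, one of which the scanner has flagged.
scans = {
    "sha256:aaa": ["CVE-2017-5638"],  # the well-known Struts RCE, as an example
    "sha256:bbb": [],
}
hits = find_affected_images(set(scans), scans.__getitem__, "CVE-2017-5638")
print(hits)  # ['sha256:aaa']
```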
C
So,
as
I
mentioned
earlier,
as
a
frog
weighs
running
only
on
individual
VM
hosts,
we
wanted
to
deploy
this
way
on
open
chef.
There
are
many
advantages
of
deploying
quai
and
open
chef.
As
of
now
we
are
maintaining
if
we
want
to,
depending
on
the
load.
If
we
want
to
increase
our
way
running,
containers
we
have
to
individually
to
extend
the
VMS
deployed,
offer
on
it
and
run
way
on
top
of
them.
C
If
we
are,
if
you
deployed
way
on
open
ship,
we
can
just
scale
up
and
scale
down
depending
on
our
needs,
so
our
future
goal
is
to
run
Quay
on
open
shift
and
also
we
need.
We
wanted
more
tighter
integration
with
open
shift.
As
of
now,
we
don't
know
what
tags
are
running
in
open
ship
because
of
open
chef
is
not
only
the
platform
which
is
used
as
of
Nomad
Cisco.
C
We
use
different
platforms
to
run
images,
so
we
wanted
to
know
if
we
have
so
we
have
so
many
tags
right
in
H
in
our
enterprise
container
hub.
We
wanted
to
know
if
these
tags
are
part
of
OpenShift
or
not.
This
will
give
us
more
idea
on
audit
stuff
like
if
you
have
any
vulnerability
running
in
OpenShift,
it
can
show
up
on
Quay,
so
we
wanted
to
have
more
tighter
integration
with
OpenShift
and
also
we
need
better
admin.
C
On admin visibility features: as of now, if I am a super admin in Quay, I don't have access to all of the repositories. I know it's a security thing, but if dev teams come and say that this tag is not visible, or this tag is not working properly for them, we have to manually go and take ownership of that repository and do all of that work.
C
We
don't
want
to
do
that
for
thousands
and
thousands
of
repository,
so
I
think
we
need
more,
better
visibility,
better
admin,
visibility,
features
for
this
one
and
also
as
of
now
as
I
told
you
that
we
deployed
way
only
in
three
data
centers.
So
we
are
getting
lots
of
complaints
that
people
pulled
in
from
remote
locations
like
in
Europe
or
somewhere
they're,
not
able
to
pull
and
push
they
may
just
that
fast.
So
we
want
to
deploy
way
in
my
type
more
data
centers.
Now
that's
the
long-term
goal
for
us,
nor.
C
Yeah, so we started moving it over component by component. As of now we are running Clair on OpenShift; we're running about 80 pods of Clair on OpenShift, because we have pushed about 10,000 images now, so we wanted to increase Clair to 80 pods. So as of now we are running 80 instances of Clair on OpenShift, yeah.
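Scaling a stateless component like Clair once it runs on OpenShift is a one-line operation; a sketch assuming Clair runs under a deployment named `clair` with an `app=clair` label (both names are illustrative; on OpenShift 3 it might be a DeploymentConfig, i.e. `dc/clair`):

```shell
# Scale the hypothetically named clair deployment to 80 replicas.
oc scale deployment/clair --replicas=80

# Watch the new pods come up.
oc get pods -l app=clair
```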
B
That
was
a
real,
easy
scale-up
operation,
as
you
might
expect
running
on
top
of
OpenShift
I.
Think
I
was
the
one
that
complained
when
my
images
failed
to
get
scanned
and
and
the
guys
just
went
and
scaled
it
up.
So
I
think
what
there's
a
lot
of
potential
there
as
we
go
forward
to
run
more
and
more
of
the
components
of
the
container
registry
on
top
of
open
shipped
as
well
so
I
think
that
might
be
the
end
of
our
slides.
We've
got
just
a
minute
or
two
left.
If
there
are
any
any
questions.
F
B
You know, I'm not a vulnerability expert. I'm hoping that we can trust Quay; I know that they are plugged into all of the different CVE databases, and any time that our InfoSec has approached us about a particular vulnerability, it's already been updated in the registry. The other thing we didn't mention is that Quay doesn't continuously scan, but it continually updates.
B
So
even
after
you've
deployed
and
past
the
first
vulnerability
scan
if
a
new
vulnerability
is
brought
into
the
database,
it'll
update
that
flag
it
and
send
that
notification
that
Nara
was
talking
about
so
well
I
can't
attest
and
certify
to
the
the
completeness
of
the
vulnerability
database.
We've
been
very
satisfied
that
we
get
the
info
quickly.
E
What they start to realize is, when you're running a fully immutable environment, managing all your deployed apps through these images, doing scanning, and having all this automation, it might actually be more secure than what you're doing today, and it's a real eye-opener for the security teams. We've seen it over and over again on deals. So again, if you need help having those conversations, we're happy to do that, and we have great stories like this one to emphasize it. With that said, if there are no more questions, we can wrap up.