From YouTube: CNCF SIG Runtime 2020-05-07
A: Alright, cool. So I think I have one item for this SIG Runtime. I reached out to a couple of folks, Samuel Karp from AWS, about getting more participation, and they said that with Firecracker and maybe Bottlerocket there are a couple of projects they're actually working on, so hopefully we'll get something in the future as far as participation. We also have another project, from IBM. I reached out to them; it's a research project that looks at a kernel, not quite like a unikernel, but it allows you to run Linux workloads. So hopefully we're going to get a presentation from them and some participation. And those are the two items that I have.
E: I'm Joey Schorr, co-founder of Quay and now the lead engineer on the Project Quay product. I'm just going to give a brief overview of where Project Quay is today and kind of where it came from, and, as Bill mentioned, feel free to ask questions. We didn't plan to PowerPoint you guys all the livelong day, so we're here to answer whatever questions you have. Project Quay came out of Quay.io: we were the first private container registry on the Internet.
We actually started Quay before Docker Hub was a working project or product, or rather right after the Docker public index launched. We launched at a Docker New York City meetup in October 2013; at the time it was built by my startup, DevTable LLC. That company was then acquired by CoreOS in August of 2014, right after which we launched Quay Enterprise, our on-premise version: same codebase, just a different version. Then Red Hat acquired CoreOS in January of 2018, and we open sourced Quay as Project Quay after that. Next slide, please.
So, as mentioned, Quay was the first private container registry and, as such, it has a somewhat unique history. It was independently developed, so it wasn't based off of Docker Distribution; even to this day it is an independently developed image registry with no external vendor dependencies, and it is fully open source. We're essentially, for lack of a better term, a cleanroom implementation of the registry protocol.
We've been implementing it on our own since that very first version. Even as the community has been making use of the Go-based Docker Distribution to form the core of most other registries, we've been on our own. Also, as mentioned, we use the same codebase for both our on-prem and cloud-hosted versions. It is literally the same container image; we just configure it differently: we give it different secrets and a different storage configuration, and we feature-flag a few things on or off.
Quay.io itself is running a build before we actually push it to our on-prem customers, and that means we discover problems fairly quickly: if it works for a million repositories and a hundred thousand concurrent customers, then we're pretty confident it'll work for a hundred thousand repositories and a thousand concurrent customers. We're also the only registry product or project that has full push and pull support with Docker clients all the way back to version 0.7. So we support the initial Docker v1 protocol, Docker v2 schema 1, and Docker v2 schema 2.
As of two weeks ago, we also support full OCI with the experimental flag turned on. And all of this is bidirectional and concurrent: using a version of Docker from 2013 you can push an image and then pull it using a modern OCI-based client, or vice versa, with a few subtle scenarios that don't work, such as you obviously can't pull a non-amd64 Linux image from Docker v1. But short of that, if you're just pushing the kind of standard container image that's been in use for the last seven years, it'll work with any version of Docker or any OCI-compliant tooling. This is part of our commitment, both as a product for enterprise customers and as a project: we have a very strong belief that customers should not be forced into hard migrations of their toolchains unless there's no other way around it.
In addition, we have early access for OCI MIME types, particularly as part of the OCI artifact standard, which I'll reference in a moment. To that end, we've registered a feature flag, an experimental feature flag, for Helm v3 support: if you have the OCI feature flag and the Helm feature flag turned on, you can push and pull Helm v3 charts into repositories.
It works similarly to how you push images, and this is again part of our commitment towards growing standards. And finally, we are actually helping drive the OCI standards for artifacts: I myself am on the working group leadership for the OCI artifact standard, whose initial document got committed, I believe, either earlier this week or late last week (I forget exactly when we LGTM'd it, but that's immaterial), and we are actively involved in helping it evolve.
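To make the two flags mentioned above concrete, here is a minimal Python sketch of what the corresponding entries in Quay's config.yaml might look like; the exact flag names are an assumption based on this talk, not an authoritative reference:

```python
# Illustrative only: writes the two experimental flags discussed above into a
# Quay-style config.yaml. The flag names below are assumptions; check the
# Project Quay configuration docs for the authoritative names.
import yaml

config = {
    "FEATURE_GENERAL_OCI_SUPPORT": True,            # experimental full-OCI push/pull
    "FEATURE_EXPERIMENTAL_HELM_OCI_SUPPORT": True,  # Helm v3 charts as OCI artifacts
}

with open("config.yaml", "a") as f:
    yaml.safe_dump(config, f, default_flow_style=False)
```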
So, Project Quay at a glance. This is just a high-level overview, and it obviously does not cover the full breadth or depth of the project. We are an OCI-compatible registry, as I mentioned, and as also mentioned we are the only registry that is not just OCI compatible but fully backwards and forwards compatible with essentially every container image API and distribution format that has been released.
We have Clair image scanning. Clair is another project that is part of the Project Quay proposal to be accepted; it is our open source security scanner. We actually have two versions of Clair currently: the legacy v2, which is what's used in production today, and the up-and-coming v4, which is available today for testing and will be tech preview in the next product release of Quay.
But you can use it today with Project Quay if you just add some configuration, and we are obviously continuing to evolve and build it as we move towards its first formal release. We have image builder support, and not just image builder support but full integration with trigger setups: via a nice UI wizard you can create a new build trigger on GitHub or GitHub Enterprise, GitLab or GitLab Enterprise, Bitbucket, or even a custom Git host.
If you don't want to make use of the host-specific APIs, you can use custom Git, and every time a push occurs in that Git repo a build will be triggered on the Quay side. Those builds are sandboxed via virtual machines run under Kubernetes if you're using Quay.io; if you're running on premise, or running Project Quay today, you can use the Kubernetes-based driver, or a legacy one that doesn't have the same security guarantees. So we have flexibility there.
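As a rough illustration of the push-triggers-a-build flow described here, the sketch below shows a webhook receiver queuing a build job. This is not Quay's actual code; the endpoint path and payload field names are hypothetical:

```python
# Minimal sketch of the trigger flow: a git-host push webhook arrives and a
# build is queued. Endpoint path and payload fields are hypothetical.
from flask import Flask, request, jsonify
import queue

app = Flask(__name__)
build_queue = queue.Queue()  # stand-in for Quay's build manager

@app.route("/webhooks/push", methods=["POST"])
def on_push():
    event = request.get_json(force=True)
    build_queue.put({
        "repo": event["repository"],  # which git repo was pushed
        "ref": event["ref"],          # branch or tag that changed
        "commit": event["after"],     # commit SHA to build
    })
    return jsonify({"queued": True}), 202

if __name__ == "__main__":
    app.run(port=8080)
```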
We can also do cache lookups: if we see that a build could benefit from a cache from six months or even a couple of years ago, we can pull that tag as opposed to the more modern tag. That is again driven by the fact that we are the registry in that scenario and can do those integrations, and we have a bunch of other features built around that integration. We also have Kubernetes operators that we're building right now, for deployment as well as day-2 operations.
These operators are also part of our proposed project to the CNCF. In particular, I want to call out the first one, what we call the container security operator. This operator is already available today: you can install it in an OpenShift cluster, and you can also install it in a vanilla Kubernetes cluster, but in OpenShift the console gives you some additional benefits. It talks to a .well-known endpoint, which could be Quay, as it is today, or it could be
another compatible registry. The operator will automatically label pods with their security vulnerabilities. This is very good for actionable intelligence about what's going on in your cluster in terms of the security of those pods, without adding the overhead of an in-cluster scanner or requiring that your cluster have network access to anything but the registry. So it solves two very important problems there.
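A hedged sketch of consuming the kind of per-pod vulnerability labels described here, using the Kubernetes Python client; the label key is hypothetical and the real operator's label names may differ:

```python
# List pods and print a hypothetical vulnerability-severity label applied by
# the container security operator. The label key is an assumption.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    labels = pod.metadata.labels or {}
    severity = labels.get("secscan/highest-severity")  # hypothetical key
    if severity:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {severity}")
```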
We also, obviously, have the Quay operator itself, for installing and managing Quay, and that continues to evolve into a full-featured day-2 operator.
Today it focuses on deployments, but we're already adding day-2 operations to it, with the eventual goal, of course, of making installation of Quay as simple as creating a CR (a QuayEcosystem) in your cluster and getting the full end-to-end experience of Quay. We support multiple storage providers, much like other registries: the standard S3, Azure, GCP, on-prem.
E
We
also
support
OpenStack
Swift
and
we
have
a
built
in
feature
for
geo
replication,
which
is
built
on
top
of
the
storage
system
which
allows
for
registry
running
registry
instances
running
in
spared
geographic
locations
to
copy
the
registry,
binary
data
from
location
to
location
in
the
background,
but
even
across
disparate
storage
providers.
So
you
can
use
Azure
in
one
location
and
GCP
and
another,
but
and
and
you
can
configure
D
replication
and
as
long
as
you've
configured
it
correctly.
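Conceptually, the background copy works something like the sketch below: after a blob lands in the primary location, a worker copies it to every other configured location, even across providers. The driver interface here is a stand-in, not Quay's actual storage API:

```python
# Conceptual geo-replication worker. Storage is a stand-in driver interface.
import queue

class Storage:                        # minimal assumed driver interface
    def get(self, digest): ...
    def put(self, digest, data): ...
    def exists(self, digest): ...

def replication_worker(primary: Storage, replicas: list[Storage], jobs: queue.Queue):
    while True:
        digest = jobs.get()           # blob digest queued at push time
        data = primary.get(digest)
        for replica in replicas:      # e.g. Azure in one region, GCS in another
            if not replica.exists(digest):
                replica.put(digest, data)
        jobs.task_done()
```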
We have very, very fine-grained metrics and audit logging. Our audit logging system logs every action taken in the entire registry product, at a granularity of repo, namespace, and registry, available at each of those levels, and that includes pushes, pulls, tag operations, anything, you name it. This is extremely important for auditability purposes; it is routinely our number one requested auditability feature. And we are launching soon (it's already integrated today, but we'll be launching it into the on-prem product) support for not just using the database
for our audit logging, but additional logging providers, such as Kinesis or Elastic, and this allows for growth of scale: when you're processing not a couple tens of millions of logs a month but a couple hundred billion operations a month, your logging infrastructure can handle it. And it ties into why we have enterprise-grade RBAC and auth support.
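The pluggable-sink idea described here might be sketched like this; class and method names are illustrative, not Quay's actual internals:

```python
# Sketch of a pluggable audit-log sink: the same entry can go to the
# database, Kinesis, or Elasticsearch depending on configuration.
from abc import ABC, abstractmethod

class LogSink(ABC):
    @abstractmethod
    def write(self, entry: dict) -> None: ...

class DatabaseSink(LogSink):
    def write(self, entry):
        print("INSERT INTO logentry ...", entry)  # stand-in for a DB write

class KinesisSink(LogSink):
    def write(self, entry):
        print("kinesis.put_record(...)", entry)   # stand-in for a boto3 call

def audit(sink: LogSink, kind: str, repo: str, actor: str):
    sink.write({"kind": kind, "repository": repo, "performer": actor})

audit(KinesisSink(), "pull_repo", "projectquay/quay", "robot+ci")
```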
This was kind of the raison d'etre of Quay originally: as mentioned, we were the first private registry product available, and so auth and RBAC were key foundational focuses of our product. We were the first registry that offered robot accounts. We have very detailed integration for external applications to operate, and that includes operation at the command line: you can use OAuth tokens or robot tokens at the command line. We have integration with various RBAC providers: OIDC, LDAP, Keystone, including another one
E
We
call
custom
JWT,
which
means
you
can
write
your
own
offense
and
quai
will
just
speak
to
it.
On
the
LDAP
side,
we
have
team
sync,
so
you
can
sit
and
as
well
as
Keystone.
So
if
you
want
to
back
your
team's
ink
way
with
LDAP
groups
or
Keystone,
don't
forget
if
they're
called
groups
or
teams
but
same
difference,
you
can
do
so,
and
the
system
will
automatically
synchronize
those
things
I'm
and
again.
All
of
this
are
back
and
all
support
is
tied
together
with
our
existing
audit
logging.
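Loosely sketched, the "custom JWT" idea mentioned above looks like the following: your service validates credentials however you like and returns a signed JWT for Quay to trust. The claim names and flow details here are assumptions for illustration only:

```python
# Hypothetical external auth service for the "custom JWT" integration.
import time
import jwt  # PyJWT

PRIVATE_KEY = open("auth_rsa.pem").read()

def my_directory_check(username: str, password: str) -> bool:
    return password == "hunter2"  # placeholder; replace with a real lookup

def issue_token(username: str, password: str) -> str:
    if not my_directory_check(username, password):
        raise PermissionError("invalid credentials")
    claims = {
        "sub": username,
        "iat": int(time.time()),
        "exp": int(time.time()) + 300,  # short-lived token
    }
    return jwt.encode(claims, PRIVATE_KEY, algorithm="RS256")
```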
Image Time Machine is a feature that is unique to Quay. When a tag is pushed into Quay, rather than overwriting the tag, we actually keep a history of that tag for up to a configured period of time; that is administrator-configurable, and the standard is two weeks. That allows users, if perhaps they overrode a tag incorrectly, or they needed an old version for compliance reasons, or myriad other reasons, to look backwards in time, roll back their tag if necessary, and at least know how that tag changed.
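A sketch of driving that rollback through Quay's HTTP API; the endpoints and fields below are from memory and should be verified against the Quay API documentation before use:

```python
# Roll a tag back to its previous manifest using the time-machine history.
import requests

API = "https://quay.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <oauth-token>"}

def rollback(repo: str, tag: str) -> None:
    # Tag history, newest first, including expired (time-machine) entries.
    history = requests.get(
        f"{API}/repository/{repo}/tag/",
        params={"specificTag": tag, "onlyActiveTags": False},
        headers=HEADERS,
    ).json()["tags"]
    previous = history[1]  # what the tag pointed at before the overwrite
    requests.put(
        f"{API}/repository/{repo}/tag/{tag}",
        json={"manifest_digest": previous["manifest_digest"]},
        headers=HEADERS,
    )
```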
This feature in particular has saved my personal bacon at least a few times, where I pushed a tag, it turned out that tag was broken and needed to be rolled back, and I was able to do that without having to keep extra copies around. I saw a few people say they came in late. Do we need to go backwards to address anything they missed, or should I keep going forward?
I think, yeah, I was going to get to that on the next slide. I'll go through these last few items pretty quickly. Flexible deployment models: while we encourage our users to deploy Project Quay via Kubernetes, and obviously the work on the Quay operator is towards that goal, it is not required, and so you can deploy Quay with a docker run plus a database and storage.
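A minimal sketch of that non-Kubernetes path, using the Docker SDK for Python: one Quay container plus Postgres and Redis. Image tags and mount paths here are illustrative:

```python
# Illustrative single-host Quay deployment via the Docker SDK for Python.
import docker

client = docker.from_env()

client.containers.run("docker.io/library/postgres:12", name="quay-db",
                      environment={"POSTGRES_PASSWORD": "quay"}, detach=True)
client.containers.run("docker.io/library/redis:5", name="quay-redis", detach=True)
client.containers.run(
    "quay.io/projectquay/quay:latest",
    name="quay",
    ports={"8080/tcp": 8080},
    volumes={
        "/opt/quay/config": {"bind": "/conf/stack", "mode": "ro"},    # config.yaml
        "/opt/quay/storage": {"bind": "/datastorage", "mode": "rw"},  # local blobs
    },
    detach=True,
)
```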
Another group that I am a part of is actively working towards the next implementation of signatures and scanning, and we will adopt that work as it reaches fruition. And finally, I mentioned geo-replication, but we also have support for mirroring. Mirroring and geo-replication are kind of two sides of the same coin: geo-replication is for when you want a globally distributed single logical registry, while mirroring allows you to have disparate and distinct registry instances, with one instance copying from another.
So this is the Quay architecture at a glance. I'm not going to go through it too deeply, because I'm sure there will be a lot of questions on it, but at a high level Project Quay consists of the Quay container itself, which you can see in the middle. The Quay container runs all of the pieces of Quay: our build manager, our workers, the registry, the UI. In parallel to that we have the Clair container, which is an independent container, and Quay and Clair talk to one another.
You can run as many mirroring workers as you like; if you have a lot of mirroring operations, you can scale those independently. The Quay builders themselves run as separate objects: if you're using the Kubernetes-based build system, they actually run those jobs on your Kube cluster. These operations can be run via the Quay operator, which again is optional; you don't have to use it. And then Quay speaks to its backends: storage for storage,
E
The
database
for
metadata
the
red
is
caching
for
caching
and
then-
and
these
are
all
generally
configurable,
and
then
we
have
other
things
that
talk
to
Quay
via
of
load,
balancer,
joining
the
UI
customers,
content,
ingress,
the
Red,
Hat
container,
catalog
and
other
things
like
the
operator
hub
today.
The
operator
hub
actually
runs
on
top
of
Keio
via
its
api's,
and
all
operators
served
in
the
OLM
project
actually
are
coming
from
a
Quay
instance
Quay
I/o.
Today,
ok
next
slide,
please
so
one
thing
I
wanted
to
mention
to
you
before
we
start
talking
really.
Both Quay and Clair are stateless themselves; the containers don't store any data, with the exception of local cache. As an example, at Quay.io we run approximately 30 Quay containers on an OpenShift cluster, sitting behind a load balancer, and they are actually auto-scaled based on traffic: as traffic goes up, we run that number up, and as traffic goes down, we take that number down.
Today we take the cluster down, do the upgrade, and put the cluster back up. We're endeavoring, we hope, to get to the point where the cluster never needs to go down and we can always do in-place upgrades like we do for Quay.io, even if that means putting the Quay cluster into read-only, which we've had to do once in history so far. The DBA operator will be responsible for that.
So we have the pieces in place today, but they're not quite there yet in terms of allowing for seamless upgrades. It is still faster today, though: for example, if you're making use of the Quay operator and you make a configuration change to Quay, the Quay operator, along with the configuration tool, will redeploy Quay by doing a Kubernetes deployment update, where it replaces one node at a time with the updated config.
Or both. You can use either MySQL or Postgres for the database. We generally recommend Postgres, because in our experience Postgres is more efficient, and Clair only speaks Postgres, so if you're going to be running a database for Clair, you might as well use the same one for Quay if you want. But we support both: we support MySQL and Postgres, and the Project Quay test suite also tests Maria and Percona, which are of course variants of MySQL. There's also SQLite if you're just running Project Quay locally on your laptop.
We generally recommend master/slave. We also have prototypical support today for read replicas, which has been merged into head; I have a PR outstanding that will add some additional changes to address an issue that hasn't come up yet but may. Our recommendation moving forward will be to deploy the database as Postgres master/slave and then have one or more read replicas configured as well. Especially if you're deploying across multiple geographic regions, you'll likely want read replicas in those regions just to make the performance better.
It's not required: we have customers today that have geo-replicated Red Hat Quay deployed globally, all talking to the same database in one region. But having read replicas will certainly make the read operations faster and more redundant, which is nice (and yes, I know "more redundant" is in itself redundant). The other thing I should mention is that the operator today does deploy Postgres for you.
Today it just deploys it in a standard Postgres container, but what we're working towards is allowing people to configure the Quay operator to choose which other operator to use to deploy the database. Our general recommendation will likely be: if you have the Crunchy DB operator, which itself manages Postgres as a master/slave and does all of the backup and failover for you, then you could just have the Quay operator treat the Crunchy DB CR as the database. That's the great thing about the Kubernetes ecosystem:
Is
we
don't
have
to
be
responsible
for
how
to
deploy
a
database?
We're
not
necessarily
the
database
deployment
experts,
but
there
are
projects
and
products
out
there
that
are
kubernetes
compatible
that
provide
this
these
capabilities.
So
what
we're
working
towards
is
in
the
quaint
ecosystem?
You
say:
hey
I,
want
to
use
this
crunchy
CR
to
deploy
my
database
crunchy,
who
go
handle
all
that,
and
we
just
say
and
give
us
a
push.
Go
same
point,
oh
great:
now
we
have
a
database
and
then
crunchy
would
manage
that
as
an
example.
F: A question. I was just going to mention that, in the context of a different registry, the topic of high availability came up, because the registry is such a key part of a Kubernetes cluster: when it's down or unavailable, the cluster is essentially down, or to some extent down. So master/slave is all very good, but there's usually some kind of manual failover required in those kinds of environments, because the master and the slave don't know which one is down.
E: So our high availability story is around two separate levels of high availability. From our perspective, high availability of read operations is much more important than high availability of writes. If you're unable to push for five minutes, it's not the end of the world (unless you're in a production fire), but if you're unable to pull for five seconds, it can be. So in our opinion, and this is something I've been pushing heavily for, read replicas are key. The way it works today in Quay:
if the read replica is unavailable, the system will automatically fail over back to the master database, so you actually already have redundancy automatically built in on the Quay side. If you configure it pointing to your normal master/slave Postgres and then one or more read replicas, Quay will automatically and redundantly check them to make sure it can pull from at least one of them.
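Purely conceptually, the read-path failover just described amounts to something like this; Quay's real implementation lives in its database layer, so this is only a sketch:

```python
# Try read replicas first; fall back to the master only if all are down.
import random

def run_read_query(sql, replicas, master):
    for conn in random.sample(replicas, len(replicas)):  # spread load
        try:
            return conn.execute(sql)   # conn is a stand-in DB connection
        except ConnectionError:
            continue                   # replica down: try the next one
    return master.execute(sql)         # last resort: read from the master
```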
So our belief is that read replicas solve the critical high-availability aspect, combined with the fact that Quay itself is a highly available design when running multiple instances. If you have a few Quay containers running (we generally recommend at least three), and you have at least one read replica backing your database, and Quay is configured to talk to that read replica as well as the master/slave, then you have HA on the Quay side and HA on the database side.
Now, while at this moment in time the storage failover isn't automatic, we do plan to do that as well, and the Redis cache is optional. So we're addressing every layer and ensuring that we have redundancy at every layer, and today we already have redundancy in what are, in my opinion, the two most important ones: Quay itself and the database. For storage, we hope, as I said, via geo-replication, to have auto failover added sometime soon.
That's on the roadmap, and it would mean that if you have full geo-replication enabled in storage and your primary storage is completely unavailable, Quay can then fail over on that side too, and now you have essentially a primary and a backup at every level.
Okay, next slide, Bill. One thing I wanted to talk about briefly, before I hand it over to Bill, who will talk about our customer use cases and numbers, is our testing suite. This is something that is fairly unique to Quay and I think is a major benefit for the community: our registry test suite.
At a high level, our registry test suite makes use of pytest to create a matrix test. We obviously have a bunch of tests for various registry use cases, from a basic push/pull all the way to "I push a manifest list and I want to be able to pull it via a legacy client." But the key differentiator here is that this test suite is matrixed over every version of the Docker protocol as well as OCI.
So the inputs are the set of Docker protocol versions (v1, v2 schema 1, v2 schema 2) and OCI, cross-producted with itself. When you run the registry test suite, say the basic push/pull test, it will spawn a configured Quay, wait for it to come up, and then run the test operation for every single variant of this cross product. You can see here, if I run basic push/pull (I copied this from a test run a couple of days ago): it ran push with OCI, pull with v1 of the Docker protocol; push with OCI, pull with v2 schema 1; push with OCI, pull with v2 schema 2; push with OCI, pull with OCI; then push with v1, pull with v1; etc., etc., etc. Now, this does make our CI a little slower, because we have to run the cross product, which is essentially 50 to 100 tests in this regard.
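The shape of that matrix can be sketched with pytest like so; the protocol ids and the toy in-memory store are stand-ins (the real suite spins up a configured Quay and drives real clients for each protocol):

```python
# Cross-product matrix test sketch: push with one protocol, pull with another.
import itertools
import pytest

PROTOCOLS = ["v1", "v2_schema1", "v2_schema2", "oci"]

STORE = {}  # toy in-memory registry so this sketch is runnable

def push_image(tag, proto):
    digest = f"sha256:{hash((tag, 'content')) & 0xffffffff:08x}"
    STORE[tag] = digest
    return digest

def pull_image(tag, proto):
    return STORE[tag]

@pytest.mark.parametrize(
    "push_proto,pull_proto", itertools.product(PROTOCOLS, PROTOCOLS))
def test_basic_push_pull(push_proto, pull_proto):
    digest = push_image("latest", push_proto)          # push with one client
    assert pull_image("latest", pull_proto) == digest  # pull with another
```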
This is how we validated both the old data model, the one we were running at the time, as well as the OCI data model, the one we were migrating to, and we were actually able to migrate all of Quay.io and our on-prem customers to use the same migration in the background without any downtime: we changed our entire data model without any downtime. I suggest watching that talk; it is pretty fun, and I go into the details there.
B: I have a quick question, not about the tests but about release management for the Quay project. What's the best way to track a particular Quay version to the GitHub commit? I look at the release notes stored on redhat.com; how do I find the Git tag corresponding to releases or commits?
E: So we are adding, Bill, correct me if I'm wrong, but I believe we're adding tags for each of the Project Quay releases, for each sprint. Is that correct? (That's right, yeah.) So there are two different release schedules here, actually three. Quay.io gets upgraded or updated routinely: we'll merge stuff into head, test it,
E
It
will
deploy
to
Quay
io,
see
if
there's
any
problems,
fix
them
and
continue,
and
that
ensures
that
you
know
we
have
fast
cadence
on
Keo
and
as
well
as
I
mentioned
earlier.
It
ensures
that
we
catch
problems
really
early
right,
because
if
it
works
on
Cueto
chances
are
it'll
work
at
project
way.
Project
way
releases
occur
with
our
sprints,
so
our
Sprint's
are
three
weeks
and
they're
named
right
now
we're
naming
them
after
Star
Wars,
because
we
just
wanted
a
cool
theme,
and
so
we
tagged
after
each
sprint.
We tag the commit SHA in the Project Quay GitHub repo with the commits for that sprint, and we also have a build trigger, a Quay.io build trigger, that automatically builds that release with that tag and puts it into quay.io/projectquay/quay. So our release pipeline today is to have those sprints; and then Red Hat Quay, the Red Hat product version of Project Quay, gets numeric releases (our upcoming release being 3.3.0), and those get tagged with their own tags.
So today, for Red Hat Quay, we release Clair containers along with the Quay container. When Red Hat Quay 3.3.0 goes out, a Clair container for 3.3 (it'll be called clair-jwt, I believe, because it also includes the auth system) will also be given the tag 3.3.0, and that ensures there's compatibility between those two systems. That being said, for Project Quay we don't generally break compatibility with Clair, and if we do, we call it out in our release notes.
E
So
as
of
right
now,
you
can
use
any
version
of
Claire
v2
with
quake
and
you
should
be
able
to
use
any
version
of
clay,
v4
or
modern
ones.
Once
we
start
releasing
those
on
with
cui
33,
but
we
are,
we
are
going
to
be
calling
out
for
the
project
way
side
when
there's
compatibility,
differences
in
when
there's
going
to
be
the
need
to
move
up
or
down.
So we don't support other image scanning projects directly, so much as we've done the work to make it pluggable: Quay itself does not talk to Clair; it talks to what's known as the Quay security scanner API. There are two versions: v2, which is for Clair v2, and now v4, which is for Clair v4. But it doesn't have to be Clair on the other end. In fact,
there's a guy who works for Aqua who actually implemented a proxy that speaks the Clair v2 API: Quay talks to that proxy, the proxy talks to the Aqua security scanner, and Quay is none the wiser. That was a deliberate design decision on our part; we deliberately don't lock ourselves to Clair.
The Clair API is, in and of itself, both in v2 and v4, fairly simple on purpose, for that reason: so that if you do want to write a translator to other security scanners, you can. We also have other security scanners that have integrated with Quay not via our security scanner APIs but by making use of our Quay APIs.
What they'll do is call the Quay APIs to determine what has changed in a repository, then scan the image and annotate the results in Quay by adding a label. There's at least one security scanner provider, whose name is escaping me at the moment, who has already created this kind of integration via OAuth: you just go over to their scanner to give them access, they scan all of your repos, and then they label with a link to the results.
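A sketch of that label-based integration pattern follows; the endpoint and label key are assumptions from memory and should be checked against the Quay API docs:

```python
# External scanner annotating a scanned manifest with a results link.
import requests

API = "https://quay.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <oauth-token>"}

def annotate(repo: str, manifest_digest: str, results_url: str) -> None:
    requests.post(
        f"{API}/repository/{repo}/manifest/{manifest_digest}/labels",
        json={
            "key": "com.example.scan-results",  # hypothetical label key
            "value": results_url,
            "media_type": "text/plain",
        },
        headers=HEADERS,
    )
```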
D: Yeah, why don't I pick it up from here? I know we're also running a little short on time, so let me go through some of the customer material. Someone asked earlier about who's using Quay; this is just a snapshot of some of the names. I'll go into a little more detail on the Ford reference in a second, but Quay has obviously been used commercially for a long time, and, as Joey said, we were only recently open sourced.
We open sourced back in November of last year for Quay; Clair was open sourced back in 2015. So as a registry we're fairly new to the open source scene, but we have existed prior to that. I also threw down some stats on Quay.io. Joey mentioned the scalability; I think the scale at which Quay.io operates is relevant in the discussion around usage. We cater to almost a hundred thousand users, as well as over
D
Seven
thousand
organizations
and
an
organization
and
a
user
are
kind
of
the
same
thing
inside
Quay.
They
just
represent
differently
on
the
UI.
We've
also
got
close
to
150,000,
plus
robot
accounts
accessing
the
service.
So
again,
if
we
haven't
kind
of
beaten
this
point
to
death,
you
know
quit
Iowa's
built
first
scale.
It
runs
at
scale
and
that's
something
that
we
spend
a
lot
of
time
on
in
the
engineering
team,
making
sure
that
we
don't
break
that
design
commitment
as
well
as
making
sure
that
our
service
runs
adequately
at
scale.
D
Red
Hat,
as
a
company
now
depends
on
Quay
dot
IO
for
for
a
vast
majority
of
its
container
distribution
needs.
Let
me
move
on
to
just
quickly
talk
about
forward,
so
we
have
some
reference
information
that
came
out
actually
just
last
week
with
virtual
summit
that
took
place
so
there's
a
PDF
here.
There's
a
link
you
can
read
up
on
I
just
want
to
call
out
the
usage
here
of
Quay,
obviously
they're
using
the
redhead
Quay
product.
But,
as
Joey
said,
it's
the
same
bits
as
project
quay
they're,
a
longtime
customer
of
Quay.
D
They
began
their
involvement
with
Quay
back
when
Quay
was
part
of
core
OS
they're,
currently
running
on
a
fairly
old
version
of
Quay
actually
and
it's
a
fairly
modest
size,
deployment
I'll,
say
single-digit,
terabytes
of
storage.
It's
it's
not
a
not
our
largest
deployment
by
any
stretch,
but
in
terms
of
the
the
use
case,
it
is
a
centralized
registry.
That's
handling
lots
of
application
needs
within
Ford.
They
also
provide
a
facility
for
partners
to
access
those
images
as
well.
So
there's
there's
an
external
component
as
well
again.
D
Let me just jump into the community briefly as well. As I mentioned before, Quay is fairly new to being open sourced; we just completed that activity in November of last year. We've seen pretty good uptick already: the numbers are there, we've got 47 contributors, and you can see an extraordinarily large number of commits, obviously because of the historic work that we did. We basically took the existing Git repo, kept our commit history, and opened that up.
D
So
we
preserve
the
the
historical
aspects
of
that
we've
already
got
quite
a
few
Forks
we're
starting
to
get
the
github
stars
up
and
we're
starting
to
get
increased
views
and
visitors
there.
So
that's
a
growing
thing.
We
see
that
growing
pretty
much
on
a
weekly
basis.
Our
our
our
sig
Channel
on
Google
has
been
getting
more
traction
and,
as
an
engineering
team
we
are
I
would
say,
on
a
weekly
basis,
increasing
our
involvement
with
the
community
and
vice-versa.
On
the
claire
side,
there's
some
some
slightly
different
numbers.
It's
a
bit
longer.
Obviously it began as an open source project: a larger number of contributors, though obviously not as much commit activity, but again the stats kind of speak for themselves in terms of how much usage there is. I'll just summarize, in terms of who's working on Clair versus Quay from a Red Hat perspective: we have two full-time employees on Clair and four full-time employees on Quay, and that kind of fits into the contributor model there. Let me just pause there.
E: Yeah, I should also comment on the Clair side that Amazon is using Clair v2 currently as part of their security scanning system, and they've been actively contributing as a result. We're excited for them to move to contributing to Clair v4 as we shift development resources from the old version to the new one as well.
D: Let me just wrap up with the roadmap, to give you a sense of where we're going. Helm v3 is something we've got experimental support for in our next release, which is coming out very, very soon, like next week. We'll GA that fairly quickly thereafter, hopefully in the next minor or dot release that we do for Quay; obviously the upstream support will harden over time much faster. We will be getting the full certification for OCI compliance, as Joey went into detail about with the test suite there.
D
This
is
a
proposal
that
we've
we've
authored
and
we've
taken
to
the
community,
we're
looking
for
feedback
on
that,
and
we
would
be
interested
in
getting
community
engagement
on
helping
to
implement
that
as
part
of
project
way.
We
think
that's
gonna
go
a
long
way
towards
solving
some
of
the
scalability
issues
around
event:
notification
for
working
with
registries
at
scale,
especially
the
scale
of
Quay
dot.
D
Io
Joey mentioned the Notary v2 work that we're doing; we're staying very close to that effort, and as we have something available, we'll get it into the product. From a feature perspective, this is coming from a lot of our customers: we've gotten a lot of requests for enterprise management facilities around large-scale usage, for example quota enforcement and management of quota for images and repos, making sure that organizations don't run out of storage or run out of scalability in constrained environments.
We've also got quite a few requirements for working in controlled environments: financial services institutions, public sector institutions, where there have to be air-gapped environments and there may not be direct connections to the Internet. Day 2 occupies quite a bit of our roadmap, just around how we want people to be able to run Quay, on-premise and in a cloud environment, without touching it.
D
So
this
obviously
fits
into
the
model
for
our
operator
strategy,
but
we
really
see
day
2
and
Beyond
and
day
2
+
as
being
major
functional
use
cases
for
backup,
resiliency
recovery,
any
sort
of
operator
centric
on
operation,
centric
use
cases
and
then,
lastly,
just
to
touch
on
from
a
development
perspective.
We
see
that
the
continued
integration,
deep
integration
with
coop
is,
is
really
paramount,
so
we
would
exist
primarily
to
serve
applications
on
cube,
and
so
that's
everything
from
how
does
Quay
get
smarter
about
understanding.
what's currently running on a Kube cluster, to prevent accidental issues like image deletion or image changes that may affect production runtime, to staying very close to CI/CD workflows, making sure that those different tools and workflows for development work very well with Quay, above and beyond just the build support we have today. The last one I'll briefly touch on is this notion of an image proxy, where Quay can provide an image proxy at the Kube cluster level to provide additional resilience in the event that the registry goes down.
I know there was a question before about HA and making sure the registry doesn't go away: having proxy support that's intelligent enough to work with highly available Kube and keep things cached at the node level is what we're looking at. So, I blew through that pretty fast; were there any questions about the roadmap?
F: Yeah, I had one. Again, it seems to me, from my place in the world, that some of the stuff on your roadmap you could probably either implement yourself or just build as plugins to existing systems. Pub/sub, for example: there are many of them out there; you could just publish stuff to any number of pub/sub systems.
E: Yeah, so on the pub/sub side: I am the author of the pub/sub proposal, and part of the reason I feel it should be a separate proposal is that the idea is to allow registries to implement it the way they want. I'm not a fan of APIs that are tied to a specific implementation. So the pub/sub API proposal is such that you can back it using an existing pub/sub system, or use something like RabbitMQ, or whatever you want.
E
But
we
tried
to
do
so
in
a
way
that
is
pluggable
so
that
we're
not
requiring
our
users
to
make
to
use
a
specific
piece
of
technology
wherever
possible
and
then
therefore
get
themselves
boxed
and
so
like.
As
we
mentioned
earlier,
like
you
have
options
of
storage.
If
you
want
to
you
have
options
for
databases,
you
have
options
for
log
drivers,
you
have
options
for
mirroring
and
you
know
down
the
road
for
quota
and
for
pub
sub.
E
In
all
of
these
cases,
our
plan
is
to
allow
our
users
to
determine
what
the
best
piece
of
infrastructure
is
for
their
needs
and
then
hopefully
build
a
sufficiently
powerful,
but
also
somewhat
generic
implementation
on
top
of
it.
That
makes
that
leverages
the
unique
capabilities
of
each
of
these
systems.
It's
a
very
fine
balance
to
hold
I
will
admit,
but
we
we
never
do
wherever
possible.
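The pluggable shape being argued for might be sketched like this: one narrow interface, with whichever backing system the operator prefers behind it. Names are illustrative; the actual proposal is the authoritative source:

```python
# Pluggable pub/sub sketch: a narrow EventBus interface with swappable backends.
from abc import ABC, abstractmethod

class EventBus(ABC):
    @abstractmethod
    def publish(self, topic: str, event: dict) -> None: ...

class RabbitMQBus(EventBus):
    def publish(self, topic, event):
        ...  # e.g. pika channel.basic_publish(...) would go here

class KinesisBus(EventBus):
    def publish(self, topic, event):
        ...  # e.g. boto3.client("kinesis").put_record(...) would go here

def on_push(bus: EventBus, repo: str, tag: str) -> None:
    bus.publish("repository.push", {"repository": repo, "tag": tag})
```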
D: Yeah, it's a good question. We don't call that out as a roadmap item specifically, but there are initiatives we're running to get community engagement around Quay specifically. We can talk to some of those things, and Diane, who's on the line as well, I think can talk to them too. A lot of it right now is around making sure that we have not just an outreach program but that we're delivering significant value to the community, and the upstream releases are the first step towards that.
D
We
want
to
make
sure
that
people
get
access
to
the
latest
and
greatest
changes
to
quai
as
quickly
as
possible.
Coming
from
our
team,
we've
also
started
folding
in
obviously,
and
accepting
PRS
from
the
contribution
from
the
community.
Our
first
PR
actually
was
quite
interesting.
It
was
a
fellow
who
ran
a
code
formatter
on
our
code
and
sort
of
helped
us
just
get
the
code
looking
really
nice,
so
so
yeah,
it
sort
of
implied
in
what
we're
doing
I,
don't
think.
A: Yeah, thank you, thank you for the presentation. I think the next step would be that the SIG is going to create a recommendation document, which will be publicly available; after that, the TOC will look at the document, and then they need to find a TOC due-diligence sponsor, because you guys are going for incubation, right?
Okay, so I think we have one last item, but we didn't get to it: CDI, the Container Device Interface. It's been added to the agenda for the next meeting, so it'll be our first item then. And that's it; I don't think we have anything else. So thank you very much, and stay healthy and, you know, stay home.