From YouTube: OCI Weekly Discussion - 2021-02-03
Description
OCI weekly developer's call from Wednesday, 3 Feb 2021. Notes/agenda items here: https://hackmd.io/El8Dd2xrTlCaCG59ns5cwg?view#February-3-2021
B: Okay. So, learning from some really good feedback last week: are there any quick-hit action items that we want to pull in? Like, I see John; I know the listing stuff will be a longer conversation, but John, is etags a quick hit?
C: Actionable? It's a thought I've had; I don't know that it's actionable. I would like feedback from other people.

B: So when we do the second part, then? Okay, sounds good.
B: Since we try to go with what's on the agenda, we'll go there. So, just a quick intro to Peter: a couple of months ago now, before the new year, we had a customer call that was like, hey, we're having some problems in this region, we're seeing different performance. There was something about the question that we saw through the support channel that just piqued my interest, so we reached out to them, and it turns out they're building some great registry benchmarking capabilities. So we thought it was a great opportunity to bring them in here.
D: And I will just get into the presentation. Okay, so yeah, just ping me if I start breaking up or something. Can you see my screen? Yeah? Yep, looks good. Okay, so hi guys, I'm Peter Garlic.
D: I am here to present my master's thesis work: I designed and implemented a benchmark for container registries. Firstly, a bit about us. I graduated with a master's in computer science from the Vrije Universiteit and the University of Amsterdam. My focus throughout my master's degree was distributed and cloud systems and performance evaluation.
D: My first supervisor, Dr. Alexandru Iosup, is an established professor at VU Amsterdam, and he is also the research lead for the AtLarge research group, a university research group composed of students from TU Delft and the Vrije Universiteit Amsterdam, together with my daily supervisor, Erwin van Eyk.
D: They are leading the research efforts of the SPEC Research Group Cloud, and the vision of both of these research groups is to engineer strong ecosystems through innovation, facilitated by knowledge sharing and collaboration.
D: The SPEC Research Group is focused on benchmarking of systems, and the SPEC Research Group Cloud is therefore primarily focused on providing benchmarks for critical cloud services. The container registry is definitely one of the cloud services that is part of these critical production and deployment processes. We observed two use cases in which container registries are important and their performance can be a bottleneck. The first is large Kubernetes cluster updates, where there might be thousands of concurrent image pull requests.
D: Throughput really becomes a bottleneck here, and we've already seen that companies like Uber and Alibaba have developed solutions that address this bottleneck through peer-to-peer sharing of data.
D: A second use case, which is especially interesting for us, is based on our analysis of open source serverless platforms, where pulling from a container registry contributes to a serverless function's cold start. So we can see that the performance of container registries is important and needs to be quantified.
D: However, we haven't identified any real-world open source benchmarking tools for registries. There are some, but they are mostly directed towards academia, so they lack features, such as authentication, that are needed to evaluate real-world registries. Subsequently, we haven't observed any large benchmarking work in the scientific literature regarding production-grade registries.
D: So, with our work, we aim to bridge this gap. To do this, following scientific conduct, we first posed some research questions that we wanted to answer. The first research question is concerned with finding the key aspects of every benchmark: we wanted to identify workloads that are preferably tied to some real-world usage.
D: With the second research question, we aimed to address this through the design and implementation of a benchmarking tool that supports the previously mentioned workloads, metrics, and registries. Finally, with the third research question, we are concerned with using this tool to perform and report a performance evaluation based on the experiments we designed. So, the first step in the process: we launched an extensive survey of the state of the art in both the scientific literature and industry.
D: Regarding the scientific literature, we identified four papers that are primarily focused on reducing image pull times.
D: The first and earliest one is Slacker, a Docker storage driver which fetches image layers lazily, so that it only pulls layers based on what a container needs to run immediately.
D: However, this approach is not the most optimal, because the client and the registry need to maintain a persistent connection.
D: The second paper, which is also very important for our work, is a collaboration between IBM and Virginia Tech. In this work they presented a set of anonymized IBM production registry traces, and they also presented a tool with which these traces can be replayed and some metrics reported.
D: The next two works are also from the collaboration between IBM and Virginia Tech. Bolt is a distributed registry design where each node has its own local storage, so the extra hop between registry nodes and the underlying storage service is eliminated.
D: Finally, the most recent paper, from 2020, is DupHunter, which improves on the design of Bolt by introducing some smart replication and deduplication based on the analysis of the IBM production traces.
D: On the industry side, the first category is the public registries, where the main feature is the public repositories from which any registered user can pull images. These registries are very affordable; they also offer private repositories, and they are directed towards single users and small and medium enterprises.
D: The second category is the cloud registries, which are the services offered natively by the underlying cloud platforms.
D: They are well integrated with the other services offered by these cloud platforms, and users can use them together with other cloud services. But they can also be used with an external deployment, where the registry is the only point of contact with the cloud platform.
D: And finally, the third category is the self-hosted registries. A lot of these are open source, and they offer some niche capabilities: as mentioned before, Kraken and Dragonfly offer peer-to-peer sharing of images, and others offer some extended security features.
D: Finally, to summarize our key findings: we identified a set of seven registries that we recognized as interesting and that our benchmarking tool should be able to evaluate.
D: Secondly, we identified three metrics that are relevant when talking about registry performance: throughput, latency, and cost. And finally, we identified a workload, the set of IBM container registry traces, which will allow us to perform experiments based on real-world usage.
D: Now that we know a bit about the background, let's go over to the design and implementation of this tool. Firstly, I want to quickly showcase what we actually want to measure. This is a very simplified view of all the components that participate in the image delivery process.
D: We are purely interested in how registries handle these HTTP requests: how quickly, and how big of a load, these registries can handle. On the right, we can see the design of our tool. There are two core components here: the harness, which gets input from the user via the CLI and instructs the other sub-components to perform the experiments, and the integrated trace replayer tool, which allows us to run real-world experiments.
D: It gives us control over the interactions with the registry and also allows us to construct our own manifests, which is especially important for the synthetically generated images.
D: We also support fine-grained configuration using YAML files: for example, the number of clients, image size, and so on. Regarding the trace replayer tool, it has a master-and-clients architecture. We extended this tool with the capability of actually evaluating production-grade registries, but it's still the tool provided by the collaboration between IBM and Virginia Tech.
D: Now that we know a bit about the tool, let's go over to the experiments. Firstly, a bit about our infrastructure. In our experiments we are modeling the external deployment, where the deployment is outside of the cloud platform and the only interaction between the cloud platform and the deployment is the container registry.
D: We deployed our experiments on our university supercomputer. It's a medium-sized supercomputer composed of six clusters with 200 compute nodes. It has one-gigabit-per-second connectivity to the Amsterdam Internet Exchange, which sits on top of the transatlantic cable.
D: Regarding our experiments, we designed three. The first two use a real workload, the IBM production registry traces, and the third one uses a synthetically generated image and runs over a long period of time.
D: So, let's quickly look at the characteristics of the container registry workload. As you may already know, the container registry workload has a very large variability in file sizes between different file types, and we can also see that the ratio of GET to PUT requests is heavily skewed towards GET requests. These are some important characteristics of the container registry workloads. For the first experiment, as I said, we are using a real workload.
D: We deployed three clients on our supercomputer, each on its own node, with a hundred threads each. The egress size is relatively small, which allowed us to test a large registry set, and we used both the delay and stress trace replayer modes. What are the delay and stress replayer modes? Well, the delay mode simulates the real delays between the requests in the sample, while the stress mode fires requests as fast as possible.
D: In the stress mode, we are just firing requests as soon as there is a thread available to serve them. So, as you can see, in the stress mode the maximum number of concurrent requests is 300, and some of the registries, as Steve said (one of the ACR registries, for example), weren't able to handle this concurrent load. Here on the right side you can see that.
B: And I just pasted this, because I was trying to multitask here. One of the things we were talking about was testing private registries for the cloud providers. Is it really fair to test them outside of the cloud? Because we're all focused on our cloud regions, per se, and even though we obviously want to support developers at home, it's just not a fair comparison.
B: One region might be closer to their supercomputer than another, for whatever reason. Testing some of the public registries, whether it be Hub, or even ECR Public now, or, you know, whatever: they're intended for that. But we were specifically talking about doing apples-to-apples testing; you should be testing within the clouds.
D: Yeah, we will get to that.
D: I promise. So yeah, here we can see the throughput per second in the stress experiment. We can see that some registries observe higher throughput spikes later, which, in the example of ACR Basic, might have something to do with the throughput guarantees that the different Azure usage tiers promise.
D: These are the GET layer latency and GET manifest latency. Here you can see all the registries that we tested. Of course, there is larger performance variability for the layer GETs, because of the much larger variability in file sizes, but we also observed some differences in variability for the GET manifest latency. For the second experiment, we again deployed three clients with a hundred threads each.
D: But this time we selected a very large workload. The experiment lasted for one hour, and we only used the delay mode, because in this case every registry started timing out in stress mode. We used four registries, and these are the push latencies for all blob types. We can also see some interesting differences in performance variability among these large cloud providers.
D: The third experiment is the long-running one. We deployed it in Amsterdam on a DigitalOcean droplet. We used a synthetically generated image with 10 layers, where each layer is one megabyte, and we ran this experiment every six hours: we pushed the image and then pulled it, for two months, against 16 registries. And I think this plot will again not make Steve happy.
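For a sense of what such a synthetic image involves, here is a minimal sketch (not the tool's actual code) that builds ten 1 MiB layers of random bytes, each addressed by its sha256 digest, the way registries address blobs:

```python
import hashlib
import os

def make_synthetic_layers(num_layers=10, layer_size=1 << 20):
    """Generate random layer blobs and their content-addressable digests."""
    layers = []
    for _ in range(num_layers):
        blob = os.urandom(layer_size)
        digest = "sha256:" + hashlib.sha256(blob).hexdigest()
        layers.append((digest, blob))
    return layers

layers = make_synthetic_layers()
# Each push/pull cycle then moves num_layers * layer_size of blob data,
# i.e. 10 MiB here, plus the manifest and config.
total_bytes = sum(len(blob) for _, blob in layers)
```

Random bytes are a reasonable stand-in because they do not compress or deduplicate, so each layer is genuinely transferred in full.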
D: We observed some interesting results where, after a certain amount of time, we saw a bit of performance variability in the case of the Azure Europe West registries. So yeah, that showcases a bit about our experiments.
D: Let's go over to next steps, both for us and possible ideas for collaboration with you.
For
us,
the
next
steps
are,
as
as
they've
already
noted,
we
got
already
some
very
constructive
feedback.
We
we
understand
that
the
cloud
registries
are
optimized
for
in
inside
a
cloud
platform
inside
cloud
region,
image
deliveries.
D: So, to provide more relevance, we want to expand the scope to evaluate these performances inside the cloud: we will deploy our experiment inside a cloud region.
D: We also want to experiment a bit with the workloads, possibly generating our own workloads using the IBM traces as a template. And after we fix these issues, plus many more small improvements, we want to turn this work into a publication for a top-tier venue.
D: We think we could collaborate to enhance this process. Some of our ideas would be to evaluate other steps of the image delivery process, for example decompression, and to evaluate different compression types. We are very much interested in your work with ORAS, and we are interested in evaluating the delivery of other artifacts as well, to further enable sharing of knowledge.
D: We would very much like it if we could discuss more workloads, which could help us learn more about how users are using these services. And if you are interested in this work and see some future in it, it would be very nice to hear your feedback and your ideas; if not here, then you can always contact me by email, or contact us through our website, to see if there are some opportunities.
B: It was great to talk to them. I thought it was a great opportunity, especially with all the conformance work that had been going on prior to this. There are lots of different benchmarks in the industry for various technologies and things, and I really, truly didn't view this as a competitive thing, although I'm sure it will become one. It's more a matter of, you know, we had a little side conversation on throttling: we're all struggling with this, and we know that customers can abuse registries and just ping us to death. Are there smarter ways to do it? So I think it's about being able to help.
B: You know, with this group, with the right scenarios that we're actually targeting with registries, we can do the benchmarks around that. And especially as we continue to support additional things in registries, having a good baseline to make sure all this additional indexing work we're going to wind up having to do holds up would be great for us.
B: Instead of having to write our own tests for this, we could leverage this test framework for our own registry testing, to make sure we're good with it before we deploy it. So I thought it was a pretty cool project. And, as with most universities, they're always looking for funding.
B: So we certainly wouldn't, you know, couldn't be biased by any means towards Azure or Microsoft, but I'll give a plug for them here: we were talking just before the holidays went on, so I'm sure you'll be providing us information on how we could help. It was a pretty cool area to focus on.
B: Did you get on Slack yet with this group? I know you're on some of the emails, but I would encourage your team to get on the Slack channel with us, ask the questions you need, and engage with the questions and feedback; we're all happy to help.
D: Yeah, I haven't; this is my first point of interaction with...
D: So yeah, if there are no further questions, I really thank you for your time, and I hope that we can find some way of collaborating together.
D: Thank you for your time, and thanks also for listening to us.
E: Yeah, some good information. I would like to make a sort of side request here, about when you're hitting the public registries.
E: You will get bandwidth limited, right, or it will cause some issues. 80 gigabytes is a lot to download; it's very likely to trigger throttling on your IP. You might want to keep it down into the megabytes, or at least one-gigabyte sizes.
B: Yeah, getting throttled is actually good in a way: hey, we're doing the right thing. But I think that's the feedback he was looking for: what exactly should they be testing, and how should they be testing? You know, unfortunately, we do have ML images (I'm sure you guys all see them as well), and the ML images tend to be pretty large, so we definitely want to test those too.
E: A quick question about the slides you just presented: a lot of times we put those in the HackMD, since some people may come along behind and read the notes. Is there a shareable PDF or something we can put somewhere? Sure.
F: Really cool. I just want to quickly suggest that we maybe make this part of the distribution-spec Go libraries. Peter, I'll send you an email with what I'm thinking, but there's a conformance directory if you go into the distribution-spec repo, and it kind of has a pattern for how to plug in details about a registry and then run conformance.
F: It'd be interesting if we could put in the same type of details, basically environment variables as inputs, to run your tool, and then contribute it into the spec itself. Okay, I'm going to send you an email specifically.
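The conformance suite's input pattern looks roughly like the following. The variable names match the distribution-spec conformance README; the values, and the idea of feeding the same inputs to the benchmark tool, are illustrative:

```python
import os

# Registry details are supplied as environment variables, the same
# pattern the distribution-spec conformance tests use; a benchmark run
# could accept identical inputs. All values here are placeholders.
env = dict(
    os.environ,
    OCI_ROOT_URL="https://registry.example.com",
    OCI_NAMESPACE="myorg/myrepo",
    OCI_USERNAME="myuser",
    OCI_PASSWORD="mypassword",
)

# The conformance suite itself is then run with this environment, e.g.
# `go test ./conformance` from a distribution-spec checkout (not run
# here, since it needs the repo and a Go toolchain).
```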
C: Thanks, cool. I might be a little careful with that; there are some litigious vendors in the world who do not like being benchmarked.
B: All right, John, you're up.

C: I don't know if I can even present, because I'm on a Chrome OS device and Google hates Zoom, but there's a PR I've linked to. So, two things I'd like to discuss, and I'd just like to get some feedback from other registry operators or clients, or anyone interested. And I know, Josh, I owe you a bunch of PRs and reviews. One thing that I've been kind of hoping to push through for years now is a replacement for catalog, and also a way to list manifests.
C: There's no way to do this, and we removed our deprecated catalog, so I have proposed two very simple APIs: one for listing manifests and one for listing repositories.
I
think
they're
good,
I
like
them.
I
would
implement
them
and
I
would
like
to
know
what
other
people
think
the
pr
has
kind
of
devolved
into
like
bike
shedding
about
agile
versus
waterfall,
but
that
the
my
proposal
is
still
there,
and
I
would
like
someone
to
look
at
it.
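For context, the listing surface that has existed in the distribution spec is the `_catalog` endpoint plus per-repository tag lists. A sketch of those request paths (the registry host is hypothetical):

```python
BASE = "https://registry.example.com"  # hypothetical registry host

def catalog_url(n=100, last=None):
    """Paginated repository catalog: the deprecated/removed listing endpoint."""
    url = f"{BASE}/v2/_catalog?n={n}"
    return url + (f"&last={last}" if last else "")

def tags_url(repository, n=100):
    """Paginated tag list for a single repository."""
    return f"{BASE}/v2/{repository}/tags/list?n={n}"
```

Note that neither endpoint lists manifests directly, which is part of the gap the proposed manifest-listing API would address.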
C: Yeah, so, thoughts? I've been thinking about this for a while, and I'm curious whether this meets anyone else's requirements.
C: It maps directly onto existing concepts and structs, so I think it's pretty simple; you can look at it and kind of see where I'm going.
C: The other thing I'd be interested in talking about is this etags thing. I know that a lot of registries do set ETags, and it's part of the HTTP RFCs, so perhaps everyone implements this and I don't, because I didn't think about it. But something that has come up before is that it is hard to do client-side coordination, and it's very possible to have race conditions around tagging things.
C: For example, if you were to construct a multi-platform image and push it to a registry, you have to coordinate that client-side, and you cannot coordinate it on a tag: if you fan out, then push to the same tag and fan in, or try to append to a tag, it's very possible that you're racing with yourself. Is that right, the multi-arch case?
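The fan-out/fan-in being described looks roughly like this: each per-architecture build produces a manifest digest, and the client assembles a single OCI image index from them before tagging; it's that final assemble-and-tag step that can race. Digests and sizes below are placeholders:

```python
import json

def make_index(per_arch_manifests):
    """Assemble an OCI image index from (architecture, digest, size) tuples."""
    return {
        "schemaVersion": 2,
        "mediaType": "application/vnd.oci.image.index.v1+json",
        "manifests": [
            {
                "mediaType": "application/vnd.oci.image.manifest.v1+json",
                "digest": digest,
                "size": size,
                "platform": {"os": "linux", "architecture": arch},
            }
            for arch, digest, size in per_arch_manifests
        ],
    }

# Fan-in: wait for all per-arch builds to finish, then build one index
# and push it to the tag in a single step.
index = make_index([
    ("amd64", "sha256:" + "a" * 64, 428),
    ("arm64", "sha256:" + "b" * 64, 428),
])
body = json.dumps(index)
```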
C: Right, right. And so to do this, you need to do client-side coordination, where you fan out, then fan in, and then push it. I think Tianon and I talked about this at DockerCon or KubeCon or something years ago, but one way that I thought of to address this is to implement the ETag part of the RFC, which I've linked to. Basically, you can set an If-Match header, or an If-None-Match header, and impose conditions on the put.
C: You can say: don't let me put this unless nothing is already there, and then the registry will know: okay, I can transactionally check that nothing else has been written to this. Similarly, you can send the ETag the registry gave you and say: don't put this unless nothing has changed since then. So this allows you to transactionally update a tag; you can build that into the registry instead of having to do the client-side coordination, which simplifies your build process a lot.
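A sketch of the semantics being proposed, with a fake in-memory registry standing in for the server side. The header names (`If-Match`, `If-None-Match`) and the 412 Precondition Failed response come from the HTTP conditional-request RFC; applying them to tag puts is the proposal here, not something the distribution spec requires today:

```python
class FakeRegistry:
    """In-memory stand-in for a registry's tag store."""

    def __init__(self):
        self.tags = {}       # tag -> (etag, manifest)
        self._counter = 0

    def put_manifest(self, tag, manifest, if_match=None, if_none_match=None):
        current = self.tags.get(tag)
        # If-None-Match: "*" means create-only; fail if the tag exists.
        if if_none_match == "*" and current is not None:
            return 412, None          # Precondition Failed
        # If-Match: update only if the tag still has the ETag we last saw.
        if if_match is not None and (current is None or current[0] != if_match):
            return 412, None
        self._counter += 1
        etag = f'"{self._counter}"'
        self.tags[tag] = (etag, manifest)
        return 201, etag

reg = FakeRegistry()
s1, etag1 = reg.put_manifest("latest", "index-v1", if_none_match="*")  # created
s2, _ = reg.put_manifest("latest", "index-v2", if_none_match="*")      # 412: tag taken
s3, _ = reg.put_manifest("latest", "index-v2", if_match=etag1)         # transactional update
```

The losing writer gets a 412 and can re-read the tag and retry, which is exactly the coordination that otherwise has to happen client-side.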
C: I don't know if anyone has already implemented this, or wants it, or can think of some other use cases for it, but I wanted to throw it out there and see if anyone thought it was a terrible idea. And then maybe I'll PR it, after I've given Josh what I owe him.
B: There's a bunch of stuff that we talked about with the multi-arch manifest work; that's been a hassle for a lot of people, and this was one of the things that came up. It was October or November when Tianon and a bunch of folks jumped on. I forget who the other person was who was actually going to try to facilitate some additional conversations on the multi-arch space as a whole, because there's a lot there, and there's this particular problem: you've got three different architectures that are building.
B: When do they get done, and when can you actually push the architecture manifest? So it's definitely a problem that I think we all see will get bigger as we have more IoT devices, and now the IBM folks will have z/OS, right, the new platforms as well. So it'd be great to get some more focus on it.
C: It's unfortunate that we've already built all of the infrastructure to work around this, but, you know, maybe we can simplify or speed some of it up. Yeah.
C: So this is where it's weird, right: the distribution spec inherits HTTP, and per HTTP you should be able to do this as a client. But we don't call it out as a requirement, so it's very unlikely that everyone has implemented it. I'm happy to PR an example of how this would work, if that would be helpful, but yeah, I would like this as well.
H: Me too. One of the issues that I've been struggling with is: if you have a collection of artifacts that are related to each other, then how does a client resolve this? The client needs to know: okay, this is how you parse this string; versus, you know, download the config, look at the config, see what kinds of manifests are there, look for your specific manifest, go up and get that manifest, and what does this manifest...?
C: That is similar but unrelated to what I'm proposing. One thing that is frustrating is that we don't even really specify how you do this resolution; even for multi-platform images, it's not explicitly clear. Server-side resolution is interesting; I'd think about that, that sounds fun, and it would be a little bit of an optimization. But my suggestion does not solve that in any way, unfortunately.
B: The kind of example we've been talking about, if I bubble all the way up: we know we want to build the house, but we also know that we need the deck and the garage, or whatever. We may not be putting those other two pieces in place, but we have to design it in a way that we know we can do the additions later on. So the debate we've been having is not whether we want a listing API; of course we do.
B: So the thing that we've been going back and forth on is: let's just capture what it is we're trying to achieve, because we know what we know and we don't know what we don't know. If we can just capture that list, then we can say: all right, now we have the list of things that we need.
B: Then: here are the only ones that we really have to solve now. Maybe that design is great, let's go off and do it; or here's a minor tweak that nobody really cares about, but look at all the things it lights up. That was the piece I was trying to bubble back up, especially as we have the diversity here of registry operators who have implemented various pieces of this.
B: We have CI/CD providers, which was the interesting one that came up in the distribution call the other day, and we see this ourselves in our own Azure tools within the portal: how do you list the contents of a registry so that the user can pick "I want to deploy this image"? So that was the bigger conversation we're trying to figure out.
B: How can we get this common thing that will work across the auth boundaries (one of the things that comes up) and still have cloud- or registry-specific extensibility? Because I think we'd all like to deprecate our listing APIs and maybe get some better search APIs. Ultimately, the thing that I see here is: if we can find some better search APIs, then, just like ORAS is a way to push things across all registries, we could actually have some better tools that can browse and display content across multiple registries.
B: So it's not just the CI/CD providers, where we could say they're making money, let them go implement each cloud provider's APIs; we could enable a much better ecosystem, and maybe even a client registry CLI that works across all registries. So, John and I kind of keep going back in on this listing quite a bit, and I actually am very, very supportive of trying to get this done.
C: I'm happy to PR what I have, and if anyone disagrees, or it doesn't satisfy some requirements, block me. But I don't know how long it is reasonable to wait for someone to come and give me more requirements, if I don't have them.
B: I know we know a bunch of what we need from the Notary work, which we as a business need to get done in, you know, the next couple of months. So I think we've got enough pressing information that we should be able to surface it out. But I just didn't want it to come only from our side, so I was asking the others, the other cloud providers and other registry operators, to chime in with what they need.
F: No, what I was going to say: would it be helpful if we sort of came up with a standard, not a PR but a proposal format? Because I feel like these things could be pieced in and better understood that way, rather than just these long PRs with huge discussions between you and Steve. If we had something in a more structured form, you know, you're saying you could build it in an hour.
C: Yeah, I'm happy to make up whatever I think is reasonable, but I doubt that's what everyone would like. So if someone is interested, we could steal KEPs, if you want; I don't know.
B: I think the question is more about the bigger picture. This goes back to some of the charter of the distribution spec, and I didn't mean to walk into the push/pull conversation; it was more: where does this feature fit in, so that there are Lego blocks here, so I can add this API and the next API can come in easily, and they all feel very synergetic.
B: I don't want to pick on the Docker stuff, but the early stuff was Docker, and there was always a new refactoring going on. It was a quickly evolving space, so that's fair. But now we all have very huge registries that we're managing, and huge customer bases, and any churn in that makes things very difficult.
F: I would almost rather use something like the zot project, just because it's more to the spec. All right, shoot.
F: Gotcha. And it's not a thing to say, hey, everyone go use zot; obviously distribution has all this performance and years of optimization. But zot was a spec-built thing, and if you can add... I don't know, I don't want to go too far.
C: I think, as long as the messaging around what the reference implementation is intended for is clear, then go for it. There's a whole bunch of reference implementations of crypto libraries that no one is ever going to actually use in that form; the point of the reference version is so that you can read through the code and understand the process. Obviously there are no hardware optimizations being done; it's just plainly written and well documented, so you can understand what it's supposed to look like. If I had to pick something that exists today that looks like that, I would say it's zot, as long as the messaging is really clear: hey, you're building a registry, do you want to see an example of one?
C: It's impossible to write a client or registry that is usable right now per the specs alone. I mean, I've, with help, implemented a very bad registry that just sits in memory, for testing.
B: We'd like to get there at some point, but auth is one of those challenges, right? I think the big question is whether the whole auth flow is usable, and I think that's how we did ORAS, at least in some of them: making sure that the baseline auth flow worked. But, you know, on the scope of usability, this is the debate we've been having around the push semantics.
C: Right, yeah. I wish there was a separate spec that covers everything, or that we named it differently. But, I don't know, I like that it is a different one, so that you can say: I implement the pull path, and none of the other stuff that is optional.
E: Yeah, and that's where Vince was going with these extension points. That did sound like an interesting way to bring in new features, test them out, and see if people like them; and then whichever ones get used by the most people, those get promoted, right?
B: Yeah, that's the performance test, right? The conformance test has that bucket; it's got the four categories, and you don't have to pass all four. I think the question is: what does it mean to actually do the basics? But I love the extensibility. Hopefully it's not the listing API, but whatever the right set of functionality is that might be in a registry, that not everybody might implement, I'd love to see those as extensions.
I: So you feel like you don't really know the client compatibility story, and when you don't have answers to these kinds of questions, yeah, you get all kinds of problems cropping up when you go with the extension model. I just want us to avoid that, if we end up pursuing this.
E
Well, I think now's a good time to pick the next focal point, and I think index is probably one that we could tackle, right? Probably easier to do index than auth as the next big feature to add. And I think we can probably only do one big thing at a time; we're just now finishing up with the 1.0 spec. I think it's probably a good time to pick the next thing, right?
E
It sounds like John wants it to be index, and I'm not disagreeing, and I don't think anybody else is, so we could probably put up a vote on an email and see, you know, who all wants it.
B
I think that's the piece I was just trying to figure out. I don't want this to be some bike-shedding moment or some filibustering; it's just simply: what is it that we're agreeing on? Because the observation I've had on some of these is that we get into these big debates on who's got the better design, but really all the designs are perfectly valid. It's just that each of us has different ideas in our head of what we're trying to solve, so we can just write that down and agree.
B
That's really what I was trying to capture: what is it we're solving? It worked really well for the Notary stuff. It took us a while to get there, but we wrote down what it is we're trying to solve, and now a lot of these arguments just go back to: but that's what we said we were trying to do.
E
Yeah, yeah. We should definitely put up a list of goals for an index API. Is this search, or is it just the ability to have a locally cached index of what's in the registry, with some notification over time? And what kinds of data can we pull out? It seems like we want it to be more than that, you know, more than just the manifest and type, right? We want something else: descriptors, things like that.
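For reference, the per-entry data being discussed here is roughly what the OCI image spec calls a content descriptor: media type, digest, and size, plus optional annotations. A minimal sketch of building one (the helper name and the `v1.0` ref value are made up for illustration):

```python
import hashlib
import json

def descriptor_for(content, media_type):
    """Build an OCI-style content descriptor for a piece of content."""
    return {
        "mediaType": media_type,
        # Digest is the sha256 of the exact bytes, prefixed per the spec.
        "digest": "sha256:" + hashlib.sha256(content).hexdigest(),
        "size": len(content),
        # Annotations are optional; this key is one the image spec defines.
        "annotations": {"org.opencontainers.image.ref.name": "v1.0"},
    }

manifest = json.dumps({"schemaVersion": 2}).encode()
entry = descriptor_for(manifest, "application/vnd.oci.image.manifest.v1+json")
```

An index API that returns descriptors like this, rather than bare tag names, would give clients the type and size information the discussion is asking for.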
B
C
One thing I think is very important is that I'm not trying to build a new thing so much as just fill in what is possible but is missing. I know what I can implement with GCR. I know that I can implement my design. It's very possible that there's a registry out there for whom my design is impossible or overly cumbersome to implement, and they will never do it, and that's terrible.
E
C
Everything, yeah. You have to have all of this to serve an image, right? And so I'm...
C
H
It was me. Can I ask a question to the group: is user experience in scope for OCI, or is it just following Docker's user experience?
E
H
Okay. The thing is, it seems to me that the use cases are expanding, and so the user experience for those use cases is still not defined. I'm wondering if maybe we ought to start there.
E
Well, you've provided quite a few of those use cases in the past. That's sort of why I was asking John, you know, what was he looking for on the descriptor? I didn't know if he was going into your area, Nisha, with human-readable descriptions, or was he going off an existing field that's already in the image.
H
I mean, I suppose, because the specs are so vague with regard to all these other requirements that we're asking for, anybody can go off and, you know, build their own special ecosystem and say, like, oh, we can plug it into that.
H
Whatever is existing right now, all the Docker images and such. So, for example, with Steve's manifest proposal, that can be hosted somewhere else, and it points to some Docker image in another registry, and you can say: okay, for that Docker image over there, these are all the, you know, artifacts.
E
H
Yes, yes. It's almost like you want to break out of that loop and go off and...
E
You want to break out at least when you have to link to a remote server; I think that becomes too complex. The stuff that John was talking about, where it's already a descriptor: if it's there, then we can probably, you know, display it or provide it back in the list. That's not quite the same as: oh, and use the auth that I gave you already in my first connection and go pull it from the server over there, right?
E
I think when it's an artifact, and it's stored in the same registry, and it's using your extended config information, which includes author, etcetera, etcetera, and possible external references that we're not going to pull for you but which you can use, I think you're okay, right? It should hit your, you know, your scope, and then it would just require the client to go, you know, pull it on its own.
B
Yeah, I think it's a matter of: what are the separable pieces? Right, like a Notary signature is only meaningful when it's pointing to the thing that it's signing; those are demonstrable pieces. A Helm chart that's pointing to images: I might get the images from somewhere else, so it might be a completely separate registry. So are they separable in a way that, of course, the end result is the client has to figure out?
B
But it's not like I'm pulling layer one and layer two. I'll throw Windows foreign layers under the bus here: if I'm in a locked-down environment, I can't get to the foreign layers, because they're off in another location, right? Those are the kinds of things that we want to avoid: taking a transactional object, if you will (a bad term), and splitting it across multiple things, versus: here are the independent pieces.
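The foreign-layer case is visible in the manifest itself: the layer's descriptor carries a `urls` field pointing outside the registry, and Docker uses a dedicated foreign media type for it. A sketch of a check a locked-down client might run (the example manifest contents and the helper name are illustrative, not from the call):

```python
# Media type Docker uses for layers whose content is hosted outside the
# registry; descriptors may also carry a `urls` list of external sources.
FOREIGN_LAYER_TYPE = "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"

def external_layers(manifest):
    """Return layer descriptors that a locked-down client could not fetch
    from the registry itself (foreign media type or a `urls` redirect)."""
    return [
        layer for layer in manifest.get("layers", [])
        if layer.get("mediaType") == FOREIGN_LAYER_TYPE or layer.get("urls")
    ]

# Illustrative manifest: one ordinary layer, one Windows-style foreign layer.
manifest = {
    "schemaVersion": 2,
    "layers": [
        {"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
         "digest": "sha256:" + "0" * 64, "size": 1234},
        {"mediaType": FOREIGN_LAYER_TYPE,
         "digest": "sha256:" + "1" * 64, "size": 5678,
         "urls": ["https://example.com/windows-base-layer.tar.gz"]},
    ],
}
```

This is exactly the split being warned about: the manifest is one transactional object, but one of its pieces lives behind a URL the environment may not be able to reach.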
B
So anyway, we got a little bit off into the weeds. Anyway, there are a couple of PRs out there to try to capture these. I'm hoping we'll just iterate on what it is that we want, and then we can say: yep, that fits in the plan.