From YouTube: OCI Weekly Discussion - 2021-10-21
Description
Recording of the OCI weekly developer's call from 21 Oct 2021; agenda/notes here: https://hackmd.io/El8Dd2xrTlCaCG59ns5cwg?view#October-21-2021
B
Yeah, it's close enough to five minutes. Yeah, yeah, take it away, Brandon.
A
Yeah, hopefully these are quick, but I just want to throw two out there, just to get some attention on them and make sure that I'm not completely out of left field with what I was thinking here. One was on the artifacts one. We had the PR out here asking to archive it, and I was a little confused by that, so I wasn't sure: is there something I don't understand? Should we be using this at all, or is this a little premature?
C
I should go ahead and apologize for that. I was doing aggressive grooming of OCI yesterday, and there were a number of projects. So, in general, I think there's a discussion that probably needs to be had; I'm super confused by the status of this project.
C
I thought that there was a disagreement about it, and that's why work is being done in the ORAS one, but maybe Steve could, you know, explain it the way that he sees it. But, you know, I didn't mean to cause any issues with this, or confusion; but, like I put here, I'm also very, very confused about what's going on. So...
D
The readme is a good place. Let's just change this PR to, you know, a link to some efforts that are underway. The working groups, as well as the ORAS effort, I think, would be fair to link to. Yeah, and also a couple other places: the potential PRs, the reference one that was done in image-spec, as well as probably linking to, you know, Justin's pull request. I think it was 29 or something, or maybe another number, in this repo.
E
When a person is confused, it is the documentation's fault, right? So if Josh, who is even a person who's relatively plugged into the effort, and who knows us and talks to us sometimes, is confused, then, like, some rando off the street is going to be extra confused. So more documentation would be great.
F
I guess I support documentation; it's helpful. Sometimes it can be more confusing, though, so rather than spawn this out to several different things: if you look at what's in this repo now, it's strictly "here's the guidance on how to use the existing manifest that is in production." People can use it today; it's, you know, as released as anything could be, because that was the whole debate: is this a spec, or is it a reference implementation?
F
That's the working group discussion. The outcome of that working group is TBD, right? It could be specs that go into this one. It could be a merger with the distribution spec. We don't know yet. So I don't know if we should be adding more links to things that aren't closed, as opposed to: it needs a status section in the doc.
C
The readme, yeah. I think just at the beginning, like Jason is saying: just, hey, actually the progress is going on over here; maybe it'll come back into this. Like, is that the idea, that you'll work without being pushed back upon in the ORAS one, and then get it back into here as something stable?
F
Thing? Like, is that what the working group is? Here is the question: we don't know if it should go in. Like, when the artifacts repo was created, it was because we couldn't get agreement to make changes to distribution or image; so that was the concession. I don't know what the output of the working group is. I have, you know, an opinion, but that's the point of...
E
For the latest discussion, I think, like I said: Josh is a person who is relatively plugged into this stuff, and if he doesn't know what the latest status is, then some person who saw a presentation about artifacts and wants to get involved will have no help. Like, where do they do it? Do they do it in this repo? Do they do it in ORAS artifacts? I'm just trying to give people breadcrumbs to find where the current discussion is happening.
F
I mean, I'm happy to do that. You know, we're having big changes to this; I was trying to not add more complexity to it, quite honestly, of, you know: is this a fork? It's not a fork; it's not that it moved. You know, maybe those are the words we can put there. If the group is okay with that, that's great; considering all of the contention that's happened around this, I was trying to...
E
If you're not interested in bike shedding, I think you might be in the wrong group. But I think, for the discussion of specific wording, by all means, a PR is perfect for that. But I agree with Josh, whether he meant to say this or not, that the current state is confusing, and some text somewhere saying what the current state is, even if it is just "go look over here for the latest status", would be fine.
C
And that was really the purpose of the PR: I thought the updated thing was there, and so it was simply that, when people landed on this, because they're coming from someone's blog post or Twitter, they would then be redirected to the ORAS one, where the... you know. Anyway, I'm sorry; I should have reached out to, you know, Mike and Steve and gotten more clarification. I was just like...
E
Especially useful to frame it not in terms of any specific type of thing being rapidly created or updated, but like: regular images have this bug; regular indexes have this bug, right? Like, yeah, I think we can easily fall down a rabbit hole if we talk about specific use cases where this is more than likely to happen, but, yeah: regular images from 2017 have this problem, and registries should do better at solving it. So plus one; plus two, even. I'll give you a plus two, Sargun.
G
Can you hear me? I hate Zoom. How do we implement this on S3?
A
I'm not the original author on this one; I don't know.
E
When you say static, do you mean, do you mean a registry that does not accept pushes?
B
Everyone, we solved the problem! But I think, if you're pushing to S3, you'd have to have an intermediary anyways.
G
We all do; it's called distribution. But, like, distribution...
G
Basically, I'm saying that this spec is impossible to implement on top of S3, and, as far as I know, the S3 backend is the most popular backend for distribution, and I would really like... so, just because you'd have... just:
G
If you have more than one distribution node, you can't do a read-modify-write operation that's consistent!
G
Yeah, but, like, let's say that you try. I just think you can easily get into an awkward situation where, like, clients are not necessarily going to take a negotiation to see if this header exists, and, like, this is a lot of complexity.
G
Alternatively, I would love to see someone actually get distribution to implement this. Like, there are clever ways, by using side channels, and registry, or distribution, has historically supported caches on the side. In other databases that use eventual consistency, I've seen people build consistency systems that are external to them. I would kind of love to see someone prototype a metadata store on top of, or next to, distribution, just to see how complicated it would be to implement on top of S3.
E
I think this proposal would actually help motivate that work in distribution, right? Like, if this gets merged, it's not saying that distribution is bad. It's saying: here is a recommendation, an OCI recommendation, that distribution does not currently satisfy. It's just a recommendation, and, because it's an unsatisfied recommendation, now somebody can say, "Oh, I would like to satisfy that recommendation," and go do that. However, distribution... I...
E
I take your point, though, that it's worth thinking about how hard this is for a registry to implement, because, like, the...
E
Yeah. Is there anybody, either here or who knows someone in distribution land, who would be willing to scope it out? Not, like, necessarily build it, but give an estimate of easy, medium, or hard on this. I mean, you seem to have some knowledge of how distribution on S3 works, so you are already infinitely better than me, because I have no idea of either, but...
G
I mean, yeah. I feel like this has come up a number of times in this group, where it's like: hey, propose a spec. What would the implementations of the spec look like? But the implementation takes a long time; if there's no chance the spec will get merged, let's not do it. In this case, I don't actually think that it's a ton of work. I think someone just needs to propose what the semantics would be; so, like, a lease-based semantic would be something that would be very realistic, I guess.
E
The benefit of ETags, and of this proposal, is that they are not prescriptive about how the ETags work, right? Like, distribution could solve it completely differently than, you know, my registry, or Brandon's registry, or whatever. Just saying ETags should be used, I think, is not an onerous thing for any registry. I get that it's worth, like, looking into how hard it would be practically inside distribution, but, I don't know, as soon as I saw this I was like...
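[Editor's note: as a concrete illustration of the non-prescriptive ETag semantics being discussed, here is a minimal in-memory sketch of what an ETag-guarded, compare-and-swap tag update could look like on the registry side. The class and method names are hypothetical, not from any OCI spec or from distribution; a real backend could derive ETags from SQL row versions, S3 object versions, etc.]

```python
import hashlib

class TagStore:
    """Hypothetical in-memory tag store with ETag-guarded updates.

    How the ETag is derived is deliberately left open by the proposal;
    here we simply hash the stored manifest bytes.
    """

    def __init__(self):
        self._tags = {}  # tag name -> manifest bytes

    def _etag(self, data):
        return '"' + hashlib.sha256(data).hexdigest() + '"'

    def get(self, tag):
        """Return (manifest, etag), as a GET response would."""
        data = self._tags[tag]
        return data, self._etag(data)

    def put(self, tag, manifest, if_match=None):
        """Guarded write honoring an If-Match precondition.

        Returns an HTTP-ish status: 201 on success, 412 (Precondition
        Failed) when the tag changed underneath the caller.
        """
        current = self._tags.get(tag)
        if if_match is not None:
            if current is None or self._etag(current) != if_match:
                return 412
        self._tags[tag] = manifest
        return 201

store = TagStore()
store.put("latest", b'{"schemaVersion": 2}')
_, etag = store.get("latest")

# A concurrent writer sneaks in between our read and our write...
store.put("latest", b'{"schemaVersion": 2, "layers": []}')

# ...so our conditional update is refused instead of clobbering it.
print(store.put("latest", b'{"mine": true}', if_match=etag))  # 412
```

This is exactly the read-modify-write guard that a single node can do trivially but that multiple nodes over plain S3 cannot, which is the point of contention above.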
G
It works if you have one server; but, like, if you want two, you need... yes.
A
Yeah, so that's where the challenge is going to be: implementing it. So that's kind of why I phrased it as: what are the next steps for this? And so, if we're saying, from the OCI side, that we'd like to see somebody from distribution take a shot at implementing that, and that's what we're held up on, then that gives us some direction.
G
Like, I know that Quay supports this, but Quay also requires a SQL database, right? And, like, that has some complications around availability. So it would be nice to be able to have a story around: are there any registries that implement this without sacrificing availability a ton?
B
Good question. It's effectively been the same process as the other specs. They did have some documentation in a few different places; it might not actually all be in sync. If you looked at the runtime spec, it would probably have it; and then the distribution spec was the first release where somebody besides myself actually went through the whole spec release process.
B
So Josh did that for the 1.0, and if there are any improvements... otherwise, just copying or carrying over the release process, to, like, a RELEASES.md in the distribution spec: we need to do that.
B
I imagine, if we did anything like a 1.1, we'd probably go through the RC cycles. I'm not opposed to having something like a pre-release or an RC even for a patch release.
B
Typically, we would actually create a milestone for the things that we were tracking that we wanted to get into that release. So, if we're thinking about a 1.0.1, and we should, we can go ahead and create a milestone for that; that's low-barrier. Look at things on that GitHub milestone.
B
Then, once we've done it, it's basically just sending out an email to the list; well, sending an email out, effectively, trying to get all the maintainers CC'd on it, but also CC'ing the list. It has a timed voting period, and then it's merged in. So there's typically a little bit of a song and dance of putting up a PR, so that people can see what would be tagged as, you know, dot-next, and then having a vote where the maintainers LGTM on the list; and then it obviously also requires a couple of LGTMs on the PR, and it merges. And the PR that's in that is usually, like, one that bumps the actual version in some file and then bumps it back into, like, dev.
B
So it's always dev in trunk. But still, the commit in that PR is the one that's tagged; actually, like, git signed. Tagged and signed is the release.
F
Yeah, I mean, I watched the video from the meeting that I was out for, the week on vacation, on the extension API, and, you know, this kind of came into the conversation of: can we get something in? Not the merged spec; like, there are the released specs, and what's in main is, like, the work in progress, and what's there is: hey, this is what we're working on, but until a release is cut it might change; like, until we have enough confidence that this will work. You know, will it work across...?
F
You know, that's for that example, but it could be any one of these topics; for the thing that Sargun proposed a couple of weeks ago, I mean, it seems like we need some way to have something that's a work in flight: here's the current status; it may change, and until a release is cut it's not finalized, but at least it represents some amount of confidence that this is where we're currently at.
F
I mean, I guess that's the question. Like, we went through this with the Notary project: we were working in a prototype branch, for instance, and that just confused the hell out of people. So the feedback we got was: main is the work in progress, and then there are releases for what's done. So, for instance, Sargun's... what was the thing? The de-duping?
F
I think that got merged into main; it's not released yet. It's a place for people to reference to do the implementation. I think we're pretty confident about how that's in, in that it probably won't change; but, you know, as others try to implement it, maybe it will. Same as the referrers API; another referrer... sorry, the extension API, which goes in to enable things like the referrers API, or other extensions that others might want to do, until they start working on it.
G
I think, just furthering on that: there was a conformance test that was added, and, because our conformance tests are tied to our releases, and the conformance test was testing pre-existing behavior, it kind of adds yet another layer of complexity to this discussion.
G
Do we do that? Do we say that this conformance test is going to make it into the next release, and, you know, the associated features are likely to make it? Like, how do we kind of track this? And, you know, I would love to see that conformance test start being run against registries in the wild, because I know there are non-conforming registries.
B
What I'm sitting here thinking: because one of the things that came up in the past, and we said, let's just do git tags for now, like main and then tags, and not actually do git branching, because, hey, we didn't want to, like, be maintaining branches and merging back and whatnot. But for some of those conformance tests, to be able to say, like: whatever this moving target is, it is always v1, and then it could be v1.1 or whatever.
B
And then you have, like, an increased amount of cherry-picking and merging; but to have something like a moving target, so that... but for now it's just a git tag strategy.
B
Help us to track that and make sure that it, like, is consistent or otherwise, and even, like, for filing issues that we want to discuss and getting them in. You know, it's all in main, or whatever.
B
So let's go ahead... let's... no, I'm thinking out loud. You thought it'd be more convenient?
C
No. So, Sargun, the conformance test binary should be released with the tag. If it's not, we should fix that; and then there's also an image in GitHub's registry that should be tagged with the release. So my thoughts are that you could run the tests locked to that version, and you would say, "I'm 100% compliant," and then, when 1.1 comes out, it's a whole new binary that's locked.
G
So, if I'm just to recap: our semver versioning is, z increments are clarifications and additions to the conformance tests, to check all previous y releases against those conformance tests. Increments of the y release may add new features, where the future version of the tests will validate previous versions; previous versions' tests may not.
E
All right, if that topic is done, the next one I had... oh, I don't know if it is done. Is this topic done? I don't want to cut anybody off. The next one is mine, which was mainly just an FYI, in case people didn't see it: the reference types working group proposal was updated.
E
Take a look. I don't know if people want to discuss it further, but I'm not sure that that was communicated; like, I didn't see it in the Slack. But if you are interested in that proposal, go take a look.
G
Yeah. The next one is interesting: KubeCon conversations.
G
Oh yeah, so I'm going to reopen this wound. Oh good, yeah. I don't know if people were familiar with this issue from a while ago; well, there was no issue, but, you know, a proposal. There was a discussion around adding Content-Encoding and Accept-Encoding support to the registry, or to the distribution spec, excuse me, rather than using our own media types and media-type-based negotiation.
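[Editor's note: for readers unfamiliar with the mechanism being contrasted with media types, here is a minimal sketch of Accept-Encoding negotiation on the server side. The available-encodings list and function name are illustrative, not from the proposal; real HTTP negotiation also orders candidates by q-value, which this sketch skips.]

```python
def negotiate_encoding(accept_encoding, available=("zstd", "gzip", "identity")):
    """Pick a Content-Encoding for a blob response.

    `accept_encoding` is the raw Accept-Encoding request header. We take
    the first client-listed encoding the server can produce, honoring
    q=0 as an exclusion; `identity` is the uncompressed fallback.
    """
    offered = []
    for part in accept_encoding.split(","):
        tokens = part.strip().split(";")
        name = tokens[0].strip()
        q = 1.0
        for param in tokens[1:]:
            key, _, value = param.strip().partition("=")
            if key == "q":
                q = float(value)
        if q > 0:
            offered.append(name)
    for name in offered:
        if name in available:
            return name
    return "identity"

print(negotiate_encoding("zstd, gzip;q=0.5"))  # zstd
print(negotiate_encoding("br"))                # identity (fallback)
```

The contrast with the registry's status quo is that the blob's identity stays the same under every encoding; only the wire representation is negotiated.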
G
So I just wanted to see if the proposal was clear on what Accept-Encoding meant, exactly, and, if it was clear, then does it make sense to go do the dance of building a prototype of what it might look like?
B
So, is this, would you say, a transitional first step, so that you could potentially stop even having compressed images across the... like, the thing that the identity and digest is calculated on being without any compression associated with it, so that you could, on the fly, do zstd or xz or whatever across the wire, and then identities move...
G
...more towards just uncompressed, right? And you could even do stuff... so, there's a second part of the proposal, which I've decided to split off from the first part; but you could upload a tarball that's not compressed, with the digest of the tarball that's not compressed, with an encoding of gzip, and you would verify the digest on the uncompressed form and upload the compressed form. And then, when people go to download that blob, they only have the permission to download the blob with the encoding of gzip.
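[Editor's note: the digest-on-the-uncompressed-form idea can be shown in a few lines. The helper name is mine, not from the spec; the point is that the identity stays stable no matter which wire encoding is used, whereas digesting the compressed bytes ties identity to one specific encoder.]

```python
import gzip
import hashlib
import lzma

def digest(data):
    """Content identity, computed over the *uncompressed* bytes."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

layer = b"\x00" * 1024  # stand-in for an uncompressed tarball

# The same blob can travel as gzip or xz; its identity never changes.
wire_gzip = gzip.compress(layer)
wire_xz = lzma.compress(layer)

assert digest(gzip.decompress(wire_gzip)) == digest(layer)
assert digest(lzma.decompress(wire_xz)) == digest(layer)

# Digesting the compressed bytes instead would give a different
# identity per encoding (and even per gzip implementation):
print(hashlib.sha256(wire_gzip).hexdigest()
      == hashlib.sha256(wire_xz).hexdigest())  # False
```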
B
Well, it's neat. So I'm all in favor of this; I've despised the fact that compression was tangled up in our identities for years, especially since it's inconsistent across places. That prototype sounds a little tricky, because there were brief, fleeting moments where different compressions actually had vulnerabilities in the decompression piece of it; so binding that up with the client having to do that decompression just to attest the identity of it, without just doing, like, a straight opaque digest, like a checksum, is a little challenging in certain...
B
It raises certain security concerns; but it's a fine prototype, or, you know, kind of a demo. But, on the whole: storing, and having addressability based on, the uncompressed blob, and then finding other ways to, like, say, you know, "here it is, also as xz or zstd or whatever"; like, ways that you could actually present it, or point to it, or even move it across the wire with types of compression. I think it's useful and neat. Needed, even. Are you...
G
Talking about where people are building, like, zip bombs? Or are you talking about where people are building incredibly large documents, because the content length can't be calculated ahead of time?
B
No, there were... it was more, actually... that issue happened right around the time that we were arguing this, and it kind of put the nail in the coffin. I'd have to go actually do some archive spelunking.
B
You know, you'd have to do processing, jump through hoops, to get a digest, and it could potentially have, like, a buffer issue as well, and, like, particular vectors. And it was the kind of implication... it was quickly and easily fixed, but it kind of, like, put a nail in the coffin of a conversation that needed to continue, five years ago, right? Okay. So then, yeah: basically, anything that needs a digest should be opaque enough that you can do it with command-line tools.
E
You may have answered my question, but I'm too stupid to realize it. I have an apology also, if this has been litigated and answered before: the size...
E
The size that would be returned is the uncompressed size, right? Like, I get a size and a digest; the HTTP content length... is that what we're asking? Not exactly, so: I see an image, and it says layer one, or layer zero, is digest abc, size a thousand bytes, and I say, oh, I can process a thousand bytes, that's easy; I'll go fetch that digest. Descriptor size, thank you. Yeah, yeah. Then I go request that, zipped.
E
And I say, yeah, I don't want a thousand bytes; it's actually too much for me. I'd rather fetch less than that, and then decompress it on my size... on my side. I am doing a lot of trusting that that size isn't going to actually be bigger, or even way bigger; like, the descriptor size is there to prevent me from... to...
E
Let me put a cap on how many bytes I'm going to pull over the wire, right? And, if the compression used is really bad, it could be way more than the thousand bytes of the uncompressed size.
E
And so... okay, so I'm still guarded, because I could say I'm fine downloading a thousand bytes, and then I find out from the Content-Length header that it's actually two thousand bytes, and that's too rich for my blood; I'm not going to try to download it anyway, right? Great. Okay, but...
G
The way that this works is that, when people do docker push, and we did the multi-part blob assembly, at that point we would do the compression and output the eight different compressed formats that are supported. I would hate for people to actually do on-the-fly compression at serve time. I will, for fun; but I take your point.
B
Yeah, that's where we got to in the past. It was actually almost like what I did with the crazy tar-split project: going down all the different ways that Huffman windows are calculated for all the different gzip implementations, to see if we could, like, re-read and calculate enough to reassemble whatever gzip they used, in whichever optimization, whether it was golang or BSD or otherwise, so that we could do that on the fly. In the past, it was really maddening.
E
SHA-256 might get broken tomorrow, and, if it does, OCI falls apart. I mean, everything falls apart, but one of the many things that falls apart is OCI. I think, Steve, to answer your question: I think I was the one who talked about that at the summit, and I think everyone just sort of shuddered quietly to themselves, and we agreed we should do something about this, but...
F
I guess my takeaway from the conversation was: we'd spin up, you know, a group that would start experimenting; like, here's a conformance test, here's something that, you know, somebody could start running on their various registries, to see, like, what happens. I think you're bringing up a broader thing, which is a great conversation as well. So, when this does get broken, do we just accept that for existing content and not change it? Do we fix the existing content, whatever that means? Because everything kind of falls apart, because all the links are based on digests. Or can new content start being pushed within three days? And what is the expectation of registries that, you know, implement the spec? What does that do to the 256-character limit that we have on things? Like, I think that's on repos; I don't know if it includes the tag.
B
One thing that, in my mind: all existing content stays there, like, addressable and everything, and you SHOULD, you know, capital letters, migrate to something else, like SHA-512 or otherwise. It's not currently broken, and those things could even be effectively interchangeable: you might have, like, a FROM where, you know, one of the layers in the stack has been SHA-256, and the thing you just built on top of that has changed since this, right?
B
But your new image is SHA-512, for at least a couple weeks. Yeah, but, yeah, I think doing a spike like that should work, and I feel like people have already done that. I have not, but people even on this call have probably done... what would that look like, with the different checks on the SHA-256?
E
I think targeting 512 is even a miss; like, targeting a single thing is wrong. If we're going to do the work of getting to another digest-like format, we should do something, like, more extensible. For conformance tests, I think I would accept it as a conformance test to say "I support md5", which is worse, which is, like, the wrong direction; but at least it proves it. It's not guidance for what you should use; instead, it's "look at how flexible we are."
E
We could even support ROT13, something even worse. Anyway, Brandon has his hand up; I didn't mean to interrupt. Yeah.
A
No, I don't think we need to ROT13 things; that wouldn't be a bounded length. But, going on: in general, I think the one question I have is what this looks like in a transition state, and maybe that's in there; and I don't think we would be able to answer that in five minutes, but just in general...
A
Something to think about is: if you have older clients doing the push, or maybe you have a newer client doing the push and an older client doing the pull, you know, because of the content-addressable store, that digest has to match whatever they're seeing, and an older client doesn't know about the encoding happening on the fly. And so those are the scenarios that I'd like to see fleshed out.
C
So I actually... I want to say me and Jason talked about this, but, basically, one of the things with the conformance tests and distribution is they're pulling in an image-spec library. So the writing of the spec is saying: you just need some document that references the blobs. So I'm not even talking about going away from SHA-256, but I was considering, like, just as a proof of concept...
C
Writing, like, a YAML-based registry that just converts in and out of YAML; and then we could provide to the conformance test, like: here's my manifest and descriptors, and they're actually YAML; and you throw it against the registry and it works, given that new... Because then, if you introduce a new image-spec manifest type, or any of these new things like referrers, then the test would be extensible enough that, like, you just pass it a bunch of blobs, and then those somehow get templated into these new things.
E
Maybe, maybe not. I think it's a different dimension of changes we might want to consider; not that I would advocate YAML specifically, but blobs should be able to take anything and be content-addressed. Currently, there's only one way to content-address them practically, and it's SHA-256; but extending that to other things could be useful. I know we only have a few more minutes, but I wonder if Sargun also has his hand up. Sorry.
G
My... I would love for this to be clarified, because I asked this question a long time ago, which is: do we ever want to mix hash algorithms in the pull process? So, let's say a tag points to a SHA-256 manifest that then points to md5 blobs; is that illegal? If some blobs are md5 and some blobs are md4?
G
Is that legal? Like, none of this, as far as I know, is ratified in the spec. This isn't an improvement; this is just a clarification that I think would be really wonderful to add.
E
Yeah, yeah. I think, practically speaking... so, I agree with you that I think it's not defined, and therefore it is probably allowed; but, practically speaking, what a monstrous registry, to do that to you. But...
B
Because what if you were building something on, you know, a Debian base, and it was published as SHA-256, or, in the future, whatever they want to standardize on, and then you, on top, are exclusively using BLAKE2b? Your layers would be... you'd have a mix, a heterogeneous set of hashes.
G
Should we make this ratified somewhere, please? image-spec?
B
I think this is one that would need both, because, like I said, it's a different API call; so, having a bit of verbiage around those things. The fact that you might have a manifest that effectively everyone is going to parse is fine; but then, for whatever manifests, you know, that are getting pushed to the distribution... I think it'd be in both distribution and image-spec.
E
I think it's fine to put it in the specs, to mention it in the specs. I think it might confuse people, because practically it doesn't happen; like, you would never see this. It's more of a, like, you know, educational oddity: for instance, you could have two different types of digest, but, in practicality, 100% of registries only support SHA-256.