From YouTube: OCI Weekly Discussion - 2021-08-11
Description
Recording of the weekly OCI developer's call from 11 Aug 2021; agenda/notes here: https://hackmd.io/El8Dd2xrTlCaCG59ns5cwg?view#August-11-2021
B
Hello, I want to merge this, but I want everyone to like it, so…
B
The size limit would be whatever the registry has as a size limit, so I'm leaning on HTTP semantics for this. If you try to embed a gigabyte of video in this, then the registry has the opportunity to say no: "That is silly, I can't store that. 413. Please try again."
E
And to be clear, that is already a concern with images with or without the data field. The other item on the agenda is how we should tell registries to express "this manifest is too large," because you can already do that today, with a million layers, or annotations containing the base64-encoded video contents, or whatever.
B
Yeah, largely this data field gives a shared understanding between clients of how to treat it, right? It's a very special thing associated with that descriptor, where, if you see it, then you can verify its contents and avoid pulling it altogether.
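What B describes can be sketched concretely. This is a minimal illustration, not any project's actual code: the descriptor shape (`mediaType`, `digest`, `size`, `data`) follows the OCI image-spec descriptor, with `data` carrying the base64-encoded content inline, and the helper name is hypothetical.

```python
import base64
import hashlib

def inline_content(descriptor):
    """Decode and verify a descriptor's embedded `data` field.

    Returns the verified bytes, or None when there is no inline data
    and the caller should fall back to a normal blob fetch.
    """
    b64 = descriptor.get("data")
    if b64 is None:
        return None
    content = base64.b64decode(b64)
    if len(content) != descriptor["size"]:
        raise ValueError("embedded data does not match descriptor size")
    algo, _, expected = descriptor["digest"].partition(":")
    if hashlib.new(algo, content).hexdigest() != expected:
        raise ValueError("embedded data does not match descriptor digest")
    return content

# Hypothetical descriptor carrying a small config inline.
payload = b'{"hello": "world"}'
desc = {
    "mediaType": "application/json",
    "digest": "sha256:" + hashlib.sha256(payload).hexdigest(),
    "size": len(payload),
    "data": base64.b64encode(payload).decode("ascii"),
}
assert inline_content(desc) == payload
```

Because the descriptor already carries the digest and size, the client can verify the inline bytes exactly as it would verify a fetched blob, which is the "shared understanding" being discussed.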
F
So maybe it'll help if I ask clarifying questions. Is this helping to save, as you said, a round trip to some other endpoint to get data?
B
Registries, in any existing API, should not choose to populate this, ever. It would violate a lot of assumptions about clients if registries started to populate this arbitrarily, because as a client, if I push something with a given digest, I expect to get that back. And so, when I push a manifest that doesn't have any content embedded in it, I should get back a manifest that does not have any content embedded in it. My expectation for registries is to do nothing at all; they should just ignore this.
G
Hey John, I have a lowered hand. So just a couple of things. First is, you know, I read through this; thanks for answering all those questions in the comments. Yeah, I wanted a little more data around the use cases; it wasn't clear to me, like, the motivation behind the definition. And then, did I understand it correctly based on… or did I lose you, with my mic still working?
B
And this seems like where it would go, and it is a small, backwards-compatible change that should not break anything. I think it fits within the framework that we already have for making changes to specs.
G
Yeah, absolutely, but working on the ref types as well, you know, we were told by the OCI to go and prove that out in a working group. It seems to me that you would be, you know, using this to shove signatures in, which is orthogonal to the ref-types work. It's just a different place to do the work, and you're just merging it in; you're changing it without following the same set of rules that the OCI outlined for the ref-types work.
B
I would categorize this as, like, orders of magnitude different in terms of the change. This data field was reserved for this purpose originally in the image spec, so this is basically following through on a "we're gonna do this in the future" promise. But also, this doesn't change the topology of the Merkle DAG in any way, right? It's just that some artifacts may be slightly larger, and the considerations in the spec already require implementations to ignore unknown fields.
D
Yeah, the design of the data field doesn't require any changes on the registry server itself. It's just an agreement between the producer and the consumer of the artifact, or whatever; I'm saying "artifact," but manifest, whatever, that the data field can be used in that way. And so it's a very minimal change compared to all the ones we've been making.
C
I may have missed this, but if there are no protections around what data goes in, what's stopping clients from, you know, putting whatever data they want in here?
H
My concern as a registry operator is that now I can't, like, disallow storing some data. Like, so say the first version of the cosign format is horribly broken, and I want to make sure that I don't allow any more of those into my registry. With this proposal, that's effectively impossible, because before, I could just ignore it and disallow that blob from being uploaded, but now I can't, because it's inlined into the descriptor.
B
Okay, right. I mean, if you wanted to, as a registry, reject a non-distributable layer that has data embedded, I would find that reasonable but unexpected, because generally that kind of distinction is done on the client side. I know distribution has, like, some flags or config around what URLs are allowed to be used for non-distributable layers, but I'm not really familiar enough with how that works to make any useful statement about it.
E
Yeah, I think an overarching point of this is that registries should already not do anything in the presence of unknown fields, including data, or the same information in any other field. So it's really not a registry issue; it's a client-to-client issue. Me as the client pushing and me as a client fetching can agree on the format of that field and its content.
H
Right, but that's just now contrary to how media type is defined everywhere else, where it says…
C
I wonder, maybe this is for Brandon: how would clients that share that data field negotiate content that's in the data field? Do they just go by media type? And so would that make it so the client must define what kind of media type they're looking for?
D
It's the exact same thing we have today with what's in the blob; it's just base64-encoded. So it's the same negotiation you're already doing on the media type and other fields in there. It's the same content; you just don't have to pull it with a separate blob fetch, you just decode the base64 data.
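A minimal sketch of the dispatch D is describing, under stated assumptions: the function name is illustrative, and `fetch_blob` stands in for a real registry round trip. The point is that media-type negotiation is unchanged; only the transport of the bytes differs.

```python
import base64

def resolve_blob(descriptor, fetch_blob):
    """Return the descriptor's content, preferring inline `data`.

    `fetch_blob` is a hypothetical callable that performs the usual
    registry blob fetch; it is only invoked when no inline data exists.
    """
    if "data" in descriptor:
        return base64.b64decode(descriptor["data"])  # no network round trip
    return fetch_blob(descriptor["digest"])          # normal blob fetch

desc = {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:…",  # elided for the example
    "size": 2,
    "data": base64.b64encode(b"{}").decode("ascii"),
}
assert resolve_blob(desc, fetch_blob=None) == b"{}"
```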
C
Well, that's… I guess my question is about the underlying media type. So if one client wants to record, like, a media type called foo.bar, and another client is expecting media type bar.foo…
D
So the media types we've already got today: the media type for, like, a manifest, and it's got the media types right now for, okay, here's the tar, you know, gz layer. And if I'm saying, hey, this is only 145 bytes, let me pull that blob, I just do the command there. It didn't pull; it ran through a base64, so the end of the string is base64-encoded.
D
And you could, but it's going to be base64-encoded, and it's probably gonna be the JSON of your config.
C
In general I'm ambivalent to this, because it's so vague, but that's my only concern: you know, what's preventing clients from stuffing all kinds of data in that field?
J
Right, is the question, or the issue, that you're not sure that it actually is the blob that you were pointing to in the reference? I'm sorry, the…
E
Yeah, I'd be happy to transfer this discussion to distribution-spec 820, whatever it was, 293, to talk about limitations that registries should impose on manifests.
E
That proposal only talks about the overall size of the manifest: that if I, as a registry, am getting a push for a manifest that I, as the registry, think is too big, I should block it. We could…
E
I don't want to cram it into that same proposal, but we could talk about other proposals for the types of things registries might want to be able to express constraints on, like "my registry does not support the data field." Now, you could imagine a set of constraints that says, now that the data field is in the image spec: I only support these media types, I only support blobs under this size.
E
I only support manifests that don't have the data field. And the distribution spec would be able to specify here is how you express "you do not match; this request does not match my constraints." It's a, you know, 400 with errors that look like this, or whatever. But ultimately I…
A
If you… Because I don't think anybody intends to abuse it; I think that they would consider it a valid use, and others would, you know, have an impact on registries as they're working. The thing is, today we do have expectations around size constraints that we all implement. They're not necessarily consistent, but when a customer comes to us and says they have a 20-gigabyte layer, or they have 62 layers, and it's causing some failures in some place, you can kind of look at them going, really…
A
That's not a reasonable amount. We could argue whether it's a bug and whether we should go fix it; ML scenarios seem to be the biggest ones, for example. But there's a reasonable expectation, when you can look at this, to say "this is an unreasonable amount" and then decide whether to support it. And an annotation kind of has an implication of that around the size of the string that might be put in there.
A
When you start saying "data," it has a more general feel that a user of this may not agree with, and then we get into this debate. You know, as a spec we're trying to create consistency so that content that moves across registries is successful; we want to be able to pull things from Docker Hub, NVIDIA, MSC, or wherever, and move those into private registries.
A
It feels like we need to at least do some more validations on it to see what the real benefit is. To say, you know, "two-thirds performance benefits": well, what is that? Are we talking a couple of milliseconds? And I'm not trying to minimize it, but I think we need to just qualify a bunch of those things to figure out: is this really hitting the bar for what we need, given the implications it'll have for users and registries and clients and so forth?
I
I just want to say that I actually support this PR, but I'm questioning the process here, because I think there has been a discussion of moving certain work out to validate, and for this one, do we have the same validations in place before we accept this as a merged PR? Or are we holding this PR to a different process because it's an already-existing data field, and things like that?
I
What the general model of the working group is, is still kind of unclear. I think, Mike, you kind of hit the nail on the head, which is: should we bypass the process for this one, and make it explicit that that's what we're doing for this one? And that might be better, rather than merging this in and then holding the remaining working groups to a different standard of requesting a validation, or somebody actually implementing it.
I
On the contrary, if somebody's actually implemented using the data field, and they're doing it this way, then I think it makes it concrete to say let's accept it as a validated, industry-validated capability; like, OCI came about after a ton of validation. So that's one aspect which I'm not clear on yet.
D
So we recently approved a couple of annotations in the OCI distribution spec, and that's just saying: here's a common data field that's in the distribution as you're pushing this data back and forth. So these annotations just make sense for two people to be able to know what to expect in that field. I feel like this follows in that design, where we're just defining…
A
The annotations were, you know, by default, by definition almost, the way that they're a narrative thing. Yes, it has to go through a process and get merged, but I think that there was a general… and we went through a lot of iterations, honestly, before it got there. So I think there's a difference. I do want to go back to the process: so it seems like Zoom is doing the same thing…
A
…that Teams does, which is it remembers the ordering of who raised their hands. So I'm assuming you're seeing the same thing, that Sanjay was there and then Hank. So if we can just kind of follow that process: Sasha, if you're done, lower your hand, and that cues up Hank to be next.
H
Yeah, I just wanted to say I think Jason's suggestion, that the distribution spec grows something to be able to signal back a registry's thinking, or like requesting that something be essentially, like, rephrased, sort of alleviates my concern and use case.
A
I think we should still address the compatibility issue, because… and I'm not saying that every registry should agree to the same sizes; I get the challenge there, and we all have, you know, different things on limits, and we probably all, depending on which customer is hitting us at one time with some ridiculous level, decide, okay, we'll allow it.
A
I think that this… it's not as simple as putting a max limit on a manifest post or a blob post or the number of layers, because that will drive a set of inconsistencies across the registries that just has customers that are multi-cloud or multi-registry-project oriented chasing us all in a circle, because we failed to create a good-enough standard that worked consistently across all of…
A
Well, true and false, and that's part of what I'm trying to get at: it doesn't say the max size of a data element, it says the max size of a manifest. And that's what I was trying to get back to previously: if somebody puts 20,000 annotations in a manifest, you can go back to the customer going, really? I mean, come on, this is unrealistic.
A
I am concerned it's creating a set of instability across registry products in how they would be managed. I agree with the concept of a data field and, like I said, I don't disagree with the concept. I somewhat disagree with the duplication that it has with what's in the blob, and I'm trying to avoid getting to that level of detail. That's what I'm hoping some kind of working group could kind of sort out, saying: all right, what is the real problem we're trying to solve, and then land that solution.
A
Because I think we're skirting around what looks like a simple property: what's the big deal? But the property has really big implications; like, naming is hard because naming has an implication around it. And we did have a previous call several weeks ago where we were talking about what is the definition of a manifest versus a blob, layer-slash-blob, and that's part of my concern. Look, for the 40K, or whatever measures-in-Ks size of data, to be said it has to be binary-encoded to be in this named thing called data: yeah, it makes sense, but we're not putting any expectations on reality. And I don't know why it needs to be in a blob as well, honestly; that seems to me it's perfectly fine just to be in the manifest, a special type of annotation, or we just use annotations. Like, I'm actually not sure why we couldn't just use an annotation for some of the examples that I've been hearing talked about.
C
Sorry, I was just… The reason why I keep, like, pushing back a little bit on this is because experience shows me that people who want something done will take the path of least resistance.
C
So if you say, hey, here's a field and you can put, like, a base64-encoded whatever in here, people are going to use that, and that might have some undesirable consequences for registry operators. I don't know what that might be, because I don't have that kind of imagination, but I just kind of feel like it is going to happen.
J
No, it depends. It depends on the change, right? Adding an annotation that has no rules or regs on how the API would work between them meant it was a no-op, right? A field that has requirements, or possible requirements, on the client or the server in the API, then you'd need to make a change over there. And I know Steve's brought this up a lot of times, like: why do we have two specs?
A
This one's had enough questions that I think, you know, kind of warrant a better validation on the impact of it, because it's not just the registry operators and how to do better caching or whatever it is; it's the implication for what our customers will wind up having, with costs implied by it. So I just think we're being a little oversimplistic around the impacts of this element as currently being outlined.
J
Yeah, yeah. I mean, on 293, for everybody, you know, since Jonathan's gone: it was really just a matter of, you know, recognizing that current registries already today have enforced a limit on the size of the manifest, some one megabyte, some four, and that we needed to put some restrictive language, or at least some descriptive language…
J
…inside of the distribution specification, to explain, you know, what error code you should return, and to put some "may" language or "should" language. And I think we ended up with "may" language here, where a registry may enforce such limits on the maximum manifest size that it will accept. And it should… I think it must…
J
But I think right now there's been a proposal that we should say it should return a 413 error if it's not going to, you know, store that manifest. And that would be the explanation of why, in my mind, it's a "must": if you're going to refuse it because of size, you must return the 413, so that clients can know why.
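A hedged sketch of the client side of that proposal: the 413 handling and the errors envelope follow the distribution spec's generic error format, but the `MANIFEST_TOO_LARGE` code string and the function name are hypothetical, since the call had not settled on a specific code.

```python
import json

def manifest_push_error(status, body):
    """Interpret a manifest PUT response; None means success.

    Per the proposal discussed here, a refusal for size must come back
    as HTTP 413 with the distribution spec's errors envelope:
    {"errors": [{"code": ..., "message": ..., "detail": ...}]}
    """
    if status in (200, 201):
        return None
    reasons = "; ".join(
        e.get("message", "") for e in json.loads(body).get("errors", []))
    if status == 413:
        return f"manifest too large for this registry: {reasons}"
    return f"push failed ({status}): {reasons}"

body = json.dumps({"errors": [
    {"code": "MANIFEST_TOO_LARGE",  # hypothetical error code
     "message": "manifest exceeds 4 MiB limit"}]})
assert "too large" in manifest_push_error(413, body)
```

The value of mandating the status code is exactly what J says: a client can distinguish "too large, try a smaller manifest" from any other failure without guessing.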
J
As far as what to set it to, I don't think we should set a value in the distribution specification. I think we should have, in my personal view, just a table that talks about what the registries limit to today, either one or four, and let people decide, you know, what they want to use: which registries they want to use, what limits they prefer. And, I don't know, maybe Steve…
J
Maybe, maybe, if you want to store data and you want to, you know, use the data field, maybe you end up wanting larger manifests, and you have a different registry for that, or a different account that you have to use. But I don't think we need to make that decision on how the registries are going to make money or do business.
A
A point of clarification: this isn't a matter of business or whatever. It's like, these are the costs that get passed on to users, and we should be careful about that. Like, this is the fundamental thing that Docker Hub is struggling with, costs, trying to be a public registry hosting content. This is the thing: projects that were using Google storage and registries were asked to pay for it and couldn't afford it, and, you know, this is the implication. It's… storage is nothing, but it does add up, oh yeah.
A
…something nefarious, so that's fair. I just want to make sure this isn't a matter of, like, we're trying to manage our COGS; like, these are the things that get passed on to users. And it's like, this call is much more expensive than a lot of the… well, it's probably not… the implication that we've all seen.
A
Those that run registries have seen the cost of how these things run, and customers with stored content in their registries, that they haven't figured out yet how to delete in any kind of reasonable amount, and the amount that they're pushing: they're now getting very upset around the size of their storage costs and their lack of the APIs to make it better. Like, I would love for us to spend a lot of this time on how we get the delete APIs into better shape, but I don't want to go too far off on a tangent.
D
That would give anybody that's producing these images a way to know that their image is now portable across every OCI registry out there. And so that's kind of the trade-off that I'm looking at: yeah, there is value in just throwing a table out there and letting this thing easily grow and mutate as time goes on, and there's also value in having OCI say, here is the standard that everybody needs to follow, so that people producing…
H
Yeah, from our… it's, I guess it's nice to know that there's a standardized way to say that this is too large, but yeah, it doesn't really help image producers if there's not a floor for the size.
A
Repo paths should only be 256, you know; it's like, I don't know what the right number is for some of these things. And like I said, I do think it should be more on the element than the overall manifest. I think there is a safety on the manifest, don't get me wrong. It's kind of like a circuit breaker panel has 200 amps, but if you added up all the individual circuit breakers, it's certainly a lot more than 200 amps, right?
J
…to it here, and the other side of the two-edged sword, right: how many images are you going to say are not compatible, because they're not common, on the registries that only do one, when maybe half the images are four today? You know, how many, how many would you say are not compatible, right, because they can't go to two registries or other registries that are there? I mean, it would give you a tool, a way to find, you know, the images that couldn't be copied, and why not, right, to certain registries, but…
J
If the maximum today for all the registries is four, and you made that the current low, then you wouldn't have a situation where you've got a set of images that couldn't be copied across registries. If that was the goal, Brandon, to make sure that, you know, all images created today can be common.
A
I mean, the interesting thing that pops to mind also is the mention of the cache, and unfortunately it's not really a cache in the sense that… because I think of caches like, hey, I don't need it, I can toss it. If I take an image from registry A and push it to registry B, and that says "that's too big," I can't just re… I can't drop the data element, because the digest no longer matches, right? Which is good, right, for the larger scheme of what we're trying to do here.
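That point is easy to demonstrate. A minimal sketch with made-up content and an illustrative canonicalization (real manifests hash their exact stored bytes, which is the whole point): stripping the `data` field produces different manifest bytes, so the content-addressed digest no longer matches, and a mirror cannot silently rewrite it.

```python
import base64
import hashlib
import json

def digest(manifest):
    # Illustrative canonicalization for the example; any edit to the
    # serialized bytes changes the digest either way.
    raw = json.dumps(manifest, separators=(",", ":"), sort_keys=True).encode()
    return "sha256:" + hashlib.sha256(raw).hexdigest()

payload = b"tiny config"
manifest = {"config": {
    "mediaType": "application/vnd.example.config.v1+json",  # made-up type
    "digest": "sha256:" + hashlib.sha256(payload).hexdigest(),
    "size": len(payload),
    "data": base64.b64encode(payload).decode("ascii"),
}}
# The same manifest with the data field removed.
stripped = {"config": {k: v for k, v in manifest["config"].items() if k != "data"}}

assert digest(manifest) != digest(stripped)  # content addressing breaks
```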
A
That was a good thing. The problem is, it doesn't mix really well with this if they're not all using the same maximums, yep.
D
Yeah, so there's value to both sides of this. I would say, weigh in on the discussion to figure out which way you want to go with this, because I get it, you know; there are two options there. From the registry maintainer standpoint, it's very convenient to just be able to give a return code when you can't support it, and that way they can set the limit to whatever they need for their own use case. From the artifact producer…
J
Yeah, especially when you do what Steve is talking about, you know, bringing in like a hardware cache in the middle of it, between the two public caches: like, oh geez, I can only store it in the mirror, and then when he tries to push it down… Man, it's just… All right, I guess what I'm saying is, if I have to make it a max, well, let's pick the top one and convince everybody else to use that same top maximum this time around.
A
It's just code, but as I look… I mean, I can't imagine anybody who's running a registry doesn't have the same problem these days. Like, it's not that we couldn't go fix the way we do our caching to make sure your manifests are fast, but where the hell does that work item fit, you know, compared to everything else, and what's the value of doing that versus some of the other work that we're trying to get done here? So…
J
Yeah, so… which is why I lean toward a table. If you have a table, at least you can map out, you know, okay, I want to use four: you know, where, how can I configure that in my setup? And I can look at the table and see, okay, this caching tool is gonna limit me to four, and this registry is for two.
A
Well, it depends on the image. So, just coincidentally, I was playing with the term tool today and experimenting with some stuff, and for a simple image the output file was pretty small. I went to a realistic one, and it probably would have been fine to push it as a blob, for instance, as data, and then the next one: oh, that was much bigger.
A
It was a remote desktop sharing thing that he was doing, and all of a sudden, what he did in testing worked fine and production now fails. So… or, yeah, anyway, I don't want to get too detailed.
D
Weigh in in the comments on whether or not you think that should be… yeah.
A
I still say, I think we discussed one of the long list of issues that we've been discussing around this, and that in itself doesn't finish the checkbox of the list of things that we've been discussing, your questions around this. So it's not, they're not suggesting, like, you know, okay, move on. I think you could argue: is this the biggest problem we want to solve? That's an opinion, based on the problems people are trying to solve. I think the question is: is this the purpose of the working…
J
Well, I don't, I don't see Phil or Vince. I haven't looked at the process, in the new process for the working group. Steve, is this something that needs to go there?
A
Look, it's my opinion, but I think we defined the working group to handle larger ambiguity: larger things that have a lot of implications, that add complexity, that we want to really test out, and to give a sandbox environment to be able to test that out and answer lots of questions. The PR 99 did get merged yesterday, I think it was; that's awesome, so that's step one. We do have a PR out there for the reference-type working groups, because that was queued up as one example.
A
We certainly want to move forward with that; that's the group that, you know, I'm trying to push forward. John, this one is yours: if you want to push forward in that path, that's, you know, an option to you. If you're going to say it, that's up to you too.
D
Yeah, me, as an outsider looking at this: this doesn't feel like working-group territory either. I would defer anytime you're looking at, like, a major change, a major version-number change kind of thing, or adding new APIs that registries have to support. I don't think we're getting anywhere near any of that stuff.
J
I think there was a use-case issue to discuss, or consider, around the deletion of the blob that's being described in the manifest: since it's been deleted now, but it's still in the data field. I think there was a little bit of a discussion there over what the, you know, the restriction should be for that.
F
One, one quick last note, the same thing I brought up last week: I'm interested to deploy the Jekyll template with the documentation for all the specs. I need someone with…
J
Where are you, Chris? Chris, of course, has the, you know, the power, and I think Vince does too, and Phil.
F
Okay, I guess we can wait another week then. I mean, there's no, like, rush; it's just, like, documentation, and probably Vincent's just, like, super busy. So we can wait another week. I will keep being persistent; I have no issue doing that.