From YouTube: OCI Weekly Discussion - 2022-11-03
B: I'm guessing we'll get Michael Brown, not sure which Michael Brown, but since the agenda items are on there, we'll see if he shows up.
E: Well, yeah, so I think a lot of people have been talking about what is left, and we talked about it a little bit at KubeCon, and I guess I was just curious what people were thinking in terms of, you know, what we wanted to do before we start a release vote, and if we could at least try to propose some timelines to hold ourselves accountable.
E: You know, we can always move them if we need to, but it would be nice to have some amount of predictability in terms of when this is actually going to happen, for, I think, a lot of people who are wanting this.
D: But a lot of this, I think, is technical; like, we could just release it today if we wanted to, missing conformance and missing some things, but we could do that.
D: So, for example, I think the date that came up was January 17th, which is the day after Martin Luther King Day, and that would give everyone, from the clients to the registries, time to have a day in mind where they can get ready any production stuff or releases or blog posts. And if we miss that day by a little bit, that's okay, but it kind of gives people that conference-driven-development fire under the seats type of thing.
B: We don't have a good way to force them to have a timeline on their side, or we're a little bit subject to what they're doing. I would also be very hesitant to not have things like conformance in whatever we release, so I want to see some important details come in there. We've got a couple of milestones in there with a lot of open issues and open PRs that I think we want to get merged first, so that would be high up on the list.
D: To clarify really quick, this would be a suggested timeline, so it wouldn't be like, on this day we will release as a group. We can come up with checklists, and, given that we have this day in mind, try as a group to divide and conquer that checklist, and if we get there, we get there. So basically, Brandon, anything you're saying, I...
A: I will take it next. I think that timeline is a little aggressive, given we're just about to walk into Thanksgiving, like, American Thanksgiving, and then Christmas as well. So I would actually move that by about, call it, two weeks, make it like January 29th, which will give you the backup of the new Cloud Native SecurityCon in Seattle; if you want to be able to have conference-driven development, I got one for you.
A: Separately, what that also does is give you all of January to remember what it is we were doing here, because that week between, like, Christmas and New Year's has this wonderful effect of making everybody forget their passwords and what they were doing in the first place. So I think I'd caution toward really picking up around, like, January 2nd, and I recognize I'm saying that on November 3rd, and I will take any comments off the air.
F: And it's me. Yeah, I just wanted to call out one thing on timing; this has come up a few times, and I've talked to some of you individually. I think it's good to recognize that some of the implementers of this are going to be open-source projects. Some of the implementers are going to be really flexible on how they can implement and expose to users and gather data, and others aren't, so we sort of have that spectrum. I can speak to, you know, sort of our contributions and implementation in ECR.
F: This is not the type of implementation that we can iteratively release. So one of the things for us is, we're going to have a lot of data, but we're not going to be able to get it until we can release, and we can only release if we have a GA spec. Now, that's not to apply pressure; it's just to sort of add context and say, if we're going down that road, we can do a lot of testing internally.
F: You know, we're actually even happy to just take bits in, take projects in, send us PRs, and we can test things internally against an implementation we have. I'm sure we're not alone in that, too.
F: There are probably other people on this call, even, that are basically in that situation, versus, you know, something that's more incrementally done in the open, where it's much easier for us to say, let's iterate on this on the road to GA. So I just wanted to call that out and make sure we understand there's sort of a continuum there.
A: For you, it's the cloud native security conference. It's in Seattle, and it's tracking towards February.
A: Okay, okay, let me go find a link.
E: Cool, yeah, I mean, I guess I would just kind of put my two cents in towards trying to have a somewhat aggressive date, recognizing that we may have to slip it. If we keep at it, you know, we have all of November, or most of November, some of December, some of January; that is kind of a decent amount of time to add conformance tests and get some of the implementations finalized, because we have already started on implementations for a lot of projects. And so, you know, I think I'm okay with it if we move it back a few weeks, but I do want to try to drive a little bit of progress towards this and work on it, and I think setting a deadline may help us do that.
A: I wonder if, like... there's a thing that happens at the end of the year where people kind of wander off, and I wouldn't want that to be something that stops us from releasing. And I have put the correct links over into chat around Cloud Native SecurityCon North America 2023.
A: In the alternative, the other one that I would link, if something happens and we're just not there yet: I would then track towards probably, like, April 10th, the week before KubeCon Amsterdam, but that's a really long way off. So...
A: The dates on the calendar are close, and they may appear, but separately, I'm trying to make sure that we have some understanding of, if we really slip, here's our next real conference-driven-development deadline. But I also hear Jesse around, like, hey look, we can't ship until we've shipped, so...
G: Let's be honest, right: we lived on release candidates of the OCI specifications for, you know, probably three years before the first one went GA. So, release candidates: the more we have, and the more use we get out of them, the better we'll feel about them. I think the maintainers will be more, you know, inclined to want to go ahead and merge that release once we get a lot of good feedback.
F: I agree. Also, I don't see Jason here, but I know he's made the observation that we'd like to just generally see more clients as well, so giving us some time to get changes into ggcr and oras-go and other libraries that are implementing this, and then their client builds, and doing some of those tests, that makes sense. And honestly, circling back to Amy's point, we're going to run out of calendar soon, so the deadline is now a forcing function, right? That's, that's...
F: Okay, yeah, great. I did laugh to myself; as Mike probably remembers, I had a whole implementation against the runtime spec that had a whole life cycle: I started it, shipped it, and killed it before we got to GA. Sorry. But yes, definitely familiar with that.
B: I'm very much conscious that we can always live within a release candidate for a while, and so part of my question to come up next was going to be: what's the rush to turn this from release candidate to GA? My own thought is, it would be nice to see those clients; it would be nice to start seeing people using us for a while, get some traction out of it, before saying, okay, now we're confident that it's done.
F: Yeah, just in quick response to that, sorry, Sanjay: I think if the perception is that this is rushed, then it's not the right thing. I think Josh's idea was to say, let's try something. So that's got us talking, so that's good; we'll just find the right date. I'm perfectly aligned with that.
D: It's also a good way to explain to, you know, the people that you're working for, like, hey, this is important; this day has been decided. So, I know, in the case of me and Jason, there's a lot of stuff going on, and helping with some of the client stuff just hasn't been high priority, and knowing that there's a stake in the ground would kind of help make that case.
B: So this might be going back to the famous chicken-and-egg issue, because part of my hope was to see groups, like the groups you're working with right now, start adopting this and getting it into tools like cosign and SBOM generators and the signature generators over there. That would be really awesome to see. Is there friction on your side, trying to get that implemented in your code, waiting for us?
D: Yeah, to speak for cosign specifically: everyone wants it, and people keep asking us about it. It's just, we have things that we're trying to commercialize, and it's a priorities thing. So, yeah, we can kind of always just keep saying, like, oh yeah, we want to do that, we want to do that.
D: We want to do that, and I think, you know, once we had the PRs merged, there's been kind of a decline of activity on our side that you'll see, because we kind of got there and there's no pushback. Everyone I've talked to... and, you know, I was at KubeCon last week and did the talk with Sanjay, and everyone talking to me after that was very pumped about this and excited. So there really is... I'm not hearing any pushback from anyone; it's more of just the...
B: You did say that; I know what you're thinking of for, like, getting cosign out there. Part of it is making sure you've got support from all the registries, and we're not gonna have support from Docker Hub on day one. If we release right now, we wouldn't have support; they're doing a filter on the subject field that's going to break you if you try to put that on any of your image or artifact manifests, unfortunately.
H: Right, so I'm going to be the boring person talking about process. I think everybody's asking for a date: is it possible for us to maybe align it with the milestone in the repository, so that we can see what the progress is?
H: Whatever the date is, right; I have a team in China who's going to be out in January, so it would be good to make sure that they are also aware, even if it's two or three months down the line. So I'm not pushing on what the date is, but just, in the next couple of weeks, if we can decide, okay, this is going to be image-spec v1.1, and we target some dates, and the same for the distribution-spec, and they will align, so that at least the work items are captured.
H: That's roughly what I was maybe hoping for; not to get to a specific date itself, because the milestones might not be fully tracked at this point and there are some tasks in there, but if you can at least prune them, that would help give some transparency.
B: The challenge with Docker Hub isn't so much that they're doing it; it's that if we try to get adoption from tools like cosign, which have been trying to be very registry agnostic, we get a chicken-and-egg issue where they're not going to start pushing that format until Docker supports it, and Docker Hub doesn't see the pressure to do it since no clients are pushing for it.
B: I don't know that there's a good answer to that, I know. I do want to point out, though: within the past week, Zot started adding support on their side, on the registry server side. So we are seeing some movement from one group. Okay, Josh.
D: Okay, back to me. So, the Docker thing: on that point, I just want to clarify and ask, has anyone played with it?
B: I believe that they will be fully supporting the fallback; am I understanding that correctly?
G: The problem is the spec says you can't change the manifest to do a non-OCI image, and that's how they're solving the problem.
D: So, Docker... I think it's probably worth bringing up that Docker this week released support for OCI artifacts, I know.
B: They did release it; they actually put a blog post out. So I'll clarify this, because I think this will help everybody.
B: They took the filter off the config media type, so they are supporting version 1 artifacts, which means they have to be pushed as an image manifest, and you can have any configuration descriptor media type you want on that. They are not supporting the OCI artifact manifest, and if you push an OCI image with a subject field, it'll reject it when it tries.
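To make the distinction being described concrete, here is a minimal sketch of the two manifest shapes: a v1-style "artifact" is an ordinary OCI image manifest with a custom config media type, while the draft 1.1 `subject` field is an added descriptor that a registry filtering unknown fields would reject. The media type values, digests, and sizes below are placeholders for illustration, not real content.

```python
# A v1-style artifact: a plain OCI image manifest whose config media
# type identifies the artifact kind (the SBOM type here is made up).
artifact_manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": {
        "mediaType": "application/example.sbom.config.v1+json",  # illustrative
        "digest": "sha256:" + "0" * 64,  # placeholder digest
        "size": 2,
    },
    "layers": [],
}

# The 1.1 draft adds an optional "subject" descriptor pointing at the
# manifest this artifact refers to; a registry that filters on the
# subject field rejects exactly this addition.
referrer_manifest = dict(artifact_manifest)
referrer_manifest["subject"] = {
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "digest": "sha256:" + "1" * 64,  # placeholder digest of the referred image
    "size": 3,
}

def uses_subject(manifest: dict) -> bool:
    """Return True if the manifest carries the 1.1 'subject' field."""
    return "subject" in manifest
```

The point of the sketch is that the first shape passes through a registry that only lifted its config-media-type filter, while the second does not.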
B: They're starting at V1; yeah, they're starting at V1. It will go to v1.1 eventually at some point, but I'm just thinking in terms of the timeline of people implementing this stuff: it's not there right now, and from what they were posting in the blog post, what they implemented was version 1, not 1.1.
D: I didn't mean to detract from the release conversation, so I think maybe we can go back to that, but I am interested in the Docker implementation.
D: Well, I mean, the fact of the matter is that it's a new version of the spec, so there could be registries that just don't support it day one. I think that's unfortunate, but that's expected, and that's why we have versions, and I think that's why the conformance stuff will come into play. So I don't know if it's something that should block the release.
B: Well, in addition to an update, a release: does it make sense to update the 1.0 conformance tests to check for fields like subject, to make sure a registry isn't blocking them?
B: The reason they mentioned might not have been public, but I think it's safe to say they want to have more stuff on their side in place before they allow that field to come in, because they want to have it in their database, indexed, all that good stuff. So there is a method to their madness, but sadly, it's just breaking us.
B: We did put in the definition that if you don't have the digest tag, that backward-compatible tag, you don't have to scan it. So the hope is that there won't be many of those for you to have to scan, so you don't have to go through the entire registry for it. But I definitely sympathize with the concerns they've got.
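The fallback tag being referenced maps a subject's digest onto a tag a registry can already store, so no new API is required on the server side. A minimal sketch of that mapping, assuming the `algorithm-hex` shape discussed for the 1.1 referrers fallback (treat the exact scheme as the spec draft's, not this sketch's):

```python
def referrers_fallback_tag(digest: str) -> str:
    """Convert a digest like 'sha256:abc...' into a tag-safe name.

    Tags cannot contain ':', so the algorithm and hex are joined with
    '-'. A client that finds no referrers API can pull this tag
    instead, and a registry with no such tag has nothing to scan.
    """
    algorithm, _, hex_part = digest.partition(":")
    if not hex_part:
        raise ValueError(f"not a digest: {digest!r}")
    return f"{algorithm}-{hex_part}"
```

For example, the referrers of an image digested `sha256:aaaa...` would be looked up under the tag `sha256-aaaa...`.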
B: I was definitely laughing; I was gonna pull a leg there and ask, not so much to put anyone on the spot, but just kind of inquire: is subject supported yet on ECR, or is that still filtered?
E: No. I think that's something that I'm sort of just now thinking about; maybe we could try and do something to allow that fallback mechanism to work ahead of actually launching a new API. So, I don't know; we'll look into it.
B: Glad you mention it, because I think I saw at least one other name there, and we had put some other items on the agenda, so I didn't want to completely derail the conversation from some of the other thoughts.
B: The other big one on the list (and we can always go back to this later if people want) is 970 in the image spec, which is talking about implied directories.
J: Yeah, hi, I'm Bjorn. I'm the author of the pull request, so I'm happy to answer any questions, both about the spec change as well as existing implementations like Podman and Moby.
J: What this is about is codifying the behavior when a layer clearly implies that certain directories exist in the file-system hierarchy, by virtue of the paths of an entry, without there being standalone entries for those directories in the tar. So that...
J: You can have files but no directories, and then one has to create the directories and determine what the appropriate attributes are. This manifested in Moby as an inconsistency in mode: basically, depending on the graph driver used, you would get different permissions. And, after an extensive review, we basically determined that the OCI spec just leaves what happens in this situation ambiguous, and thus it's implementation-defined.
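The situation Bjorn describes can be sketched with the standard `tarfile` module: a layer may contain a file entry such as `app/etc/config` with no tar entries for `app` or `app/etc`, so the extractor has to invent those directories and pick their attributes. The helper below only detects the implied directories; which default mode, owner, and mtime an extractor should assign is exactly what the PR is codifying, so none of that is assumed here.

```python
import io
import tarfile

def make_layer_without_parent_dirs() -> bytes:
    """Build a layer containing only 'app/etc/config' and no dir entries."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tf:
        info = tarfile.TarInfo("app/etc/config")
        data = b"key=value\n"
        info.size = len(data)
        tf.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def implied_directories(layer: bytes) -> set:
    """Directories implied by entry paths but absent as tar entries."""
    with tarfile.open(fileobj=io.BytesIO(layer)) as tf:
        members = tf.getmembers()
    explicit_dirs = {m.name.rstrip("/") for m in members if m.isdir()}
    implied = set()
    for m in members:
        parts = m.name.rstrip("/").split("/")
        for i in range(1, len(parts)):
            parent = "/".join(parts[:i])
            if parent and parent not in explicit_dirs:
                implied.add(parent)
    return implied
```

Running `implied_directories` on the sketch layer reports `app` and `app/etc` as directories the extractor must create out of thin air, which is where the graph-driver-dependent permissions came from.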
J: Right, and that's kind of the approach. I linked to a Moby issue that originally drove awareness of this, and that is basically what happened there: we had an image author come and say that they were getting unexpected permissions depending on the graph driver, and, after digging into it, the takeaway was basically, well, if you care about the permissions at all, you really need to define them, because you cannot rely on what the implementation does. You know, even an older version of the spec that the implementation conforms to doesn't specify this.
B: Where's the handoff between image and runtime? I don't spend enough time over in the runtime spec to know the line between the two. Once we unpack an image, if we've got all the layers already pre-built, is that where it gets handed off to the runtime, or is the runtime actually doing some of the unpacking on their side? It's the runtime.
G: We could certainly do a little investigation, if you will, between the two major container runtimes and see; we could pull in Derek as well, for example, and get some details, since he just recently refactored some of that code in containerd. And certainly all the snapshotters, just like the graph drivers, and CRI-O have to handle this. So, yeah, this is a space that we... and it's not just the path; it's also, you know, what it...
J: Right, so I guess what I am trying to follow is where this ties back to the runtime spec. Are we saying that, because typically the software implementing the OCI image spec, you know, specifically where we actually care about implementing this, is a runtime, it should be surfaced there for visibility? Or are we saying that there is actually some coupling to the runtime spec that I missed in my initial evaluation of where to make this change?
J: Okay, I will go ahead and create an issue linking there, asking people to put some eyes on it.
H: About this one: when you create the parent directory, if there are mismatched entries, or mixed entries with different attributes, how do you define the attributes of the parent directory? Because the implicit directories have to have a set of attributes, right, at this point, according to the change?
H
So
if
so,
when
you
create
the
pattern
directory,
is
it
deriving
out
of
the
attributes
from
the
the
child
entries?
Is
that
what
we
are
implying
here
or
am
I
reading
this
incorrectly
I.
H: So no, it's actually line 68: when applying a layer, implementations must create parent directories implied by an entry's path, right. So, the entry's path... are client implementations expected to specify this entry's path, or is there...
B: It would create a home directory and an app-user directory with that mod time, with that uid, with the mode, and with the extended attributes empty, for that directory structure. We're saying that you shouldn't do that; you should actually put the whole structure in there. That's the last line, line 76, saying please put everything in there. But we're just saying this: that we'll create those other two parent things, the home directory and the app-user directory, with these settings, and that's it, no other settings.
H: So there's no implicit creation, or at least warn that, implicitly, for example, application X could have one attribute, and in the same location you could have another file that has a different attribute set, right? So the parent needs to be fully qualified in some way, or the whole hierarchy needs to be present.
B: Depending on what you're using to run it: if you have, like, an overlay file system, you actually have to create those parent directories in there for the overlay to work. And so this is saying, don't look at the child files at all; all you're doing is just saying, what's the user of the container, get that uid, get that GID, set those, and that's it.
J: Right, the mental model I would use to describe this is basically: in the case that metadata is missing from the tar file, something has to be chosen as the metadata, and now we are explicitly defining what those attributes are, as opposed to leaving it as a question mark. It's a situation that already exists in real implementations; it's just not explicitly stated what you should do, or whether this is something that the spec encourages, permits, or discourages.
B: Is there any ambiguity there? Because... well, yeah, I think there is, actually, yes. When you pull a blob and you've got the tar file, that's not necessarily associated with any one image, and so you don't necessarily have the image config JSON. To put it a different way, you could pull this blob from two different images, and the two different images could have a different user, different settings in there, a different GID, a different uid they're going to use to run the same blob.
J: I don't think there is any ambiguity from that perspective; like, maybe the phrasing and the positioning needs to change if we think that's a problem. I think the unit that we are reasoning about, when it comes to how these hierarchies are created and how permissions are applied, is a complete container image as a discrete entity. And so we're referring to layers, and putting this in layers, because that is how they are implemented, as a tar archive. But what are the correct permissions?
J: That's something that's essentially defined at runtime, as layers are applied and as a root fs for a container is built; it's not defined at the time that the layer is created, if that makes sense.
J: So the ambiguity there is: if you have the same layer shared between two different images, what a conforming implementation would create, attribute-wise, for each image is discrete, despite the fact they share a layer; there could be different interpretations because of subsequent layers and the image config.
B: I guess where I'm going is, when you get into the implementation, those layers get shared. And so, while the image spec defines it with the thought process that this is how we define an image, when we look at how something like containerd implements this, they're going to unpack the blob one time, no matter how many images use it. And so you could get into a scenario where, depending on the order that you pull images, you might have a different set of file permissions.
J: Okay, yeah, now I'm seeing what you mean. I'll probably have to circle back on that one after doing a bit more digging into what those implementations look like.
B: Yeah, yeah. Like I say, this is getting lower than the level I usually spend time thinking about for this challenge, and so getting a few of the people who do runtimes... I think Mike had the best suggestion of all: try to get a few of them in and get a few more eyes on it.
J: Yeah, I mean, I think fundamentally it's useful to define, but, yeah, if there is ambiguity there, based on exactly how layers are shared between multiple root fs's at runtime, then the best solution probably is going to be to recurse up the file-system hierarchy looking for uid and GID, or something like that, and then, you know, propagating it back down.
J: It's just whether or not the proposed mechanism is sufficient, given the diversity of implementations.
B: Cool. Well, thanks so much for joining me, Bjorn. I know this is kind of a last-minute thing we dropped on you, but you gave us a lot of good feedback there.
B: One other item I threw on the list was distribution-spec 360.
B: Let me pull that one up and I'll throw a link in here, just because it had been dropped. I gave a bunch of feedback on Slack, but it was over in the distribution channel on the containerd side, and so it wasn't visible to a lot of us; I wanted it in OCI. But they wanted to ask about diff pulls, and I just want to make sure that I wasn't giving too much bad advice when I was looking at it.
B: The question, yes... SOURCE_DATE_EPOCH would be another good answer there; Brian's dropping all kinds of knowledge bombs in the channel there. So, the diff pulls: that was proposed as a way to potentially help people that have very bandwidth-limited environments.
B
So
they
could
potentially
look
at
two
different
images
they
pulled
and
when
they
pulled
a
new
version
of
the
image
they
could
do
a
diff
on
the
blob.
From
the
last
time,
I
had
lots
of
concerns
on
this
one,
but
I
wanted
to
raise
a
little
bit
of
visibility
on
this
as
well.
Just
because
I
don't
want
to
be
the
only
voice
given
feedback
on
it.
B
That
didn't
seem
like
something
that
thought
that
registry
operators
are
going
to
want
to
do
and
then,
when
I
get
into
thinking
about
actual
implementations
when
I
looked
at
this
stuff
before,
how
do
you
know
which
two
blocks
diff
each
other?
It's
usually
pretty
straightforward.
When
images
aren't
changing
much,
but
once
someone
starts
adding
a
layer
in
the
middle
once
they
change
what
a
step
is
doing
a
little
bit.
B: How aggressive do they want to be in terms of the compression? Depending on the implementation, they're going to pick different tiers in there and pick different settings, and so it's not necessarily reproducible, and you're not going to get the same digest out of the other end. And so you're going to spend all this effort trying to do a diff pull, come out with a different digest, and then realize you've got to fall back; now you've just doubled your bandwidth instead of halving it.
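The non-reproducibility point can be seen directly: compressing identical bytes with different encoder settings produces different compressed streams, hence different blob digests, so a diff-pull client that reconstructs a layer locally may not arrive at the digest the registry advertises. A small demonstration, with the gzip mtime pinned so the compression level is the only variable:

```python
import gzip
import hashlib

def blob_digest(data: bytes) -> str:
    """OCI-style digest of a (compressed) blob."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# The same uncompressed layer contents...
uncompressed = b"the same layer contents " * 512

# ...compressed with two different levels; mtime=0 in both, so any
# difference comes from the encoder's choices, not the header timestamp.
fast = gzip.compress(uncompressed, compresslevel=1, mtime=0)
small = gzip.compress(uncompressed, compresslevel=9, mtime=0)
```

Both streams decompress to the same bytes, yet their digests differ, which is exactly the "different digest out of the other end" problem described above.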
B: It's an interesting request. If I were to think about how I would do it, I wouldn't do it in the distribution spec; I would probably lean more toward something like the estargz implementation, because there they can do pulls of certain byte ranges from within a gzipped tar file, and pull out the individual files they want to see, and that would make a lot more sense to me.
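The estargz-style approach rests on ordinary HTTP Range requests against a blob, so a client can fetch only the byte span holding one file. A conceptual sketch, with the "registry" replaced by a local byte string and the file offsets hard-coded where a real client would read them from the estargz table of contents:

```python
def http_range_header(start: int, end: int) -> dict:
    """Headers requesting bytes start..end (inclusive) of a blob,
    as a registry supporting partial pulls would receive them."""
    return {"Range": f"bytes={start}-{end}"}

def serve_range(blob: bytes, start: int, end: int) -> bytes:
    """What the body of a 206 Partial Content response would contain."""
    return blob[start : end + 1]

# Pretend the table of contents said one file lives at bytes 10..21
# of this blob (the layout here is invented for illustration).
blob = b"HEADERxxxx" + b"hello world!" + b"FOOTER"
```

With `http_range_header(10, 21)` the client pulls only the twelve bytes of that file instead of the whole blob, which is the bandwidth win being contrasted with whole-blob diffs.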
B: Yeah, more local mirroring, definitely. I think the big concern when they're pulling these things is they've got images that they are pulling from upstream where one file changes, but it changes the entire blob, so instead of pulling it all down again... my initial comment to them was, well, they really should just be restructuring their image.
G: That's a good point, Brian, on the why do we need layers for this. I really do like the idea of changing up the spec to do pages, or at least files, at some point.