From YouTube: OCI Weekly Discussion - 2020-12-02
Description
The weekly developers' OCI call recording from Dec 2, 2020. Agenda and notes located here: https://hackmd.io/El8Dd2xrTlCaCG59ns5cwg?view#December-2-2020
A
All right, I was late, so I assumed you had already started. Our first agenda item was registry APIs. Mr. Cormack?
D
Oh yeah. Yes, me, okay, yeah! I put this down a few weeks ago. I've just made a few random slides, just because I felt like it the other week.
D
No, these are specific ones, not all of them. You know, this is just some that came up the other day and that I thought I'd like to talk about. So this came up really in the context of...
D
There's not very good tooling for users to actually see what's in their registries, and most people tend to look at GUIs and not CLI interfaces. This came up when we were trying to go through something we've postponed: trying to encourage users to delete unused images, and we thought, well, we should have...
D
We should ship a CLI command to help them do this, but one of the things we actually want them to delete is images that are not tagged, because currently around sixty percent of images on Docker Hub are in fact invisible to users: we don't provide any way for them to see them through the UI or a CLI at all. So it was like, well, we'd like you to delete these images that you don't know you have, which seemed like a very weird problem to have got ourselves into; most users...
D
They don't really understand that pull by digest is a thing, I don't think, and they don't realize that it means: okay, I can still pull images that are no longer tagged.
D
I'm very unclear how users are using pull by digest. Docker Swarm always did pull by digest, and I think that Kubernetes in general does not, which is a choice of its own, and is partly why people like immutable tags. So this is all a bit inconvenient, and then there's this even weirder thing that I don't think I really knew about until a few months ago, and I don't know how widespread this is amongst other registries either, but on Docker Hub...
D
If you mutate a tag, we basically keep a ref count attached to that tag, listing all the images that were once tagged as that tag, and then we only delete them when you delete that tag.
D
So if you label something latest, and then later you label something else latest, we keep a list of all the things that used to be latest, and we only delete them when you delete the latest tag; then we delete all the things that used to be tagged as that.
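The retag-and-delete semantics just described can be sketched as a small model. This is a hypothetical illustration of the behavior as explained on the call, not Docker Hub's actual implementation; the class and method names are invented for the example.

```python
# Minimal model of the tag-history semantics described above: retagging keeps
# every digest the tag ever pointed at reachable, and deleting the tag
# garbage-collects all of them at once. Names here are illustrative only.

class Repository:
    def __init__(self):
        self.tags = {}      # tag name -> current digest
        self.history = {}   # tag name -> every digest the tag ever pointed at

    def tag(self, name, digest):
        self.tags[name] = digest
        self.history.setdefault(name, set()).add(digest)

    def delete_tag(self, name):
        # Deleting the tag deletes everything that was ever tagged as it.
        self.tags.pop(name, None)
        return self.history.pop(name, set())

repo = Repository()
repo.tag("latest", "sha256:aaa")
repo.tag("latest", "sha256:bbb")     # "latest" moves; sha256:aaa stays reachable
assert repo.history["latest"] == {"sha256:aaa", "sha256:bbb"}
deleted = repo.delete_tag("latest")  # both digests are collected together
assert deleted == {"sha256:aaa", "sha256:bbb"}
```

The surprising consequence, as noted above, is that an image stays retrievable by digest for as long as any tag that once pointed at it still exists.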
D
Yes, because if you push something that's never tagged, we will garbage collect it. And, well, actually, we didn't let people delete a tag without any kind of... so in fact it's very difficult: the only way you can have an untagged image on Docker Hub is for it once to have been tagged, really. I mean, we do technically now allow you to push untagged images, but hardly anyone knows that, so I think it's extraordinarily rare.
A
So the idea is, people want to be able to delete things that are no longer tagged, the untagged digest scenario, but if I do delete a tag, it deletes the history as well.
D
Yeah, well, you know, that's the only way we delete things: when you delete a tag, that then deletes all the things that were tagged as that, which is really strange, and no one knows about this at all. We have no plans to expose this to users, even though it's potentially interesting to know what something was once tagged as, but I don't think we want to guarantee we'll keep these semantics forever, because it's weird, kind of.
D
I haven't actually talked to, I guess, Stevo, who was around then. It's because we have to have some way of not deleting these things; otherwise, as soon as something was not actually tagged anymore, it would be garbage collected anyway, unless you just insist that everyone tags everything anyway.
D
So we're going to release a temporary tool that doesn't address any of these issues properly. It will be a temporary tool: it only works against Docker Hub, because of API incompatibility, and since it's only temporary we're not going to be releasing it as a docker command.
D
Initially this is a CLI called hub-tool, but we would like to have a standardized cross-registry command included in Docker for doing these types of things. We don't want to do that while there's no standardization, though. I'll just kind of write some notes thinking about what other kinds of semantics we could do. I mean, I think that if you look at the registry spec, there's only a defined operation for listing tags.
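For reference, that one defined listing operation is the distribution spec's `GET /v2/<name>/tags/list` endpoint. A minimal sketch of calling it from Python follows; the function names are my own, and the registry hostname in the test is just an example. Untagged manifests never show up in this response, which is exactly the gap being discussed.

```python
import json
import urllib.request

def tags_list_url(registry, repository):
    # The distribution spec pins this path; pagination is done with
    # ?n=<int>&last=<tag> query parameters.
    return f"https://{registry}/v2/{repository}/tags/list"

def list_tags(registry, repository, token=None):
    """Fetch the tag list, the only listing operation the spec defines.
    Digests that no tag points at are invisible to this call."""
    req = urllib.request.Request(tags_list_url(registry, repository))
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["tags"]

assert tags_list_url("registry.example.com", "library/debian") == \
    "https://registry.example.com/v2/library/debian/tags/list"
```

Note that most registries (Docker Hub included) require a bearer token even for public repositories, obtained from a separate auth endpoint.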
D
There's no operation for listing untagged images specified at all. But I mean, should we just say that operations shouldn't treat tags specially, and hashes are like tags, just not ones that you gave it? That's a bit weird semantically in terms of the registry, but then everyone can see things and it's straightforward. Should we just delete everything that's not tagged? Brutal, but much simpler to understand, and then users should just tag everything if they want to keep it.
D
If you want to keep the old versions of Debian, do what a lot of people do, which is give them a... you know, a lot of people now use git hashes for their tags by default for everything they push, so everything's always tagged, and this actually would work for those people. But it wouldn't work for other use cases: we don't have a version for every version of the debian official image that was ever pushed, for example, because Debian doesn't really have a version for them.
D
We'd have to make up a version, and it would be confusing, and there would be a very large number of official images for Debian, say, which would also be very confusing. We could expose this past history as a thing, but we wouldn't like to do that unless other people are doing it, because we don't want to have a different semantic model on Docker Hub from whatever else in the world uses. And that's all the ideas I could think of.
D
So I'm really interested in feedback on this, because it's just a problem that we really just haven't addressed in five years, and I'm sure it affects other people too.
A
So Justin and I were talking about this a couple of weeks ago, and this is a great conversation for us to have. I totally support Docker getting something out, because there is no standard, and we know how fast we get our standards done, so obviously ship something, get some feedback as an experiment. But we get asked about this all the time in ACR: like, hey, I want to roll back my image to the previous digest because I have a problem. What was the old one?
A
I don't know. And then how does a customer manage deletes? Each of us as registries has various delete semantics that we're trying to support, but to your point, there's no client tool, because none of us have a consistent API; it's usually surfaced in our proprietary CLIs. So I'd love for us to, actually...
A
So I thought it was just a good topic to bring up for us, to tease the problem, and I'd love to get a working group together so we can maybe come to an agreement on what some of these listing APIs are, and how to do delete APIs. Delete management, I'm sure, will be a cloud-specific or registry-specific feature that we'll each want to do, but at least the API should be there.
D
Right, I would like the user's mental model to be the same. That's what kind of disturbs me: I don't want people to have to have a different mental model of how garbage collection works to use a different registry. That seems really difficult, given that we want people to be able to use whichever registries they like; if they put their stuff in another registry and then it gets deleted because it doesn't conform to the rules about what's kept, that's bad! That's not great.
E
So I personally favor referencing images by their digest rather than their tags. I understand why the tags need to exist, but the digest is actually the unique identifier of the image. So I...
A
And what we've basically seen, until we get Notary with signing working where they can assure it is what it is, is that people do all kinds of interesting workarounds to leverage the digests. And then tag locking is the other problem: without a way to lock a tag, they don't trust that a tag is not going to change.
D
Yeah, part of the problem with just asking people to work with digests is that the tooling for working with digests is also quite bad. It's very difficult to find out what a digest is at the time you want to know it, and part of that's Docker's fault: docker, for example, won't give you a digest at all until you've already pushed the image to the registry with a tag, which is kind of unhelpful.
D
The containerd tooling is better from that point of view, but still, the majority of build clients won't give you a digest up front before you push, even though that's now technically more possible, and a lot of people have trouble building workflows that use digests because of this kind of issue. So people tend to end...
E
I'm going to shamelessly plug Tern's lock feature here, because what that does is, if you're building a container image with a Dockerfile, if you run the lock on that Dockerfile, you'll get the same Dockerfile back but with the image digest in the FROM part. So, I mean, at least there...
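The core of such a lock step can be sketched in a few lines. This is an illustration of the general idea (rewriting `FROM image:tag` to `FROM image@digest`), not Tern's actual implementation; the function name and the digest mapping are invented for the example, and it deliberately ignores `FROM` flags like `--platform` and build-stage aliases.

```python
import re

def pin_base_images(dockerfile_text, digests):
    """Rewrite FROM lines to reference digests instead of tags.
    `digests` maps "image:tag" -> "sha256:..." (hypothetical API)."""
    def repl(match):
        prefix, ref = match.group(1), match.group(2)
        digest = digests.get(ref)
        if digest is None:
            return match.group(0)        # leave unknown references alone
        name = ref.split(":", 1)[0]
        return f"{prefix}{name}@{digest}"
    return re.sub(r"(?m)^(FROM\s+)(\S+)", repl, dockerfile_text)

locked = pin_base_images(
    "FROM debian:buster\nRUN apt-get update\n",
    {"debian:buster": "sha256:" + "ab" * 32},
)
assert locked.startswith("FROM debian@sha256:")
```

The point raised above still applies: a tool like this needs to resolve tag-to-digest somehow, which today usually means querying the registry.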
H
I'll just interrupt; I have a quick take on this slide. I think this is interesting because it's listed in the right order. Being able to list images that aren't tagged is the highest priority thing, and I cannot believe it's not in the registry spec yet. I linked in the hackmd to issue number 22 on the distribution spec, which links to proposals in docker/distribution which got closed in favor of the distribution spec, and then my proposal got bikeshedded to death.
H
The second thing that I think is important, but not nearly as important as listing untagged images, would be deciding on some deletion semantics. We certainly won't ever delete all untagged things with GCR, so having some way to describe or standardize, maybe supply headers about, this kind of thing would be interesting, but that's a whole other discussion. Then the third thing, tag history: I would love that as a new feature, but I don't know that it solves an immediate problem as much as the other two things.
A
So, just for the sake of time, because we do have a bunch on here, I mostly wanted to surface this, because obviously a lot of people here have a passion for it; be aware they're doing some stuff. Again, I'd love to get a working group to focus on what we could do. From there, unless there's some other really pressing thing, I'd love to free up and move on to our packed agenda before the holidays.
K
Yeah, I'm here, sorry about that. I'm gonna have to refresh.
K
So I think what we decided on the HEAD request was that we're going to mention it in the spec. Technically it's covered by the GET request, but because a lot of registries let you avoid rate limiting by doing a HEAD request as opposed to getting the entire blob, it's worth mentioning in the spec. And then PR 208, the cross-repository mounting: it's already in the tests and we've added it to the spec. Here, I'm sorry, this is the test we just pushed, what we hope is the final update to this.
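A HEAD manifest request of the kind being standardized here looks like the sketch below. The helper name is my own; the endpoint path and the digest-in-header behavior are from the distribution API, where the digest comes back in the `Docker-Content-Digest` response header without the manifest body being transferred.

```python
import urllib.request

def manifest_digest_request(registry, repository, reference):
    """Build a HEAD request for a manifest. The registry answers with the
    manifest's digest in the Docker-Content-Digest header, with no body,
    which is why some registries treat it differently for rate limiting."""
    url = f"https://{registry}/v2/{repository}/manifests/{reference}"
    req = urllib.request.Request(url, method="HEAD")
    # Without an explicit Accept header some registries fall back to
    # older manifest media types.
    req.add_header("Accept", "application/vnd.oci.image.manifest.v1+json")
    return req

req = manifest_digest_request("registry.example.com", "library/debian", "latest")
assert req.get_method() == "HEAD"
```

Sending it with `urllib.request.urlopen(req)` and reading `resp.headers["Docker-Content-Digest"]` would yield the digest; that part is omitted here since it requires a live registry and auth token.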
K
I think there was an open issue that Derek mentioned 23 hours ago on the issue. He said that he's a minus one on defining a new OCI content digest header, and I think that has to do with the normal...
I
The current way of addressing this is the Docker-Content-Digest header, and I think that in this issue and PR we were wondering whether we needed to define a forward-looking OCI content digest header as well. Derek was of the opinion that it wasn't necessary for the 1.0 spec to be that forward-looking, but rather to better document what exists.
K
Yeah, the consensus that seemed to emerge when we talked about it last was that, in order to avoid breakages with the 1.0 release of the spec, sticking with the Docker-Content-Digest header was the best option. I'm open to hearing alternative viewpoints; I don't think everyone was at that meeting, but it seemed pretty convincing at the time.
A
I think some of the stuff we've tried to do is cleanliness for the sake of cleanliness, at the risk of breaking people. This is not stuff that's surfaced on the outside too heavily; by the time you're looking at this, you're deep in the sausage factory, and it's mostly the people on this call that see it, nobody else. Okay, cool.
L
Yes, sure, that's me. So hello, everybody, I'm Mauricio. I'm talking on behalf of my colleagues Alban and Rodrigo; they are in Germany, so this meeting was too late for them, so I'm picking up their work here. The idea is to give you an update on the work we have been doing on adding support for seccomp notify in the runtime spec and runc. We presented this proposal some months ago at this meeting, and we started working on it.
L
We opened an initial PR for doing that. The idea was to use a hook to pass the file descriptor from runc to the agent: there was an intermediate hook that takes that file descriptor and passes it to the agent. Later on, after some reviews, we discovered that it was not the ideal solution; we found some problems with that idea, and we switched back to the idea of passing the file descriptor directly using a Unix domain socket. So basically we have runc...
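The fd-passing mechanism just described uses the standard SCM_RIGHTS ancillary-data facility of Unix domain sockets. The sketch below shows the generic technique with a pipe standing in for the seccomp notify fd; it is not runc's actual code, and the function names are illustrative.

```python
import array
import os
import socket

def send_fd(sock, fd):
    """Send a file descriptor over a Unix domain socket via SCM_RIGHTS,
    the mechanism described above for handing the seccomp notify fd from
    the runtime to the agent."""
    sock.sendmsg([b"\0"],
                 [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                   array.array("i", [fd]))])

def recv_fd(sock):
    """Receive one file descriptor from the socket's ancillary data."""
    fds = array.array("i")
    msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(fds.itemsize))
    for level, ctype, data in ancdata:
        if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
            fds.frombytes(data[:fds.itemsize])
            return fds[0]
    raise RuntimeError("no fd received")

# Demonstration: pass one end of a pipe across a socketpair, in place of
# the seccomp notify fd.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
r, w = os.pipe()
send_fd(parent, r)
received = recv_fd(child)       # a new fd referring to the same pipe
os.write(w, b"hi")
assert os.read(received, 2) == b"hi"
```

In the real setup the receiving process (the seccomp agent) would then read notification events from the received fd and answer them, which is where the container/process metadata mentioned next becomes necessary.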
L
We have the seccomp agent, and we pass the file descriptor using this mechanism, so we opened the PR. This is similar to an earlier PR that was opened in March, PR 1003 on the runtime-spec repo. The change we have made here is that we are adding support for metadata: basically, we are also passing information about the container and about the process that is running. This is useful because the seccomp notify agent should be able to understand which container and which process is performing a syscall.
L
Additionally, we also did an implementation in runc using this proposal, and so far we haven't found any problems; we are using it, and we have performed multiple tests and it is working fine. So what I would like is to get some reviews on that. The idea is to see if there are any blockers, or any opinions on how we can move it forward to get it merged into the spec.
D
I do have this kind of weird question about who the spec is for now. This detail, the fact that you want to intercept some system calls, seems to me like it should be part of the runtime and not part of another layer that's not actually very well defined.
B
There are a couple of problems with just using runc. I think the biggest one is that there are use cases where you want the manager to be a centralized process that lives in a different namespace and has different security properties than runc.
D
Right, okay, that kind of makes sense if it's going to be architected like that. It's just that the whole thing comes back to, as I said, my take that it's basically just horrible in every possible way: the whole idea of just serializing badly to JSON so that we can do these things is a terrible design. I mean, this doesn't make it any worse.
D
It just continues to pile onto a design we're going to have to fix one day, because the whole thing is just horribly layered. We're expecting users in Kubernetes to feed things that feed into runc, which now has bits of spec that are designed for implementations of these runtimes. There are total layering issues everywhere in the spec, but we're obviously not going to fix that now; it's just going to get worse.
D
Literally no one can use it; people don't understand how you use it. It's not usable in any meaningful sense. There's no idea about whose security responsibility anything is. The whole thing is just a historic mess that is, sorry, all Docker's fault, for designing it like this in the first place. It's garbage, it's not helping anyone; it's not like you can use it.
C
I could imagine it being in other runtimes besides just runc, though, and having some consistency around that is part of it also.
C
Yeah, and the other runc maintainers can weigh in. I think you're on that list also.
M
All right, okay, looks like we're good, host. Okay, so hello, everybody, my name is Bo, working at Alibaba Cloud, and here we're going to give a brief introduction to our project called Dragonfly image service.
M
So back in June this year, we discussed the OCI v2 proposal a lot, for a long time, and several ideas came out of it, and so did our project. First, the issue with the OCI v1 image spec is slowness in our production case, because every container start involves downloading and unpacking images, especially when the image is quite large.
M
The latency is quite large, and for that, lazy fetch is a common idea. Last year we came up with this project; we named it Nydus, and it is now merged into the P2P solution Dragonfly as one of its components, the image service. The most important design point of our project is to split the container image into two parts, metadata and data, and for a single container start we only download the metadata, because that's the only thing a container...
M
...the only thing the container needs to start with; the data can be left on the registry or other storage, where we can fetch it on demand. So Nydus consists of two parts. The first one is the user-space file system, called RAFS.
M
This user-space file system handles the metadata and the data, and the other part is the image manifest: there is an extra image manifest taking advantage of the current image spec's platform feature, and with this feature we can make the Nydus project compatible with the current image stack. Okay, so this is about Nydus.
M
We keep all the metadata, including files and directories, in a file called the bootstrap, which is the metadata; the other part is the data. The data is split into chunks, with the size being one megabyte; the size can be configured, but right now we just fix it at one megabyte. As for the bootstrap...
M
The whole metadata tree is a merkle tree, so we can do the integrity check very easily, and because the data has been split into chunks, deduplication can be done on chunks instead of on the current layers, which is more efficient. Okay, so thanks to such a design, a lot of the ideas in our brainstorm...
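The chunking-plus-merkle design just described can be sketched as follows. This is an illustration of the idea (fixed-size chunks, per-chunk digests, parents hashing their children), not Nydus's actual on-disk bootstrap format; the function names are invented for the example.

```python
import hashlib

CHUNK_SIZE = 1 << 20  # 1 MiB, the fixed chunk size mentioned above

def chunk_digests(blob):
    """Split file data into fixed-size chunks and digest each one, so
    dedup and on-demand fetch can work per chunk instead of per layer."""
    return [hashlib.sha256(blob[i:i + CHUNK_SIZE]).hexdigest()
            for i in range(0, len(blob), CHUNK_SIZE)]

def node_digest(child_digests):
    """A parent node hashes its children's digests, giving the
    merkle-tree style integrity check described above."""
    h = hashlib.sha256()
    for d in sorted(child_digests):
        h.update(bytes.fromhex(d))
    return h.hexdigest()

file_a = chunk_digests(b"x" * (2 * CHUNK_SIZE + 5))   # 3 chunks
assert len(file_a) == 3
root = node_digest(file_a)
# Flipping one byte in one chunk changes that chunk's digest, and
# therefore the root digest too.
tampered = chunk_digests(b"y" + b"x" * (2 * CHUNK_SIZE + 4))
assert node_digest(tampered) != root
```

Verifying a single accessed chunk needs only that chunk's digest from the metadata, which is what makes the on-demand validation discussed later in the call feasible.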
M
Okay, where is it? Oh sorry, okay, here it is. Yes, so the proposal brainstorm ideas have been covered in our project. For example, deduplication has been done because we are using chunks, which is more...
M
Okay, so basically I want to go over these brainstorm ideas to see which ones we have covered and which ones are not yet: deduplication, canonical representation, and the third one is explicit metadata, which we have done because of our metadata user-space file system, and we have removed all the unnecessary metadata like timestamps and device nodes. And the most important is the lazy fetch support here, because we only need to download the bootstrap, which is the metadata, to start a container.
M
So the start time of a container is extremely fast compared to before. From the security point of view, we have some requirements like a bill of materials, and because right now we have a separate metadata file, we can do a static check on this metadata file to determine whether there is a problem in this image, and how to verify the image.
M
So with Nydus, we do the validation of integrity at runtime, on both metadata and data. Okay, and the other two points are also about that. Oh, I forgot one thing: our solution is like CRFS, it's a FUSE solution, so we developed our user-space file system on top of FUSE.
M
All right, let's see. Besides that, we have three more features. We can do prefetch, which can be very important if the network is not stable: once the container is started, we start a background thread to do prefetch, so that once all the other data has been downloaded into local storage, we don't need the network anymore.
M
Just like the current OCI v1. And the cache is a cache on local storage, so that multiple images can share their chunks in local storage. The last one is about compression.
M
We compress our file data with currently two algorithms, the default one being LZ4, so that we can get storage efficiency in uploading, transfer, local storage, and registry storage.
N
Not aware of any. I am curious about Nydus: you talked about how you can do online integrity verification. Can you talk a little bit about how that works? Is that like an IMA-style thing, or what exactly do you do there?
M
Sure, so the bootstrap, the metadata, is a merkle tree.
M
A parent has all the hash values of its children, and whenever a child or parent gets accessed, that hash will be compared. For data we do both compression and hashing; right now we use SHA-256, and another option is the BLAKE3 digest. We have a digest for every chunk, and that chunk digest is also included in the metadata, which is covered in this document, because we don't have...
M
Yeah, we do have a digest for every chunk, so whenever a chunk gets accessed, which means when you do the fetch on demand, that chunk will be validated. But I need to note that this validation is, you know, not free.
C
Let me see... I don't know, I'm sorry, that's kind of getting into the weeds; it's just that this topic is interesting to me.
M
Yeah, so my friend Peng Tao can maybe explain in more detail.
O
For now, for the metadata, we only check the chunks that are accessed on demand, but we have tools to validate the entire bootstrap, so if we want to verify the entire bootstrap we need to do that as well. Also, the runtime check is optional, because it affects CPU usage and runtime overhead.
C
Yeah, that was one of the challenges we looked at when messing with the bsd entry approach to things, but we talked about getting into this kind of merkle tree for the kind of passive validation.
F
C
C
Yeah, but it does have an impact. It's fun, yeah; I'm keen to put my hands on this. This is interesting.
M
Yeah, I put a little doc here; it's not merged yet, but it will be this week. So if you are interested, you can go over this doc for more details. It covers the main structures, some explanation of details and design, especially the disk format: how the bootstrap is laid out in the file.
M
Of course, yeah, that's right. Currently we have deployed it in our production case, and you know Alibaba has the Double 11 event; it has passed that validation and has been used. Before this project, we had serious problems when booting some big images, like several gigabytes: it always timed out and the success rate was terrible. With this, right now, the success rate has reached almost 99%.
M
And then, to use it right now: we provide an image tool to convert an OCI v1 image to our Nydus image format, and if you are going to use containerd, we also have an independent snapshotter, so you can just use containerd or CRI tooling.
M
No, it's a separate snapshotter, just like, if you know CRFS, they have a, what's that, a stargz snapshotter; similarly, we have a Nydus snapshotter here.
M
It depends. I mentioned that there is an extra manifest we can do here, taking advantage of the platform feature. With our image converter tool we can generate such a manifest and push it into the registry, and if we do that, for a single tagged image we actually have two formats.
M
One is the original OCI v1 image and the other one is the Nydus image, and on the client side, if the tool, like containerd, supports the Nydus format, it can recognize the platform and OS features and pull the Nydus image directly.
M
And by the way, we also support stargz. Actually, that has been done in our Nydus snapshotter: if the user uploads a stargz format into the registry, we will pull that stargz, convert it locally to Nydus, and use it as another format.
O
And let me explain a little bit about stargz. The problem we see with stargz is that it has multiple layers, and for each layer you convert it into a FUSE mount. So if an image has many layers, there are many FUSE mounts, and it affects performance. So we consolidate all these stargz layers into a single one.
M
Yeah, so the other point, about storage efficiency, that I didn't mention before, is that because we have consolidated all the layers into just one layer, there are no intermediate layers; things like whiteout files and deleted files are just deleted, they won't be downloaded again, and that also contributes to storage efficiency.
E
This is a problem that I was actually wondering about, because sometimes folks will delete metadata that they're not supposed to; well, metadata that may help you, you know, bisect or trace back to source or something like that, so from a compliance perspective...
E
Go ahead! Okay, so what's nice about this to me is that the sha will not change as you modify, well, if you're using the snapshotter; but it will be modified if some process chooses to modify it. Is there any way that you can record information about what process touched what file?
M
Okay, we do have that, because of the prefetch I mentioned before. If you want to do prefetch more efficiently, there are two ways. One way is that you know your image very well, and you know your workload very well.
M
For something like Node.js, you can use static analysis to check which files will be fetched first; that's one way. The other way for prefetch is that we can record: we just start a container, run the workload, and in between record which files are accessed and in which order, and save that, which is an access pattern, of course. We can save that file somewhere, and later, for the same workload in the same environment...
N
Yeah, so one question I have: it sounds like you do this conversion sort of for run time, but at build time, for example, suppose you have an image in this format and you want to make a one-byte change to a file.
M
Okay, so if we just change a few bytes: because our data has been split into chunks, mostly there will be only one or two chunks which, you know, changed.
M
Yes, it is, but we do dedupe in the unit of a chunk. So before uploading, we check whether the registry has that chunk already.
M
So, ideally, we would have a dedupe algorithm based on chunks instead of layers, but right now we can only do it locally, not on the registry side.
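The check-before-upload dedup just described can be sketched with a content-addressed chunk store. This is a local model of the idea, with an invented API; in the real flow the `has` check would be something like a HEAD request against the registry rather than a dictionary lookup.

```python
import hashlib

class ChunkStore:
    """Content-addressed chunk store sketching the dedup described above:
    before uploading, check whether the store already has each chunk."""
    def __init__(self):
        self._chunks = {}   # digest -> chunk bytes

    def has(self, digest):
        # Stands in for asking the registry whether the chunk exists.
        return digest in self._chunks

    def put(self, data):
        digest = hashlib.sha256(data).hexdigest()
        if not self.has(digest):    # skip chunks the store already has
            self._chunks[digest] = data
        return digest

store = ChunkStore()
chunks = [b"a" * 1024, b"b" * 1024, b"a" * 1024]  # last chunk is a duplicate
digests = [store.put(c) for c in chunks]
assert digests[0] == digests[2]
assert len(store._chunks) == 2      # only two unique chunks stored
```

As noted above, making this work registry-side (rather than only locally) would need an agreed chunk-level API, which today's layer-oriented distribution spec does not provide.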
O
We also support a so-called parent layer when building a new Nydus image. So if your original image has two layers, we build it incrementally: for the first layer we generate a bootstrap, and when we build the second layer, we compare with the first layer's bootstrap and reuse whatever is shared.
N
That's true, but then, if you have this new Nydus image with two layers, do you have to do the same smooshing when you actually run it, into one file? Or how does that work? If you have this new Nydus image with two Nydus layers, do you have two FUSE mounts for that one?
O
No, no, no. We only deduplicate the data there; for the metadata, even if we only delete one file, we generate an entire new bootstrap, which can be uploaded to the registry and downloaded when running. So there's only one metadata layer for every image on the registry.
M
So I was wondering: what's the next plan? Should we just start a discussion about this, or should I open an issue for it? What is the next step I should take?
A
I think this is the kind of thing that we've been trying to figure out: when is it something that... that's kind of why I asked whether you're doing it on the registry side or on the client, because something we've been looking at with the Teleport project is, you know, where is the impact outside?
M
Maybe I can introduce some production use cases here, which might be more interesting.
A
Yeah, I think with the production use cases, we all kind of get it: it's faster and more reliable. I think the question that's a struggle, as with all of these that end up in tooling with lots of different projects now, is that it's not just one company, Docker, that owns this one stack: it's Docker, it's containerd, it's cloud implementations.
A
How do you implement something like this and not cause everybody to do a reset, or where are plugins supported? So I think thinking about what that user flow looks like kind of lends itself to whether there is something for us to do, or whether it's, you know, a cost, something like that. We can talk more about how we've been thinking about it within Azure. Yeah, that's a good question.