From YouTube: Working Group: 2021-04-29
Description
BuildKit + Buildpacks
A: Okay, sorry, I got distracted by the recording message. So let's look over what we'll be talking about. I will briefly talk to you about docker build and how that ties into the BuildKit world. Then we'll go through the demo, and then we'll dive a bit deeper into the implementation of how it works and discuss some of the challenges that I've encountered when trying to marry BuildKit and buildpacks, from both the BuildKit side and the buildpacks side. I will also talk about potential additions and future work that can happen to this integration, and we'll hopefully have time at the end.
A: If anyone has any questions, or for discussion, if anyone is interested. So without further ado, let's jump in: docker build. Probably a lot of people on this call will be familiar with this one, but I just wanted to set the stage, because it will be interesting to see how this solution has evolved into something like BuildKit.
A: So if we look at the traditional docker build, the way you could think about it is pretty much the following. You have your Dockerfile, which is your cookbook: the instructions on how to turn your source code into a container image. And you have your context, which is usually your source code, but can be something else as well; maybe it's a tarball or some binaries.
A: So essentially that lets you do some form of templating of the Dockerfiles: you could potentially change the base image or change how your container image is created. But obviously it's nothing nearly as powerful as something that BuildKit can offer, or that buildpacks can offer. And if you're interested in more details, the Docker docs are there, and I'll share the slides as well, so you can explore after the fact more about how this works and what it is.
A: You could potentially have anything that can be executed and expressed as a certain directed graph of actions, and you can potentially create any form of process to turn your source code into a container image. That basically gives users much more freedom in terms of what they can do, and there are a number of different frontend implementations kicking about, both for making container images, but also for things like making reproducible builds and so on and so forth.
A: So if you're interested to learn more about how exactly the BuildKit frontend works, you can click on this link. It essentially draws the parallels between how BuildKit works and how, for example, a C program would be compiled, in terms of translation into an intermediate layer, producing binaries, and spawning processes from binaries, and so on and so forth.
A: So if you think about what this means for buildpacks, that means that potentially there can be some sort of frontend image here that will run the buildpacks actions and serve as a way to integrate the two. So instead of having the Dockerfile, you could potentially have something that kicks off the buildpacks frontend, and then that operates in the same way that the pack CLI would: it would execute your build and detect and so-on stages, until it gets a container image as the output.
A: It also means that, by virtue of having this integration, buildpacks can potentially extend its reach from just being a way to make images for deployment to maybe being a bit closer to the developer edit cycle, where it can potentially be a way to make images for development and testing, and so on and so forth, because the composability of buildpacks can be used beyond just making production deployment images, right? And that also means that anything that integrates with the Docker world can now also work on buildpack images, and the buildpack build process, as well.
A: So let's try to have a look at how something like this would actually work. Can someone confirm that they can see a shell, and please let me know if the font size is sufficiently big? "It looks great." Okay, awesome, thank you very much. So basically, if you remember from the diagram earlier, there was something called a frontend. Again, this is just a container image.
A: This one is not particularly optimized; it's quite big, but don't worry about it, it can be made smaller. Basically, what a frontend is: it's a container image that exposes a particular interface, so that BuildKit knows how to communicate with it. It essentially means that you can use the same workflows that you would normally use for distributing and referencing it. It is as easy as just saying "my frontend image is..." and giving the Docker reference. So, we are currently in a folder inside the samples repository from Cloud Native Buildpacks.
A: So let me just quickly show you what's inside there. I think I can just look at the status, can't I? So basically, the only thing that we've done is we've added a Dockerfile. And you might be thinking, well, this just nullifies the effort, because if we have a Dockerfile, then what's the point? But this is not actually the Dockerfile that you may be thinking of.
A: Okay, so basically, you can see that this Dockerfile actually doesn't have much content. It just specifies the name of the builder image, so this is similar to what you would potentially have done if you called the pack CLI: you would have given it the name of the builder image to use. And essentially what it's using is this syntax comment. This is the special comment that tells BuildKit that it needs to kick in and treat this build as something different than a Dockerfile.
A: So this means that you can basically switch the behavior and choose the frontend, even though you still have this Dockerfile, or you can even do some other setups, but this one is probably the easiest to go with. Let me just zoom in as well so that it's visible. So this is what our Dockerfile is, and if you look back, this exactly matches the name of our container image, which is the frontend container image. Okay.
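The Dockerfile being described reduces to just two pieces: the syntax directive naming the frontend image, and the builder image name. A minimal sketch of that shape (both image references here are illustrative placeholders, not the exact ones from the demo):

```dockerfile
# syntax = example.com/cnb/buildkit-frontend
FROM paketobuildpacks/builder:base
```

The `# syntax` line is what makes BuildKit pull and delegate to the named frontend image instead of parsing the rest of the file as a regular Dockerfile.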
A: So, let's try to build this. I'm just issuing the docker build command, and for reference, I have BuildKit enabled in my daemon; that's why you can see that BuildKit is kicking in here. If you're familiar with the older docker build, then you know that the output is quite different.
A: So by seeing a lot of this blue, you can tell this is actually BuildKit doing its thing, and you can see what's actually happening: it detected that, yes, there is a Dockerfile, but it's using a custom frontend, the Cloud Native Buildpacks frontend. And you can see that it picked up the builder image, and now it's going through all the steps of the buildpacks lifecycle.
A: So it found the sources, it ran the detection, it ran the analyze and restore. Unfortunately, there wasn't much to restore, because I'm doing a fresh build just for you. From that point on, you can see that now it's doing the build, so this will take a few moments. Because this is a Java app, we can see that the Paketo Java buildpack got activated, and what it's currently doing is fetching a release of the JRE, just so that the application can be assembled to run. And on the output of this...
A: What we will see is that we will have gotten an image which is quite similar to what you would get from the pack CLI. It's slightly different, because of not complete conformance to the spec, but essentially it will be the image that contains the sample Java app from the Cloud Native Buildpacks samples repository.
A: And it's exporting the image now... it's exported all the layers. Cool. So let's now try to run it. What I'm doing here now: I'm just running this image, and I'm manually invoking the lifecycle launcher (we'll get into the details of why that happens a bit later), but you can see that it appears to be running, and if we go to localhost:8080... there we go, you can see that it works. So we are successfully serving the application that was built by buildpacks through the BuildKit frontend.
A: So this one is going to be a Django app, and basically, again, there were some changes here that were done, but they were quite minor. Essentially I'm again adding a Dockerfile, and you can see that in this case the Dockerfile is actually using the Google container platform builders, and I also have a small setup; you can see I'm using the Google entrypoint, so this is basically demonstrating the build args functionality.
A: If you are familiar with the way that the Google container builders work, then you know this style of builder; essentially what this one does is translate build args into the environment variables that get injected into the build environment. The rest of it is just the VS Code Remote Containers things.
A
So
this
one
is
this
one
is
just
a
big
file
that
was
generated
by
vs
code
to
get
the
remote
containers
to
work,
and
similarly
this
is
just
this
startup
for
remote
containers.
A
So
essentially,
we
are
building
a
development
container
using
the
build
kit
front-end
for
cloud
native,
build
box,
and
when
it's,
when
it
succeeds,
what
we
should
be
able
to
see
is
that
we
have
a
vs
code
window
into
that
container,
which
is
a
cloud
native,
build
box,
build
container
a
similar
functionality.
Here,
you
can
see
that
there
was
docker
compose
that
was
called
and
docker
compose
in
its
regard.
A
What's
called
delegated
the
build
to
the
build
kit
frontend
because
again,
build
kit
is
enabled,
so
this
one
again
will
take
some
time.
So
I
think
we
can
come
back
to
it
later
when
we
we
can,
in
the
meantime,
discuss
how
the
how
this
is
actually
implemented
and
how
it
actually
works.
So
why
is
that
chip
in
a
way?
A
Let's
discuss
what
we've
just
seen.
So
how
does
the
bill
kit
work
essentially
high
level
build
kits
as
we've
discussed,
is
a
special
container
image.
There
are
tons
of
libraries
and
apis
available
in
go
to
make
the
development
of
frontends
easier,
but,
strictly
speaking,
nothing.
A
So
it
executes
on
that
recipe
and
that
is
called
usually
or
in
the
code
at
least
it's
referred
to
as
solve,
so
it
solves
that
graph
and
as
a
result
of
that,
you
get
a
reference.
A
reference
is
just
basically
a
pointer
to
say,
hey
your
output
of
your
build
is
here
and
usually
it's
just
an
image,
or
it
can
be
multiple
images
if
you're
doing
a
multi-platform
build
like
if
you're
doing,
arm
and
intel
build
at
the
same
time.
A: So if we were to apply that to the BuildKit world, you would come up with some graph that looks something like this. It doesn't cover everything, but it would essentially look like this: you have your builder image, which would then be combined with the context and a bunch of inputs.
A
Apart
from
the
your
source
code,
there
will
also
be
probably
caches
in
input
and
some
user
environment
variables,
which
we've
previously
seen
specified
as
build
dogs
and
some
platform
environment
variables,
and
then
we
would
run
through
the
classic
life
cycle
for
buildbox,
where
we
would
find.
What
are
the
actions
that
need
to
be
called
see
if
there
is
anything
to
reuse
from
the
previous
builds
and
then
do
reuse
it?
A: If there is anything. And then we perform a build. After we've performed a build, we just need to construct the final image, which would be a combination of the runtime image, the launcher (which is what we've seen invoked explicitly earlier) and some metadata, along with the layers that actually belong to your application. So these would be...
B: Well, yeah, I think that was the syntax-level part of the Dockerfile, but then to anything, let's say to the Docker daemon or through docker-compose, you could specify the file, right? So you could just specify a project.toml file, and then that project.toml file, as long as it has that syntax part at the top, will execute this different frontend. And that's the part that I was experimenting with and hoping to PR into the PoC.
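As a sketch of the idea being floated, a project.toml carrying the same syntax directive at the top might look like the following. This is hypothetical: the frontend image reference is a placeholder, and routing a project.toml through a BuildKit syntax directive was an experiment under discussion, not a released feature:

```toml
# syntax = example.com/cnb/buildkit-frontend

[project]
id = "sample-app"

[[build.env]]
name = "GOOGLE_ENTRYPOINT"
value = "gunicorn -b :8080 main:app"
```

The `[[build.env]]` entries correspond to the build args seen in the demo being translated into platform environment variables.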
C: The diagram that you showed, I mean, you were somewhere...
A: The diagram, so this one? Yes, okay. I mean, I think by now... actually, has this finished? No, it's not, okay. Let's let it take its time, then. Okay, so I'll start somewhere halfway, then. Okay, so we have a directed graph of actions, which gets solved by the BuildKit daemon, and essentially the output of that solve is a reference, and the reference can be either a single image or multiple images.
A: So essentially this is how it functions, and the way that it works at a high level is that you create this graph and then you send it to the BuildKit daemon to execute it. So that's why gRPC is needed. And (I don't know at what point I broke up, so feel free to stop me if I'm repeating myself) that means that any language or any ecosystem that supports gRPC could theoretically be used to implement a custom frontend, because it's really nothing special.
A: I think, again, I've covered that, but I don't know at what point I broke up. So the challenges of BuildKit are that it's quite Docker-centric in some cases. Especially if you're looking for the most seamless integration, you have some sort of metadata file that needs to be the entry point to that world, unless you want to go through the more custom setup, which doesn't work as seamlessly. And that means that there needs to be some sort of metadata file that says: hey, BuildKit...
A
There
were
also
some
talks
that,
potentially
you
know,
project
automl
could
work
similarly
in
the
world
of
buildkit,
but
we'll
see
how
it's
going
to
work.
So
another
issue
is
that
build
kits.
A
If
you
want
to
go
for
that
workflow
of
integration,
seamless
integration
and
producing
container
images,
it
can
be
quite
boilerplating
and
again
I
won't
go
into
the
detail,
because
I
want
to
leave
more
questions
more
time
for
questions,
and
it
has
also
some
interesting
ideas
about
how
the
cache
and
sharing
of
artifacts
should
work.
A
A
So
it's
quite
a
big
difference
and
the
sharing
of
functionality
of
sharing
binaries
is
a
bit
difficult
because
you
can't
easily
inject
some
binary
from
your
front
end
into
the
build
process,
so
that
makes
it
a
bit
harder
to
share
some
functionality
that
you
might
otherwise
want
to
just
mount
a
magic
binary.
Let's
say
like
again
for
build
some
of
the
buildbacks
functionality
into
the
target
image
and
just
run
it,
but
these
are
kind
of
implementation
details
that
as
users
of
buildback,
you
may
not
necessarily
concern
yourself
with.
A
Finally,
the
big
styling
feature
of
buildbacks
is
the
ability
to
replace
individual
layers
so
again,
something
that
build
kit,
unfortunately
lexington
today,
and
you
can
sort
of
approximate
that
performance
and
that
experience
by
basically
doing
something
which
would
translate
into
the
dockerfile
world
as
a
bunch
of
multi-stage
builds
and
copy
froms.
A
But
it's
not
really
the
same
experience
and
again
you
wouldn't
get
that
nice
feature
or
nice
property
of
later
layers
not
been
invalidated
by
the
previous
layer
change.
So
if
we
go
into
the
challenges
of
buildbacks,
then
I've
been
focusing
mostly
as
a
kind
of
platform
creator,
because
essentially
front-end
is
a
platform
of
sorts.
A
So
hopefully
it's
something
that
you
know
in
the
future
can
involve
and
change.
Again.
We've
discussed
the
cache
scoping
so
just
to
reinforce
it's
slightly
different
and
pak.
Cli
also
has
much
more
flexibility
in
terms
of
what
it
offers,
because
you
can
basically
mount
anything
from
the
host
which
docker
has
been
historically
quite
opposed
to
and
even
with
build
kit.
A
Whilst
you
do
have
some
functionality,
it's
a
functionality
that
is
treated
as
you
can
mount
secrets
from
the
host,
but
not
necessarily
any
host
off,
which
is
what
pax
cli
appears
to
allow
in
terms
of
what
can
be
done
in
the
future.
So
there
is
still
a
few
things
missing.
As
you've
seen,
I
had
to
run
the
launcher
as
commands
manually,
because
because
basically,
the
entry
point
of
the
image
is
not
set
automatically,
so
that's
something
that
can
be
improved,
there's
also
some
talks.
A
There
was
some
suggestions
on
the
issue
from
the
community
that
the
id
from
project
automall
could
be
potentially
used
as
the
way
to
namespace
the
caches.
So
that
would
be
an
interesting
enhancement
to
add
as
well,
and
there
is
also
some
things
that
don't
necessarily
affect
functionality,
but
would
be
good
to
do
for
this
pack
completeness.
A
A
If
oci
is
used
more
commonly
as
an
interface
between
the
platform
and
the
life
cycle,
that
that
means
that
it
might
be
easier
to
integrate
because,
as
I
mentioned
earlier,
buildbacks
of
today
sort
of
have
two
modes
of
operation
and
that's
either
you
have
a
docker
demon
or
you
have
a
remote
registry
so
having
something
which
is
sort
of
less
opinionated
would
make
it
easier
to
integrate
something
like
bill.
Confronted
into
this
and
that's
pretty
much
it,
I
can
check
whether
the
vs
code
is
still
checking
away.
D: So I have some questions around caching, I guess. Sure, first thing to say is: I think this is a great idea. I've talked to Tonis about it a bit, and I'm a hundred percent sold that we should, you know, at least to replace local builds, for sure use BuildKit instead of implementing our own parts of the lifecycle to do that. I think, just even looking at how fast this is, it seems like an improvement from what we have locally, for sure.
D: The question I have around caching is, specifically: there are environments where builds may happen from different containers or VMs, where we don't want to download the previous image before we do the rebuild, you know, and then upload the layers that changed. We just want to rebuild the layers that changed, without having to do that initial pull, and I think there's actually an issue in BuildKit...
D
I'm
not
sure
if
you
see
if
you've
seen
this
one,
it
wasn't
linked
in
the
presentation
that
tonis
actually
kind
of
encouraged
us
to
implement.
When
I
had
a
chat
with
them,
I'd
link
to
the
zoom
for
a
merge
lb
that
would
let
you
do
builds
locally
and
then
not
have
layers
present
unless
they're
accessed,
essentially
so
that,
when
you
can,
you
know,
do
the
push
up
to
the
registry
it'll,
you
know
kind
of
do
the
same
thing.
D
Does
it,
I
guess,
should
we
position
this
as
something
that
is,
you
know
just
a
option
for
local
builds,
but
when
you're
building
against
a
registry
we
keep
our
current
integration.
You
know:
should
we
try
to
implement
this
first
before
we
roll
out
build
kit,
support
and
then
get
rid
of
the
current
exporter
implementation
kind
of
curious
where
people
are
thinking
there.
C: Can I make sure I'm understanding you? So this merge LLB would be for creating basically partial images in the daemon that could then be pushed to the registry.
C: That makes sense to me. On some level, our registry implementation, the way it is now, is probably about as efficient as it could be, and as efficient as this would be, because we don't need to recreate layers that don't need to exist. But I think it runs into problems with how users want to use the tool. Most people do not want to just directly build things into the registry; they want to build it locally and then push it. So if we could get the same characteristics...
A: Yeah. Essentially, what you can do from inside a BuildKit frontend in terms of accessing the daemon is quite limited, so you don't have the ability to, for example, access the Docker socket, or do things like find out what the tag is of the image that you're building.
A
You
are
supposed
to
give
out
a
reference
which
is
basically
here's
the
contents
of
an
image
you
asked
me
to
build,
and
then
this
is
where
the
rest
of
the
build
kit
demon
takes
all
and
says.
Okay,
I
took
the
contents.
I
took
the
metadata
that
you
produced.
I'm
gonna
target
is
this:
maybe
I'm
gonna
export
it
into
a
towel,
or
maybe
I'm
gonna
save
it
in
the
demon
or
maybe
you
want
to
build
and
push
at
the
same
time.
C: Based on that explanation, it doesn't sound to me like we need to build a tarball first and then give it to BuildKit to finish our build. Instead, we're, you know, giving a set of instructions to the daemon, then it runs it and ends up with an image in the Docker daemon.
A: Yeah, so you don't necessarily need to give it a tarball. The difference is that if we look at the way that the lifecycle works today, particularly the export bit, it has two modes of export: one is "I'm going to export to a Docker daemon, and therefore I need access to the Docker socket", or "I'm going to export to a registry, and therefore I need registry credentials".
A
So
this
is
the
bit
that
sort
of
clashes
with
the
bill
kit
way
of
doing
things,
because
for
the
bill
kit,
you
just
say
here's
a
file
system,
it
doesn't
necessarily
need
to
be
even
tables.
It's
just
you
create
an
image
file
system
and
then
build.
It
knows
how
to
export
it.
So
I
think
the
clash
that's
happening
is
that
bilkit
wants
to
do
its
own
exports.
When
the
cloud
natives
platform
spack
has
quite
rigid
ideas
about
how
the
export
should
function
and
what
are
the
modes
of
operation.
C: Yeah, I guess I'm not at all surprised that our current spec doesn't account for this and that we would need to do something different here in order to target BuildKit as an endpoint. I guess, maybe we don't need to do this all here, but I'm curious to take this offline, to learn in detail how you're sort of creating this file system for the final app image in BuildKit, because in our export step right now, the state of the container that's running export isn't the state of the app container.
D: Well, I think, for that file system that gets generated, right, the problem is that BuildKit doesn't let you generate a partial OCI image layout, right? You have to have the full image exported locally on the file system before it'll do the export, and that's the big feature that's missing, comparing our exporter to BuildKit's exporter. But it seems like they're open to, you know, a feature in BuildKit...
D: ...that would let you create a partial layout and upload that to the registry. Just from talking to Tonis and reading that particular issue, you know, we may not need our exporter to achieve the same outcome, if we're willing to make some contributions upstream. That's my understanding.
D: I assume that ends up as OCI on disk at the end, and that gets sent to the registry. But right now, BuildKit has to process and generate all of those layers, right, and it has to come up with a full image before it gets exported to the registry. If, instead, it was comfortable, you know, like in that GitHub issue, keeping pointers to layers that aren't actually present through that build process, saying "okay, this is cached..."
D
I
don't
even
have
this,
but
you
know
in
the
end,
I'm
gonna
write
a
reference
and
I'm
gonna
keep
an
empty
reference
in
that
file
system
based
layout
to
generate
and
then
upload
those
layers
to
the
registry.
I
I'm
not
convinced
that
this
1131
gets
us
all
the
way
there
it
may
just.
Let
us
do
the
local
build
and
have
those
empty
references,
I'm
not
sure
if
docker
would
be
comfortable.
D
You
know
there
might
be
a
little
more
work
on
top
of
this
to
get
that
working,
but
it
seems
like
they're
open
to
you
know.
Implementing
that
amount
of
functionality
upstream-
and
that
would
let
us
get
rid
of
our
you
know,
kind
of
custom
logic,
for
you
know
partial
image
generation
that
we're
doing
in
the
exporter
in
the
end.
D: There's another problem with this, though, that I was going to bring up, which is that a lot of the platforms that run our builds run without user namespacing (you know, no capabilities whatsoever), and so in those cases, I think if we tried to do the build through BuildKit, even if we had that partial export functionality, those platforms would need to give more privileges to their build containers, like if it's kpack running on K8s, for instance. So that'd be another reason to maybe keep our...
B: I think when Emily and I spoke, we were talking about keeping the registry, the OCI registry implementation, and then replacing the daemon implementation with an OCI layout, right? And then pack, for instance, could then take the outcome of that OCI layout export and put it into the daemon in a quote-unquote efficient manner, and then that would be the same implementation that we could use in BuildKit, because, you know, I think this might lead into the code that Eric was about to show.
A: I mean, I can show how it is implemented today. So it's not quite the same efficiency as the pack CLI potentially has, but essentially you can see here this code. Again, let me zoom in a bit. This is where the export procedure basically begins: we get the run image, and then we do the export.
A
Saying
you
know
this
is
a
launch
layer.
I
need
to
copy
it.
This
is
not
a
launch
layer.
I
can
skip
it
so
this
is
then
feeds
into
this
llb
copy
operation,
and
this
is
what,
together,
when
combined
with
the
all
the
other
bits
like
run
image
like
launcher
like
the
up
layer,
produces
that
output
image.
So
you
can
see
at
the
end.
Basically,
it
has
an
llb
state
which
has
all
of
this
layered
on
top
of
each
other
and
with
solid
solvent
to
get
a
reference
which
then
gets
used
to
export
an
image.
D: So in this case, the build happens first and generates all the layers, and then, when you're doing what you call the export phase, it's actually running copy instructions to copy all the layers in separately. So it's not like BuildKit is orchestrating the build process itself; BuildKit is just orchestrating the construction of the, you know, image in the end, right? Yeah.
A: Okay, yeah, essentially. So if you think about every box of this graph as an LLB state, basically, they all reference each other. So there's a builder state that then gets referenced when it's all combined; then there is an LLB run operation, which represents the detect stage; then we copy any results from that; then we do analyze, which again does a bunch of LLB runs. And then the output of that, when you solve this graph, this middle bit, is a file system, which then, as mentioned, you basically walk through.
A: And this is where the export process that we've just seen happens, where it finds what the launch layers are, what the layers are that need to be included, and so on and so forth. So this is the bit where I had to basically mimic the lifecycle, because of how the lifecycle operates today: I had to mimic how it works by looking at the launch layers and constructing the file system that BuildKit can understand and can export into a Docker daemon as the image.
B: So this might be a terrible analogy, but I'm thinking of this along the lines of, like, futures and promises in JavaScript, right? Where you're just creating this stream, via LLB, of what should happen during these steps, and then, when you pass it to solve, it does everything internally, right? So you could put in between these steps, or states, what you want to happen in those particular instances.
A: That's a really good analogy, because essentially you go through the process, and the frontend can either do it once, or it can do it many times: you make an LLB graph, you give it to BuildKit to solve, it returns a reference to you, and then you can do something with that reference. So you can either export it, or you can look inside it; you can look at the files inside, you can maybe inspect...
A: You know, read a single file, look through the directories, and so on and so forth. So you can basically chain these, and in this case, there would be two of these cycles that happen: the first cycle happens for the build, and then the frontend inspects the output of the build, figures out what the launch layers are, and then the cycle happens again, where an LLB graph is constructed.
D: There are two places where there are timestamps. There's in the files themselves, so we zero the timestamps for all the files on disk; then there's also, in the image that gets generated, a created-at time that we also zero, so that it's possible to rebuild literally the same image, you know, with the same digest in the registry.
D: At the end, if you build with the same inputs. I think you're talking about the ones in the file system, but would BuildKit also let you zero the timestamp of the image that gets generated?
A
You
you
do
have
access
to
the
metadata
which
or
image
config,
which
is
separate
from
the
file
system,
but
I
am
I've
never
tried
to
actually
set
that
to
see
if
buildkit
would
respect
it.
I
would
go
off
on
a
limb
saying
that
probably
bill
kit
sets
it,
because
that
sort
of
falls
into
the
whole
build
kit
wanted
to
take
care
of
exports
story.
C
Something
I'm
curious
is
about
it's
like
right
now
we
do
things
like
take
what
the
diff
id
of
different
layers
are
going
to
be
and
like
put
them
in
a
label
in
the
same
image,
and
with
this
llb
copy
interface,
can
we
get
back
the
the
digest
of
the
layer
from
that
and
then
use
it
in
another
step
to
make
these
labels.
A
I'm
I
don't
know
enough
to
to
answer
that,
because,
from
the
interfaces
I've
seen,
you
don't
seem
to
get
access
to
the
intermediate
layers.
You
only
get
access
to
the
output.
There
might
be
some
hidden
api
somewhere
that
I'm
not
aware
of.
D
For
the
thing
you
generate
for
this
oci
on
disk
format,
you
generate
what
are
the
layers,
tar
balls
or
the
layers
tgzs?
What
are
you
generating
directly
and
then
passing
to
build
kit?
Could
we
calculate
the
checksums
ahead
of
time.
A: So basically, every LLB run command produces a layer. You can see here: this is an example of a run command, so this is where we are executing the builder part of the platform. You can see it's not actually running it; it's, again, coming back to that idea of promises. This is basically just saying: hey, as part of that graph, what I want you to do is insert a run operation in there, and this would be done corresponding to this box over here.
A: So essentially we would say: hey, I want you to take the file system that I had before, from the previous states, and I want you to execute a run command on top of it. So every single LLB command that you add on top of this would produce a layer, and then, when we come round to the export (sorry, sorry, I didn't mean to interrupt), and then, when you come back to the export, you can see what we do here is we do the solve.
A: So then, again, if you want to get multiple layers out of it, this is where this thing is happening: basically, right now, the frontend is looking at the build file system, and it's looking for the individual layers that the buildpack produced, and it basically translates each layer into a separate copy, which then, because it's a separate LLB operation, produces a new layer of the image.
D: If we wanted the diff IDs back: you know, I'm not sure exactly what context this export is running in, but if this export is running with the files currently on disk, we could just, you know... the diff IDs are uncompressed, so we could stream everything through tar and pre-calculate the diff IDs in that stage. It'd take a little time; that's kind of...
A: Yeah, so it's completely reliant on BuildKit's caches. That also means that if you were, for example, to run it against the exact same file system, because it knows that the input has not changed, it will just return everything cached as well. So yeah, it's not doing anything sort of magic around layers by itself; it defers to BuildKit completely.
D: Do we need the diff IDs if we're not going to support, like, rebuilding from a remote image or something like that? Are those labels really necessary if you're just scoped to rebuilding locally?
C: We use them for restoring layers, but if we were restoring layers differently in this world, it could be different, right? Like, you can imagine doing it, in this mode I mentioned, by file path: you have an image, and then you copy the files from that path. Rather than, you know, pulling a blob out of the registry by diff ID, or docker-saving and pulling a blob out by diff ID, you would instead tell BuildKit to copy everything at a path during restore back into your build.
A: Yeah, so if you look inside here (let me just zoom in a little bit), you can see that there's google.python.pip, google.python.runtime; so it's the Google container buildpacks builder.
B: Cool, awesome. Well, thank you again, Eric. I think it was a really insightful presentation; I do appreciate it, and yeah, we'll definitely keep working on it. Like I said (maybe you dropped off), I have a PR coming for project.toml that hopefully we could collaborate on.