From YouTube: CNCF SIG App Delivery Meeting 2019-12-04
A: Good, yeah, I just tried my best. And who is going to present our Cloud Native Buildpacks topic?
A: Alright, so I also created an issue for the logo, so if people have ideas, because we're most likely going to run out of time by the very end, people can later on share ideas on logos there. I think it's quite exciting that we're getting our own logo. My proposal would be to jump right into the discussion about air-gapped environments; I moved this one first because there were really massive requests for it.
C: Okay, does everybody see my screen? Yeah? Okay, great. So I'm going to go over a high-level overview of air-gapped networks and non-air-gapped networks, so that everybody has context, show a little bit of security-related context and some concerns you might have when doing these things, and then I'll really quickly address how CNAB has addressed this so far and some lessons we've learned from that. Then I'll also draw on a conversation I had at KubeCon: somebody else has done something similar with just Helm, while CNAB is trying to do it a little bit more generically.
C: So the first thing to talk about is: what actually is an app? We're talking about definitions, and there's a lot in scope here. It turns out your app definition could just be a Docker container. It could be a Compose file with a Docker container; it could be a Helm chart.

C: It could be some Docker containers, or some Terraform plus Helm plus Docker, any combination of those things, maybe some batch scripts, and many other things. One or more pieces of those things together could be your application definition. Then how do you replicate it, how do you move that thing to other people, how do you distribute it? So we'll just abstract that away and think of the application as a box.
C: So, some security and verification things play into this scenario. Generally, if you're pushing containers, the registry doesn't change: you're going to reference an OCI registry from each of those installation targets. You can use signed descriptors, so you could somehow sign your Helm chart, or sign whatever manifest you're using. You can reference digests, so you get really good immutable references to those containers. You can use other trust mechanisms like TUF and in-toto, and you have access to the metadata.
C: If we think about using, say, the stable Helm chart repository: we publish to it, I install from that stable chart repository using Helm, and I reference some Docker images that are in an existing OCI registry. There's nothing I really have to do outside of that, because the tools are really built to support that. So what's an air-gapped network? An air-gapped network could be two physically separate locations with no network connecting them.
C: We could think of, in the DoD world, an unclassified network and a classified network, or it could be a bunch of oil rigs or something that, you know, don't have any connectivity between them. It could also be something with intermittent connectivity, where sometimes location B can talk to location A and sometimes it can't, or sometimes it's really slow. All of these things could be air-gapped networks. As part of the stuff we've been doing with CNAB, I'm trying to figure out what security looks like in that setting.
C: We've had a lot of different people chime in with what they think air-gapped networks look like. I'm going to assume the most extreme case right now: the case where there's no network connectivity between those two things. When you have this situation, a lot of the security we had before falls away, and you end up having to distribute these things using something like a CD or physical media that can be moved back and forth. So you put your thing onto media, then you move it to the next location.
C: You may work around that by saying, oh, I'll just docker save and then docker load, tar it up like a tarball and bring it across. But the Docker tooling itself doesn't preserve content digests. So if you're going to rely on hashes of the images, that won't actually work with the existing Docker tooling, and if you have signed manifests or descriptors that reference those digests, those signatures will be invalidated.
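The digest problem is easy to see concretely: an OCI content digest is just a SHA-256 over the exact serialized bytes, so tooling that re-serializes a manifest produces a different digest even when the content is logically the same. A minimal sketch, with plain files standing in for image manifests:

```shell
# Two payloads differing by a single byte get unrelated digests.
printf 'manifest-v1' > a.json
printf 'manifest-v2' > b.json
sha256sum a.json b.json

# A byte-preserving round trip (tar) keeps the digest stable...
tar -cf bundle.tar a.json
rm a.json
tar -xf bundle.tar
sha256sum a.json

# ...whereas any tool that rewrites the manifest on import/export
# (as the plain docker save / docker load path does) yields new
# bytes, hence a new digest, invalidating signatures over the old one.
```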
C: However, it is possible to import and export those things using some tooling that's not built into the Docker core. A good example here is the image relocation library from Pivotal. And then, moving beyond just a Docker or registry problem: if you're using TUF or something like that, you have to have access to the keys that were used to sign that content, and you also have to have access to the other metadata that goes along with it.
C: Let's see if that fixes it. So CNAB is really a couple of things. It's the descriptor (I've mentioned descriptors a couple of times; in our case it's called the bundle.json), and then the bundle.json, in addition to the parameters and credentials and so on, references some Docker images. One of those is the invocation image: in the case of the Helm chart or Terraform or those other things, we package all of them into an invocation image. And then you also reference other application images.
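As a rough sketch of what that descriptor holds (field names loosely follow the CNAB bundle.json schema; the image references here are invented), the invocation image and the application images are all listed together:

```json
{
  "schemaVersion": "v1.0.0",
  "name": "wordpress",
  "version": "0.1.0",
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "example.com/wordpress-installer@sha256:aaaa..."
    }
  ],
  "images": {
    "wordpress": {
      "imageType": "docker",
      "image": "example.com/wordpress@sha256:bbbb..."
    },
    "mysql": {
      "imageType": "docker",
      "image": "example.com/mysql@sha256:cccc..."
    }
  }
}
```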
C: So if you're going to install something like WordPress, you might have a reference to the actual WordPress container, and maybe a reference to something like MySQL or whatever DB that server is going to use. Those things are all referenced inside that descriptor file. Then we have two mechanisms for representing the bundle.

C: One is the thin bundle, and this is the case where we have a connected network: you're able to actually publish all of these things to OCI registries. I say OCI registries because they have to be fully compliant with the OCI spec. We store the actual bundle.json in that OCI registry, and we store all of those images in the OCI registry, and then users are able to pull from it. So if we look at this scenario:

C: In location A we can push to that OCI registry as we're doing our development; maybe our CI pipeline rebuilds the bundle and publishes it to the OCI registry, and then users B, C, D, whoever, can just pull from that OCI registry and install directly. It works a lot like just running Docker containers or Helm charts, but in the air gap.
C: Oh, sorry. The second way we can represent these things is what we call a thick bundle. Instead of leveraging just the OCI registry, what we actually do is build, in our case, a tarball, and inside that tarball we have a bundle.json, and then the representation of all those artifacts as they would exist in the OCI repository. So we take the bundle.json, the invocation image and these other images, and archive them into this file that we can then burn to a CD or put onto a USB stick.

C: Or however else we want to distribute it. The interesting thing is that when we do this, because we're using that Pivotal library I mentioned earlier, we're actually able to preserve the digests and bring those things across with us. So, in contrast to the connected scenario, installation looks something like this: we take the bundle archive, put it on the CD, and move it across the air gap.
C: So again, we use the original Pivotal image relocation to preserve content digests. But there are some issues with this, and that gets back to the fact that Notary in particular wasn't really built for fully disconnected scenarios; it wasn't implemented with them in mind. You really need to have the signing keys and the metadata available in order to leverage Notary, and in order to use the in-toto stuff as well.

C: That's actually being addressed in the work being done for Notary v2; obviously that's not done yet, but we're able to solve everything up until that problem, because we're using just the OCI specification. So I can do a quick demo of Porter right now, which is a tool we have built to implement the CNAB spec. It supports doing thin bundles and thick bundles, and the imports and exports you can do across air-gapped networks. Or I can just open it up for questions now.
C: If anybody wants to discuss things: I think the follow-on from this should be that we start discussing best practices for tooling in that space, if people want to address air-gap scenarios. I don't think a lot of things do right now. For instance, Helm doesn't; it's really just built for "I'm going to use my local environment, connect to this Kubernetes cluster, and deploy things to it." At KubeCon I actually had a discussion with someone who'd written a Helm plugin.

C: They did something very similar to this slide right here, oops. They actually take Helm charts, and their plugin will scrape through them and look for all the images, then build something that looks kind of like this bundle.json: it'll reference the Helm chart, and it'll also reference all of the images that are going to be used by that Helm chart. Then they do a docker save and docker load to build a big tarball with the Helm chart and the images in it, and they're able to ship that across air-gapped networks.
F: This is Rob from Red Hat. I just wanted to throw out there that we are thinking about this in our lifecycle manager. It has the idea of folks, at least in their Operator, declaring which related images they've used for what we call the operands, the actual app, so that we can then slurp those into an offline registry and orchestrate that. Yeah.
C: What I did here was run this porter publish command, which actually publishes the invocation image and then any other referenced images to whatever registry you're targeting, and then it rewrites the bundle descriptor itself to reference those things. What Porter does to define what a CNAB looks like is give you a manifest that is YAML, in which you can reference things like those images.
C: There's the OCI layout: the descriptor itself, the index.json that you'd have from the OCI repository, and then the bundle.json itself. So then I can take this thing and move it across any other network that I want to, and then I can republish it. Most of the cases we're working with right now obviously require a registry, so we just bake that into the workflow: I can do a porter publish again, specifying an archive file to read and then a tag I want to push it to.
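Pieced together, the round trip from the demo is roughly the following. Treat it as a sketch: the registry and file names are invented, and exact Porter flag spellings have varied between releases.

```shell
# Connected side: build and publish the thin bundle to an OCI registry.
porter publish

# Package the bundle plus every referenced image into a single archive
# that can be carried across the air gap on physical media.
porter archive mybundle.tgz

# Disconnected side: republish the archive into the registry that lives
# inside the air-gapped network, preserving content digests.
porter publish --archive mybundle.tgz --tag registry.internal/mybundle:v0.1.0
```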
C: And it will rehydrate that and basically run the reverse process: it'll use that Pivotal library, the OCI image relocation, to take all that data and basically just copy it into the OCI repository, preserving the digests as it goes. So you can see it's working purely off of digested references, and the cool thing is that it generates a file called a relocation file that you can pass in at runtime. You don't have to change any of the references in your files, because the digests are the same.
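Conceptually, the relocation file is just a map from each original image reference to where it now lives; something like the following (the shape is illustrative, not the exact Porter format). Note that the digest part of each reference is identical on both sides, which is why anything that pins digests keeps working unchanged:

```json
{
  "example.com/wordpress@sha256:bbbb...": "registry.internal/wordpress@sha256:bbbb...",
  "example.com/mysql@sha256:cccc...": "registry.internal/mysql@sha256:cccc..."
}
```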
B: Oh hey, so my name is Sedona, and I'm here to talk about the Open Application Model. I think in the SIG App Delivery deep dive at KubeCon there was a mention of it from Harry. The Open App Model was released a month before KubeCon, in close collaboration between Microsoft and Alibaba, and really we released it to target developers who are just looking to build apps on Kubernetes or any platform, and have them build these apps without needing to worry about the infrastructure. So, before I get into it:
B: Who am I? I'm a maintainer on one of the implementations of the Open App Model and a core contributor to the spec. I'm a program manager at Microsoft on compute, and, just like 65% of the people at KubeCon (which is kind of crazy), I'm brand new to the ecosystem. It was my first KubeCon, and I'm just getting into the open source projects and how things work. So thanks for letting me come on this call and do a talk; I think it's pretty great that we can just do that.
B: So what is the Open App Model? The goal is that we wanted to create a spec that is completely platform agnostic, meant to help define cloud native apps. The premise is that you don't need to worry about any container infrastructure or platform specifics; you can just focus on your apps and their dependencies.
B: The inspiration for this came from some of the teams at Microsoft and Alibaba who work with some of the largest customers running Kubernetes or anything on Alibaba Cloud. When we created the specification, and when you look at the spec that I'll show you, it doesn't take any opinions on the implementation or the platform. The spec is very intentionally open-ended: we wanted to ensure that there could be multiple implementations of it, not that it's tied to one specific implementation.
B: What are the three biggest things that stand out when people first check out OAM? The first one is that it's application focused, and really the target here is the developer audience looking to build cloud native apps. We believe that for them, today, a little bit too many infrastructure concepts are exposed, and we wanted to bring the focus back to just the apps.
B: We started with that first point, and we talked to a lot of customers about it, and one thing we noticed very quickly was that large enterprises have separate people in separate roles: they have separate developer teams and operations teams. So, despite the spec being completely developer focused, it's split up into different parts, to accommodate having these different teams: one in charge of, you know, operating the app, versus one team that's building the app, and another team in charge of setting up the infrastructure.
B: The final one is kind of future-looking, and it is that we believe that, as Kubernetes starts to be put on more edge devices, whether with things like k3s or other things that might come up, there will be interest from people wanting a common application model for their cloud deployments, their on-prem deployments, and even tiny edge devices.
B: One of the common themes at KubeCon was that a lot of enterprises have hybrid deployments: deployments across clouds, on-premise, and even edge devices, and a common ask was, hey, it would be great if you could have a consistent app model that spans all these different environments. So I'm going to jump next into how it works. I actually took out some slides on an actual component,
B: but if there's time, I would love to come back to it. Cool, so the Open App Model splits the infrastructure and the platform away from all the application concerns. If I'm a developer or architect, I'm responsible for writing my code. Once I write my code, I author something in the spec called a component file, which describes my intent. Then, when it comes to actually applying things like traffic management, canary, and autoscaling:
B: We have something called application configurations, which allows you to take a bunch of components that developers have authored, put them together in an application configuration, and apply application concerns onto those components: what kind of identity you want, how you want to route traffic to it, what your upgrade strategy is. Both of these things together create the Open App Model, and you can take that model and deploy it across implementations of the Open App Model on any infrastructure.
B: So the application configuration has these constructs, and you request what you want; but as the developer and operator, you don't necessarily care about how it's provided to you. That's the job of the infrastructure operator. So you're asking for ingress, and this could be provided by something like, you know, nginx or a load balancer or whatever it is, but how it's provided shouldn't be a concern of the developer or the operator.
B: When we released that, we also released an implementation of the Open App Model on Kubernetes, and there were a few considerations in releasing it. One was that developers wanted to still use the Kubernetes API. Initially, when we were previewing this thing, we had our own CLI tool, but there was heavy demand and pushback from a lot of the advocates: hey, we like the Kubernetes API, so let us apply this OAM YAML that you have created, with these components and this app config, directly to the Kubernetes cluster using kubectl.
B: Alternatively, a lot of mature DevOps setups will have things like Helm charts, or even individual developers will use Helm charts, and we have intentionally made OAM compatible with that kind of tooling. You can even wrap an OAM YAML in a Helm chart and use the Helm CLI to deploy it to a Kubernetes cluster. For deploying to a Kubernetes cluster, we have a project called Rudr, which is an example implementation.
B: So, let's take a look at a component. A component schematic is something that the developer would author, stating their intent for how the component can run: the OS type of the component, and all the environment variables that are overridable. In the case of the UI component, it needs to know the location, the URI, of all of the API components, and finally the port of the container.
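A component schematic for a UI component like that might look roughly as follows (the shape follows the OAM v1alpha1 spec as implemented by Rudr; the names and values are invented):

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: web-ui
spec:
  workloadType: core.oam.dev/v1alpha1.Server
  parameters:
    - name: flights-uri        # overridable at instantiation time
      type: string
      required: true
  containers:
    - name: web-ui
      image: example/web-ui:0.1.0
      env:
        - name: FLIGHT_API_URI
          fromParam: flights-uri
      ports:
        - name: http
          containerPort: 8080
```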
B: If I look at the API components, there are three of them: flights, quakes, and weather. Each of those is very similar in its component schematic; they have the exact same concepts in them. But that's all a component is. Once you're actually done with the components, you can apply them to the Kubernetes cluster, and I have a script that does it, because I didn't want to run the apply commands by hand, so I'm just going to do that.
B: So what does this do on the Kubernetes cluster? If I do a get on components, it just creates these components. It hasn't created any pods for me; it's just made these components available for instantiation. You'll notice that there's a Rudr pod running; it's the one that's able to understand all this, and if I do a get here, you can see there are a bunch of extra CRDs available. So components on their own don't do anything special.

B: The magic kind of happens in the app config, where you create instances of these components. You can stitch together all the components, the UI component, the API components, and the data component, and create instances of them as the application operator, overriding the parameters that were exposed in them with the values you want. So, for example, if I were to deploy this in one particular environment, I might want certain parameters versus another environment.
B: When it comes to instantiating these components, you might want to add things like ingress, and concepts like ingress and scaling are modeled by a concept in the spec called traits. Traits apply to a component and add operational functionality to it. An example of a trait is the manual scaler, which is the most basic example: we specify the replicas we want.
B: A more interesting one is ingress, where you specify: hey, this particular UI component needs to be exposed to the public Internet via ingress, and this is the hostname and path that I want. Again, you'll notice that there's no mention of how the ingress is provided, just that you need ingress and these are the parameters that you need.
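Putting the scaler and ingress traits together, an application configuration instantiating the UI component could look roughly like this (again in the v1alpha1/Rudr style; the hostnames and values are invented):

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: demo-app
spec:
  components:
    - componentName: web-ui
      instanceName: web-ui-prod
      parameterValues:
        - name: flights-uri      # override a parameter the developer exposed
          value: http://flights-api:3000/
      traits:
        - name: manual-scaler    # operational concern: how many replicas
          properties:
            replicaCount: 2
        - name: ingress          # expose to the public internet; how the
          properties:            # ingress is provided is the platform's job
            hostname: demo.example.com
            path: /
            servicePort: 8080
```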
B: So if I go ahead and apply this configuration to the cluster, first it's going to read it, and Rudr, running on the Kubernetes cluster, is going to create pods for each of the components that were in the application configuration, and it'll create them according to the manual scaler information that's provided.
B: It will also create the services for me, and if I look at ingress, it will create the UI ingress with the particular hostname that I wanted. But from the perspective of the developer or the application operator, they never had to interact with, you know, the services or the deployments or the pods directly; they just described their application. And just to kind of top it off, I can show you what it looks like when it works.
B: So this is the UI component that I showed. When I click the refresh button (the pods might still be starting up), it's going to reach out to those APIs, which go ahead and pull data from an external service, and then it'll store all of this information in the local MongoDB component that's running on the cluster itself, and then I can just go check out the map of all the earthquake zones, like I said.
B: On the mailing list you'll see that we have some interest from other folks; for example, Ballerina and Crossplane are interested in some implementations, but that's just an example of one implementation of the spec. So, a quick digest of what's happened, because we're a recent community project: first, we released in October 2019 with an alpha version. This means that it's super open-ended for change.
B: So it's in alpha; we're expecting a lot of feedback from calls like this and from our community calls. In December, probably next week, we're going to do a second release of Rudr, which is going to actually integrate with SMI. We have a community end user actually providing a logging trait that they want to implement with Rudr, and then there are a few validations we wanted to put in Rudr. And the last thing is that it's super early days, so it is the best time to get involved and help us solve the space.
B: I think the general consensus I got from KubeCon was that Kubernetes itself is maturing, and SIGs like this one are kind of going to be the next thing that end users are going to ask for and need as they start thinking about their apps. I can send this slide deck out, but the final slide just has information on how to get involved, so just some links; these are all hyperlinks, so I can share the slides afterwards. But yeah, thanks for letting me hop on and talk about all this.
B: So over the next few months, the biggest thing for us is to get a couple more implementations of it, and right now we're actually working on a roadmap for the spec toward a stable release. Right now the spec is in alpha; we're trying to figure out what it'll take for us to get to a 1.0 release of the specification, and then we're going to go from there. Okay.
E: Next. So, pretty simply, buildpacks take source code as input and then produce an OCI image that has layers that map logically to your application. Next slide. And so, just as a quick example of input and output here: if we have a simple Node.js app that we're trying to build, it's going to produce an OCI image where, as a buildpack author, you actually have a lot more control over the layers that come out of it.
E: So you get to produce what actually ends up in what we call the launch layer, which is the thing that you'll push to your Docker registry, that you can pull down and actually run, as well as having the ability to have these cache layers that can be separate from the launch image. Next slide. So, on a second build, one of the nice things about this is that we can actually pull those cache layers in to speed up the build.
E: Next. And so at the end you get this OCI image, and since there's more structure around the build process and the layout of the application, we can add metadata to that app and, as we've seen, have layers that map more directly, and have higher control over what those layers are that map to your application. Next slide. And so, taking a little bit of a step back, there are lots of different components in this project.
E: We also maintain a Tekton template that uses the lifecycle as well, which you can use with Knative, and that will run through the whole build process. Next slide. So I was talking about how we have the specification and how the lifecycle implements it, and there are really two parts to this API, to the spec itself. There's a contract between the lifecycle and the buildpacks, because it's actually running the buildpack.
E: So there's this buildpack API. And since platforms, like I was talking about, like pack or Tekton, will actually go ahead and do all this running, there needs to be a contract between what the lifecycle expects from a platform and vice versa, so there's this platform API as well. Most platforms will basically take the lifecycle directly (it's a Go binary) and actually implement building and running buildpacks using that, and I believe now...
G: Yeah, let me know if this terminal is a good size or not. As Terrence said, I'm going to demonstrate pack, which is like a reference implementation of what we call a platform, and I'm going to run it against a simple Java application; I think it's a Java Spring Boot, pretty standard Maven app. The first thing people who are using pack are going to do in most cases is run pack build, and the first time you run pack build...
G: So I'm going to set that builder as my default builder, so that the next time I run pack build it'll first download that image, and then, when it runs the buildpack lifecycle, it'll run with those buildpacks. The first thing it does is detect that I'm using a Java application; I didn't have to tell it that or configure it. It'll install the JDK, it installs Maven, and then it runs Maven. The first time it runs Maven...
G: It has to download the Internet: it has to download all of the dependencies it needs to compile the app. But it will cache these, and because the buildpacks are application aware, they know what Maven is, they know what artifacts Maven produces, so it can intelligently cache these things. The last thing it does is determine what command can be used to run this application: it actually inspects the pom file (again, application aware) and detects a Java main command that can be used to start this.
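The demo flow boils down to a few commands (the builder image name is illustrative, and the CLI syntax shown matches pack of that era, so current releases may differ):

```shell
# Pick a builder once; subsequent builds use it by default.
pack set-default-builder cloudfoundry/cnb:bionic

# Detect, build and export an OCI image from the app source in the
# current directory -- no Dockerfile required.
pack build myorg/myapp

# The result is a plain OCI image, runnable with any Docker-compatible tool.
docker run --rm myorg/myapp
```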
G: The last step in execution is exporting an image, and you can see it adds layers for the application, the configuration, the JRE, and some other things. It also creates a set of cache layers, but the final output is my image, which is essentially just an OCI image that I can run with docker run or whatever command. So, because it already set up the entry point for me, I can just do docker run.
G: I can also use docker push or docker inspect, and if we do docker inspect we'll see that there's some metadata in here that's specific to buildpacks. This actually tells us what each of these layers contains, so that we can reason about the image, and that's what gives it some of its performance capabilities.
G: So if I run pack inspect-image, it reads that metadata. At a very high level, it can tell us what stack we're running on (that's essentially the operating system we're running on top of), what version of the buildpack was used to create this image, and some other things. We can also generate a bill of materials: each buildpack will write out what dependencies are included in the image and what the other components or artifacts are.
G: The real value of buildpacks comes when you update your application. So I'm going to update this Java application to Java 11, just by writing a Java system properties file, and then I'm going to run pack build again. This time it's going to load my cache and the layers from my previous image, so it can reuse them, but it'll reinstall the JDK. So now it's going to install Java 11, and we rerun Maven, but it's going to use that cache; it's very fast. This is something that's essentially not possible otherwise.
G: So the pack rebase command allows us to take the layers generated by a buildpack and sort of lift and shift them onto a new image of that operating system, if there's an update. In this case there is no update, because I just ran it a moment ago, but if there was a new update, you would pull that down and then, without actually running a docker build, essentially just by editing JSON, lift and shift those layers onto the new image.
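That last step can be sketched as follows (the image names are invented; pack rebase is the real command, but treat the flags as illustrative of that era's pack CLI):

```shell
# A patched run image (OS layers only) is published for the stack...
docker pull myorg/run-image:patched

# ...then the base layers under an existing app image are swapped without
# rebuilding the app. Only image metadata is rewritten; the app and
# buildpack layers are reused by digest.
pack rebase myorg/myapp --run-image myorg/run-image:patched
```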
E: Thanks, Joe. So yeah, when Joe was demoing rebase: we have, you know, layers in this app, and one of the things you get with Cloud Native Buildpacks, when you build with this OCI image, is that because of how we did the packing spec, we actually know where the separation is between the stack image Joe was showing and the app layers. And so when (next slide) we're patching, and we have this vulnerability in the operating system:

E: You know, a lot of Linux OSes, through LTS and other releases, guarantee ABI compatibility between updates of that operating system, and so we're able to basically produce a new Docker image just for that OS, push that to a registry, and then, using pack rebase, we can basically do that lift and shift and have this new image without running docker build. All we've done is basically replace the layers in that JSON file, and now we can push that image up. You only ever have to do that once across, like, a fleet of, I don't know, 500 apps: you push that OS image up one time, and then the actual rebasing of all your other applications can be relatively fast. Next. And so this is all great, and we have some use cases of this in production: Salesforce at Dreamforce just announced Salesforce Evergreen, which is powered by Cloud Native Buildpacks.
E: At SpringOne there was the announcement of Azure Spring Cloud, which uses kpack underneath, which is another platform that also leverages Cloud Native Buildpacks. Project riff, which is used in the Pivotal Function Service (which is like a FaaS), uses Cloud Native Buildpacks to power how they build the images that get run in that platform. So we definitely have a bunch of different platforms that are looking at and using buildpacks today in production. Next slide. And so Cloud Native Buildpacks...
E: ...hopefully, as you've seen, allow you to basically use the same set of buildpacks, like I was showing with the Java one, for any Java app you bring to it. So you're able to use a single buildpack to power different kinds of applications; you don't have to, unlike Dockerfiles, rewrite a special Dockerfile for each individual application. And with some of the capabilities around caching, and only replacing the layers that need to be changed, it can be fairly fast, as well as the rebase command.
E: As for where buildpacks fit in the SIG: we believe, from talking with Harry, that they fit into the app delivery model that was shown off at KubeCon. Next slide. We believe that the actual pack build, whether you're doing that through Tekton or whatever, fits in the application packaging part, as well as rebase being able to produce new images there. And then, for application definition, the buildpacks actually describe how an application gets built, so they're part of the application definition themselves.
E: Teams will have buildpacks that meet their needs. Next slide. And so we've had a bunch of releases; as a quick status check, the things people are mostly interested in are the lifecycle itself and pack, as the easiest way to get started with buildpacks, as seen through the demo. Next slide. We get asked how often we break things, and so far we've had one major breaking change, but we take backwards compatibility fairly seriously.

E: People could adopt that a little bit on their own schedule, and not be forced, when they update pack, to just have it break everything; so we had that compatibility layer. And then there's also another tool we have to capture some of these changes. Yeah.
A: So sorry, I have to cut in directly here. I think we're on top of the hour, and I want to give Amy a couple of minutes. Can you please also post the slides? We need to defer the follow-up discussion to the mailing list, and if there's more in-depth discussion to be had, we can schedule a future meeting. I just want to quickly give Amy a chance to talk about the logo.
D: Much easier, so hi, sorry everybody, I'm just now shouting at my computer, because it's always a good time. We have an issue open; it's issue number 20, I believe. Please go in and put in your thoughts about the logo. Unless people want to talk about a logo here: I heard on site at KubeCon that people were considering a salmon for App Delivery. I'm happy to take suggestions.
D: Well, for context, the other two SIGs that currently have logos have focused on an animal theme. The raccoon is the SIG Security one; it's very cute. And SIG Storage went for a clam with a pearl inside it. So if you want to keep with the animal theme, you're totally welcome to; you're also perfectly welcome to put in ideas for other things.
D: We basically do refinement from there; the process usually takes about a month or so of back and forth, so plan on having this up and running by January. That also means that by the time we get to Amsterdam, we'll have a logo to show off.
A: Yeah, thanks everyone; I think those were great talks. I also loved the demos; that's good, right, not just having slides but showing live demos. Everyone, please continue your discussions, especially the follow-ups, on the mailing list; ideally have more discussions there, and we'll meet together in two weeks. If you have agenda points that you want to bring up in the next meeting, please let us know. Okay, that's it for today. Thanks, everyone.