Description
Microsoft just announced support for Cloud Native Buildpacks (https://buildpacks.io/), and Pivotal announced the alpha release of Pivotal Build Service, based on Cloud Native Buildpacks (https://content.pivotal.io/blog/pivotal-build-service-now-alpha-assembles-and-updates-containers-in-kubernetes).
The ideal way to get your source code into a container would keep the convenience of a Dockerfile and remove human intervention over the development lifecycle. It would work on Day 1 and Day 2. This is the thinking behind Pivotal Build Service, now in alpha.
A
You might have heard of Cloud Foundry, and your understanding of it might be a snapshot in time of what it actually is. For those of you who looked at Cloud Foundry in the past, suspend what you know about that technology, because we're going to show you some things that are an evolution of it. Buildpacks today are really just one piece of that, and we're going to talk about building Docker containers and running them on Kubernetes.
B
We'll do some demos. The final demo will take a sample app through a CI/CD pipeline, dockerizing it using those buildpacks, and then we'll touch upon Pivotal Build Service, which is the enterprise version of Cloud Native Buildpacks and adds the pieces that make it enterprise-ready for consumption.
B
We started using buildpacks for building containers in Cloud Foundry. Cloud Foundry has been using buildpacks for almost eight years now, with lots of large customers using them in production at scale. For example, Home Depot is running something like thirty thousand containers across their lifecycle stages, using the constructs of buildpacks under the covers. So it has quite a lot of history, and a lot of industry validation of its usage at scale. So what is Cloud Native Buildpacks?
B
It's taking that aspect, which was built into Cloud Foundry, and bringing it out as a standalone project. It was accepted as a CNCF sandbox project, open source under the Cloud Native Computing Foundation, and it's trying to bring that build experience to massive scale.
B
So one important thing we come across: you always have the challenge of developers choosing their images. They're trying to figure out what should go in the image and which base image to use, while the ops side is vetting and curating base images, and you have that friction of, okay, which one do we use?
B
But how do I go get approvals and figure out how to get things done? A lot of times side projects can just go and start pulling in a lot of different images without anyone reviewing them, whereas when you're doing a real project, you have to work with your own ops team and figure out how to do it right. So there's this friction between the dev and ops teams when you're building your own Docker images. We can dissolve that friction using buildpacks.
B
As a dev, you don't have to worry much about what goes into the manifest of the Docker image, or about the rest of the layers that support your application. You care about your application: you worry only about building your application layer and pushing it across, and the rest of the layers are brought in by the ops-approved buildpack.
B
Importantly, one of the biggest factors that comes along with it is the security compliance that ops is pushing towards. The reason you have that friction about using different kinds of base images and different layers is that ops and security are driving those compliance requirements, while dev just wants to go faster. So how do I bring in the compliance and security benefits in such a way that devs don't have to change how they work, and can't just put anything up in there?
B
Ops are happy that they're driving toward standards around it, and it also helps with a lot of day-two operations pieces. We'll touch upon some of the use cases that Cloud Native Buildpacks solves. For example: how do you go about patching your applications that are running in production at scale, with your images already out there?
A
Think about what it takes today to secure the container and the operating system underneath it, and to address vulnerabilities. One of the things with PCI, for example, is that you have to know the current state of exposure of the apps that are running at any given time, and what the plan is for remediating, especially the high-severity vulnerabilities. Primarily, what we're talking about with buildpacks here is how you can address the vulnerabilities in the layers that make up a container image.
The problem is that without a consistent process, when it's owned by each development team in a different way, with their own mechanisms, there's no guarantee that the operations team can report that all apps are compliant. So there's the visibility piece, but there's also the process to get it done.
B
Say you have 500 Node.js apps running, and one day all of your 500 Node apps have vulnerabilities in their Docker files. This is an example: if the Dockerfile authoring is done differently in each one, we might have different OS package layers across applications, and how much time does an update take? You'd have to patch all those different layers through Dockerfiles, rebuilding them and redeploying them.
B
With buildpacks, you have the OS package living in a separate layer. In this example, a Node.js app sits on a base image, the Linux layer, which has a compatibility guarantee with the upper layers. So with the help of buildpacks we are able to associate these layers, the application layer, the Node layers, and the OS packages, through links rather than baking them together.
B
So it's not built statically as one image; it's built up through buildpacks, and when an OS package is vulnerable, you push the newer package to your repository, and that push itself starts updating the manifest files for your applications. Essentially, this happens directly in the Docker registry: it rebases the Docker image onto the new version.
B
Instead of rebuilding all these 500 apps locally and pushing them across into the repository, you get a rebase. It happens right in the Docker registry, and now all your application images are up to date and pointing to the patched version. What this means is that the build process has gone from building 500 images to pretty much pushing one layer, the patched OS package, and sending an update to the manifests.
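The rebase operation described here is exposed directly by the pack CLI; a minimal sketch, where the image name is a placeholder:

```shell
# Swap the OS (run-image) layers underneath an existing app image.
# No app layers are rebuilt, so the application itself is guaranteed unchanged.
pack rebase registry.example.com/team/my-app:latest
```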
B
You're not rebuilding the image at all; all you're doing is changing a reference, and all your applications are updated with the guarantee that the app itself has not changed. If instead you rebuilt 500 applications, locally or through a pipeline, and then pushed them across, you'd have the risk that your application might change; some properties within the application might have changed. Here you're guaranteed you're not rebuilding it; you're just changing the links.
B
So, quickly touching upon the parts of Cloud Native Buildpacks. You have something called the pack CLI that you use to build your Docker images. It comes with a builder, which is, you could say, a setup configuration telling you what different types of buildpacks are supported by that builder. In this example we have Java, Ruby, Python, Node, PHP, and all these buildpacks are available as part of the builder. So you can configure what you want in your builder.
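As a sketch, building an image with the pack CLI against a builder looks like this (the builder image name is illustrative, not one used in the talk):

```shell
# Build an OCI image from local source; the builder supplies the
# ordered set of buildpacks (Java, Ruby, Node, PHP, ...) to try.
pack build my-app:latest \
  --path ./my-app \
  --builder cnbs/sample-builder:bionic
```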
B
Which buildpack should I be using for this application? That's the detect phase of the buildpack lifecycle. After it has detected, it analyzes whether there is any cache already there that it can use and what other things it can bring in; that analysis is the second step. The third step is where it actually builds the Docker image for you, and the fourth step is where it exports that across to your Docker registry. That's it.
B
Some customers have something like a 60 kbps network link for an entire edge data center. Of that 60 kbps, you can use only maybe 20 or 30 K, so your data transfer speed matters a lot. It gets to the point where every kbps counts, so you make sure that you're doing the minimum data transfer, and it's using the concept of rebasing to achieve that.
B
What I'm trying to showcase over here is: if I do this the regular way and then go and update my application (think of it, I'm changing only the copied sample code), what will happen if I have to rebuild this application? It's going to go through all the pull and extraction steps again from the second step onward, and the run and npm install steps get redone even though they didn't change.
B
There are several scenarios in Dockerfiles where making one change cascades into additional layers changing along with it. In the case of buildpacks, take an example: if I go and do this the first time around, the buildpack is going to build all the layers, but once it has built them once, it will reuse them.
B
It will change only the layer that changed; it's not going to touch any of the other cascading layers, it will reuse all of them. That's one of the benefits of buildpacks, which is really important when you get to scale, when you get to something like 500 containers or 30,000 containers. Building those additional layers matters from a time perspective and from a data transfer perspective.
B
So when you do a pack build, you have different options. One of the options is that you can publish: if I just do pack build, it will build the image locally, and if I pass the publish option, it will publish directly to the registry as part of the build step.
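A sketch of the two modes (image and builder names are placeholders):

```shell
# Build into the local Docker daemon:
pack build my-app:latest --builder cnbs/sample-builder:bionic

# Build and push straight to the registry instead, as part of the build step:
pack build registry.example.com/team/my-app:latest \
  --builder cnbs/sample-builder:bionic \
  --publish
```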
B
And that's all I do. All I needed was the pack CLI, and once I had it, I was able to run pack build and give it the image name. From there it starts. As I was saying, it goes to the detection phase: at this point it's trying to detect the best buildpack to use for this application, and after that it figured out, okay, it's a Node app, so I'm going to use the NPM buildpack.
B
Then it checks whether there is an existing image it can reuse anything from. In this case it did not find anything, because I created a brand new image. It went through the build process, the npm install process, and then it goes through the export process, exports all the layers, and caches them. In this scenario, if I go and look at my Docker images, I have the pack node image here. Now, if I go and change this and say...
B
I'll show you the second build: the detector found the NPM buildpack again, and it began restoring the cached layers for node and node_modules, analyzing what it can reuse, using the cache layers, and rewriting the metadata layer. Then, if I go back here, it says it's reusing the layers that were already there. So where's the app?
B
This is the app layer. This is the only layer it recreated and exported; it reused all the other layers. And this is just a simple example. With a Dockerfile, a change will a lot of times cascade through a lot of layers, but buildpacks make it efficient: it detects only the layer that changed, and changes and updates only those layers.
B
It also gets to the point where, for example, when you're writing your Dockerfiles, every developer has their own style; they can write a different set of script commands in different ways. Buildpacks drive a lot of efficiency in those things as well.
B
One developer might write multi-line Docker RUN steps in different ways; buildpacks optimize that, because it's driven by the system, so there's a lot of optimization that comes with buildpacks versus doing it manually through Dockerfiles. You can go and analyze and optimize your own Dockerfiles, but think of it: when you get to something like thirty thousand applications, how many of them will optimize all those different things?
B
So what buildpacks are doing is taking the entire Dockerfile authoring away from developers and saying: you build your application layer, compile and build your application, and let buildpacks build the proper image. It's optimizing the rest of the layers along with that application layer, through the buildpacks project.
A
Okay, you might know Jenkins, or you might know TFS if you're a Windows shop; there are various CI tools out there, GoCD, Travis, whatever. Pivotal has a technology called Concourse. It's an open-source CI/CD technology, and I'm going to use it in my demo; it doesn't have to be what you use to leverage the buildpacks. This pipeline is going to go through a flow of building and testing; in this case I'm using a Spring Boot app. I'm also going to run a vulnerability scan using a partner technology called Snyk, then I'm going to dockerize that Spring Boot app using the Cloud Native Buildpacks, and then I'm going to deploy it to both AKS and PKS. AKS is Azure Kubernetes Service; PKS is Pivotal Container Service. To us it doesn't matter: I have Kubernetes here and Kubernetes there, and at the end of the day I want to take this Docker image, pass it through the pipeline, and run it in a variety of places.
A
If anybody is interested in the full link, I can email it to you, but in this repo you'll see things like the Java source and the definition of the pipeline itself; the pipeline YAML is what I'm using here. When it executes there are various tasks and, as you can imagine from the diagram I showed you earlier, these are the tasks that execute the vulnerability scan and these are the tasks that execute the build.
A
A Concourse pipeline is a YAML document, and if I go there and open that up, it gives you the YAML view of what's actually happening here. The way you use Concourse is you define resources. Resources would be things like a git repo, or a Kubernetes destination, or maybe an S3 bucket to grab things from; resources are inputs and outputs to pipelines, think of it that way. Then the pipeline itself is broken down into jobs.
A
Within jobs you have individual tasks. In the case of build-and-test, I've got a task called test, and it references that task definition and needs the app's repo as input. That's how you define pipelines in Concourse, pretty simple. It's all YAML when you define those pipelines; take a look at what that rendering is.
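A minimal sketch of what that YAML looks like, with hypothetical names for the repo and task files:

```yaml
resources:
- name: app-repo            # input: the git repo holding the Java source
  type: git
  source:
    uri: https://github.com/example/app.git
jobs:
- name: build-and-test
  plan:
  - get: app-repo
    trigger: true           # run the job whenever a new commit lands
  - task: test              # references a task definition kept in the repo
    file: app-repo/ci/test.yml
```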
A
I'm going to set my pipeline: I pass it the definition of the pipeline and then my secrets. Secrets are things like the Docker username and password, the token for Snyk, and the kubeconfig that allows me to push out to this Kubernetes instance. Those I'm storing in a params file, and I'm not checking that into my repo, so you'll see the definition for the pipeline in the repo, but you won't see my secrets there. Then I set the pipeline.
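With the fly CLI, that step looks roughly like this (the target and file names are assumptions):

```shell
# Upload the pipeline; secrets come from a local params file that is
# never committed to the repo.
fly -t my-concourse set-pipeline \
  -p app-pipeline \
  -c pipeline.yml \
  -l secrets.yml

# Pipelines deploy paused; unpause so it starts polling its resources.
fly -t my-concourse unpause-pipeline -p app-pipeline
```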
A
It says it's out there, and it's paused; when you first deploy a pipeline to Concourse, it starts out paused. So if I want to kick this off, I click play, and then I kick off my first instance of the pipeline, and that's going to take a little bit of time. I'm going to go through a scenario where I want to address a vulnerability.
A
Snyk is going to call one out for me, and I'm going to rebuild my Docker image and have it deployed. Concourse is asynchronous: it works by polling these resources and acting when it sees a change. The initial run of the pipeline takes a little time, as it's queuing up work in a containerized environment behind the scenes to execute the steps in this pipeline.
A
You'll see this pulsing going on. If I click into build, this is doing the Maven package for my Spring Boot app, so you can see all the Maven downloads. I haven't optimized this to cache the .m2 directory, which is how you manage dependencies locally on your laptop when you're using Maven; I would want to cache that, and I think I can optimize this pipeline to do it. At the end of this build-and-test, it's going to build my Spring Boot app and run my unit tests in this task.
A
At the same time, in parallel, I ran a scan on the codebase. As I explained earlier, Build Service and Pivotal are going to take care of patching CVE vulnerabilities up to your app; Snyk is an example of how you can address the app's own dependencies. The output of Snyk takes me to my account. Snyk is a SaaS-based solution, but it can be run air-gapped on-prem; they're a great partner of Pivotal. I go here, I see this particular app, and I have some high-severity vulnerabilities.
A
I see a Jackson CVE; they're recommending that I update to 2.9.9.3 to clear this vulnerability. My pipeline is running, and that's great, I can let it go. While it's going through the buildpack step I'll review the output of the Cloud Native Buildpacks running as a task in Concourse, but in the meantime I want to go and fix that vulnerability, because I don't want to get a call from the security team the next time somebody finds a way into this app.
A
Somewhere in here I have jackson-databind, and I'm taking the default version. I'm just going to go ahead and override that in Maven. In Java this is how you do it; there are different ways to do this in other technologies, and NuGet has something similar in the .NET world. That should force my dependency to the right version. I'm going to go ahead and save that, and once I save, I go to my changes here and commit.
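A sketch of that override in the pom, pinning jackson-databind to the version Snyk recommended:

```xml
<!-- Force the patched release instead of the Spring Boot default. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.9.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```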
A
I go ahead and push. Because of the way I have the pipeline set up, once that gets pushed out to my GitHub repo, it's going to cause another round of build, test, and vulnerability scan to kick off. While we're waiting for that to happen, we can take a look, similar to what my co-presenter showed you with pack on his laptop; you'll see similar output coming from the tasks that just completed in Concourse itself. Once again, you see I'm passing a certain set of buildpacks.
A
It detects whether it should use any of those buildpacks, and it also looks for a cache of what was built previously. Concourse has a way of specifying a cache directory so that every time this pipeline runs, it can remount that directory and reference it for this build process. So the next time I run this, it will take the layers that were built the first time and reuse them, just like what you saw.
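In a Concourse task definition that is the caches key; a minimal fragment with an assumed cache path and script name:

```yaml
platform: linux
caches:
- path: layers-cache        # remounted on every run of this task
run:
  path: ci/build-image.sh   # the script that invokes the buildpack build
```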
A
Just like on my co-presenter's laptop: it detected that I need OpenJDK and the system buildpack, and now it's compiling the application. In this particular setup with the Cloud Native Buildpacks I'm building directly from the repo, but I could set this up so that the buildpacks use the jar file that was created as a result of my test step.
A
Build success, great. It's removing the source code; I don't want that in my container. Then it's going to add another layer from the JVM application buildpack, and, oh by the way, this is a Spring Boot app, so it's pulling in that dependency as a specific layer. It detected that, so there's special treatment here for a Spring Boot app: it saw that I was referring to a particular spring-boot parent version in my pom and said, all right, I'm going to treat that as a separate layer altogether.
A
So if your pom is changing but the Spring Boot version doesn't change, it's not going to mess with that layer; it's not going to push it again. Ultimately, my image was created and pushed out to Microsoft's Azure Container Registry. If I go to my container registry, I can see that those things were published out there.
A
I could be looking at a stale view right now, so let me refresh and see if I'm still logged in. There's the tag I use, and the last tag, 728, yeah, that's the one that was just updated by my pipeline run. So that's great: I now have an image built, it's up in the registry, and in the meantime it's probably already deployed out to both my Kubernetes clusters. In this case I used what's known as a resource definition pointing at AKS, and there are different approaches.
A
I need to know the IP address of the app's service. Well, before I go there: how did it know to create that service in Kubernetes? By the way, I'm talking about Kubernetes concepts at this point. There are some artifacts that are needed to deploy a Docker image as an app to Kubernetes. If you're not familiar with Kubernetes, there are things like the deployment itself, a YAML description of the app, and there is networking to give you ingress to your application over the network.
A
That's called a service. I define the service and a deployment YAML for this particular app, and the deployment references that registry entry. So in this repo you'll see a k8s folder, and you'll see the deployment YAML; ultimately, that's where it's picking up the image that I've been building here from the process. I've also created a service that says: give me an external IP address and allow me ingress to this application on its port. And now I know what the IP address is from Azure.
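A sketch of such a pair of manifests (names, registry, and ports are illustrative, not the ones from the demo repo):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myregistry.azurecr.io/my-app:latest  # pushed by the pipeline
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer        # request an external IP for ingress
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```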
A
Oh yes, it did go away. So now I have a different high severity listed up here. I looked into this one, the H2 database; if anybody does Java development, there is no fix for this one, but the good news is H2 is an in-memory database that's only used for testing. So you get the idea: I've now made my security team a little bit happier, because I've eliminated a high-severity vulnerability using the process my co-presenter walked through.
A
As I said, in Concourse you can say which directories or folders within the container that ran this task you want to keep for the next time you run it. That's how I'm letting the cloud native builder know that there are layers, and a cache of those layers that were built previously, so it can reference and restore them in the process. So if I go back to the build, this would be my second instance. Remember the first instance said something about there being nothing to restore; and here, nothing to restore. The demo gods have failed me.
A
This should say it restored the cache. I'll have to look into why it's not doing that now; for some reason it's not picking that cache back up, but that's the intent, and it did work the last time I did this. So anyway, any questions there? The idea was to take what my co-presenter was doing on his laptop, put it in a pipeline, and also deploy the app once it's built, with a security scan of course. This is more of what you'd actually run day to day.
A
Only the app layer itself needs to be updated in the registry, and that's what the Cloud Native Buildpacks do once everything is in layers. The app layer is where the Spring Boot app itself sits, and the dependency on the Spring Boot parent is also treated as a separate layer; they did take that one part of the pom and separate it out as a layer, but everything else is considered the app layer.
B
Build Service is the commercial, enterprise version of it. It's not just the open-source buildpacks and lifecycle; along with those, we add things like CRDs, custom resource definitions, and configuration which lives within Kubernetes.
B
Kubernetes by default has things like deployments, services, and pods, so Kubernetes already understands those constructs. If I want to build my own custom construct, and I want Kubernetes to understand that contract, I build a CRD: I have my custom resource definition and the logic of what it does, and we push that CRD to Kubernetes. Now, when I submit something whose type matches the CRD I pushed, Kubernetes will understand it and run with that logic.
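kpack, the open-source core of Build Service, works exactly this way; a hedged sketch of its Image resource (names and URLs are placeholders, and the API version may differ by release):

```yaml
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: my-app
spec:
  tag: registry.example.com/team/my-app   # where built images are pushed
  serviceAccountName: build-sa            # holds registry/git credentials
  builder:
    kind: ClusterBuilder
    name: default-builder
  source:
    git:
      url: https://github.com/example/my-app.git
      revision: main
```

Once this resource is applied, the controller rebuilds and pushes the image whenever the source or the underlying buildpacks change.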
B
Then, for the development teams, you hand out this configuration with RBAC control, and each team is able to operate within its own scope. So you can say: team one has a builder for Node and Java; team two is actually a Python team, so I have a builder for Python associated with team two. So you can build these RBAC-scoped builders.
B
The image configurations are driven by the developers, and putting those together, Build Service keeps streams of images up to date, which is pretty much what pack was doing, but in this scenario it's helping both sides. Now you have that separation of dev and ops duties, with Build Service as the bridge. So I'm going to walk through this demo.
B
This is the image tag which I want to push. Once Cody applies that information to Build Service, Build Service will take it, start building the image, and push it to the Docker registry. Now, every time Cody commits code and pushes it to the repository, Build Service will pick it up.
B
It rebuilds the images of the applications, and once that's done, Cody can go and reapply them. So Cody did not have to go and rebuild all those images; it happened just because the operator updated a dependency. You still need to go through the pipeline, you still need to go through the tests, but it improves the lifecycle time of building a whole lot, by rebuilding only the changed layers.
B
We open-sourced kpack, which is the core part of Build Service; the CRDs are also open source, and you can actually go and use them. So whatever I did through the enterprise version, you can do with the open-source version. The pack CLI was just doing the local way of building things, and the pipeline my co-presenter showed used Concourse.
B
kpack takes the open-source constructs of pack and adds the CRDs; those custom resource definitions are also open source. So the things I did, where you apply an image configuration, you can do through kpack, and it will build the image the same way.
B
With the open-source version of what Build Service is doing, you can build your own RBAC around it and manage who has access to which builders, since it gives you the builder CRD and the image CRD. You can have the definition of both which buildpacks are associated with my deployment and which image configuration I need to apply. You can find out more about kpack online.
B
Build Service is the enterprise version: it adds the RBAC piece, and along with that RBAC capability there's the custom resource controller and the CRDs, which do the building and image creation, and the fourth piece is the set of buildpacks which we create at Pivotal; those are also open source. So you can use that model as well.