From YouTube: Peeling Back the Layers of Auto DevOps
Description
In this session, Julie, Enterprise Solutions Architect, presents the mechanics of how the auto build and auto deploy jobs work.
Key Points:
1. Troubleshooting deployment issues with demos
2. Guiding customers on appropriate use of Auto DevOps for their own applications
3. What topics to dig into next to further peel back the layers of Auto DevOps
Awesome. I am going to share my screen and, as I do that, a couple of housekeeping items. John, you've been great at organizing this and getting us working in the GitLab way. There is a document in the agenda that you can take notes in and also put questions in as we go. Let's jump into it. I'm going to try really, really hard to obey our time box today. That's my goal: 50 minutes and we're out of here, with time for discussion throughout as well.
So, let's see if that happens or not. All right. As I start to tackle things like this, presentations or projects that I'm working on, I like to write them in terms of a user story: what the heck am I trying to accomplish? What's the value in even putting all this effort into this particular topic? For this particular topic, I have to be honest.
I started digging into Auto DevOps because I was just frustrated as all heck that I was getting review app failures constantly. My customers were getting them when they were doing POVs, I was getting them, and I thought: well, I have to understand how Auto DevOps really works in order to understand how to debug these things.
So, as I wrote my user story for this particular session: I want to learn the mechanics of how auto build and auto deploy work so that I can troubleshoot deployment issues when they come up in my demo projects with customers and prospects, and also guide our customers on what the appropriate use of Auto DevOps is and what it is not, especially around auto build. It's not just magic: you don't just throw in any random code and get something deployed into Kubernetes.
It's not as simple as that. So what are the guardrails that we can provide for customers? And then, what are the next topics that we really need to dig into (or I really need to dig into, anyway) to really understand how all this works and comes together?
As I start to dig in, I think about this as peeling back the layers, a lot like peeling back the layers of an onion, because there are a lot of them and sometimes they make you cry. That's how it goes as we learn all the technical details of how this works. I have to say I'm just so impressed by your engineering team every time I look at Auto DevOps and see all of the amazing logic that's built into this prescriptive pipeline.
It's really cool, but there's just a lot to understand to really get how it all works, and so I feel like I always have to have a disclaimer when I start diving into topics like this. In order to really understand it all, you need to be an expert in a lot of different areas: Docker, Kubernetes, Heroku, Helm, and other related areas. I'm not an expert in any of those, so my goal is to present this stuff in a way...
...that reflects me learning as a non-expert, and my goal for this particular session is to give you kind of the 200 level. If you think about college courses: the 100-level marketing stuff is great; let's get to the next level. Let's not get to the graduate level yet, because if we start to talk about the really technical details without understanding some of the higher-level details first, it's going to be challenging.
So I'm coming at this from a new perspective, and I am shooting for 80 to 85 percent accuracy and completeness in the content that I provide, and I would love to drive additional conversation.
I figure if everyone on this call learns one thing today (and I, no doubt, will learn one thing), then it is a huge success and a valuable use of our 50 minutes of time.
So, as I started to put this presentation together and put things into slides, I realized that there's so much content, so much stuff to learn. Last night, at about 7:30, I thought: I have to have 30 minutes' worth of talking points, and I have like 60 slides. I started hiding a whole bunch of slides, and I said: okay.
I've got to cut this back a little bit; there's always a chance to do some follow-up later on. So we're going to pivot just a little for today. I just want to talk about my process for investigating Auto DevOps, and what I've learned about Auto DevOps in general along the way, because I think that will help everybody, and then: how does auto build really work?
My biggest learning, which is so obvious once you know it (there's so much about Auto DevOps that is obvious once you know it), is that auto deploy is going to fail in a lot of cases because what you built doesn't actually run appropriately. So let's dig into auto build first, let's get beyond thinking that it's magic, and talk about how we build apps in a container appropriately so that we can run them with the auto deploy process.
So, first of all, what do we require for Auto DevOps? There is a documentation page that has some of this.
I'm going to be overly transparent in learning about this stuff: reading through the documentation, I kind of felt how I feel reading through the handbook at times. There's so much stuff, but it's not all organized in a way that's necessarily the easiest to follow. I'd rather have more documentation than less, though; it's not a complaint, just an expression of being a little bit overwhelmed at the beginning while trying to figure out what we really need for Auto DevOps. So: we do need a Kubernetes integration.
If you set up an ingress-based domain as part of the Kubernetes integration, you do not have to set the variable; but you do have to set a base domain that is used in the auto deploy job, either through that integration or through a variable, and you need an NGINX ingress controller set up. Karen and I were talking at the beginning of this call about how we are removing the ability to install those GitLab managed apps through the UI; you have to do it through a CI job starting in, I think, GitLab 14.
So, just a heads up on that aspect of things. From a Docker perspective, every single Auto DevOps job runs in a Docker container that has everything in it that it needs to process that job, except for the source code that it copies in after it does the git checkout. Any analyzers, any scripts that are needed: it's all built into the image that the container gets spun up from, and it runs everything in there. So you need a Docker executor for your GitLab runner.
You can use Docker Machine executors or Kubernetes executors as well, but you need Docker-capable executors to run Auto DevOps jobs. The auto build job (and I think a couple of the other Auto DevOps jobs as well, but definitely the auto build job) still needs Docker-in-Docker, and that means a couple of things. First of all, you need to have privileged mode enabled in your runners, which we know from some customers is somewhat sensitive. And you have to specify the docker-in-docker service if you're going to write your own build job. If you are using auto build, you don't have to specify it; it's part of the job, and we use the docker:19.03.12-dind service specified in the .gitlab-ci.yml file.
Another thing that I learned just this past week: OpenShift, all of a sudden, is becoming a hot topic, and there's the new OpenShift GitLab Runner operator to be able to very easily spin up runners within OpenShift and use them. They do not support Docker-in-Docker, so you cannot use them for Auto DevOps jobs. Just be aware of that if you're guiding your customers. Another thing I just discovered this week (again, things are obvious once you know them, but not obvious until you discover them):
Sometimes that seems to be my theme at GitLab, generally speaking. We assume that any application you build runs on port 5000, and if your application is not going to run on port 5000, you need to specify that port in an auto-deploy-values.yaml file in a .gitlab directory. Everything about auto build and auto deploy will assume that your app is going to be available to run at 5000 by default. And then a somewhat simpler, more obvious thing:
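The port override she describes is a values file consumed by the auto-deploy Helm chart. A minimal sketch, assuming the key names used by the auto-deploy-app chart (verify against the chart version your GitLab release ships):

```yaml
# .gitlab/auto-deploy-values.yaml
service:
  internalPort: 8080   # the port your container actually listens on
  externalPort: 8080
```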
Your repo does need to contain either a Dockerfile, so we can do a docker build and docker push to build the application, or it needs to contain source code that can be built with the appropriate buildpacks. We're going to talk more about Herokuish, Heroku buildpacks, and Cloud Native Buildpacks in just a few minutes.
All right, so a lot of us know this: when you start to talk about Auto DevOps with your prospects, you open up a code repository and you add a .gitlab-ci.yml file, and there are templates available for you built right into GitLab. So you can just choose the Auto DevOps template, and it shows you the Auto DevOps format that has each different job template included in it, and it also has some workflow rules built into the CI file.
Now, I started to dig into this because it's pretty obvious what it's doing: it's checking to make sure that your repository contains a type of app that can actually be built by a Heroku buildpack or a Dockerfile. So at the high level, that makes sense. But then, when I started to look at the very first rule with this if condition, I was like: what is that? AUTO_DEVOPS_EXPLICITLY_ENABLED == "1"?
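The rule being described looks roughly like this in the Auto DevOps template (a paraphrased sketch, not the exact template; check the Auto-DevOps.gitlab-ci.yml for your GitLab version for the real conditions):

```yaml
workflow:
  rules:
    # If the group/instance Auto DevOps switch was flipped, always run.
    - if: '$AUTO_DEVOPS_EXPLICITLY_ENABLED == "1"'
    # Otherwise only run when the repo looks buildable.
    - exists:
        - Dockerfile
```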
Just by checking a box, flipping a switch, Auto DevOps is on for everyone in that group, or across the instance if you set it at the instance level. When you do that, what I discovered is that this AUTO_DEVOPS_EXPLICITLY_ENABLED environment variable gets set to 1, and what that means is that now, every time someone does something that would start a pipeline, we're going to run an Auto DevOps pipeline.
So every time somebody in a project where this is set does a git tag or does a git push of a commit, we're going to run Auto DevOps.
I was kind of surprised; I'm still kind of surprised. I haven't had a chance to follow up with the product team, but if I enable Auto DevOps, let's say at the group level instead of the project level, or at the instance level, the impact is basically that any time somebody creates a new repository, just with a README file in it, or any kind of crud in it...
...where you really don't want a CI file, it's going to start running pipelines: it's going to run Auto DevOps pipelines, and it's going to fail those pipelines, because you don't have anything buildable in your project. And it's not even just one or two jobs that start, because remember, with Auto DevOps we're so clever: we start the build job, the test job, a code quality job (hey, code quality passes when you only have a README in your repo!), and we also start the cleanup job, all when the pipeline starts. So, effectively:
I would recommend for most customers that they do not turn on Auto DevOps for an entire group or for an entire instance. If you have people who are just creating new repos that don't have anything in them yet, or are creating repos that they don't want to use CI for, you're going to start using a lot of runner resources, using up your pipeline minutes (if you're on gitlab.com, your CI minutes) on things that really shouldn't be running CI. So just keep that in mind.
I thought it was interesting, and what's even more interesting is that even if I then put in a .gitlab-ci.yml file (the Auto DevOps template that has these workflow rules built in), it's still going to run a pipeline every time I change the README file, as an example, because this AUTO_DEVOPS_EXPLICITLY_ENABLED value is set to 1; that switch is flipped. So I'm just running a lot of jobs that don't really need to be run in this particular project.
So hopefully that makes sense to everybody and is something useful to think about. The other thing that's interesting is that if you are...
As I said before, it's going to run the Auto DevOps jobs, so you only find out that your repo doesn't fit a specific buildable pattern once the build job fails; the buildpacks will tell you it's something that's not buildable. So just keep that in mind. I do like using this file explicitly, because it will check that you have the right thing in your repository, and Heroku buildpacks basically work the same way.
Okay, so the next thing we tend to do is say: hey look, each of these jobs has a template associated with it, and so Auto DevOps is this prescriptive set of stages, and jobs within each stage, to allow you to fully test and do security scans on your code. We can take a look at what each of these jobs actually does, and of course you have the ability to override these jobs and customize them to suit your needs.
If you want to (which is great), we provide the source code link, so we can take a look at the YAML file for every job and see what it's actually doing.
But we have to be a little bit careful when we do that, because these links point to the latest version of the CI job template in the master branch.
As your admin upgrades your GitLab instance to the latest version: if you're using 13.9, you get the 13.9 version of the .gitlab-ci.yml; once you upgrade to 13.10, you're still on the 13.9 version of the overall CI file, but you're now using the 13.10 version of all of the included job files. So each job template is now using the latest. And that's why, when I first started working at GitLab (I think over the summer or the fall)...
...we made a bunch of changes to how our SAST jobs worked. We separated out secrets detection from the main SAST job, and you start to see all that stuff in real time, because we are using the latest version of each of those templates when we include the file and say: hey, go look at that template and pull that template in.
That was something that kind of blew my mind when I finally pieced this all together, like, yesterday. I think I've only been here for 11 months; I learn something new every day. And then the other thing, especially when you're working with customers and helping them in their own POVs: make sure you know what version of GitLab they're on, because each individual template file may change slightly from release to release of GitLab.
You want to be looking at the appropriate version when you're looking at the source. So don't go to the latest on master; go choose the appropriate tag. We do use semantic versioning, and we tag all of these template repos with every release of GitLab, so you can go and find the specific version of each of these files that's used.
Now, in each of these GitLab files, the next thing we have to keep track of is that each job uses a particular Docker image that, again, has everything built into it that we need for that particular job to run: whether it's a scanner, a custom build script that GitLab has created, whatever the case may be. Make sure you're looking at the right version of the image as well. I kept trying to chase through everything, and I was like: well...
It says over here in this job that we should be doing this thing, and I don't see that thing in the output. Well, it's because some of these template jobs are pinned to older versions of GitLab images, not the current version that GitLab has produced. Okay, so how do we see how we even created this image to begin with?
Right, so here we're in the gitlab-org/cluster-integration/auto-deploy-image project on gitlab.com. I now go and choose version 1.0.7, because that's the version of the image that my current template for auto deploy is using, and now I can see how we're actually building that image and what gets put into it.
You'll notice in that project that there's a Dockerfile that specifies how that image gets built, and a lot of times for these different Auto DevOps jobs you'll see these COPY commands. That's saying: we're going to copy some stuff that's in our project repository into this Docker image, so we can use it for our particular job. So in this particular case of auto deploy, we're copying a src directory and we're copying an assets directory.
So if we go back and look at what the project repository looks like, we see the src/bin directory that gets copied in, and we see assets/auto-deploy-app that gets copied into the image, and these things get used as part of the processing of the auto deploy job. So if you really want to see what the job is doing, you have to look at what's in those directories to understand what the particular job is doing.
And then, if we go back and look at our original template file (the CI YAML file, in this case for auto deploy), it starts to all come together and make sense. The script block for this particular job says: do these eight things, or whatever the case may be. In this case: auto-deploy check_kube_domain, auto-deploy blah blah blah. For the longest time I was like: what does auto-deploy mean? Well, that's the script.
It's in the src/bin directory: there's an auto-deploy script that has all these different functions built into it. So check_kube_domain: great, now I can go and walk through it line by line and see exactly what these things are doing. If you're not really familiar with shell scripting: we like to use these conditional test flags a lot, -z, -d, and so on.
It's checking to see (I have them in the notes somewhere and I can't remember them all the time): is it a null value? I think that's -z; -n means a non-empty value, so a string with any length; and we check to see if a file exists or not, or if that file is a directory. There is a whole reference; I have a link at the end.
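The flags she lists can be tried out in any POSIX shell. A small self-contained demo (the file and directory here are scratch paths the script creates itself):

```shell
#!/bin/sh
# -z: true when the string is empty; -n: true when it is non-empty.
empty=""
name="auto-deploy"
[ -z "$empty" ] && echo "-z: empty string"
[ -n "$name" ] && echo "-n: non-empty string"

# -f: true when the path exists and is a regular file.
# -d: true when the path exists and is a directory.
tmpdir=$(mktemp -d)
touch "$tmpdir/values.yaml"
[ -f "$tmpdir/values.yaml" ] && echo "-f: regular file"
[ -d "$tmpdir" ] && echo "-d: directory"
rm -rf "$tmpdir"
```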
I think it tells you all the test flags that get used in these shell scripts. The good news is that shell scripts are pretty easy to read as long as you know what those flags mean. So you can just walk through each of these scripts, and most of these jobs have a script that gets built into the Docker image that does most of the steps of the actual CI script block itself.
So, as all of you probably already know, we use Heroku buildpacks with Herokuish to automatically build your application code and deploy it to the cloud. Heroku is a platform as a service. They said: hey, there are a lot of developers out there who are developing cool, simple web apps, but we want to abstract away all the details of how we actually deploy these things to the cloud.
We can offer a service that will handle all of that in an automated fashion, so they can just push to a git remote, and we can build everything, package it up, and deploy it in the cloud for them, without them having to worry about how to do this stuff on their own. It's going to simplify application development and allow more teams to get good web apps out there fast.
So that's great. Herokuish is just an emulator that says: yeah, we're going to use those Heroku buildpacks to basically build your application in a Docker container, and you deploy it, so everything's containerized. As the industry started moving to cloud native apps, a lot of people weren't really familiar with them and weren't really sure how to dockerize or containerize their applications. So again, for simple apps, this is a way to not make those developers and teams worry about all those details, but get containerized apps deployed very quickly.
Now, at some point Pivotal Cloud Foundry came around and said: well, Heroku's doing this; we can do this too. We can create our own version of these Heroku buildpacks that look for different types of applications and automatically build and deploy them. And then other platform-as-a-service providers, like Google Cloud and others, came along too and started to say: hey, we can all use these things.
It's a great idea, but at some point the standard Heroku buildpacks and the Pivotal version of the Heroku buildpacks kind of got out of sync, and there were two different sets of buildpacks floating around. The good news is that Pivotal and Heroku decided that that was not the smartest thing to do, that they should be in partnership, and that they should offer buildpacks that can be consumed in an open source fashion by the industry as a whole, working together.
So I think about six months ago or so, GitLab started supporting Cloud Native Buildpacks. There is an environment variable that you can set, AUTO_DEVOPS_BUILD_IMAGE_CNB_ENABLED, to use Cloud Native Buildpacks instead of Herokuish buildpacks. Digging into it, my understanding (and this is a simplified version of what Cloud Native Buildpacks are compared to Heroku, but essentially, for our purposes) is that the buildpacks we use by default with Cloud Native Buildpacks...
...are just a wrapper around the Heroku buildpacks, using the new format, the new syntax, that Cloud Native Buildpacks provide. So it's all the same concepts, and you don't really have to worry about whether you're using Cloud Native Buildpacks or not. I'm sure there are some reasons people will want to start using the Cloud Native format instead of Heroku's.
But don't worry too much about that for now, I would say, and Heroku buildpacks are still the default for GitLab. With buildpacks it's interesting: I think back to the demo I did during my interview process and I cringe now, because I didn't understand what any of these things meant as I was explaining them. So I've come a long way, probably, hopefully. But in any case: it's not magic.
You can't just take any application code, even if it's code that you can build locally on your own laptop; that's not necessarily sufficient to be able to deploy it in a container via Heroku buildpacks. Okay.
So the good news, though, is that for each type of buildpack (so, for each type of application you want to build) there is documentation about what Heroku expects for that particular buildpack. So we can go and look: if we want to build a Java Maven app, what does the Heroku buildpack say for Java? We need a pom.xml file in the root directory.
Okay, if we don't have that, we're not going to be able to build (some other pom formats are supported). But also: if we're going to build this app by default with Heroku buildpacks, we're going to use Maven 3.6.2 and OpenJDK 8, and if you want to use different versions of those things, you need to specify them.
The good news here is that if we go to gliderlabs/herokuish on GitHub, we can see which specific Heroku buildpacks are being used. The important thing to make sure everybody understands here is that Herokuish doesn't have its own buildpacks; it just uses the standard Heroku buildpacks. But if you want to see which versions it uses, we can go to herokuish, go to the buildpacks directory, and then say: okay, for Java...
As an example, let's go and take a look at that, and there's a buildpack URL; it's just a single line which points back to the buildpack from Heroku that's actually being used. And in Heroku, when we go to that URL, we have all the documentation about what's expected, and how we build and deploy apps using this buildpack, in this case for Java Maven. So if there's any question about whether or why our app isn't building successfully, or why we can't deploy it, we can go and look at that.
Okay. So that's important to know, because sometimes the information there will help you determine why your app is blowing up once you start to try to run it in a Docker container. And then, I'm going to be honest (most of you probably know this already, but I need to be honest here): Max Power tipped me off to this yesterday. And for some reason, if I hover over it, that stupid Google Slides popup appears. The Procfile.
This is a file that Heroku expects in the root of your repository that basically specifies how the application gets executed: how do you run your app? Now, you don't always need a Procfile; in every single app I've used so far for Auto DevOps, I have not specified one and been just fine. For simple web apps, it will figure out for you what that Procfile would be: you know, how are you going to run a jar file?
You know, java -jar, whatever the name of the jar file is. But you really should specify one, especially if you're getting into some more specifics in terms of how that application needs to run. So I thought I'd point that out, because it was something new for me, and there is a link in the slide deck to the Procfile documentation in Heroku.
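As a concrete example, a Procfile for the jar case she mentions is a single line mapping a process type to a command (the jar path here is illustrative):

```
web: java -jar target/myapp.jar
```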
All right, so what does auto build do? It does a couple of things. First of all, it sets two environment variables: CI_APPLICATION_REPOSITORY and CI_APPLICATION_TAG. I point this out because these variables point to the Docker image that we're going to, first of all, push to the container registry, and then the same Docker image that's going to be pulled from the registry to be deployed in the auto deploy job.
So if we want to write our own build job, for example, or maybe we already have a dockerized app that we can build outside of auto build, we can set the CI_APPLICATION_REPOSITORY and CI_APPLICATION_TAG values to point to the repository name for that Docker image and the tag value, so we pull the right image to deploy. There's a lot of logic built into the build script to get the CI_APPLICATION_REPOSITORY and tag names correct, based on predefined environment variables that are part of our CI job.
After we do that, we run our build script, and that goes and looks for the appropriate buildpack, builds our application, copies the application files into our Docker image, and then pushes the image to the registry.
What do these things actually mean? Well, basically, for a branch pipeline, the CI_APPLICATION_REPOSITORY is going to include the name of the branch. And it's not really the branch name: it's the first 63 characters, all lowercase, with non-alphanumeric characters replaced with a dash. It's called CI_COMMIT_REF_SLUG, if you care about the environment variable that gets printed out.
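The transformation can be approximated in shell. This is a sketch of what GitLab computes for CI_COMMIT_REF_SLUG, not GitLab's exact implementation (for instance, this version does not collapse runs of dashes):

```shell
#!/bin/sh
# Approximate CI_COMMIT_REF_SLUG: lowercase, non-alphanumerics -> '-',
# truncated to 63 characters, leading/trailing dashes trimmed.
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed 's/[^a-z0-9]/-/g' \
    | cut -c1-63 \
    | sed 's/^-*//; s/-*$//'
}
slugify "Feature/JIRA-123_new_UI"   # prints feature-jira-123-new-ui
```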
Okay. A lot of times when auto deploy fails, it's because we don't actually have a Docker image with the correct name and tag in our container registry, and the errors you get don't tell you: hey, you don't have the correct Docker image. It just gives you some abstract error that you have to go figure out on your own. So you do want to check those things.
If you're running a pipeline that's the result of a git tag operation, things are a little bit different: the Docker repo name will have the name of the commit SHA, and the tag that gets applied, from a Docker tag perspective, is the name of the git tag that you specified when you did that tag operation.
If you do decide to use a Dockerfile and your own build steps instead of using Herokuish, it does speed up the process a bit, and not using auto build speeds up the build process a little bit more, because it doesn't go through all that logic of figuring out: is there a Dockerfile, or is this an app that has to be built with Heroku buildpacks, and all of that?
So
from
a
performance
perspective,
doing
it
yourself
does
give
you
some
benefits
using
a
docker
file
instead
of
using
heroku.
B
Build
packs
also
gives
you
more
control
over
how
you're
building
that
application
and
allows
you
to
streamline
your
overall
docker
image.
So
if
your
image
sizes
start
to
get,
you
know
to
be
a
concern,
you
know
maybe
not
using
the
build
packs,
but
using
your
own
build
process
with
the
dockerfile
is
a
better
route.
I'm looking in the doc; I don't see any questions yet, but please do feel free to add them if you have them. If you want to build your own app, it's really as simple as doing a build job: saving your compiled binaries (a jar file, whatever they are) as artifacts, then copying those artifacts into the container, and specifying in the Dockerfile how you build your Docker image. Copy your jar file...
I just copied my jar file into something called app.jar, and then, of course, we specify an entrypoint for the Docker image that says: how are we going to run this Docker container once we spin it up, and how are we going to run the application inside of it? A simple java -jar app.jar will do the trick in this case; it can be more complicated than that for other types of apps. Here, for our app, we're exposing port 8080 for this app to run.
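The Dockerfile she describes is roughly this (the base image tag and jar path are illustrative, not taken from her demo):

```dockerfile
FROM openjdk:8-jre-alpine
# Copy the jar produced by the earlier build job (saved as an artifact).
COPY target/myapp.jar app.jar
# The port the app listens on; auto deploy must be told about it too.
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```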
That means for the auto deploy we also have to specify port values in a YAML file in order for this to deploy successfully. And then we're basically logging into our container registry as part of the before_script, we're building our Docker image, and then we're pushing it to the container registry.
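Put together, a hand-rolled build job along the lines she walks through might look like this. A sketch, not the Auto DevOps template itself; the image naming follows the branch-pipeline convention described above:

```yaml
build:
  image: docker:19.03.12
  services:
    - docker:19.03.12-dind
  before_script:
    # Log in to the project's container registry using predefined CI variables.
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
  script:
    - docker build -t "$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA"
```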
In this case, I just want to point out, I did not tag with latest here. I should also have a docker build with a -t for the latest tag, so that whenever we're replacing our previous image with a newer one, that newer one gets referenced with the latest tag as well. But I was lazy, so I didn't do that in this particular case.
The last thing I just want to talk about is: how do you know what happened if things go wrong? How do you start debugging? Set TRACE equal to 1 as an environment variable, either at the project level or within your CI file. That's going to give you additional output, so you can go through the auto deploy log and see when it's actually creating the namespace using kubectl create namespace, and when it's doing the helm upgrade command. You're going to see more information.
If everything looks good there and you can't figure it out, another helpful tip is to disable PostgreSQL if you're not using a database for your application. It's just another container that gets spun up and deployed that you don't need, and there are some complexities there that you can get rid of just by setting POSTGRES_ENABLED to false.
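Both debugging switches she mentions are ordinary CI variables; set at the file level they would look like:

```yaml
variables:
  TRACE: "1"                 # verbose output from the auto-deploy script
  POSTGRES_ENABLED: "false"  # skip the bundled PostgreSQL deployment
```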
If you are doing the build yourself, make sure that you can actually see artifacts saved as part of your build job, and make sure that your app actually is something that builds successfully outside of Docker. Forget about Docker for a minute: can you actually take the code and build the app yourself? If not, there's probably a problem. In a lot of cases the build job will fail, but sometimes the build job will succeed and yet produce something that can't actually run in a containerized environment.
If you're using the auto build job, make sure you go through the Heroku-specific deploy troubleshooting documents, because there are cases (for Node.js, as an example) where you may have a different version of the package manager than you need, or something like that, and it gives you a lot of guidance there to make sure that your app is actually building successfully. And then look and see that there is a container image in the registry with the appropriate name and tag, so that there actually is an image to pull for the auto deploy.
I have to tell you, I made this mistake at least 10 times and didn't realize it for hours: I was building an app but not pushing it to the registry, and it was trying to deploy and giving me some random error, "timed out waiting for the condition". What does that mean? Well, it means: I can't find the stupid image to deploy. But it didn't tell me that. And then: can you run the Docker container successfully?
Do a docker run command from the command line on your machine, and point to the image in the container registry that you think is being deployed, and make sure it runs successfully. I did that a couple of times and saw huge blow-ups and errors happening. It meant that my app wasn't built correctly within the container, so the container couldn't actually run. The auto deploy job is not going to tell you why that crapped out; it's just going to tell you that it didn't work.
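Her smoke test is to run the exact image the deploy job would pull. This sketch only composes and prints the command (the registry host, project path, branch slug, and SHA are hypothetical placeholders; substitute your real values and run the printed line on a machine with Docker):

```shell
#!/bin/sh
# Compose the image reference the way a branch pipeline names it:
# <registry>/<group>/<project>/<branch-slug>:<commit-sha>
registry="registry.gitlab.com"
project_path="mygroup/myapp"   # hypothetical project
branch_slug="main"
sha="abc123def456"

image="$registry/$project_path/$branch_slug:$sha"
echo "docker run --rm -p 8080:8080 $image"
```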
There's a namespace per environment that you're deploying to. Do some cleanup, and if you are cleaning up your cluster (deleting a bunch of namespaces for old review apps, as an example), make sure you're clearing the Kubernetes cluster cache. You can do that right from the cluster integration settings on the Advanced tab, and that typically does the trick: if you delete the namespace, clear the cache, and let the deploy job run again, it will create the namespace for you and deploy the app successfully.
D: All right, a quick question: what happens if we're grabbing the Auto DevOps packages from the gitlab.com site, which seems to be the default, and customers are disconnected because they're at a secure facility of some kind (your favorite three-letter agency, a bank, or what have you)? What do we do in that case?
B: There are some instructions for how to pull everything into a local registry and use those instead. I don't know all the details off the top of my head, but there have been several threads in Slack about this for disconnected environments, so we'll make sure we get those specifics.
B: Oh awesome, thank you, fantastic. Let me pull it up here too. Of course, I can't find the chat now, so... good enough.
E: Yeah, and I'm saying there are specific scanner instructions for running these offline, for SAST, DAST, License Compliance, all that kind of fun stuff. Awesome.
B: Now, some things I'll leave you with (and we will make sure that you have a copy of this presentation): we are starting to use knowledge base articles, so a lot of this information is now captured in a couple of different KB articles, and I do have links to some of those external resources I referred to as well, so you can take a look at them.