From YouTube: GitLab 13.0 Technical Showcase
Description
The GitLab 13.0 Technical Showcase includes
Simon Mansfield and Christiaan Conover covering Gitaly and Praefect (10 mins)
Chloe Whitestone covering new Defend features: Standalone Vulnerabilities, Exportable Vulnerabilities Reports, and WAF SIEM Integration (10 mins)
Mark Cesario covering the AWS ECS Auto DevOps feature (10 mins)
Jamie Reid covering the Terraform plan view in MRs and the HTTP Terraform state backend (10 mins)
A
Hello, everyone, and thank you so much for joining us today. I am proud to present the GitLab 13.0 release technical showcase. Big, big things happening, so I've got the agenda in the chat and we've got an exciting lineup today. So Simon and Christiaan are going to talk to us about Gitaly and — is it Praefect? Did I say that right? Yep — and Chloe's gonna talk about new Defend features, with standalone vulnerabilities and exportable vulnerabilities.
B
Cool, so this is a pretty highly anticipated capability that we've been talking about for, you know, probably over a year, for many of us with customers — because when we talk about setting up an HA configuration, the one caveat we've always had to put on it is: except Gitaly, which isn't actually HA-capable yet, and you have to use NFS. And it becomes a little bit of an annoying asterisk. But thanks to the Gitaly team, we've finally released our first iteration of Gitaly Cluster, which is our HA solution for that. So.
B
So, fair point: yes, there is that. But yes, for an HA environment — now, mind you, you don't need NFS really at all, depending on how you set it up, but especially not for your Git repositories, which is the key thing here. And so this basically addresses the last remaining single-point-of-failure element of the stack. So this is a big thing for customers who are running on cloud providers, on-prem, what have you.
B
The reason for that is that you can actually have multiple Praefect nodes in a cluster, so that you can even remove the single point of failure of the Praefect component in your cluster. It can communicate, in any combination, with the Postgres database that is set up for Praefect, and act as a load balancer itself and handle any requests. So it's not a requirement to have a load-balanced Praefect environment.
B
You can have just a direct connection from your GitLab instance to a single Praefect node — up to you — but it is architected to allow you to do it in a way that even Praefect is redundant. And then you'll notice here the Gitaly — the actual Gitaly nodes — are set up in clustered groups, and you can also shard across multiple Gitaly instances behind Praefect if you needed to do so: for example, if your storage volumes can't support,
B
you know, a horizontally scalable Gitaly environment for IOPS purposes, and you need to have multiple different physical storage volumes behind it. There are edge cases there — it gets complicated, but it can be done. So the idea here is that this provides a variety of permutations for setting up an HA environment for Gitaly storage.
C
GitLab doesn't know that it's talking to Praefect. Praefect is the thing that's responsible for syncing all the Gitaly nodes, and it's the thing that's responsible for really implementing the HA. As Christiaan said, you can have multiple Praefect nodes so that, you know, the Praefect node itself isn't a single point of failure. And it also does load balancing as well — so the Praefect nodes also kind of have this element of balancing traffic between the different, various nodes.
B
You're probably gonna hear this referred to quite a bit. So Simon and I on Monday actually went through the process of setting up a Gitaly cluster, and we started from an existing environment that I created, with a single GitLab app node and a Gitaly node already configured. We took this approach figuring that's where the majority of our customers currently looking to do this are starting from.
B
If you're really interested in doing that, you can watch the video that we put on YouTube — unfiltered, though I pared it down in a few spots — where we fumble through TCP issues through our own lack of knowledge there and eventually figure out the solutions. But you can see the progression from start to finish of how we got that done and the pitfalls we encountered. So here are some of the takeaways from going through that process. Simon, I don't know if you want to go first? Yes.
C
There's something to be aware of at the moment, which is that the Praefect leader election currently favours availability over consistency. So at the moment there is a possibility of data loss. It's very unlikely — it's when a Praefect node has a failure and the leader election takes place. There is an issue in flight at the moment to change that to default to using the Postgres database that Praefect uses to do leader election, and that's going to be made the default in 13.1. It's actually there already.
C
The next point I added, so I'll talk about it, which is that at the moment, when you install Omnibus, you kind of get everything all-in-one — you get a Postgres database there for all the data as well. Omnibus does not include a Postgres database for Praefect; Praefect needs its own database. Ideally — and this is another point I'll come to later — it can actually go inside your GitLab database, but it's not supported in that configuration when you're using Geo.
C
So for that reason, myself and Christiaan would both recommend, I think, that you separate it out straight away, because if you ever want to go to Geo, you then have to untangle your database. So there's no Praefect database going to be installed in Omnibus by default at the moment. Do you want to take the next one?
B
So you have to tell GitLab where you want projects to be stored, and migrating from an existing location into a new one requires API calls right now. I don't know of any utility that has been created to help support the migration of data in a batch process from one location to another. I've actually started poking at it myself, building a proof-of-concept that might be able to do that — a very simple CLI, just as a personal project, to maybe facilitate that more easily.
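A batch move like the one described can be sketched against the REST API. This is a minimal sketch, assuming the admin-only `PUT /projects/:id` call with a `repository_storage` attribute; the host, token, target storage name, and project IDs below are all placeholders:

```python
"""Sketch of batch-moving projects to a new Gitaly storage shard via the
GitLab REST API, using PUT /projects/:id with a repository_storage
attribute (admin-only). Host, token, storage name, and project IDs are
placeholders — substitute your own."""
import json
import urllib.request

GITLAB_URL = "https://gitlab.example.com"  # placeholder instance
PRIVATE_TOKEN = "<admin-token>"            # placeholder credential


def storage_move_request(project_id: int, target_storage: str) -> urllib.request.Request:
    """Build (but do not send) the request that reassigns a project's
    repository storage, e.g. from 'default' to a new cluster shard."""
    url = f"{GITLAB_URL}/api/v4/projects/{project_id}"
    body = json.dumps({"repository_storage": target_storage}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="PUT",
        headers={"Private-Token": PRIVATE_TOKEN, "Content-Type": "application/json"},
    )


# Batch process: one request per project ID, sent with urlopen() when ready.
requests_to_send = [storage_move_request(pid, "cluster") for pid in (101, 102, 103)]
```

Sending each request with `urllib.request.urlopen` (against a real instance, with a real admin token) would perform the move; building the requests first makes it easy to review the batch before firing it off.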
C
Yeah, the next one is really what I already said, which is: separate your databases out. And then the final point is that, at the moment, there's very limited scope for admins to actually be able to monitor the cluster. There's an epic around this — this is something that the Gitaly team is working on, and so, yeah, that's something that they're really pushing for.
B
One final point I want to make on that first bullet here. If you're not familiar with clustered environments and data-consistency architectures and stuff like that: the main reason that there's a potential for data loss, with the way it's currently set up, is that if your primary Gitaly node fails, what Praefect does right now is it just picks another one.
B
It doesn't worry about how up to date the data is — it just picks one, uses that as the leader, and then it goes forward from there. The database that gets created for Praefect is also used to track the changes that have occurred over time and which nodes have which data sets, but it's not currently utilizing any of that information to actually pick the leaders. So there is a possibility, if you get out of sync, that it will say: we don't know which nodes have
B
what, and it'll put you in a read-only state for any repositories that don't have up-to-date synchronization. So that's the reasoning behind it. They're working on that in 13.1, so hopefully we'll see a resolution on that point, and it won't be a concern by the time our customers are deploying this in production.
C
Okay, yeah. So we went into this process without having read up on anything, without having spoken to the product team about it specifically, so we did it as if a customer were running through the process. We did make notes throughout, and we followed the documentation — and actually the documentation is really solid. There was pretty much only one point where we really, truly got blocked, and that was due to the cloud provider documentation, really, rather than our own.
B
I made a note here at the bottom about the GitLab Orchestrator project. This is something that I was actually made aware of by Jason Plum when he graciously joined our session on Monday to try to help us work through TCP issues. We apparently have a project underway at the moment, that some interns are working on, to build out basically a canonical set of Terraform and Ansible scripts
B
that customers can use to set up GitLab in any permutation, from a single Omnibus install up through full HA capability, and they're building in the ability to build out a Gitaly cluster as part of the scripting process. I will update the deck here with the link to that project if you're curious. Jason gave us all the disclaimers about this: it is not production-ready, it is not product as yet, your mileage may vary.
B
Gitaly Cluster is really well architected. It's clearly thought out to be scalable and hardened, and it addresses all the concerns somebody would have in building an HA solution, especially when you're dealing with the types of things that Git transactions cause with storage. So I think it's gonna be a great solution.
B
I was just anticipating that, since this was the first GA release, there were naturally gonna be bugs that got surfaced by it being out in the wild, and that aligns with what we've been seeing from some of the known caveats to it, which they're expecting to resolve in the next couple of releases. So I've encouraged customers to set this up in staging environments, to test it out and understand the process, but I would say it's probably not production-ready for most of them until later.
B
And, as you know, Simon and I have also agreed that we're gonna help the Gitaly team develop the docs to be even more usable than they currently are from a customer-facing perspective, so that, ideally, customers can walk through this step by step with little to no assistance from us.
A
B
We may be using it in production on .com — but let's not forget, we also beta-release the product itself on GitLab.com. I think we're more risk-tolerant, with our dedicated infrastructure team to do so, and we've probably built it out in such a way that we're limiting the risk of data loss, because of the scale we're probably putting it at. That would be my assumption, but yeah, I do believe we are using it.
B
If you're using managed NFS on a compute system, it might be provided by your cloud provider, or you may have a SAN or something in your infrastructure, but in general you're probably gonna have one Gitaly node that is connected to that storage volume for NFS — and if you have multiple Gitaly nodes that are talking to that, it's still just those nodes. In a Gitaly cluster environment, you necessarily are adding on more:
B
in addition to your N number of Gitaly nodes, you're now adding at least one Praefect node, at least one Postgres database — which is probably on its own node as well — and possibly a load balancer in front of all of that, between your GitLab and Gitaly environment. So your compute resources alone are likely to be higher. You may see some savings if you're not having to use a managed NFS solution, but it's not necessarily going to be enough to offset the additional cost of setting up this infrastructure. With that being said, I mean, our reference
B
architectures don't even start talking about HA until you're at like 3,000 or 5,000 users, so it may be a negligible difference from the perspective of the service provider — or rather, in terms of the group or customer that's managing it. But just based on the architecture that we have recommended, you're likely gonna be using more compute resources than you would if you were doing just a single Gitaly instance with NFS, or even multiple instances. But—
C
So, the benefits on the other side: the actual implementation of your Gitaly nodes becomes much more important, because with Gitaly, if it's talking to NFS, it's more about the NFS storage and how performant that is itself. Now you've got local SSDs attached to your nodes, and that is where it's getting the data from. So, potentially, there are performance benefits there. I would
B
argue, maybe, holistically — when you factor in not just the infrastructure costs but also the increased productivity from faster performance, as well as, hopefully, the lower amount of infrastructure management that would maybe have to take place for fine-tuning your Gitaly instances to talk to an NFS solution — you might overall see a total cost reduction from that perspective. But purely from the bill that you pay your cloud provider, it's probably gonna be a little higher. Yep. So.
E
My issue is that we have — you know, a lot of people use LFS, because that's a smart way of managing the size of your Git repos, and we have historically, to my understanding, not done anything except: it's here, and now you get to replicate it all over. Okay — just questioning, not accusing, yeah.
E
I mean, that's how LFS works, because we want to keep the stuff out of the repo, but the question is: what do we do with it? That, and Pages, and other data — this sort of exists outside of the sort of normal Git infrastructure; it's like stuff that lives alone on the file system. So we still have the same problem with that that wasn't good before, I guess. Well,
B
for the majority of that other type of data, we already have architectural support for using things like object storage, to make it so that you don't rely on single-point-of-failure storage solutions and you are on a scalable option. Obviously it doesn't work for people who are entirely on-prem and don't have an object-storage layer that they can use, but our application architecture does support the use of things like S3 for all those little components.
B
But this is targeting specifically the components that can't live in S3, that currently rely on local storage because there is no other solution — above all, Git repositories, and that's where we see the most problems when people are using it, because of the transactional nature. Yeah.
F
Okay, so number one — it made the highlights for the release — is standalone vulnerabilities. So what is it? Standalone vulnerabilities are also known as first-class vulnerabilities, but for our purposes it just means that each vulnerability has its own standalone page, which you can access by clicking on the vulnerability in the security dashboard, and which you can then link to, to share it or work on it, etc. On the vulnerability page, the default status for a vulnerability is Detected, but you can change it to Confirmed, Dismissed, or Resolved to keep it up to date.
F
You can also create a new issue, which is confidential by default and is pre-populated with information from the vulnerability report. And then, finally, on the standalone page you can apply an automatically generated solution — some of which will be manual things you have to do, and some of which GitLab can solve for you, which is pretty cool. An example here is just a manual solution where I have to update the version. So this is what the standalone page looks like, and—
F
Each vulnerability can be triaged and tracked as the single source of truth, and they will also be persistent. What that means is that previously, any new scans that were on the same branch as a previous scan would overwrite the previous findings — and now they don't. So that's fewer duplicates and better tracking, which is great. It also ties straight into reporting and being able to track trends, which is a big deal. And still, you might think: maybe that's still not that big of a deal — but, in true GitLab fashion,
F
it's an MVC that will improve a ton going forward. So I linked an issue here that you can take a look at, which just collects a bunch of other issues that it's going to help resolve going forward — primarily just around having a better, or more accurate, security dashboard, and vulnerabilities having better reporting. It opens the door for a lot of potential new features, like better false-positive management, having vulnerabilities linked to occurrences one-to-many, and linking vulnerabilities to existing issues.
F
None of these are guaranteed — they're just examples of things that the team is thinking of now that standalone vulnerabilities exist; we can do a whole bunch more. As I mentioned, this was a huge effort, so here are some links and videos that go into more detail. The category direction page here was just updated two days ago, so it's super fresh — it's got a lot in there; it's awesome.
F
Alright, let's move on to exportable security reports, which happens to be one of the features that standalone vulnerabilities made possible — it's already making our lives better; it's great. Okay, so I feel very passionately about this feature, because I opened the initial issue. I was involved in it every step of the way for the past eight months, and it's finally here — it's like my baby. So, before today, vulnerabilities were only exportable as JSON, which is pretty cumbersome by itself, let alone
F
if you want to turn it into a usable or shareable report. Much to the dismay of my customer, the JSON files showed dismissed vulnerabilities with no indication that they were dismissed, so their clients were freaking out; there was a ton of unnecessary back-and-forth; it wasn't a good experience. But now it's okay: you can just click a button on the security dashboard and export all of the vulnerabilities to a CSV file, and it has all the info you need. It's available on both the project and instance-level dashboards, and the group-level
F
dashboard is planned for 13.1 in a couple weeks. There's a list here of which fields are populated in the CSV, and the ability to export PDFs is planned for next year, after vulnerability management becomes more viable — it's minimal right now, and if you were paying attention, that epic was linked two slides ago. I also noted some gotchas here, just about using the export, but overall it's pretty straightforward. It's just a button.
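Since the export is plain CSV, it's easy to post-process with standard tooling — for example, tallying findings by severity while honoring dismissals, which was exactly the pain point above. A small sketch (the column names `Severity` and `Status` are assumptions about the export layout; check the header row of your actual file):

```python
"""Count exported vulnerabilities by severity, skipping dismissed ones.
The 'Severity' and 'Status' column names are assumptions about the CSV
layout — adjust them to match the header row of your actual export."""
import csv
import io
from collections import Counter


def severity_counts(csv_text: str) -> Counter:
    """Tally non-dismissed findings by severity from CSV export text."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(
        row["Severity"]
        for row in reader
        if row.get("Status", "").lower() != "dismissed"
    )


# Made-up sample rows mimicking an export; the dismissed one is excluded.
sample = """Severity,Status,Name
Critical,detected,SQL injection
High,dismissed,Old finding
High,confirmed,XSS
"""
```

Here `severity_counts(sample)` yields one Critical and one High, with the dismissed row excluded — the behavior the customer wanted from the report.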
F
Here's some documentation and resources, as well as a screenshot of what this super-cool button looks like. Alright, on to the last Defend feature in 13.0: the WAF SIEM integration. So that's a lot of acronyms — let's break it down. A WAF is a web application firewall, which filters and monitors HTTP traffic between a web application and the internet. It's a type of reverse proxy, so it protects the server from exposure by having clients pass through the WAF before reaching the server. Easy enough.
F
A SIEM is security information and event management — it's the software solution that aggregates and analyzes activity from different resources across your IT infrastructure. Some example providers are Splunk, IBM QRadar, and Sumo Logic. And so, before we had this integration, there was a lack of visibility into the traffic that passes through the WAF, and no real easy way to determine if it was working as expected — and nearly all users who use a WAF also use a SIEM. So we figured: let's let them connect the two.
F
This is all done via Fluentd, which is an open-source data collector. It just runs on each pod and allows customers to send the logs anywhere they like, so they can use whatever tool they want. The integration is available for all tiers, even the free version, and it can be enabled and configured by going to Fluentd under Applications on the Operations > Kubernetes page. You'll need to enter the host, port, and protocol where the WAF logs should be sent, and then select one of the available logs.
D
Right, let me share here — oops, I didn't want to do that. All right, can everybody see my slide now? Looks good? Okay, great. I want to note that the picture over there is mid-April, and that's not an unusual sight for us even into the second week of May — I actually had the shovel out in May. So, you know, there's a moral there somewhere. Okay.
D
So this is AWS ECS — you know, I guess AWS's reply to Google Kubernetes. It came out in 2015, so it's been around for a while, and it integrates with everything AWS — and that's one of the cons, too: it's AWS-only and closed-source. Supposedly it's quick to learn and easy to use — wow, and that is not true, as you'll find out. So there's a lot of moving parts: from clusters, services, and task definitions, to the EC2 instance and a container, to VPCs, ALBs, and target groups — you have to know all of them. I put this together, and it does work, somewhat. Alright, how does it work?
D
So, basically, you create a task definition — this is the TD — and this task definition is going to get the resources. You add an EC2 instance — it'll spin one up for you if you want, or you can do your own — then you add a container to that: a Docker container you provide. And then there's the ECS cluster, and this is where you take that EC2 instance with the task definition, and then you add the VPC and the subnets. Right, steps two and three: you create a service.
D
This is kind of like the controller piece, where it actually will, you know, keep on working to maintain whatever you define. So this is what keeps the number of tasks and everything running. This is where you configure the EC2 instance, and where you have to define the ports and everything — and you have to have an ALB. If you try to use a Classic Load Balancer, it's gonna bounce you and order you over to Fargate, so you're basically stuck with an ALB if you want to use EC2.
D
Alright, then, on our side: now you flip over to GitLab to actually work with this. Once you set all this up, you can go over here and start defining these environment variables. The key one is this platform target — look at AUTO_DEVOPS_PLATFORM_TARGET. If you put ECS in this, this is where Auto DevOps kicks in; this is where you can use Auto DevOps with ECS. You'll notice you'll need the cluster that we defined over here, and you'll need the service that we defined over here.
D
You'll need this task definition — got it. And finally, there's a nice little video. I've been working with Etienne to figure this out — he's the main contact who helped make this a reality, because there are some changes they'll have to do; it took us a while to get this going and working. He's one of the two in product marketing who, I think, is involved with this Auto DevOps to AWS summit. Any questions? I'm sure you — wait, none? We'll go on.
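Pulled together, the variables walked through above can be sanity-checked before a pipeline run. A sketch — `AUTO_DEVOPS_PLATFORM_TARGET` comes straight from the talk, while the `CI_AWS_ECS_*` names follow GitLab's documented ECS deployment variables and should be verified against the current docs:

```python
"""Sanity-check that the CI/CD variables needed for the ECS Auto DevOps
deploy are present. AUTO_DEVOPS_PLATFORM_TARGET is named in the talk;
the CI_AWS_ECS_* names follow GitLab's ECS deployment variables, but
verify them against the current documentation."""
import os

REQUIRED = (
    "AUTO_DEVOPS_PLATFORM_TARGET",  # must be set to "ECS"
    "CI_AWS_ECS_CLUSTER",           # the ECS cluster defined above
    "CI_AWS_ECS_SERVICE",           # the service keeping tasks running
    "CI_AWS_ECS_TASK_DEFINITION",   # the task definition with the container
)


def missing_variables(env=os.environ) -> list:
    """Return names of required variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]


# Example: a partially configured project still missing two variables.
example_env = {"AUTO_DEVOPS_PLATFORM_TARGET": "ECS", "CI_AWS_ECS_CLUSTER": "demo"}
```

Running `missing_variables(example_env)` reports the two names still unset, which is a cheap way to fail fast before the deploy job does.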
D
One caveat: there is no framework yet — I think GitLab is gonna have to make that commitment. There are no review apps; if you want to do that, it's all manual. So if you run this, watch that you don't wipe out the ECS container you have in that service. So right now — I've had the joy of working with Etienne, and it's great working with them, but it's just not mature enough
D
yet. If you have a Kubernetes instance, even though you set the target to ECS, it's still going to use Kubernetes — so I had to go get a separate instance with no Kubernetes at all so I could work on this; it'll override that. And these are some key links. What really helped was an external blog that actually walked through how to set up an ECS cluster — not for the faint of heart, but it will work — and we're putting it all in the documentation.
D
Okay — I know, all right. So let's go to this project that I did. I had one here, but it never actually worked, so I worked with Etienne to make this work. Can everybody see this? So this is a minimal Rails app that he had to build from scratch. You'll notice there's no .gitlab-ci.yml file, so we are using Auto DevOps. If you go down here and you look at the — you want to call it the not-logged-in view.
D
And if we go into the variables here — these are the ones that we're talking about. So this is the ECS target: if you do not have this, you're not going to get the Auto DevOps; you're gonna have to have your own YAML file and do the includes. And then these guys are the connections to the service and the cluster definition — and within that, you've got to be careful with the ports to make all this work. We're documenting that — we're in the process, and Etienne and their team are working on that right now.
D
Let me try that — that's funny, it's not working. So, basically, what happens is: you change the thing and it will fire it off — and I used my own Google domain that we can push to, and the VPC and test service, whatever it is, to map it. So you can see it's not customer-ready, but they are working on it, and I believe the AWS Summit coming up will force them to make an investment to make this happen, which is good news. But that's where it's at.
I
So, last but not least, a couple of new Terraform features in 13.0 — the first of which is GitLab's integrated HTTP Terraform state backend, and secondly, we've now got a merge request widget that provides a summary of your terraform plan output directly within your merge request view. Our docs, as I think Christiaan and Simon both mentioned, are pretty solid here.
I
So if there are any product folks present or listening: thanks for an awesome job on that. They really do a good job of covering the what and the how of infrastructure as code, and of all these new Terraform things that we've brought to GitLab. So I'm going to focus more on the why — trying to mesh, you know, what this is with how it fits with GitLab's value drivers and the conversations you're having with customers.
I
First, the HTTP state backend for Terraform. So, for the uninitiated — which was me — I've seen lots of, you know, talk about infrastructure as code; I thought I knew what Terraform was, but I'd never actually gotten my hands dirty. So I took this as an opportunity to try and learn something new, and again, the docs really helped here. So, for those that aren't familiar: Terraform backends are what contain the state of your infrastructure.
I
If you're not familiar with Terraform, it's a tool designed to allow you to declare what you'd like your infrastructure to be and look like, and then it'll go through the process: you can hit validate to sort of lint and check that your declaration is well-formed; then you would do a terraform plan, which compares what your declaration is versus what your infrastructure actually is; and apply tries to make the two match. And so the backend is really that stateful store of what actually is in production.
I
By default, if you just download Terraform, install it, and run it, it's going to use what they call the local backend, and that is just .tfstate files — stores of state on your local disk. That's all well and good, and it's fine if you're just a one-person team. But if you want to collaborate with others, having shared stored state is really important to make sure that you're always singing the same tune.
I
Likewise, the ability to lock that down while you're doing an apply — which can be kind of a multi-minute, quite-long process, depending on what you want your infrastructure to be and look like — being able to lock that and say: listen, I'm making a change; everybody else that may be collaborating with me, please don't step on my toes.
I
Actually, I think that's the next slide, so I'm jumping ahead. For now, the technical tidbits here are that if you're a .com user, you really don't need to think about it — it just works — and I can show off where the HTTP endpoint exists and how that looks. If you're a self-managed customer today, the sort of technical tidbit is: we use local storage, so it's wherever on your GitLab instance that state is stored, and GitLab doesn't encrypt that state. But you can also opt to use object
I
storage, such as Amazon S3. Now, the PowerPoint dad joke I've been waiting all morning to make... As I mentioned, I wanted to give specific props to Nicolas Klick, the EM in Configure, for really good documentation, and I've linked here to his example project, which really helped me get going from 0 to 100 real quick.
I
So this is like a quick, at-a-glance summary of any additions, changes, or removals you're making to your infrastructure — as terraform plan provides, but it appears directly in the merge request, with a handy link out to the actual job that created that full terraform plan output. And we've actually gone a step farther: we've created an inheritable, customizable template — just like the Auto DevOps build or test templates you can inherit — but you don't actually have to use it.
I
You just have to sort of obey the artifact output here, and I've taken a screenshot of at least a piece of what that .gitlab-ci.yml would look like. It's the artifacts:reports:terraform object that the merge request looks for — if that exists, and if it's well-formed JSON that it can parse, it'll show a quick, you know, kind of one-line summary — which I can hopefully show off here — of exactly what Terraform has provided to you. Again, if you take nothing away other than this:
I
the point is, by giving our users immediate, glanceable information directly in the MR, we're helping them deliver better software faster — or, in this case, better infrastructure faster. And now for the fun part: the demo. So I've got a public project here that you're free to clone and play with as you wish. Hopefully this works — but as Mark has shown, live demos are always a bit of fun.
I
So the project itself is pretty straightforward. There's a .gitlab-ci.yml here defined; we define the image — we're using the terraform image — and actually all of this was pretty much a copy-paste of the template that I've linked in the slides. There's a before_script that runs, which actually defines an alias, and this is the alias that's used to convert the output of terraform plan and remove any sensitive credentials, and then you can use that later on during the plan stage.
I
Pardon — the build stage. This is the plan job: terraform will show the output of plan, and then you pipe it into the convert-report alias, which then writes a file in the container, plan.json, which the job passes out as an artifact. That's what really triggers the magic in the merge request view. The other stuff I'll show off relates to the HTTP state backend. It's a pretty basic state backend: it authenticates via the use of an API token, and it looks very much like any other API call.
I
If you've got a valid GitLab API token, you can do it like a curl — like an HTTP GET against your project ID, terraform/state, and then the state name — and that should work. This contains, in my case, version 4 of Terraform's understanding of my infrastructure state, and there's a big pile of JSON here that describes it. There are some resources, like: there's a VPC (a Google compute network), there's a managed subnet, there's some Kubernetes details, here's a node pool, etc.
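The GET just described can be sketched as follows — the path matches GitLab's documented `/api/v4/projects/:id/terraform/state/:name` route, while the host, project ID, state name, and token are placeholders:

```python
"""Build (but do not send) the request that fetches a Terraform state
stored in GitLab's HTTP backend. The endpoint path follows GitLab's
/api/v4/projects/:id/terraform/state/:name route; the host, project ID,
state name, and token below are placeholders."""
import urllib.request


def state_request(base_url: str, project_id: int, state_name: str,
                  token: str) -> urllib.request.Request:
    """Prepare an authenticated GET for one named Terraform state."""
    url = f"{base_url}/api/v4/projects/{project_id}/terraform/state/{state_name}"
    return urllib.request.Request(url, headers={"Private-Token": token})


# Placeholder values — sending this with a real token returns the state JSON.
req = state_request("https://gitlab.example.com", 12345, "production", "<token>")
```

Terraform itself talks to the same route when you point the `http` backend's `address` (and `lock_address`) settings at it, so the curl-style GET shown in the demo is just a hand-rolled version of what `terraform init`/`plan` do.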
I
Yeah — once I show off the MR widget, then I think I'll yield to time and Q&A. So maybe you've got a cluster up, but your boss is on your shoulder and says: hey, listen, you're costing me 500 bucks a month for this cluster — can you guys size it down? In this case, here's an example MR to actually make a very basic change to the gke.tf file, which is — again — where you're declaring your desired infrastructure.
I
Previously we had n1-standard-1 machines in the node pool, and you could maybe size it down to an e2-medium, which is a lighter-weight, less performant machine. Again, the point of a merge request is collaboration, right? So as a result of that change being made in this branch, you can see here that the pipeline that ran was successful, and as a result I've got a Terraform plan summary here. It's very quick; it looks like four to add. So in the case of GKE, and Kubernetes clusters in general,
I
Terraform doesn't always necessarily provide the most meaningful sort of delta, but there's also a handy link out here to the full output of terraform plan, which, given some knowledge and experience, you're probably able to parse better than I can. That would help you understand what changes are actually being made as part of this merge request.
I
It won't happen otherwise; it's pretty much analogous to the way the JUnit XML or the Code Quality reporting works. As long as you specify that artifacts:reports:terraform object on any job that passes it out, and provided it's JSON formatted, the MR will show that widget. Provided that artifact has been created, it'll parse it and give you that sort of changes, additions, deletions summary. Okay.
I
J
I
Let me take that offline. I'll try renaming this from plan to something else and I'll let you know, but that's a good thing to be aware of. Oh yeah, and a sort of fun fact, it's becoming a tradition for me: every time I volunteer to do a release showcase, the first thing I hit is a horrendous bug. So do not put any periods in the name of your project, because if there's a period in the project slug, the Terraform backend just falls over.
I
H
Next question. I was wondering: our container team, they implement Auto DevOps and review apps, and I don't think they make use of, you know, Terraform or other infrastructure-as-code tools. So I wanted to ask: does it make sense to try to re-implement the review apps feature, like the whole provisioning of environments? Does it make sense to implement that in Terraform? Do you see any issues with that in terms of how that would work?
I
Yeah. You know, a review app is an environment that only exists for the duration of the MR being open. I think in the case of review apps today, there's probably not much that actually changes from an infrastructure perspective, right? You're really just making an ingress change, which is probably a bit below, or maybe above, Terraform, if you think of Terraform as, in the case of Kubernetes, managing, you know, what zones or regions your cluster is in, or what size of node you want to use. It's not to say you couldn't do that with Terraform.
I
Maybe if there are any folks on the call who have expertise in Terraform, they'd state an opinion like: oh yeah, you could definitely manage that in Terraform and make that change in Terraform today. How do we do it? We just pass a change to the ingress, like the NGINX ingress chart, right, to turn up a specific sort of review app domain name. Yeah.
H
I think, the last time I read through it, there's a specific job that spins up, I guess, that deployment: the ingress service deploy and all those other components. I was just wondering whether there are any plans to try to re-implement that.
I
Yeah, I think what you said makes sense: the nature of the change is not that dramatic. You're just bringing up a temporary environment, so it's probably just the same deploy every time. I love the question, and I think it'd be interesting to see how we evolve this. Like I asked in one of my last slides there, I think the combination of infrastructure as code and how we evolve there, plus what we're doing with releases and release evidence, is going to be really neat to see play together. I know we're at time, so I'll yield back to Chris, but thanks for the question.
A
Thanks to all our presenters today. I know Simon and Christiaan had to drop off; big thanks to them, to Chloe, Mark, and Jamie. Thanks to John Woods for the idea to have a separate focus area on this versus just doing it in the all-hands. Really appreciate the support and the great questions. We will see you all next week.