From YouTube: Intro to DevSecOps!
Description
Speaker: Steven Terrana, Sr. Lead Technologist from Booz Allen Hamilton
Description: This webinar will cover an introduction to DevSecOps and a live demonstration of how to get up and running with the Jenkins Templating Engine (JTE).
Through JTE, find order in the chaos of managing DevSecOps pipelines at scale. Bring organizational governance, optimize pipeline code reuse, and simplify pipeline maintainability for your team by creating tool-agnostic, templated pipelines that can be reused across teams regardless of the tools being used.
All right, thanks everyone for joining the webinar. Sorry we're kicking things off a little bit late. Today we're going to be talking about an introduction to DevSecOps.

A little about me: my name is Steven Terrana. I'm a CDF Ambassador and a Senior Lead Technologist at Booz Allen Hamilton. I've had the privilege of talking to quite a few people about DevSecOps, and I've learned one thing: no one actually agrees on what it means. So I'm going to throw a definition into the ring and say that DevSecOps is all about incorporating security into every step of the software development lifecycle. Throughout this talk, we'll jump into a couple of different ways you can go about doing that. To kick things off with a high-level overview, there are a couple of different steps involved.
A common analogy when we're building DevSecOps pipelines is to think of your pipeline like an assembly line: what kinds of testing or validation can we incorporate along that assembly line to ensure the applications we're building are secure? That can manifest itself in a couple of different ways: making sure dependencies are secure, the code itself, the artifacts we're building, the infrastructure, and so on. So let's dive into each one of these in detail.
The first step in building that trusted software supply chain, or implementing DevSecOps, is making sure that our third-party application dependencies are secure. When we're building applications, we're often not building them from scratch. There are plenty of very helpful third-party libraries out there that can accelerate our ability to build applications. The problem is that we don't necessarily know where these dependencies are coming from, and oftentimes they might have vulnerabilities associated with them.
If we continue with that assembly line analogy, these third-party application dependencies are really like the raw materials you're pulling into your application. There are a lot of great tools in the industry to help ensure that our dependencies are secure. From an open source perspective, there's OWASP Dependency-Check, which can scan your dependencies. There are also tools like Sonatype Nexus, which has products called Firewall and Lifecycle that can scan your application dependencies as you pull them into your environment, and then continuously: new CVEs are coming out on a daily basis, so you need to continuously monitor whether there are any known vulnerabilities associated with the materials in your application.
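As a rough illustration, a dependency scan like that can be wired into a pipeline stage. This is a minimal sketch, assuming the OWASP Dependency-Check CLI is installed on the Jenkins agent; the project name and report paths are placeholders, not details from the talk:

```groovy
// Hypothetical Jenkins scripted pipeline stage (not from the talk).
// Assumes the OWASP Dependency-Check CLI is on the agent's PATH.
node {
    stage('Dependency Scan') {
        checkout scm
        // Scan the workspace and fail the build on high-severity findings
        sh '''
            dependency-check.sh \
              --project my-app \
              --scan . \
              --failOnCVSS 7 \
              --format HTML \
              --out dependency-check-report
        '''
        // Keep the report alongside the build for auditing
        archiveArtifacts artifacts: 'dependency-check-report/**'
    }
}
```

Running this on every commit is what turns dependency checking into the continuous monitoring described above, rather than a one-time audit.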
Now that we know the materials we're pulling into our application are secure, the next step is ensuring that the code we write doesn't have inherent vulnerabilities. The most common examples of this would be things like hard-coded IP addresses or password variables, but a more complex example might be something like, if you're using C for lower-level application development, the many memory-exploitation vulnerabilities that can take place. There are a lot of tools out there for this as well.
The most prevalent one I see being used is SonarQube. SonarQube can assist in making sure that your code as written doesn't have vulnerabilities. It also helps with pull request reviews: it can scan your application against industry best practices and look for things like dead code, or places where there are opportunities to improve your application. That allows the actual pull request reviewer to focus on some of the more nuanced aspects of code review, like making sure the code aligns with your team standards.
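A sketch of what that looks like from a Jenkins pipeline, assuming a SonarQube server named 'sonarqube' is configured in Jenkins and a webhook is set up for quality gate callbacks (both assumptions, not details from the talk):

```groovy
// Hypothetical Jenkins stage for SonarQube static analysis
// on a Maven project (server name and build tool are assumptions).
node {
    stage('Static Code Analysis') {
        checkout scm
        // Injects the server URL and auth token configured in Jenkins
        withSonarQubeEnv('sonarqube') {
            sh 'mvn clean verify sonar:sonar'
        }
        // Optionally block the pipeline until the quality gate passes
        timeout(time: 10, unit: 'MINUTES') {
            def qg = waitForQualityGate()
            if (qg.status != 'OK') {
                error "Quality gate failed: ${qg.status}"
            }
        }
    }
}
```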
Up until this point, we've pulled in secure third-party dependencies, then written our application and made sure the code itself doesn't have vulnerabilities associated with it. The next step is actually building and deploying our application. I would say that security vulnerabilities come in two flavors: you've got vulnerable packages, like third-party application dependencies or operating-system-level packages, but then you also have vulnerable infrastructure. Exposing port 22 on your container image, or opening port 22 on your EC2 server to the public internet, is an easy way to ensure you'll have a rough time from a security perspective. A lot of the tools that do scanning can also validate that the infrastructure you've deployed is compliant with different security control profiles, and there are a lot of those out there: FISMA compliance, HIPAA, managing PII.
There are a ton of different guidelines out there, things like the CIS Benchmarks. I happen to work in the federal space, where there are things called Security Technical Implementation Guides, or STIGs, and these are security controls that you're recommended to implement to ensure your infrastructure is hardened.
So it's not enough to just make sure the application itself is secure; you need to make sure it's running on secure infrastructure. The next piece is that we can actually build and deploy our application. Now that we have a deployed application, we can start doing some more dynamic testing. Penetration testing is a way to actually attack the deployed application and see if it's susceptible to common exploitation.
Some common examples of this: if you have a website with a form, making sure it's not susceptible to remote code execution, SQL injection, or cross-site scripting. There are tools like OWASP ZAP that can help with penetration testing; sure, there are a couple hundred others you could incorporate as well, but this is a good way to make sure your application isn't susceptible to some of the more common attacks it might experience out in the wild.

Along the same vein of compliance, you've also got accessibility compliance, or accessibility assurance. This is making sure the applications we build are accessible and inclusive for everybody. Some of the more common examples of this: if you have an image on your website, making sure it has alt text; you want to make sure the application you're building is consumable by screen readers.
It's important to note that there's no tool out there that will tell you with 100% confidence that your application is secure, but what it will do is tell you when you're not. Section 508 compliance for accessibility definitely has some more complex or gray areas of the law, so you want to be able to give developers fast feedback on the easy-to-check accessibility issues, so that the people who have to do manual testing for accessibility compliance assurance can focus on those more nuanced areas.
If you're going out and fetching an image from Docker Hub without validating the source of that image or its contents, you could be opening yourself up to all kinds of vulnerabilities. I don't remember the exact image, but there was an image on Docker Hub last year that had a couple million downloads and was doing Bitcoin mining. So you want to make sure that the way you're packaging your application doesn't bring with it inherent vulnerabilities.
So now we can actually go and deploy the application, and that's where we can do runtime monitoring. We've integrated all these kinds of security testing, but that's still not enough: new vulnerabilities are introduced on a daily basis, and we need to make sure that, even with all that security testing in place, we're still monitoring the environment. Take containerization as an example.
If I'm deploying my microservice-based application using something like Kubernetes, I know exactly what is supposed to be happening on that cluster: every container has a set process that it should be running. So I can use tools like Falco, Sysdig Secure, or Twistlock to do continuous runtime security monitoring, and to alert if the application starts behaving in a way that's unexpected. The most common example that gets used is a new shell session being opened in a container in production.
That's probably a good sign that something's happening that shouldn't be. A lot of these tools can do alerting to let you know that that happened, but they can also do some pretty advanced forensics and response, like going and terminating that container. So the second someone is able to gain a shell session into your container, you can delete it and then send notifications that something's happening that shouldn't be.
All of this security testing is in addition to all the other kinds of quality assurance that we want to bake into our CI/CD or DevSecOps pipelines: things like unit testing, measuring code coverage for ingestion into SonarQube, doing browser-based test automation or API integration testing. Basically, pull a word out of the dictionary and put "testing" on the end of it. Implementing these practices at scale can quickly get pretty complicated: there's not a single tool that's used, regardless of tech stack, for all these different kinds of testing.
If I have a front-end application, I might be doing unit testing with Karma or Jest. If I have a back-end application, maybe I'm using JUnit or Spock to write my unit tests, and maybe I'm executing them with Maven or Gradle or npm. Some teams might be using something like Fortify for static code analysis, where other teams could be using SonarQube. So implementing these practices at scale often results in many individualized pipelines that can be difficult to maintain, and on top of that, not too many folks are necessarily experts at constructing mature DevSecOps pipelines.

So a few years ago, we wanted to figure out: how can we make this easier? It was taking too long to build mature pipelines; every team that needed one was reinventing the wheel. It was getting a little too complex. Say I have sixty microservices, for example, each living in its own source code repository.
Some of them might be using different tools, and the way a lot of the CI/CD tools in the industry work right now, you need to create a pipeline definition for each application individually, which can quickly result in a pretty difficult-to-maintain system. From a standardization perspective, if I've got multiple definitions of the pipeline, how do I actually know that all of those pipelines are implementing the organizationally required checks from a security perspective? And then there's continuous improvement.
I've been on teams where, as a DevOps engineer facilitating DevSecOps pipelines, I was responsible for maintaining the pipeline infrastructure and definitions for those 60 apps. No one gets the pipeline right on the first try; we're continuously incorporating lessons learned into our automated software delivery processes. So another one of the challenges associated with building pipelines on a per-application basis is that if I want to add a new tool, or change the business logic of the pipeline, I have to do that across every source code repository, which can result in a lot of operational overhead: I have to talk to all of the individual development teams and coordinate a migration to change the pipeline, and it can quickly get pretty cumbersome.

One approach to solving some of these challenges is the Jenkins Templating Engine. The idea here is that all of our DevSecOps pipelines, regardless of tech stack, follow a common business process: we're going to build something, test it, package it into an artifact, deploy it somewhere, scan it. It doesn't matter whether it's a front-end app using npm to run tests and packaging static HTML for deployment to an S3 bucket, or a back-end API using Maven to execute tests, building a container image, and using Helm to deploy to a Kubernetes cluster: the business process the pipeline implements doesn't change.
The Jenkins Templating Engine is a plugin that enables the creation of these tool-agnostic pipeline templates, letting you plug and play with exactly which tools are used to implement the steps of that template. So let's see what that looks like, using an example written in scripted pipeline. On the left, we have a pipeline that's packaging something with Maven and then doing static code analysis using SonarQube, and on the right, we have a pipeline that's using Gradle for packaging and then doing static code analysis through SonarQube. Both of these pipelines, even though one team happens to be using Maven and another happens to be using Gradle, can be represented by the same generic process: they're both going to do a build, and they're both going to do static code analysis with SonarQube.
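A minimal sketch of what those two scripted pipelines might look like; the specific shell commands and the 'sonarqube' server name are assumptions for illustration, not taken from the slides:

```groovy
// Team A's Jenkinsfile (Maven) -- illustrative
node {
    stage('Build') {
        sh 'mvn clean package'
    }
    stage('Static Code Analysis') {
        withSonarQubeEnv('sonarqube') {
            sh 'mvn sonar:sonar'
        }
    }
}

// Team B's Jenkinsfile (Gradle) -- illustrative
node {
    stage('Build') {
        sh './gradlew build'
    }
    stage('Static Code Analysis') {
        withSonarQubeEnv('sonarqube') {
            sh './gradlew sonarqube'
        }
    }
}
```

Note that only the shell commands differ; the stage structure is identical.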
So it would be awesome if I could take these individualized Jenkinsfiles out of the individual source code repos and instead define a common pipeline template that both teams use. To accomplish that with the templating engine, I just have to shuffle the code around a bit: I can create a centralized pipeline configuration repository.
In JTE, I can create what's called a library source. We're going to have a Maven and a Gradle library, which both implement a build step, and then a SonarQube library, which implements a static code analysis step. On the right-hand side, we can see that the pipeline code to make this possible is exactly the same; we've just organized it a little differently, so we can load in different functionality depending on what tools the team is using.
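Those libraries might look roughly like this: each library is a folder of step files, and each step contributes a method the template can call. The file layout, commands, and server name here are illustrative assumptions:

```groovy
// libraries/maven/steps/build.groovy
void call() {
    stage('Build') {
        sh 'mvn clean package'
    }
}

// libraries/gradle/steps/build.groovy
void call() {
    stage('Build') {
        sh './gradlew build'
    }
}

// libraries/sonarqube/steps/static_code_analysis.groovy
void call() {
    stage('Static Code Analysis') {
        withSonarQubeEnv('sonarqube') {
            sh 'sonar-scanner'
        }
    }
}
```

Loading the maven library or the gradle library gives the pipeline a `build()` step either way; the template doesn't care which one provided it.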
Alongside these pipeline libraries, we can also create a common pipeline template, called the Jenkinsfile; it just lives outside of any one source code repository, in a centralized repo. Then, instead of having an often 700-line Jenkinsfile in every source code repository, we can give each application its own pipeline configuration file, and all it's responsible for is implementing that pipeline template. So this template says: we're going to do a build, and we're going to do static code analysis.
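A minimal sketch of that separation, with assumed names: the shared template calls abstract steps, and each app's pipeline configuration picks the libraries that implement them:

```groovy
// Jenkinsfile (the shared pipeline template, stored centrally)
build()
static_code_analysis()

// pipeline_config.groovy in the Maven app's repository
libraries {
    maven
    sonarqube
}

// pipeline_config.groovy in the Gradle app's repository
libraries {
    gradle
    sonarqube
}
```

Each app's configuration is a handful of lines declaring which tools it uses; the workflow itself lives in exactly one place.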
So we get better reuse of pipeline code and better standardization, but we can still be flexible, allowing teams to choose the right tool for the job. We can still do a little bit better than this, though: those libraries we're creating aren't super helpful unless they can take input parameters. On the right here, we have an example of expanding that SonarQube library a bit to be able to take some input parameters.
If anyone has used the SonarQube Scanner plugin before, you set up a SonarQube installation in Jenkins, and it would be great to be able to parameterize which of the SonarQube installations configured in Jenkins we want to use. Some teams might want to enforce whether or not to fail the pipeline if the SonarQube quality gate fails. All of these parameters get exposed through the pipeline configuration file.
So in this configuration file, you'd declare the sonarqube library and then pass in some parameters: the scanner version, and whether or not to enforce the quality gate, in the example used here. In the upper right-hand side of the code, we can see that JTE will autowire a config variable, which exposes the library configuration that's been provided. So now, as an organization, you can maintain a portfolio of these libraries, which act as building blocks to implement a common pipeline template.
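A sketch of how a parameterized library might consume such a configuration; the parameter names here (`installation`, `enforce_quality_gate`) are assumptions for illustration, not the exact ones from the slide:

```groovy
// pipeline_config.groovy -- passing parameters to a library
libraries {
    sonarqube {
        installation = "production-sonarqube"
        enforce_quality_gate = true
    }
}

// libraries/sonarqube/steps/static_code_analysis.groovy
void call() {
    stage('Static Code Analysis') {
        // JTE autowires this library's configuration into `config`
        withSonarQubeEnv(config.installation ?: 'sonarqube') {
            sh 'sonar-scanner'
        }
        if (config.enforce_quality_gate) {
            timeout(time: 10, unit: 'MINUTES') {
                def qg = waitForQualityGate()
                if (qg.status != 'OK') {
                    error "Quality gate failed: ${qg.status}"
                }
            }
        }
    }
}
```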
There's no reason each team should have to google "SonarQube plus Jenkins" and find the ten lines of code that do static code analysis with SonarQube from a Jenkins pipeline. We can accelerate everyone's ability to quickly build mature pipelines by reusing pipeline code in a way that allows us to pass parameters in. And from a configuration standpoint, we can still do a little bit better.
All the way on the right here, we have the desired pipeline configuration: the Maven app is going to use Maven and SonarQube; the Gradle app is going to use Gradle and SonarQube. But let's say we want to apply some organizational governance to this setup and say everyone has to use the SonarQube library. If you're in an organization that wants to apply strict governance to making sure those security gates are implemented, for example, you could set up an organizational pipeline configuration that says everyone has to use SonarQube for static code analysis.
They have to use something like Clair, Sysdig Secure, or Anchore for container image scanning, but you're going to allow the individual teams to tell you which build tools they're using. The same way that in Maven you have a parent POM file, in the templating engine you can create hierarchies of pipeline configurations that get aggregated together, resulting in the aggregated pipeline configuration that's used to implement the pipeline template. The way the templating engine works is that you create governance hierarchies that map directly to your Jenkins job hierarchies.
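A sketch of that hierarchy, assuming JTE's merge behavior for subordinate configurations; the annotation usage and library names are illustrative, not taken from the slides:

```groovy
// Organizational pipeline_config.groovy (governance tier):
// every team inherits these security libraries. The @merge
// annotation (assumed here) lets subordinate configs add to
// the block without removing what's inherited.
@merge libraries {
    sonarqube
    anchore
}

// Team-level pipeline_config.groovy: the team only declares
// its build tool; the security libraries come from above.
libraries {
    gradle
}
```

The aggregated result for this team is gradle plus sonarqube plus anchore: the security gates are guaranteed, while the build tool remains the team's choice.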
If you have a configuration that should be applied to every JTE pipeline, you set that up in Manage Jenkins, Configure System. Then, on each folder in Jenkins (and that applies to multibranch pipelines and GitHub organization jobs), you can create what are called governance tiers, to be able to provide these libraries and pipeline configurations. So now you can give teams as much flexibility as you want, or you can dial the governance way up and say: this is the pipeline you have to inherit, and these are the libraries you're inheriting, but please tell us which build tool you're going to use, for example.
So what are the takeaways? The first is that the templating engine is really a framework for developing tool-agnostic, templated workflows that can be reused.
Nothing in the example we saw was particularly hard-coded. You can create whatever libraries you need, and those libraries can provide whatever steps they want. This is a framework for being able to create tool-agnostic pipeline templates that teams inherit. If you want all of your steps to come from one library, that's a perfectly valid way to do it.
If you don't want to use libraries at all, and you just want to pull the Jenkinsfile out of the repo, that's another valid use case. The idea here is that we want to separate the business logic of our pipelines from the technical implementation. It doesn't matter what tools a team is using; many teams will frequently be following the same business process.
The second is that these libraries can now become, either as open source or as a centralized portfolio, reusable tool integrations, so teams don't have to start from scratch anymore. You can have a set of libraries that are building blocks for constructing your mature pipelines, and then, throughout the organization, as new tool integrations are completed, they're added to that portfolio of building blocks. Now you can drastically accelerate how quickly you can build mature pipelines, so that no one's starting from scratch anymore. And the third is simplifying pipeline maintainability.
In my opinion, it's a lot easier to manage a set of pipeline templates and modularized tool integrations than it is to try to manage 60 copied-and-pasted Jenkinsfiles that have each been tweaked to fit an individual application's needs. And if I need to make an update to the pipeline that applies to multiple teams simultaneously, I can just update a consolidated pipeline definition that teams are inheriting, instead of needing to orchestrate a migration of the pipeline and open pull requests to every branch.
A lot of these practices are what we've been doing in software for a very long time, separating business logic from technical implementation; it's just that those best practices have now made their way into pipeline development. We want to be able to say, "This is the shell of the workflow we're going to follow," but then be able to swap tools in and out seamlessly. So that's an introduction to DevSecOps.
The too-long-didn't-read here is: incorporate security as early as you can, and at as many points in the pipeline as you can, to embed security into every step of the software development lifecycle. Make sure your application dependencies are secured, using something like OWASP Dependency-Check, Nexus, or Black Duck. Make sure the code you write doesn't have inherent vulnerabilities, with static code analysis. If you're using containerization, that's a new artifact we need to scan, so do container image scanning.
We want to make sure the infrastructure we're hosting our applications on is secured through continuous compliance, making sure all the controls have been implemented the way they're supposed to be. Once we deploy our applications, we want to make sure our interfaces are secure by doing penetration testing, and we want to make sure they're inclusive by doing accessibility compliance scanning, using something like Google Lighthouse or another dedicated accessibility scanning tool. And then, once you've actually deployed your application, none of this is enough on its own; you still need runtime monitoring.
Okay, well, Steven, thank you so much. I really appreciate you taking the time to do this webinar for the CDF. We will have it on demand on our YouTube channel, so if you didn't get a chance to tune in, there are other webinars there too. Appreciate your time, and thank you for an intro to DevSecOps.