Description
When IT operators and developers think containers, naturally, the first thought is Kubernetes. The use of Kubernetes managed services has given teams the ability to bootstrap environments at unprecedented velocity. Removing the need to manage cluster components shifts the focus to development initiatives, enablement, CI/CD workflows, and tasks that create business value.
If Kubernetes is the path forward for your organization, how do you adopt the optimal model for provisioning, managing and deploying container-based applications in the cloud?
Presenter:
Anthony Ramirez, Director of Consulting @Nebulaworks
A: Alright, we're going to go ahead and get started now, so I'd like to thank everyone who is joining us today and welcome you to today's CNCF webinar, Kubernetes Zero to Hero: Deployments and Management. My name is Daniel; I work for Red Hat as a Technical Marketing Manager specializing in cloud-native application development, and I'm also a CNCF Ambassador. I will be moderating today's webinar, and we'd like to welcome our presenter today, Anthony Ramirez, the Director of Consulting at Nebulaworks. There are a couple of housekeeping items.
A: Before we get started: during the webinar you are not able to talk as an attendee, but there is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we will get to as many as we can at the end. This is an official CNCF webinar, so as such it's subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be a violation of the code of conduct; basically, please be respectful.
B: Thank you so much, Daniel, and thanks everybody for attending today; it's great to see the participant list filling up here. I'd like to take a quick moment to thank Kim, Christy and Daniel, who are part of the CNCF and helped put all this together, and Lan, who is a marketing director at Nebulaworks and assisted me in getting all of this put together as well.
B: The cloud and Kubernetes have been part of my duties for a few years now, so in this talk I hope to share a few things. This talk was designed for full-stack engineers, DevOps engineers, or generally anybody who is working on managing infrastructure. Nowadays, with responsibilities shifting way left, we're finding development teams having to manage infrastructure more commonly, and the entire stack, from infrastructure provisioning to application configuration and deployment, is now the responsibility of maybe one team versus siloed teams.
B
Cluster
I
have
provisioned
in
Amazon
using
terraform,
as
well
as
a
demo
of
deploying
a
application
with
helm.
So
this
is
supposed
to
provide
a
cohesive
understanding
of
how
all
these
tools
fit
together,
and
this
is
from
my
experience,
working
with
very
large
organizations,
adopting
containers,
deploying
applications
and
systems
to
the
cloud.
So
you
might
be
familiar
with
some
of
these
concepts.
You
might
be
doing
it
in
a
similar
way
or
a
different
way,
but
generally
it's
to
put
all
of
these
tools
together
and
show
how
they
think.
B: As we've seen in the last couple of decades, there's been a huge shift in how teams are managing applications and infrastructure. Patterns of monolithic applications are now transitioning to microservices, thanks to technologies like containers and to the contributions Google made to the Linux kernel, which include cgroups and namespaces. Generally, I'd like to address those things, talk about how we can increase developer productivity, and, since there are so many tools out there, I hope to distill down the toolset that you would need to get started with Kubernetes.
B: Docker has become a very popular open-source container runtime. There's an enterprise wing to it; however, the open-source tool itself has gained a lot of popularity over the years. Other runtimes exist and have become useful for teams, but Docker seems to have a large community of people using it. There are other runtimes that are compatible with Kubernetes, like containerd and CRI-O, but my experience over the past few years has primarily been using Docker as the runtime and leveraging the Dockerfile as the method of container image creation.
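The Dockerfile-driven image creation described here might look like the following minimal sketch; the service, base images and paths are illustrative assumptions, not taken from the talk:

```dockerfile
# Multi-stage build: compile in a full toolchain image,
# ship only the binary in a small final image.
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app

# Final artifact is lightweight compared to a VM image:
# just the binary on a minimal base.
FROM alpine:3.11
COPY --from=build /app /usr/local/bin/app
ENTRYPOINT ["app"]
```

Building and tagging it with `docker build -t myorg/app:1.0 .` produces the versioned artifact that later gets pushed to a registry.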
B: Working for Nebulaworks, which is a consultancy, over the past few years I've found myself in meetings with teams trying to justify the use of container technology, or building a business case to potentially adopt these patterns and technologies and propagate them across teams. This persona may use on-premise hardware; they may have strict silos in their team structures; and they are trying to figure out how to develop containers and how to secure them. So I believe there are some real benefits to using containers.
B
Those
are
matter-of-fact
type
things
and
I'm
not
saying
that
containers
are
silver
bullets;
either
they
have
their
weaknesses,
their
vulnerabilities,
their
their
quirks
and
their
exploits.
So
it
may
not
be
right
for
every
use
case.
However,
they
have
advantages
that
we
should
always
keep
in
the
back
of
our
heads.
B
So,
first
and
foremost,
one
of
the
things
I
enjoy
about
containers
is
that
they're,
based
on
Linux
technologies,
as
I
mentioned,
they
are
a
result
of
Google's
contribution
to
the
Linux
kernel
and
about
a
decade
ago,
or
a
little
bit
less
than
that
which
included
secrets
and
namespaces
which
provide
the
ability
to
create
isolation
for
services
on
the
same
host.
So
these
containers
operated
similarly
to
things
like
Solaris
zones
or
BSD
jails.
So,
if
you're
familiar
with
that,
the
container
concept
is
is
very
familiar.
B
The
way
that
you
actually
use
the
interface
or
the
API
is
is
different,
so
docker
itself
is
pretty
easy
to
use.
It
has
a
very
streamlined
developer,
workflow,
which
I'll
be
talking
about
in
a
second
but
like
other
server,
templating
tools.
Containers
allow
us
to
package
our
apps
and
our
dependencies
into
a
container
image
using
a
copy,
I
write
file
system.
So
the
process
of
building
containers
results
in
artifacts
that
are
number
one
lightweight
and
they're
they're
more
lightweight
than
VMs.
B
You
package
a
single
service
or
application
into
a
container,
and
maybe
have
some
sidecar
for
logging
or
metrics
collection,
or
something
like
that.
But
the
idea
is
that
you
want
to
have
one
application
running
in
a
container.
You
want
to
avoid
noisy
neighbor
syndrome.
You
want
to
have
separation
and
concerns
and
create
services
that
are
discrete
and
that
can
be
scaled
independently
and
horizontally
versus
vertically,
which
you
would
have
to
do
with
the
monolithic
application
structure.
Second,
is
that
they're
portable?
B
We
know
the
classic
the
the
constant
reminder
to
us
why
we
use
containers
is
to
avoid
it
works
on
my
machine
dilemma.
So
if
the
development
team
was
building
something
in
an
operation
team
or
an
individual
was
supporting
the
infrastructure,
sometimes
these
devs
would
throw
applications
over
the
wall
and
just
have
that
the
operations
teams
figure
out
how
to
deploy
them.
B
So
packaging
is
streamlined
across
this
this
workflow
and
we
can
place
these
minted
images
once
we
build
these
images
into
a
registry
or
container
registry,
resulting
in
a
consistent
experience
for
everybody.
That's
pulling
and
deploying
that,
and
since
containers
are
inherently
smaller
in
size,
as
I
mentioned,
they
can
be
scaled
horizontally,
which
is
very
advantageous
for
us.
B
It's
takes
less
hardware,
we
could
leverage
more
densification
in
their
servers
and
it
allows
us
to
use-
and
it
encourages
us
to
use
micro
service
based
architecture
patterns
and
micro
services
themselves
is
a
lot
of
content
on
that
I
recommend
reading
about
it
through
some
blogs,
new
thought
works
that
I
had
some
great
stuff.
However,
micro
services
essentially
allow
us
to
create
discrete
services,
expose
them
via
some
standardized
API,
like
a
REST,
API
and
there's
separation
of
concerns
between
different
services.
B
So
if
there's
many
discrete
teams,
they
could
iterate
on
their
service
and
dependently
without
effect
of
each
other.
This
is
great
because
we
have
higher
velocity
of
feature
creation,
there's
not
that
many
dependencies
and
since
everything's
exposed
to
a
single
API
there's
not
much
changing
for
consumers
of
that
service,
so
anything
happening
or
changes
happening
on
the
back
end
behind
that
is
abstracted
away
from
different
services.
So
there's
a
few
advantages
to
micro
services.
They
can
get
overly
complicated
and
they're,
not
again
a
silver
bullet,
but
they
do
promote
advantageous
patterns
for
dev
teams.
B
So
containers
are
very
useful.
They
seem
to
be
very
useful
for,
in
my
experience
for
teams
and
a
very
common
pattern
that
I
see
that
or
I
thought
I
have
seen
over
the
years
is
the
way
that
teams
develop
containers.
So
the
first
step
would
be
a
developer
that
has
a
container
runtime
on
their
workstation
they're
building
they're
testing
they're
breaking
their
containers
they're
making
in
their
application
their
general-purpose
programming
languages
into
these
containers
and
since
containers
are,
were
intended
to
be
a
developer,
centric
or
developer
focused
tool.
They
are
really
easy
to
create.
B
They
allow
developers
to
go
ahead
and
create
different
versions
of
their
images
without
affecting
any
production
environment.
So
developers
now
have
the
ability
to
be
a
part
of
that
deployment
process
that
delivery
process.
So
just
to
give
you
guys
some
context,
the
code
that
the
this
developer
would
be
writing
would
probably
be
stored
in
a
place
like
github
or
gitlab,
and
teams
are
most
likely
following
a
standardized
branching
story.
Any
versioning
strategy
such
as
truck
based
development
or
github
flow.
B
So
this,
how
could
developer
get
some
warm
tea
and
some
some
strong
coffee,
iterates
on
their
application,
builds
an
image
and
pushes
their
code
up
to
a
repository
in
their
future
branch.
There's
some
pull
request,
review
process
that
happens,
and
once
this
happens,
a
CI
job
typically
runs,
and
once
the
dockerfile
runs
a
set
of
unit
tests
for
the
general-purpose
language
and
any
other
test
that
that's,
the
team
finds
relevant.
Typically,
there's
a
in
the
baking
process.
B
There's
tools
like
twist,
lock
or
native
features
in
the
elastic
container
registry
in
Amazon
that
provide
native
image
scanning
service
solutions.
So
you
could
run
vulnerability
scans
against
images
that
you
create,
and,
after
the
approval
process
happens,
the
container
is
ready
to
be
pushed
to
a
registry.
So
container
registry
holds
of
production,
ready
image,
that's
versioned!
It's
it's
tagged
and
we
understand
that
it
has
been
tested
on
different
environments
and
works
across
a
series
of
different
environments,
so
eventually,
when
it
runs
in
production.
B
There
wasn't
docker
composed
there
wasn't
any
sort
of
docker
stacks
available
so
running
these
containers
building.
These
containers
was
pretty
much
something
you
had
to
have
a
really
good
handle
on
and
once
kubernetes
came,
I
became
more
popular
once
dr.
swarm
had
more
features
to
deploy,
multi
container
applications.
Those
types
of
patterns
started
to
arise
and
have
their
own
testing
related
to
them.
B: Working for Nebulaworks, I've had the privilege of working with some very large brands, providing build engineering services, training and consulting, and there's a continuum we have found that is somewhat consistent across teams attempting to adopt containers. The initial step is to build an orchestration platform for a team to use. In the past I've worked on bootstrapping or automating the deployment of open-source Docker Swarm clusters, I've used Ansible to bootstrap Kubernetes onto on-prem nodes as well as Raspberry Pis, and I've used managed services like EKS or AKS.
B
But
the
idea
is
that
you
need
a
cluster
up
and
running
to
start
the
journey,
and
obviously
this
is
not
going
to
be
a
production
very
clustered.
But
it's
something
to
get
you
going.
It's
something
that
allows
teams
to
start
experimenting,
and
once
you
have
this
process
baked
apps
deploy
one.
Hopefully
you
have
some
automation
or
using
something
like
infrastructures
code
that
allows
you
to
duplicate
these
environments
very
easily,
so
the
first,
the
first
step
is
to
get
started
there.
The
second
is
to
identify
the
domains
to
test
and
secure.
B: The next step after identification of these domains is to actually execute on securing them. As I mentioned, there are some image vulnerability scanning solutions that exist that can get you some very easy wins. There are open-source solutions like Clair and Anchore that you can use and integrate into your CI process, and there are also native cloud image vulnerability scanning solutions for your containers. Understanding that those tools exist and understanding how to use them is very important. And finally, end-to-end telemetry and security.
B: Monitoring and logging containers versus virtual machines or bare metal is slightly different; there are more layers to analyze here. First there's the container level: container logging and metrics, application tracing, machine metrics for the nodes running as part of a cluster, as well as the Kubernetes or other container orchestration platform itself. All of these different systems need to be monitored and logged, and this may take a little while; your organization may have some standard tools for logging and metrics collection.
B
So
integrating
those
into
the
container
solution
is
something
that
I
find
is
takes
a
little
bit
more
time
and
then
being
able
to
consolidate
that
data,
whether
it's
machine,
metrics
or
logs,
consolidating
that
and
being
able
to
perform
some
analysis
on
it
in
order
to
extract
relevant
information.
So
setting
up,
alerting
things
like
that.
B
So
over
time,
once
the
team
understands
the
the
domains
that
existing
in
this
kind
of
Factory,
this
workflow
their
skills
with
with
containers
with
kubernetes
with
the
tooling
around
the
testing,
the
securing
the
telemetry
begin
to
increase
and
they
could
start
driving
business
value
much
faster.
So,
as
you
can
see,
it's
a
it's
a
progressive
journey,
it's
not
a
one
state
that
you
get
to
when
you're
done.
It's
understanding
that
it
is
a
journey
and
that
sometimes
teams
that
may
just
be
getting
started
need
a
path
forward.
B
That's
simple
that
is
transparent
and
that
gets
them
value
fast.
There's
been
times
where
people
have
worked
with
teams
that
are
building
POCs,
but
there's
no
attempt
to
standardize
on
these.
The
continuous
integration
workflows
or
the
continuous
delivery
workflows.
There's
no
intention
to
standardize
on
the
branching
strategies.
So
having
that
in
the
back
of
your
head,
an
understanding
that
when
you
do
have
standards
when
you,
when
you
enforce
the
standards,
it
typically
makes
it
easier
to
automate
things
versus
development
teams,
doing
if
they're
doing
kind
of
whatever
they
want
to
do.
B
It
makes
it
a
little
more
difficult
to
understand
what
tools
can
help
them
achieve
what
they
want,
but
having
a
baseline
standard
and
going
from
there.
In
my
experiences,
help
teams
really
take
advantage
of
these
technologies
and
now
kubernetes,
so
containers
provide
some
isolation
for
us.
They
provide
a
streamlined
workflow
for
packaging
up
our
images
from
a
development
perspective.
After
a
few
iterations
of
building
containers
running
through
the
you
know,
pull
request
approval
process.
This
becomes
second
nature.
B
So
now,
if
we
wanted
to
deploy
hundreds
or
thousands
or
tens
of
thousands
of
containers,
it
would
be
much
too
cumbersome
to
do
it
manually
or
even
with
the
script.
So
in
order
to
solve
this
problem
of
massively
scaled
container
deployments,
we
introduced
not,
you
know
not
myself,
but
the
community
introduced
container
orchestration
platforms.
B
There's
a
lot
of
reasons
why
kubernetes
is
a
great
tool
to
use
if
you're
not
already
using
it.
It
exposes
compute
resources
as
a
single
deployment
platform,
so
you
can
define
a
cluster,
and
if
you
provide
a
manifest
you
post
a
manifest
to
the
kubernetes
api,
it
will
go
ahead
and
deploy
that
container
application.
On
your
behalf-
and
you
don't
really
have
to
worry
about
where
it's
being
deployed
to
you,
you
can
even
provide
specific
selectors
or
options
where
you,
if
you
have
a
requirement
for
that
application,
to
learn
a
specific
type
of
hardware.
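A manifest posted to the Kubernetes API, including the hardware selector just mentioned, could be sketched roughly like this; all names, labels and the image are hypothetical:

```yaml
# deployment.yaml - applied with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      # Pin pods to nodes with a specific hardware label
      # (the label key/value here is an example)
      nodeSelector:
        hardware: gpu
      containers:
      - name: web
        image: nginx:1.17
        ports:
        - containerPort: 80
```

Kubernetes decides which matching nodes run the three replicas; the manifest only declares the desired state.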
B
Kubernetes
will
go
ahead
and
find
that
out
for
you.
So
generally
it's
a
scalable
platform.
It's
flexible,
it's
a
platform
for
building
platforms.
So
how
did
we
get
to
to
kubernetes?
Well
about
a
decade
ago,
Google
had
a
internal
orchestration
platform
called
Borg
familiar
with
board.
It
was
kind
of
the
first
container
orchestration
platform
that
mounted
in
the
first
at
Google,
but
it
was
widely
used
at
Google
and
began
to
create
a
long
line
of
other
orchestration
platforms
to
get
us
where
we
are
today.
B
So
Borg
is
an
internal
clustering
platform
that
is
similar
to
kubernetes,
but
the
interface
and
the
API
looked
quite
different.
So
over
the
years,
Google
understood
that
there
was
some
inefficiencies
with
that
board
architecture,
so
they
created
a
tool
called
Omega
and
Omega
intended
to
improve
the
design
decisions.
B
So
this
container
orchestration
platform
was
designed
based
on
all
the
stuff
that
Google
learned,
building,
Omega
and
board,
and
they
wanted
to
make
a
very
developer,
focused
platform
to
abstract
away
all
the
infrastructure,
make
everything
a
REST
API
and
so
to
simplify
that
architecture,
as
well
as
make
it
easy
for
developers
to
consume,
and
this
is
a
architecture
grant
diagram
that
you
all
might
be
familiar
with.
It
was
just
borrowed
from
the
kubernetes,
not
on
your
website.
B
So
on
the
left
side,
we
have
the
kinetics
control
plane,
and
this
control
plan
consists
of
a
series
of
services
that
essentially
provide
the
ability
to
create
applications
on
the
communities
cluster.
So
this
includes
SCD
the
API
server,
a
bunch
of
controllers
that
are
in
charge
of
creating
specific
objects,
the
scheduler
and
on
the
right
side
we
have
the
kubernetes
nodes
themselves.
So
this
is
the
machines
that
are
actually
running
the
workloads
and
it's
best
practice
to
not
run
workloads
on
the
control
plane,
so
everything's
running
on
the
right.
B
So
one
thing
to
note
is
that
if
we
were
to
self
host
this,
it's
good
to
have
an
H
a
set
up,
so
you
might
need
to
back
up
EDD,
have
a
three
master
node
minimum
for
the
control
plane,
be
able
to
have
automation
to
easily
deploy
this
control
plan,
update
it
manage
it
and
so
forth.
So
managing
this
on
your
own
might
be
a
little
bit
cumbersome.
However,
it's
been
done
in
the
past.
I've
done
it
personally
about
a
dozen
times,
and
as
long
as
you
have
the
automation,
you
have
the
scripts.
B
It's
you
know
once
you
build
them,
it's
it's
a
downhill
from
there,
however,
having
a
team
or
individual
manage
that
control
plane
creates
unnecessary
overhead,
which
is
why,
if
you
are
getting
started,
I
would
recommend
using
in
kubernetes
manage
service,
so
one
that
I'm
very
comfortable
with
is
the
elastic
kubernetes
service.
So
this
is
the
kubernetes
platform
that's
available
to
use
in
Amazon.
The
control
plane
is
abstracted
away.
So
you
can
just
basically
focus
on
provisioning
the
nodes
themselves
and
then
connect
to
your
eks
cluster
with
the
with
the
certificate.
B
That's
provided
to
you
and
run
the
kubernetes
applications
that
you
like
so
the
EPS
service
or
any
other
services
that
are
similar
to
this,
make
it
very
easy
to
get
bootstrapped.
And
today,
if
we
look
at
the
three
big
clouds,
they
all
have
a
kubernetes
managed
they're,
all
generally
available
they're,
all
maybe
one
minor
version
behind
the
latest
kubernetes
released
one
or
two
there's
some
Burton's
or
some
version
like
there.
They
have
our
back
that
multi
a-z
and
about
a
year
ago
this
this
chart
or
this
table
would
not
have
been
true.
B
There
wasn't
GA
and
IKS,
and
some
of
them
did
not
support
multi
a-z.
So
you
know
just
to
give
you
a
comparison,
as
I
said:
I'm
most
comfortable
with
Amazon,
so
I
use
dks,
but
the
service
model
for
these
platforms
are
these.
These
platforms
as
a
service
or
they're,
more
infrastructures
and
service
are
very
similar.
They
abstract
with
a
control
plane.
You
manage
the
No
votes
that
you
want
to
be
the
worker
notes
and
you
begin
to
distribute
applications
to
that
cluster.
B
So
imagine
that
you're
going
to
take
the
plunge
to
start
using
kubernetes
in
the
cloud
or
on-premise
it
doesn't
really
matter.
How
would
you
manage
that
infrastructure?
Would
it
be
manually,
maybe
it's
using
a
configuration
management
tool
or
batch
scripts,
or
you
know
any
other
method
in
order
to
configure
services
onto
your
hosts?
The
way
that
we
have
managed
infrastructure
has
evolved
over
the
years.
So,
instead
of
manually
configuring
and
installing
servers
and
networks,
we
can
represent
infrastructure
virtually
and
as
source
code.
So
why
is
why?
Is
that
even
advantageous
for
us?
Why
is
why?
B
Should
we
use
infrastructures
code
well,
for
starters,
since
its
code,
we
can
apply
software
conventions
and
standards
around
how
we
build
something
we
can
add
comments
into
what
we're
doing.
We
have
the
ability
to
take
advantage
of
declarative
languages,
so
taking
advantage
of
a
declared
of
language
I'll
get
into
a
second
allows
us
to
define
the
desired
state
of
something
and
let
that
go
reconcile
it
for
us.
B
We
can
encourage
self-service.
So
if
we
have
an
infrastructure
as
code
code
base
and
a
development
team
is
leveraging
it,
this
encourages
everybody
to
participate.
If
there's
a
single
repository,
where
there's
code
that
exists,
we
can
make
pool
requests,
we
can
create
a
backlog
of
issues
and
let
a
team
or
the
community
help
her
in
that
down.
So
these
these
types
of
patterns
that
exist
in
software
engineering
development
can
be
applied
to
our
infrastructure
today.
Another
great
reason
to
use
infrastructures
code
is
that
you
can
move
faster
and
safer
with
automation.
B
So
if
you
have
CI
CD
one
CD,
two
workflows,
you
can
add
in
automation
to
test
the
infrastructures
code
that
you're
building
you
can
have
release
engineering
processes
over
it.
So
there's
a
lot
of
great
reasons
why
infrastructures
code
is
advantageous,
so
I
never
looked
where
I
work.
We
use
terraform
souter
forms
an
open
source
tool
that
is
cloud
agnostic
and
allows
you
to
deploy
provisioning
resources
using
a
declarative
language.
B
There's
other
tools
like
paluma,
which
is
also
a
great
tool
for
infrastructure
as
code,
if
you
haven't
used,
it
allows
you
to
use
any
gem
purpose
programming
language
in
order
to
provision
and
manage
your
infrastructure.
The
idea
here
is
that
if
we
have
an
agnostic
tool,
we
were
able
to
pivot
from
cloud
to
cloud
as
we
deem
necessary
so
having
something
that's
very
specific
to
the
cloud
platform
like
arm
templates
or
cloud
formation,
they're
they're,
good
tools,
they
work.
B
Well,
however,
they
don't
really
provide
transferable
skills,
so
terraform,
for
example,
if
you
learn
terraform,
you
learn
one
domain-specific
language
called
HCl
and
you're
able
to
transfer
those
skills
that
knowledge
across
multiple
cloud
platforms
declared
it
so
declarative
languages
operate
differently
than
imperative
languages.
So
the
main
difference
here
is
that
an
imperative
languages
or
imperative
tools.
You
have
to
provide
a
procedural
definition
in
how
to
execute
some
program.
B
So
it's
step
after
step
where
declarative
is
defined,
providing
an
end
state
or
a
desired
state
and
allowing
the
tool
to
focus
on
how
to
how
to
actually
reach
that
end
state.
So
the
difference
between,
for
example,
ansible,
which
would
be
imperative
and
terraform,
for
example,
would
be
if
you
wanted
to
provision
1082
instances
with
ansible.
You
could
do
that.
B
You
could
do
that
as
well
with
terraform
and
you
want
it
to
scale
up
to
15
if
you
created,
if
you
added
five
to
that
ansible
manifest
it
will
create
15
more,
it
doesn't
really
understand
that
something
already
exists,
but
what
terraform
you
can
upscale
that
note
count
and
terraform
will
understand
that
something
already
exists,
so
it
only
needs
to
add
five
more
versus
15.
So
there's
some
advantages
to
that.
Another
one
is
that,
since
we're
using
repositories
to
handle
all
this
source
code,
we
could
treat
the
source
code
as
a
source
of
truth.
B
There's
a
popular
term,
that's
kind
of
going
around
called
git
ops
that
essentially
means
driving
all
operations
through
the
code
through
the
source
code
management
tool.
So
we
can
apply
these
software
development,
software
engineering
practices,
continuous
integration,
for
example,
approval
review
approval
processes.
All
these
things
that
we
do
with
general-purpose
languages.
We
can
apply
to
infrastructures
code,
so
we
can
add
rigor
to
it.
We
can
add
a
layer
of
automation,
security
and
so
on
and
another
great
feature
of
terraform
or
generally
these
types
of
infrastructures
called
code
tools
is
desire.
B
State
management,
this
means
basically
state,
is
information
about
infrastructure
that
you
have
deployed.
So
if
you
wanted
to
make
changes
to
an
existing
deployment,
terraform
would
be
able
to
reconcile
what
exists
in
reality,
based
on
what
your
manifest
defines
at
your
local
workstation.
So
if
you
wanted
to
make
an
update,
you
can
transparently
do
that
with
the
terraform
plan
and
apply.
B
So
why
am
I
talking
about
terraform?
So
much
look
at
the
point
of
infrastructures
code
and
how
does
it
relate
to
kubernetes?
Well,
here's
an
example
and
I'll
jump
into
a
demo,
really
quick,
but
just
to
show
you
this
is
terraform,
so
the
resource
is
a
key
word
here.
These
are
key
words
and
the
second
variable
or
value
here
is
the
resource
type.
B
So,
in
this
case,
I'm
deploying
an
eks
cluster
and
an
e
KS
node
group,
so
the
control
plane
and
the
worker
nodes
and
I'm
naming
them
something
that
is
identifiable
for
myself
and
since
I
had
a
pre
provision
EPC
that
I
was
using
in
our
sandbox
environment.
That's
provided
to
us
by
my
organization,
I
just
referenced.
B
These
and
I
could
have
used
a
data
resource
here
to
pull
down
data
from
that
B
PC,
but
for
the
sake
of
simplicity,
I,
just
added
in
a
few
private
subnets
that
I'd
like
to
deploy
in
mind
note
groups
due
to
my
instances
from
my
node
group
to
you.
So
here
is
that
two
resources
that
allow
me
to
create
and
manage
an
e
KS
cluster.
B
B
B
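The two resources just described could look roughly like this sketch; the resource names, role ARNs, variables and sizes are placeholders, not the presenter's actual values:

```hcl
# EKS control plane (managed by AWS)
resource "aws_eks_cluster" "demo" {
  name     = var.cluster_name
  role_arn = var.cluster_role_arn

  vpc_config {
    # Private subnets from the pre-provisioned sandbox VPC
    subnet_ids = var.private_subnet_ids
  }
}

# Worker nodes attached to that cluster
resource "aws_eks_node_group" "demo" {
  cluster_name    = aws_eks_cluster.demo.name
  node_group_name = "${var.cluster_name}-workers"
  node_role_arn   = var.node_role_arn
  subnet_ids      = var.private_subnet_ids
  instance_types  = ["m4.xlarge"]

  scaling_config {
    desired_size = 2
    min_size     = 1
    max_size     = 3
  }
}
```

The node group references the cluster resource by name, so Terraform creates the control plane first and the workers second.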
B: And in the variable section I have a cluster name; that's just something I had a default for. In the sandbox environment where I'm testing this, we have a VPC with subnets that have pre-existing Kubernetes annotations, so I just had to match this up to what was already pre-provisioned for me. This deployment is actually already done; I ran it earlier, and it takes about 15 minutes. So, just to prove to you that this deployment exists, I'm going to run the terraform show command.
B: That's just going to show me the infrastructure that I've already provisioned. We have the EKS cluster, which is providing me the certificate authority; there's information about the VPC where the node groups are going to live, the node group itself, and the AMI type that I'm using. All of this is provisioned in reality.
B: So I'm just going to make sure my AWS profile is set here, and there's an aws eks command that I can run that allows me to update my kubeconfig and authenticate to the cluster.
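The kubeconfig step being described might look like this; the profile name, cluster name and region are illustrative:

```shell
# Select the AWS profile used for the sandbox account
export AWS_PROFILE=sandbox

# Write cluster endpoint and certificate data into ~/.kube/config
aws eks update-kubeconfig --name demo-cluster --region us-west-2

# Confirm the worker nodes have registered with the control plane
kubectl get nodes
```

After this, kubectl talks to the EKS API server using the certificate EKS provides.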
B: Quickly, about Terraform: there are a few nifty things that you can do to validate that the files you're building are formed correctly. I just ran terraform help, and there are a couple of commands that I'd like to show you.
B: One of them is terraform fmt. terraform fmt can be hooked into your Vim config to run automatically when you save, or you can add fmt to a CI process. Essentially, it applies spacing standards to all of your Terraform files, so that there's consistency across the spacing and how you set up a resource in Terraform. Another command that I'd like to show you is terraform validate.
B: So say I accidentally made a typo: there's no variable called "clustered_name", but if I accidentally wrote that, then terraform validate, without having to plan or apply anything, will spit back to me: hey, this variable doesn't exist, did you mean cluster_name? So I can go back into that file, fix the error, run the same command, and it shows that it's valid.
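The two checks described here are run from the directory containing the Terraform files; this is a sketch of that workflow:

```shell
# Rewrite all *.tf files in place to the canonical formatting
terraform fmt

# Check syntax and internal consistency (e.g. an undeclared variable)
# without planning or applying anything
terraform validate
```

Both commands exit non-zero on problems, which is what makes them useful as CI lint steps.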
B: This is just a very basic example to show that you can add Terraform linting to a CI process. Now, to show you that the cluster is up, I'll run a few kubectl commands. I did just provision two nodes here, and I can show you the desired size of two and minimum size of one.
B: So there's a scaling configuration associated with the node group, and there are the instance types my nodes are running, m4.xlarges. The reason I chose m4.xlarges was basically what I was running.
B: I was running the Kubeflow platform, which is a machine learning and data science platform that you can run on top of Kubernetes, and the minimum requirements were a single node with 12 gigabytes and 2 vCPUs, so I just chose an appropriate size. One thing I learned, though, was that instance types vary in terms of their compatibility with EKS, so make sure to double-check that the instance type is compatible to be used as a worker node in your EKS cluster.
B: There was one, the a1, the A-series instance type, that was cheaper than the m4.xlarge with the same specs, but I tried to use that instance type without checking the compatibility table, and it didn't work. I was trying to figure out what was going on, and it turned out the instance type was not compatible. There are also some AMI types that are not compatible, though you can pass in your own AMI types as well. So that's just to show you some more information about the Kubernetes cluster.
B: However, typically when we're working on an engagement building out Terraform codebases for our clients, we have discrete directories, development, production and staging, as you can see in this tree here at this level, plus a modules directory. I'm not going to go into modules, but basically, if I pull this code into the modules directory, I can reference it from the dev, prod and stage directories with a module reference, and that allows me to have discrete state management for three discrete environments. So that example just shows you one way to structure it.
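The per-environment layout described here might look roughly like this; paths and names are illustrative:

```hcl
# Layout (illustrative):
#   dev/  prod/  stage/   <- one directory, and one state, per environment
#   modules/eks/          <- the shared cluster code shown earlier
#
# dev/main.tf then references the shared module:
module "eks" {
  source       = "../modules/eks"
  cluster_name = "demo-dev"   # per-environment value
}
```

Running terraform apply inside dev/, prod/ or stage/ keeps each environment's state separate while reusing one module.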
B: This is a very, very simple way that you can get started with Kubernetes. So, back to the deck: there was one more tool that I'd like to share with everybody here, and that's Helm. If you're not familiar with Helm, according to the website it is the best way to find, share and use software built for Kubernetes, and this tool has evolved a little bit since the last KubeCon, which I went to in San Diego.
B
They
announced
the
hump
3,
and
that
was
basically
removing
the
server
side,
control
component
tiller
that
existed
with
helm,
2
and
that's
a
great
great
tool
to
package
and
deploy
applications.
So
I
wanted
to
just
show
a
quick
demo
of
running
a
helmet
art
onto
my
cluster,
so
I'm
gonna
deploy
a
tool
called
prometheus
and
Prometheus
is
basically
going
to
help
us
extract
metrics
about
our
containers,
our
nodes
and
so
on.
So
helm
is
a
CLI
tool.
B: This is a pre-loaded command that I have that installs Prometheus. One thing it does require is a namespace called prometheus, which I actually already created. Just to double-check, I ran kubectl get namespaces and was able to see that the prometheus namespace was there. When I run the helm install command, there are a few options that I can pass in: it's creating a persistent volume, it's given the namespace prometheus, and it is currently installing this Prometheus deployment to the cluster.
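The install and the port-forward step described next might look like the following sketch, using Helm 3 syntax; the chart source, release name and label selector are illustrative:

```shell
# Create the namespace the chart is installed into
kubectl create namespace prometheus

# Install the chart with a persistent volume enabled
helm install prometheus stable/prometheus \
  --namespace prometheus \
  --set server.persistentVolume.enabled=true

# Look up the server pod and forward its dashboard to localhost:9090
export POD_NAME=$(kubectl get pods -n prometheus \
  -l "app=prometheus,component=server" \
  -o jsonpath="{.items[0].metadata.name}")
kubectl -n prometheus port-forward "$POD_NAME" 9090
```

With the forward running, the Prometheus expression browser is reachable at http://localhost:9090.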
B
So, as we can see, it sent back some information. It's telling us how we can access the different endpoints that were made available by Prometheus. For example, this endpoint on port 9090 provides access to the dashboard. So if I run this command, it just exports the pod name and then runs the kubectl port-forward command.
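Those two commands, approximately; the label selector follows the chart's conventions and may differ on your install, so verify it with `kubectl get pods --show-labels`:

```shell
# Grab the name of the Prometheus server pod
export POD_NAME=$(kubectl get pods -n prometheus \
  -l "app=prometheus,component=server" \
  -o jsonpath="{.items[0].metadata.name}")

# Forward local port 9090 to the pod; the dashboard is then at http://localhost:9090
kubectl -n prometheus port-forward "$POD_NAME" 9090
```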
B
So generally, you get this selection of different stats that you can monitor. For example, there are the Go metrics: any sort of Go metric, you can select it, execute, and build graphs. There are also some endpoints that are made available, such as the push gateway, that allow you to push and scrape metrics into Prometheus. But the purpose of this demo is to show you how easy it was to deploy a Helm chart.
B
You just download the CLI tool, point it at a pre-existing cluster, and you're able to consume pre-built applications such as Prometheus. I'm going to cancel that port-forward. There was one other thing I'd like to share with you, and it was mentioned earlier: a deployment of Kubeflow. This is a machine learning platform that I deployed to this Kubernetes cluster, and I used a tool called kfctl.
B
So if you go to the Kubeflow website, you can download this binary, set some environment variables, like which manifest to deploy, and run kfctl apply. If you have a previously created cluster, it will be able to deploy a base version of Kubeflow to it. So I just ran kubectl get pods -n kubeflow. This is the kubeflow namespace, so all of these services are related to Kubeflow. So we see we have Argo.
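The kfctl workflow he describes, sketched against the Kubeflow v1.0 manifests; the directory names, manifest URL, and version are illustrative, so check the Kubeflow releases for current values, and the commands need a running cluster:

```shell
# Directory that will hold the generated deployment configuration
export KF_NAME=my-kubeflow
export BASE_DIR=${HOME}/kubeflow
export KF_DIR=${BASE_DIR}/${KF_NAME}

# Which manifest to deploy (the plain Kubernetes + Istio variant)
export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/v1.0-branch/kfdef/kfctl_k8s_istio.v1.0.0.yaml"

mkdir -p "${KF_DIR}" && cd "${KF_DIR}"

# Deploy Kubeflow to the cluster in the current kubeconfig context
kfctl apply -V -f "${CONFIG_URI}"

# Verify the pods came up
kubectl get pods -n kubeflow
```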
B
It's composed of many different services, so Istio was a great option for them to build on. I just wanted to quickly share the fact that, for example, if there's a data science team that you're supporting and they wanted to run some experiments, an infrastructure team that is in charge of Terraform could easily provision that Kubernetes cluster for them, and there could be some automation involved in bootstrapping the Kubeflow platform using kfctl. And from this platform, so I'm doing a port forward on another terminal off screen, it's localhost:8081.
B
So, this is running in my Kubernetes cluster, and I can provision what are called notebook servers. A new server could just be a matter of selecting an image; these are all pre-built TensorFlow images that Google provides you. If you're familiar with Jupyter notebooks, this will look very familiar. Basically, you can leverage an interactive development environment in Python, with any other sort of application dependencies that you want to inject into the container.
B
You can build that container and build the notebook from it. This is just running an experiment, importing the TensorFlow library in Python; it's pulling some images from a public database that has a bunch of images and running some algorithms against those images. So just to recap: this is an entire platform, Kubeflow, and it provides a very niche set of libraries and tools for data scientists. So Kubernetes is, as I mentioned earlier, a platform for platforms.
B
So if you wanted to get bootstrapped quickly and be able to deploy these sophisticated tools like Kubeflow, Kubernetes makes it pretty simple. As I showed you in the history here, I just ran my history piped to grep for kfctl, and this is basically what it took to deploy that Kubeflow environment: essentially running kfctl apply and a port forward against an Istio ingress gateway, which allows me to start running experiments with the models or algorithms that I would like.
B
So that will conclude the demo section. To recap, I wanted to just share with everybody what I would like the main takeaways to be. Containers enable developer productivity; they enable portability, a seamless transition from development on the left to production on the right. Kubernetes, the container orchestration platform, provides the ability to deploy and manage container-based apps. The cloud offers great options for managed services. Infrastructure as code provides a very sane way to build and create repeatable and transparent infrastructure, and Helm is also a great tool to deploy applications onto Kubernetes.
B
So all of these tools put together can really help bootstrap your application teams. Putting standards around these tools and processes has, in my experience, allowed teams to move with much higher velocity than the methods that they were using before. So that concludes this presentation. Thank you so much, everybody, for attending, and at this point I'll hand it back to Daniel.
A
Awesome, that was a great presentation with really practical demos; I loved that. So now we have some time for questions. If you have any question you would like to ask, please drop it in the Q&A box at the bottom of your screen, and we'll get to as many as we have time for. So it's time for the questions. Actually, we've just got one question here.
B
However, I think there needs to be some discussion around the scope of each of these tools. Infrastructure-as-code tooling like Terraform is great at deploying raw infrastructure, or what I like to call infrastructure scaffolding: setting up the raw resources like the EC2 instances, the load balancers, your VPCs, the subnets, all that. All those components are created through infrastructure as code. Helm, on the other hand, is a tool specifically for deploying applications to Kubernetes, so it has some lifecycle management features; you can do rolling updates against your charts.
B
You can update these applications in real time; you can manage all your deployments with Helm. Terraform, by contrast, is more of an infrastructure-focused tool. There is a blurry line between the two, so I would say use Helm for Kubernetes application lifecycles, because you don't want to tie together the lifecycle of your Kubernetes applications with your infrastructure lifecycles.
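The lifecycle features he mentions map to a handful of Helm subcommands; the release name follows the earlier Prometheus demo, the values flag is illustrative, and a live cluster is assumed:

```shell
# Roll out a change to a running release (a rolling update of the chart)
helm upgrade prometheus prometheus-community/prometheus \
  --namespace prometheus \
  --set server.persistentVolume.size=16Gi

# Inspect the revision history of the release
helm history prometheus --namespace prometheus

# Roll back to a previous revision if the upgrade misbehaves
helm rollback prometheus 1 --namespace prometheus
```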
A
B
So I've used tools like Consul Connect. That's a good question, and it's a broad one, but I'll try to answer from my experience. If you're focused on understanding and operationalizing service meshes, that type of technology, like Consul or Istio, or even things like Envoy, those provide mTLS out of the box and so forth. A specific project that I've worked on that I can speak to was deploying Kubernetes, running Consul Connect, and then additionally, in that same environment, deploying Vault Enterprise.
B
So Vault was a secrets engine that was being used to distribute secrets and for some other encryption-based initiatives. But basically we wanted to use a tool like Consul Connect in order to control traffic between not only the applications running in Kubernetes, but, where we have a heterogeneous workload with VMs and Kubernetes applications, we used Consul Connect to control traffic between those two. So there are a few options out there.
A
B
That's a good question; it's a common thing I run into when talking about Terraform. The answer is that there's no transparent portability in Terraform, which means each cloud platform has similar services, but the naming conventions are different, the nomenclature is different. So what you provision in AWS versus what you provision in Google would look different.
B
So, in order to understand what you would need to do for that, I would just say do a search on "Terraform GKE cluster", for example. You can just do a quick Google search and find what it takes to actually go ahead and provision it. Obviously these providers use the native authentication mechanism of the platform you're using, but in this case, compared to what I had, it's this cluster versus
B
this cluster, so that's the EKS cluster. There's the name, the location, the node count, so they look a little bit different, but it could just be an easy, quick Google search, and obviously it takes some understanding of Google Cloud. You have to kind of know the basics about each cloud platform when you intend to use it, but generally the way that you use Terraform is the same: a resource definition, and inside of this code block, or stanza,
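His "same pattern, different nomenclature" point can be illustrated with a minimal GKE cluster resource next to its EKS counterpart; the names and values here are illustrative, and the IAM role and subnets that EKS additionally requires are only referenced, not defined:

```hcl
# Google Cloud: the provider exposes a single cluster resource
resource "google_container_cluster" "primary" {
  name               = "example-gke"
  location           = "us-central1"
  initial_node_count = 3
}

# AWS: the equivalent resource has different attribute names and
# needs supporting IAM and networking resources defined elsewhere
resource "aws_eks_cluster" "primary" {
  name     = "example-eks"
  role_arn = aws_iam_role.eks.arn   # hypothetical IAM role

  vpc_config {
    subnet_ids = aws_subnet.eks[*].id   # hypothetical subnets
  }
}
```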
B
It has some attributes that you pass some values to, and this is a basic example to get you going. If you wanted to understand what the specific differences are, I'd recommend that you go to the Terraform website and do a search for the specific resource that you'd like to create.
B
Yeah, that's a good question. Typically there's a separation of concerns between developers and operations: there are infrastructure teams, networking teams, and so on. In my experience, there's an operations team or infrastructure team that is managing the Terraform code bases, and they work very closely with their customers, which are the development teams. So typically, if Kubernetes is a component of a workflow within an organization or business unit, the developers have an understanding of Kubernetes.
B
They may not be managing the cluster themselves, but they're building the Kubernetes manifests, they're building CRDs, they're building the container images. So the Helm and Kubernetes work is shifted more to the left, onto the developer side, though it also really depends on how your teams are structured. The infrastructure management, the infrastructure requests that are coming in, the people that are burning down the backlog for all the Terraform-related initiatives: that's typically the operations team or the infrastructure team.
A
Cool. All right, I think that is all the questions that we have time for today. Thanks again, Anthony, for your great presentation and really lovely demos, and thanks for joining us today. The webinar recording and slides will be online later today, as mentioned earlier, and we are looking forward to seeing you at a future CNCF webinar. Have a good rest of the day. Thank you. Thanks, Daniel. Thank you, everybody, thanks.