From YouTube: Improving Deployments with Kubernetes (2021-06-10)
Description
Hosted by the Jacksonville (FL, US) Dotnet User Group.
We'll look at what Kubernetes is and why you would or wouldn't use it. Then we'll look at how it can help streamline your deployment pipeline and help with testing, even if you don't use it to run your app in production.
Jason Gerard has over 20 years' experience building software for healthcare, finance, insurance, and even cloud providers. Jason has been married for almost as long as he's been coding and has 3 children and a somewhat goofy Boston terrier. He hates talking about himself in the third person.
A: Hello, and welcome to the June edition of JaxDUG. Thanks for joining. If you have any questions, put them in the comments window there and we'll get to them as soon as we have a minute. We're also looking for speakers and for topics, so if you have any particular topic you're looking for, please reach out to us on Meetup or in the chat here and let us know, so we can try to find a speaker to meet that need.
A: I want to introduce Jason for tonight's session. He's going to be discussing Kubernetes, so I will turn it over to Jason now. Jason, take it away.
B: All right, thanks, Jeff. We're going to be talking tonight about improving deployments with Kubernetes. First, just a little bit about who I am: my name's Jason Gerard, I'm a native of Jacksonville, and I've been writing software for a long time, 20-plus years. I've worked in healthcare, insurance, finance, and telecom, and I've worked for cloud providers.
B: You can follow me on GitHub, on Medium (I probably haven't written anything there in a while), and on Twitter. If you want to get in touch with me, a Twitter DM is probably the quickest way to do it. All right, so tonight we're going to have an overview of containerization and Kubernetes, and we'll go into some details on what it means.
B: The first thing I want to go over is how applications are deployed. I think everyone here is probably a software developer: you've written code, you've compiled it, and you've run it locally.
B: That's the traditional model, where you have apps running on an operating system that's running right on top of hardware. That's what's happening for your local deployment, and depending on how you're deploying, that may be how it's running in production too. But most of us are probably actually deploying into a virtualized environment, whether it's Azure, AWS, Google, or IBM, or it's our own on-premises data center, where we're running VMware or Hyper-V or OpenStack, or something like that.
B: We have a bunch of hardware running some operating system or hypervisor, which is then running actual virtual machines on top: Windows Server, Linux, whatever it may be. That's what's actually running our application, so one physical machine may be running 10 or 20 virtual machines that have our database, our application server, and whatever else we need.
B: I think that's the model most of us have been familiar with for quite a while now. The model that's emerged in the last six years or so, I'd say, is an evolution of that, where we're deploying our applications using containers. It's not really much different from the other models; it's just a different way of packaging and deploying those applications.
B: So just keep that in mind as we go. The first thing, let's get to a common definition of what containerization is. A container is not a virtual machine.
B: With a virtual machine, you have a hypervisor of some kind that's going to virtualize the hardware of the machine and expose it to another bit of software, typically an operating system. That operating system doesn't really know that it's running on virtual hardware versus real hardware, and that provides lots of isolation, and you've got things like the VT-x extensions and everything that has been done in hardware to really help that out. So you have really good isolation between your applications.
B: What a container is, though, is OS-level virtualization. We're not virtualizing the hardware; we're virtualizing the operating system. So we're going to provide process and resource isolation to the application, but we're still going to have the shared operating system kernel between all of those processes. An example would be: I run two applications.
B: I run a web browser and I run a word processor. They're both running on my machine, and they're both using the same operating system kernel. Same with containers: if I have two containers running, they're both sharing that kernel, whereas if I had two VMs running, each has its own kernel running inside the VM.
B: Let's look at the history of containers a little bit. Some of these concepts are actually older than, I'd bet, some of the people in this talk. It started with the chroot (change root) command, which came out in Unix Version 7 in 1979, and this command was very simple: it allows you to tell an application, when it's running, what its root directory looks like.
B: Say you were to run an application from your home directory; in Windows that would be something like C:\Users\jason. When that application runs and asks, "Hey, what directory am I in?", it would see that it's in C:\Users\jason, and if it wanted to go up to the root, change into the Windows directory, and see what's there, it could.

B: With chroot, though, you would say: I'm going to have this application in my home directory, but I'm going to make the home directory the root. So now, when I do that and run the command and it asks, "Hey, what folder am I in?": physically on my machine it's in my home folder, C:\Users\jason, but the application thinks it's at the root. And then, when it says, "Hey, what are the files around here?"
B: It can only see what's in my home directory and below, and so it thinks that it's at the root. This is one of the key pieces of technology that containerization started from, and it's not a security mechanism on its own.
B: It doesn't provide security out of the box in that way; there are ways to break out of it. But all the other things that we layer on top of it make for a pretty powerful and secure system. The next technology that came out was jails in FreeBSD, in the year 2000, which kind of built on chroot and some other things.
B: And then we had Solaris, which was another Unix variant, with zones and then containers that came out around 2004, which provided more functionality and more isolation.
B: In 2006, Google created cgroups, which is short for control groups, and contributed them to Linux to help with some of the isolation they were doing. And then, in 2008, a project called LXC, which stands for Linux Containers, was created to kind of emulate that.
B: So, from now on, when I say "container," I'm specifically talking about an OCI-format container. It's going to provide namespace isolation: process, network, and file system. The file-system part is just what I talked about with that chroot command: I can only see, and only know about, things that have been exposed to me, so I'm going to be rooted somewhere and I can't get out of that. Then there's process isolation.
B: If you open up Task Manager on your system, you can see all the processes that are running. Inside of a container, you can only see the processes that are running inside that container. And since this has been primarily a Linux-based technology, with these OCI containers on Linux and Unix: when the operating system starts, you have an initial process ID, called PID 1, and then everything goes up from there.
B: So every time a container starts, it's going to get a new namespace, and it's going to start counting from there. From the outside, the processes are, you know, 6500 and 6501; on the inside, they're process ID 1 and process ID 2, and they don't know about any of the other processes that are running.
B: They do share the OS kernel, so all of those containers will be using the shared Linux kernel. There is native support for containers on Linux and Windows. Docker was initially developed for Linux, and it runs natively on Linux, and for the longest time, when you ran Docker on Mac or Windows, you were not actually running Docker on Mac or Windows: you were running a Linux VM in the background, and the docker command was talking to that VM and running things inside that VM.
B: That's still the way it works on Mac. On Windows, if you run a Windows container, that runs in Windows; if you run a Linux container, that's going to run in a virtual machine. And then Windows also has the concept of process isolation or, I'm sorry, VM isolation using Hyper-V, which will actually spin your container up inside a separate, super-lightweight Windows virtual machine to run your Windows containers there.
B: That's about all I'm going to say about that, because I've never really used Windows containers that much, but it's a deep dive to research. Container orchestration is the ability to take all these containerized applications we built and run them on one or more machines. It's really no different than, let's say, having a bunch of microservices or something that you need to run across a cluster of machines.
B: You may be deploying them to IIS, or running them under nginx or something, or deploying them using something like Elastic Beanstalk. This allows you to use the common building block of the container and then run that anywhere. Some of the early examples of this would be Google Borg. That was a project that they created, and they're actually still using it; that's how they run all their infrastructure, and Kubernetes actually came out of that. I'll talk about that in a minute.
B: There was Apache Mesos, which I think was announced in around 2009; there was some research out of UC Berkeley that people were starting with. Then around 2014 to 2015 it kind of switched to hosting Docker containers, and then I think by about 2016 it was kind of on the back burner, because everyone had switched to Docker Swarm or Kubernetes. Docker Swarm was another thing that came out for a while there; they eventually dropped that and embraced Kubernetes.

B: HashiCorp Nomad is another technology that you can use. It's not entirely open source; I've never used it, but HashiCorp's got a lot of good stuff. It has a smaller scope than Kubernetes: it's really just for orchestrating and running the distributed containers. And then there's Kubernetes, which will be the focus of this talk; it's the engine. So, a little bit of history on Kubernetes: first of all, it's a Greek word, and I know I'm not pronouncing it right.
B: It's probably like "koo-ber-NEE-tees" or something; I'm not quite sure how to pronounce it, but it means "helmsman" in Greek, because essentially it's driving the ship of your application. The project started at Google in 2014.

B: It had a 1.0 release in July 2015. I want to say I started working with Kubernetes around the 1.3 or 1.4 release, so it wasn't too long after that. It was designed as an open-source reimagining of Google Borg: all the concepts and lessons learned that they had from working on Borg, they re-implemented. Borg is written all in C++, while Kubernetes is written entirely in Go, and lots of YAML, as we'll see.
B: It was originally called Project Seven, after Seven of Nine from Star Trek: Voyager, Jeri Ryan's character, who was a former Borg. And the logo here has seven points in the wheel, which is a reference back to that Project Seven.
B: I learned that just today. So, Kubernetes is a distributed scheduler. What that means is: we have one or more machines running, and it will determine what runs where, on what machine, and for how long. It provides service discovery, so you can say, "I need to talk to service X."
B: It will say, "Here is service X, go talk to it," and it primarily does that through DNS. It provides configuration management, so you can store all of your application configuration in Kubernetes, expose it to your applications, and then update it through Kubernetes. And it uses desired-state configuration: you define how you want everything to be, and it makes it that way.
B: Okay, so the structure of a Kubernetes cluster: you have a control plane, which is kind of the master; it has the API, the scheduler, and the controllers. And then you have your nodes. They run what's called the kubelet, which is kind of the workhorse that handles keeping everything running on the machine, and the proxy for your networking.
B
Typically,
if
you're
using
kubernetes
in
a
cloud
environment,
your
cloud
provider
is
going
to
control
the
master
and
you
will
never
that
part
of
that
control.
Plane
and
you'll
never
have
to
deal
with
that.
So
if
you're,
using
amazon's,
eks
or
azure
kubernetes
service
or
google
kubernetes
engine,
they
all
abstract
away
the
control
plane
for
you.
So
you
only
see
have
to
deal
with
the
nodes.
B
B
B: This is very similar to Kubernetes. The building block in Kubernetes is a pod, and a pod contains one or more containers; you can think of those as your threads. It has a scheduler, and you can think of your nodes as your compute, and then these containers are all going to be sharing the RAM and the disk and the network of the cluster.
B
So
if
you
think
about
containers
as
processes
or
threads-
and
you
know
pods
processes,
it
can
kind
of
help.
You
get
your
your
mind
wrapped
around
some
things,
so
the
building
blocks
of
kubernetes
we
talked
about
what
a
container
is
so
the
first
building
block
is
a
pod
which
I'll
go
into
here
in
a
second
and
then
we
have
deployments
jobs
and
daemon
sets.
B
Those
are
different
ways
to
allow
for
certain
ways
to
run
your
pods
services
are
for
the
service
discoveries,
so
exposing
your
pods
to
other
pods
and
then
ingresses
are
for
exposing
your
services
to
the
outside
world.
There's
much
more
in-depth
items
like
volumes
and
persistent
volume
planes
and
whatnot.
I'm
not
going
to
go
into
detail
on
that,
because
we'd
be
here
all
night,
but
there's
there's
much
more
to
kubernetes
than
just
this,
but
these
are
the
basic
building
blocks
that
you
can
build
your
application
from.
B
So
a
pod
in
kubernetes
is
kind
of
the
smallest
unit
of
work.
You
can
do
it's
going
to
consist
of
one
or
more
containers
and
it
defines
the
environment,
the
storage,
the
image
all
the
constraints
for
that
container.
So
in
this
example,
here
I
have
a
pod.
It
has
a
name,
my
app
and
then
like
a
little
hash
there.
B: We named the container my-app as well, and then we're going to expose port 3000 from this container, and we're going to always restart it: if this container crashes, go ahead and restart it. There are other options; you can also say "Never," so it can run once, and when it dies it's not going to be restarted. Next: deployments.
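A minimal pod manifest along the lines of what's described above might look like this sketch; the image name is a placeholder, since the actual image isn't shown in the transcript:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: example/my-app:latest   # placeholder image
      ports:
        - containerPort: 3000        # port exposed from the container
  restartPolicy: Always              # or Never / OnFailure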
B
So
so
everything
uses
this.
This
pod
spec
that
we
just
saw
here
and
so
a
deployment
it
specifies
one
of
those
pod
specs
and
a
number
of
replicas.
So
this
is
where
you
would
use.
If
you
say,
if
you
have
a
web
app
and
you
need
five
instances
running,
you
would
create
a
deployment
of
it
and
say
run
five
replicas
for
me.
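A sketch of such a deployment, wrapping that same pod spec with a replica count (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5                  # "run five replicas for me"
  selector:
    matchLabels:
      app: my-app
  template:                    # the pod spec being replicated
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example/my-app:latest   # placeholder image
          ports:
            - containerPort: 3000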
B: If you need something to just start up, run, and terminate, you would use a job. A job takes a pod spec; let's say you have something that you need to run monthly to generate some data for a report in the database. You could create a job that's going to run, crunch those numbers, and then terminate once it's done. And Kubernetes actually has cron built in, also.
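As a sketch, the monthly report job described above could use Kubernetes' built-in cron via a CronJob; the name, image, and schedule are assumptions (and older clusters use `batch/v1beta1` rather than `batch/v1` for CronJobs):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: monthly-report
spec:
  schedule: "0 0 1 * *"          # midnight on the 1st of each month
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: report
              image: example/report:latest   # placeholder image
          restartPolicy: Never   # run once; don't restart when it finishes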
B: So if you need to run a service on all of the nodes, the way to guarantee that there's one instance on each node is to use a daemon set. That's for things like log collectors, and really anything that you would run as a service where you want every single node in your cluster, or within the list of nodes that you want it to run on, to have one copy of it.
B: With a deployment, you don't really control how many copies are running on the same node or where they're running. There are some ways you can influence where it runs, and it will try to spread it out amongst all your nodes, but you can have a node that's got two copies running, then another node that's got one, and another node with none, whereas the daemon set is going to run one copy everywhere.
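A daemon set for something like the log collector mentioned above could be sketched as follows (the name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:                      # one copy of this pod per node
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: log-collector
          image: example/log-collector:latest   # placeholder image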
B: I can just say http://my-app and it would work.
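The service that makes `http://my-app` resolvable from other pods might look like this sketch; the label selector and ports are assumptions matching the earlier pod example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app           # becomes the DNS name other pods use
spec:
  selector:
    app: my-app          # routes to pods carrying this label
  ports:
    - port: 80           # port other pods connect to
      targetPort: 3000   # port the container actually listens on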
B: And then, finally, an ingress. Ingress is kind of special, and it's actually kind of funny, because it was in beta for a long time and it finally hit general availability; I think in Kubernetes 1.20, or is 1.21 the current version?
B: I believe ingresses are starting to get phased out in favor of another concept, and I forget the name of it at the moment. The reason is that ingresses are very specific to the implementation. An ingress exposes your app to the outside world; the default one that comes with Kubernetes is backed by nginx.
B: There's also an HAProxy ingress now, and some of the things that you would do and specify have to be done in kind of an ingress-specific way, so they're redoing the API so that there will be fewer implementation-specific items in the definition. But essentially what this ingress is doing is: it's going to create an ingress named myapp, and then it's going to say, expose this as host myapp.sonicbox.io, and when traffic comes in on that host, I want you to route it to this service, my-app, on port 80, and you're going to route anything that comes to the root path or below to that application. And what's going to happen here in this network, and I'll show this, is we have an nginx controller that's running, that the traffic is coming into, and then it's routing it out to our individual applications inside Kubernetes.
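A sketch of the ingress being described, using the `networking.k8s.io/v1` API that went GA (the host and service names follow what's shown on screen; everything else is an assumption):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
    - host: myapp.sonicbox.io        # hostname traffic arrives on
      http:
        paths:
          - path: /
            pathType: Prefix         # match root and everything below it
            backend:
              service:
                name: my-app         # route to this service
                port:
                  number: 80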
B: Code. Okay, so what I've got here is... let's see, let me make this a little bit bigger. Full screen.
B: I have an application that I wrote in Go, just because it's quick and easy, and it's very simple: we just spin up a web server. If you hit the /hello endpoint, it will return hello; if you hit the /version endpoint, it will return the version, which is just the git commit hash. And that's it; that's all our application does.
B: If we look at our Dockerfile... oops, switch editors and all your keyboard shortcuts are different. So if we look at our Dockerfile here, we're going to build a container for this application, and in this Dockerfile, I'm saying I want to use the debian:buster image as a base. This is going to set up a chroot that has essentially the base utilities that come with Debian Buster as the root, so I'll get bash, I'll get apt.
B: So I can install things; I'll get all the basic tools that come with Debian Buster. If I didn't want any of that, I could say FROM scratch, and then it would be empty: the only thing that would be in the image is what I add to it. And the reason I'm using Debian Buster is that I want to show the size difference on some of these images.
B: So what I do here is: I use Debian as the base, and then I run the useradd command to create a user called greeter, and I set its home directory to the /app folder. I then copy the file main-linux, which gets built by my build, into /app/main. I change the user that this is going to run as to greeter, and then I set the working directory to /app, and then we execute the command.
B: We want to drop down to a user with fewer permissions. This user is created specifically inside this container and only has the permissions we give it, which really is nothing in this case. This file is executable by everyone, so greeter can run it. Now, let's see, let's go ahead: I'm going to build it, and then I'm going...
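The Dockerfile described above might look roughly like this sketch; the binary name `main-linux` comes from the talk, while the exact useradd flags are an assumption:

```dockerfile
FROM debian:buster

# Create an unprivileged user whose home directory is /app
RUN useradd --home-dir /app greeter

# Copy the pre-built Linux binary into the image
COPY main-linux /app/main

# Drop down to the unprivileged user and run from /app
USER greeter
WORKDIR /app
CMD ["./main"]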
C: That's right, first demo fail of the day. I forgot.
B: We can say hello from "the squid." One of the things that we do when this starts up is create a UUID, and we keep that same ID for the lifetime of the run; I just did that so we can illustrate the lifetime. So this instance happens to be the squid. If I hit it again, I'll get the same one, but if I stop and restart them, we'll have a different one.
B: There. So this is just like what we saw before: I'm going to have this application run, it's called greeter, and I'm going to have one copy of it running, and it's going to run this image, and then I'm going to expose it on this host.
B
Charge
up
my
lenode
bill
with
a
bunch
of
bandwidth
trying
to
be
a
little
bit
kind
to
me,
and
then
this
is
the
service.
That's
going
to
expose
it.
So
let's
go
ahead
and
we'll
tell
kubernetes.
I
have
a
cluster
running
and
I'll
show
you
that.
B: I will be typing k because I have it as an alias, so just know that that's an alias for kubectl; I'm lazy. So we can see a few namespaces here. Namespaces are how you provide scoping inside Kubernetes. We have a greeter namespace that I created, and I'm going to tell Kubernetes to apply all the files.
B
To
the
greeter
space,
not
great,
and
then
if
we
curl.
C: All right, let's try /hello. We have a bug. Okay, let's see.
B: Oh, I did not deploy that image.
B: Version. So right now, what I did is: I have a little makefile that's going to build this for Linux.
C: That's not good: "service unavailable," unable to push to Docker Hub. There we go. Okay.
B: So you can see here that my cluster, I just want to show you, my cluster has three nodes running inside of it. This cluster is running in Linode, which is a host kind of like DigitalOcean: just quick, cheap, and easy to get a Kubernetes cluster up and running. It took a couple of minutes and a couple of button clicks. It's a good way for you to play around with it and learn it, running in a cloud environment.
B: So now we have this running in Kubernetes. Let's see, what else can we do? We've only got one copy of it running. We can say "get pods" and we can see that there's one copy running; we can say "get deployment," and there's our deployment, and we see one of one. So now let's say we know we want to run more, because we want some high availability.
B: So now, if we get our deployment, we can see eight of 10 ready already, and see, there are our pods. This pod's been running for 99 seconds; that's the original one we started, and then it fired up nine more to meet demand. And so now, if we curl, let's do /hello.
B: ...that are running. So now I've scaled out my application, and the end user didn't see anything different, other than, hey, now their application is potentially more responsive, because I've got more instances of it running. So let's show these pods here. Let's say I want to simulate some failure now. Let me show you: I'm going to delete this pod.
B: Specifically, if I do that and then I get my pods again, we can see that it's already restarted one. So this guy's four seconds old now, and that pod that I deleted, the 5j one, isn't here anymore, and we have this new one. We told Kubernetes that we want 10 copies of this running at all times, so going in and deleting copies is not how you scale down: you tell it to only run, you know, five copies or one copy.
B: So what I'm going to do, if I want to go back down to one copy, is just this, and now, if I look at my pods, we see that all but one are terminating. And see, it doesn't really matter to it which one it keeps; we can see it actually terminated our first pod, the one that's three minutes and 23 seconds old.
B: I don't know if there are any questions coming through in the chat or not.
B: I have three nodes; those are the three physical machines. They're not really physical, they're virtual machines running in the cloud, but they're my Linux machines that are running Kubernetes for me.
B: So if I wanted to scale out, I could; I would go into the orchestrator in my cloud. So in this case, I would go into Linode, and I would say, "add two more nodes to this group."
B: This is an area that's not something you can control through Kubernetes itself; you can't say, "hey, Kubernetes, I want five nodes." There's an API called the Cluster API that is supposed to allow that; I'm not sure what the status of it is yet. Most people use an orchestration tool on top of that, like Rancher from Rancher Labs. It's free; actually, Rancher Labs is now part of SUSE, but Rancher Labs provides that, and they provide API access.
B: So you can say, "I want a node pool, and I want all the nodes in that pool to have 10 cores and 32 gigs of RAM and 80 gigs of storage, and I want them all to run this version of the operating system," and it will make sure that that happens. And if you want to scale it up, you can do it, and with cloud providers like Amazon, it can be part of something like an auto-scaling group.
B: Typically, though, you're going to be scaling your pods out before you're scaling your nodes out, but there is support for that; it's just kind of implementation-specific. Let's see. Good question, though.
B: Okay, so I've got my simple app running, and let's say: okay, now, this is all great, but maybe the way I deploy my apps to production is just fine, and I'm not going to change that. But I have the problem where I've got a bunch of different developers all working on the app, and they're all working on different things, and I need to be able to test it and schedule things to go out at different times.
B: Say I want to have an environment where, excuse me, every time a commit happens, I deploy the application somewhere, so I can run integration tests and automated scripting against it, and I don't want that environment to be the same one that everyone's kind of manually testing in, because I don't want them to screw up the data that my automated scripts need. I can do that very easily. Also, one thing I'm not showing is having any sort of data here, like a database.
B: For these test scenarios, you can very easily spin up a database inside Kubernetes, like Postgres or MySQL; SQL Server has a Linux version now. I don't know how well SQL Server works with Windows containers, or if it works at all, but you can certainly run the Linux version, and especially for testing, the data engine should be the same. You could do that very easily, and the thing to remember about it is: these containers are all ephemeral.
B: If I were to write anything to disk inside a container, and I delete that container while it's running, that data is gone. Now, Kubernetes has a way to keep data persistent, so if you did actually want to run your production database inside Kubernetes, you could; I'm not going to go into that, but you would look at stateful sets and persistent volumes.
B: Those get a little more complicated. There are also projects where you can have custom resources.
B: So, just like how we have this deployment, you can have a resource type of, let's say, "RDS MySQL," and when you push that out, there will be a controller that's just looking for that, and it's going to go out and create a new instance of an RDS MySQL database and do whatever you tell it to. So you can essentially write your own custom types for Kubernetes, and then you implement those in a controller.
B: But let's look at this example here. I've got another question coming in, about understanding the load. I don't have the metrics server installed: there's a project for Kubernetes that's called the metrics server, and it collects all the metrics for the nodes and the pods.
B: I don't have it installed in this cluster; you have to install that in the cluster, and then it gathers all those metrics, and you can see them using the command there. Another thing that you can do inside your spec here is set up resource constraints, so you can say that a particular...
B: ...container can only use, let's say, two gigs of memory. Let's say you've got a really, really hungry Java application that eats up a lot of memory, but you don't want it to eat up more than two gigs of memory. You can specify that in here, and then what will happen is, once you hit that limit...
B: ...Kubernetes will kill it and restart it, essentially. There's one for CPU usage also. There's some work around this quota stuff; I'm just not that familiar with it.
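The two-gig memory cap described here goes in the container spec; a sketch (the request values are assumptions added for completeness):

```yaml
# Fragment of a pod/deployment container spec
containers:
  - name: my-app
    image: example/my-app:latest   # placeholder image
    resources:
      requests:
        memory: "512Mi"   # what the scheduler reserves for the container
        cpu: "250m"
      limits:
        memory: "2Gi"     # past this, the container is killed and restarted
        cpu: "1"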
B: For example, if you set up an EKS cluster, they have their own controller that, instead of using nginx for the ingress, uses their application load balancer, things like that. I'm not using that; I'm just using Linode for this demo, because it's cheap, so they don't do any of that stuff by default.
B: Okay, let's look at a GitHub Action really quick. So I've got this quick action here, and if I push to branch one or branch two, I'm going to run this. I've got my kubeconfig file being injected in by a secret.
B: I'm going to check out my code, do some housekeeping here to get the branch name, and use that branch name to set the namespace we're going to use. I set up Go, I build the application, I log into Docker Hub, I push my new image using the git hash as the label, and then we're going to deploy it. So I'm going to use the kubectl command to take this app that I've already got running there and update the instance.
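A sketch of a workflow along those lines; the branch names, secret names, image repository, and deployment name are all assumptions, since the actual file isn't reproduced in the transcript:

```yaml
name: deploy-branch
on:
  push:
    branches: [qa1, qa2]   # hypothetical branch names

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Get branch name                # housekeeping: branch -> namespace
        run: echo "BRANCH=${GITHUB_REF#refs/heads/}" >> $GITHUB_ENV
      - uses: actions/setup-go@v2
      - name: Build
        run: GOOS=linux go build -o main-linux .
      - name: Push image tagged with the git hash
        run: |
          echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USER }}" --password-stdin
          docker build -t example/greeter:${GITHUB_SHA} .
          docker push example/greeter:${GITHUB_SHA}
      - name: Deploy                         # kubeconfig injected via a secret
        env:
          KUBECONFIG_DATA: ${{ secrets.KUBECONFIG }}
        run: |
          echo "$KUBECONFIG_DATA" > kubeconfig
          KUBECONFIG=kubeconfig kubectl -n "$BRANCH" \
            set image deployment/greeter greeter=example/greeter:${GITHUB_SHA}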
B: So, if I... let's see.
C: I'm really good at commit messages. And then we say checkout -b qa1.
B: It's going to run, it's going to do its thing. There are some automated tools that will do this: if you're familiar with the Jenkins CI tool, there is a newer project called Jenkins X.
B: Okay, got to fix that one. One second. There's a new project called Jenkins X, which is kind of a redoing of Jenkins. It's written in Go, it runs inside Kubernetes, it's native, and it will spin up environments and stuff for you. It's very, very complicated; I've used it before, I've worked on it a little bit, and it's very complicated. I actually haven't touched it in a while, so maybe it's a lot easier now, but just know that that is out there.
B: ...if you don't want to set up some of this stuff yourself. But this is probably still easier. All right, so what is my error message?
B: It's a lot of work to do for a talk, so I did not automate everything, but it is something that I have done before, for places in the past, inside their own personal clouds, so it's very specific to their setup. But you could run this application, and you could have this deploy your code.
B: You could have it spin up an instance of your database and then load it with some test data that you already have, and then you could run your automated tests against this, or your manual tests, whatever you want to do, and the whole time you're not affecting anyone else who's in the qa2 or qa3 or qa4 environments. So it makes it really easy to do that. You're not having to go in like with old-school applications, where you would have to say:
B
Okay, I've got this set of VMs that I need for this and for that, and I need to provision them the same way, and I need to make sure it's the same version of Ubuntu at the same patch level with all the same things installed. I'm not worrying about any of that right now. I'm just saying: hey, here's this namespace; go run this code over there in that namespace. It just makes deployment so much easier now. So my deployment is literally:
B
Kubernetes is going to say, based on how you've configured some of the rules: okay, he wants to change the image in this deployment called greeter, in the container calculator, to this image.
B
So what Kubernetes is going to do is spin up another pod set to this image and let it get started, and then once that one's up and running, it's going to kill the other one that's running, so you're not having any downtime. And if you had 10 replicas, it's going to cycle them out the same way.
B
You can specify how many instances you want available based on a percentage, like: at least 50 percent of all the pods have to be running. So if I had 10 running, it won't kill more than five; it'll always keep five running at a time.
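That percentage guarantee lives in the deployment's rolling-update strategy; a minimal sketch, with the deployment name from the demo and illustrative values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeter
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%   # never take down more than half the pods at once
      maxSurge: 25%         # extra pods allowed above the replica count during a rollout
  # selector and pod template omitted
```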
B
So you can do that quickly. Let's say we realized: oh no, this code is bad, it's got a bug, and we have to roll back. Well, we just run this command again and give it the previous version, and it rolls back. Now, of course, you have to design your application to work that way. Kubernetes makes one part of this really easy; the other half is that you have to make sure your application is designed for it.
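Rolling back can be done either by pointing the deployment back at the previous image tag or by letting Kubernetes revert to the prior recorded revision; a sketch, with the demo's names and a hypothetical tag:

```shell
# Option 1: explicitly set the image back to the previous label.
kubectl set image deployment/greeter calculator=myrepo/greeter:PREVIOUS_SHA

# Option 2: undo the last rollout.
kubectl rollout undo deployment/greeter
kubectl rollout status deployment/greeter   # watch the rollback complete
```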
B
So if you have database changes that are going to take a while to run, you need to make sure that they run before you do this, and you need to make sure that both versions of the application can work with those changes.
B
That means the current version that's in production and the new version. If you're renaming a table or changing a column or something, you have to do that in steps, and you have to make sure that if you find a problem, you can roll back the deployment of the application; you're not necessarily going to be rolling back those database changes. So you're not going to rename a column in one step; you're probably going to do it in two or more: add the new column, update the code to write to it while still reading from the old column, migrate the data, read from the new column, et cetera. It can be a multi-step process, but deploying the application bits doesn't have to be hard.
B
The rest of it can still be a challenge, but the deployment can be easy. Okay, I think that's about it for my demo, unless anyone wants to see anything else there. Let's go back to our slides, though.
B
I could easily have changed the value in here and then just run that apply command again, and it's going to do the diff on its end and update only what's been updated. That matters if you've got lots of different components and services; I've worked on systems that had 20, 30, 40 different services that needed to be deployed, some of them at the same time.
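That edit-and-reapply loop is just the declarative workflow; a sketch, with a hypothetical manifest file:

```shell
# After changing a value in the manifest:
kubectl diff -f deployment.yaml    # optional: preview what would change
kubectl apply -f deployment.yaml   # the server diffs and updates only what changed
```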
B
Kubernetes makes that easy, and you can have different classes of VMs in there too. There's a concept in Kubernetes called taints and tolerations. Let's say you had some machines that you want to do some machine learning on, some really beefy GPU instances in Azure, and you're going to train a model on them.
B
You could use Kubernetes to deploy it, but you don't want your web app to deploy on these thousand-dollar-an-hour GPU machines; you just want your models to deploy on them. So you can add taints to those nodes and tolerations to those particular deployments, so that when you deploy your web app, it doesn't get scheduled onto the GPU machines.
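A minimal sketch of that setup; the node name, taint key, and image are illustrative assumptions:

```yaml
# Taint the GPU node first (CLI):
#   kubectl taint nodes gpu-node-1 workload=gpu:NoSchedule
# Then only pods that tolerate the taint can land there:
apiVersion: v1
kind: Pod
metadata:
  name: model-trainer
spec:
  tolerations:
  - key: "workload"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: trainer
    image: myrepo/trainer:latest
```

An ordinary web-app pod carries no such toleration, so the scheduler keeps it off the tainted GPU nodes.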
B
So you can have multiple different tiers of nodes and schedule appropriately if you wanted to. Kubernetes also helps you support multi-cloud deployments. Now, most people are like: oh, we're only on one cloud; we're on Azure, we're not switching, or we're on AWS, we're not switching. It's kind of like databases: people don't change databases, and that's true. But where this does help is that any company of any
B
significant size is eventually going to merge with another company, either buy or be bought. You may get merged with a company that's running in Amazon while you're running in Google Cloud or Azure. So if you're using Elastic Beanstalk in Amazon to deploy everything, and now
B
your new company is running Azure, you can't really take that over one to one. But if you have this multi-cloud system set up, you're kind of already ready to go. So when you're bringing on some other groups and you've acquired some new technology, you just say: hey, dockerize this, we're going to run it in Kubernetes, and your path to getting onto our infrastructure is easy.
B
We kind of showed this already: multiple testing environments. It makes that super easy. You can have multiple manual testers who can go in there and do whatever they want to the data and mess it up, and then you have a separate environment that's got the same initial seed data where your automated tests run, and it generates
B
Then Google started offering its service, and now every cloud has a Kubernetes service, so getting Kubernetes initially set up is much easier than it used to be. But there are still things that you need to know about Kubernetes, and you need to have someone on your team dedicated to being able to support it.
B
For a large chunk of internal line-of-business applications, you don't need Kubernetes; you don't need all the power it provides. If you're just getting started out on your application, you don't necessarily need to say: we're gonna
B
run it there. You need to prove out your idea first, and then, once your idea has some traction, you can figure out how you're going to use Kubernetes to scale it; and once again, only if you have the resources to support it. Back to that line-of-business comment I made, though: if you're a small company and you've just got a few applications, sure, you don't need it.
B
If you have dozens or even hundreds of internal applications that need to get deployed and supported and managed, then you might want to start looking at Kubernetes to manage your internal applications. If you have public-facing applications, same deal: if you're small scale, you don't need Kubernetes, but once you get to a point where your deployment is complex and scaling is difficult, that's when you want to start looking at bringing in a tool like Kubernetes to help you.
B
Kubernetes has the most traction behind it, but there's also Amazon's ECS, the Elastic Container Service, which is kind of a competitor to Kubernetes, and it also runs alongside it. I think Azure has the ability to run containers directly too. So containers are very helpful because they make things portable.
B
I can package an application with all its dependencies and then not have to worry about whether it's going to run on your machine or my machine. If you need a very specific version of, you know, Python or some library, and you don't want the person who's going to be running it
B
to have to do a lot of work to get that very specific version to run, then package it as a container, and they'll be able to run it no matter what. Okay, I think that's it for me; I think I've talked for over an hour now. So are there any other questions, anything that anybody would like me to review?
B
Okay, yeah. So Ryan says that he loves having a Docker image for SQL Server, and he's interested in having an image for his development environment. Yeah, using Docker for your development makes things so much faster and easier to do. So instead of having to install SQL Server and then get the database seeded and then install all the different applications,
B
there are images to help you there, so you can run that locally. So David Fecke just asked: is Kubernetes removing the need for using Docker containers? No. Recently there was an announcement that Kubernetes was dropping support for Docker, and what that means is they were dropping support for using Docker as the runtime. There are multiple different runtime engines for containers; they're all based on that OCI spec that Docker donated back in 2015.
B
So Docker is one of those runtimes. There's also CRI-O, there's containerd, there's Podman, there's all kinds, and Docker's a little heavy-handed: it requires a service to be running in the background that manages everything, and then you talk to it through RPC using the docker command.
B
When Kubernetes first came out, it used Docker, and so they had a little shim that would talk to Docker over that interface. They're not doing that anymore; they're dropping support for that, and they're going to use containerd, I believe. But the image you build, whether you build it with Docker or anything else, is an OCI image. It's going to run anywhere OCI images can run.
B
There is also a project called KubeVirt, and the long-term thinking you can see is that Kubernetes will evolve beyond just a place to run your containers; it will be a place to essentially run your entire cloud. So KubeVirt is a project that allows you to specify, just like we saw with a deployment or a service or something, a virtual machine, and Kubernetes will talk to
B
your cloud provider, its virtual machine manager, whatever, and it will spin up a virtual machine for you. So it's using Kubernetes to manage all your resources in that declarative fashion. There's a project I mentioned earlier, too, where you can actually use Kubernetes to create an RDS instance for you in Amazon. So you could say: hey, I want a MySQL database; you give it this YAML file, and then it makes sure that that MySQL database gets created inside your Amazon account, inside RDS.
B
So Kubernetes is not actually running MySQL at all. It's just doing all of the work to talk to AWS for you and create that instance. All you did was give it a YAML file and say: I want an RDS instance that looks like this, and Kubernetes will make sure that, as long as that YAML file is there like that, you'll have an RDS instance running that matches that definition.
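With a controller such as AWS Controllers for Kubernetes (ACK) or Crossplane installed, that YAML might look roughly like this; the apiVersion and field names vary by controller and are assumptions here:

```yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: my-mysql
spec:
  dbInstanceIdentifier: my-mysql
  engine: mysql
  dbInstanceClass: db.t3.micro
  allocatedStorage: 20
```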
B
So it's really turning all of your infrastructure into this desired-state configuration. You just say: I want my infrastructure to look like X, and Kubernetes, with all the plugins and such that are being created for it, will make sure that your infrastructure looks like X; and when it deviates from that, it will do everything it can to get it back to that.
A
Cool. I was going to throw out a question to everybody who's attending: how many of you are using Kubernetes currently? I just want to see a show of hands, how many people are actually using it right now.
A
And with Kubernetes coming on in 2019, with SQL Server 2019 having the ability to build out big data clusters, I know that's become a new thing in the SQL community, so I know a lot of people are starting to use that.
B
Yeah, for a while there I would have said: if you want to run anything with state, like a database, in Kubernetes for anything other than testing and development, I wouldn't recommend it, because the story for stateful applications wasn't that strong. But it's come a long way now, they've done a lot of work there, and I think it'd be a safe option to run it there. Before, it was just a lot of work, and it was kind of difficult to get things to work
B
the way you needed them to. But like I said, a lot of work's been done there. I'm of the mindset that I'm not a DBA, and I don't want to play one on TV either, so I like to let other people run my databases for me; I would do something like RDS. But if you are comfortable with running it, and you already run a database on a Windows machine anyway for SQL Server,
B
then yeah, go for it: run it in Kubernetes, and it'll give you a little bit more power and flexibility that you wouldn't have just running it on a server. But for me, if you're not a DBA, let the DBAs run the database for you.
B
Yep, you can. With Docker, you can have your data stored outside the container. The issue with Kubernetes is that, since you're in a cluster, if your container stops running, you're not guaranteed that it's going to start up on the same node that it was on before, so it wouldn't have access to that data that you might have stored outside on that node. So they have the concepts of persistent volumes and persistent volume claims.
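A persistent volume claim is how a pod asks for storage that survives rescheduling onto a different node; a minimal sketch with an illustrative size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
# In the pod spec, the database then mounts the claim instead of a
# node-local path:
#   volumes:
#   - name: data
#     persistentVolumeClaim:
#       claimName: db-data
```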
B
A
B
Exactly. So it's just a little tricky, and you've got to make sure you've got everything right. I mean, it works great now; back when people were first trying to do this, not all the pieces that were needed to build that were there. They're all there now and they all work, and it's much easier to do inside a cloud like AWS or Azure.
B
If you're running Kubernetes inside something like OpenStack, which is kind of like an open-source cloud provider for your own data center that you might do at Rackspace or something, it's a little bit more difficult, because you've got to make sure you've got all the storage plugins and all that kind of stuff set up correctly.
B
So Michael Mann asked: what is the best practice for partitioning environments into dev, QA, or prod with Kubernetes?
B
And honestly, I would probably have a separate cluster for production and then a cluster for everything else, like test and QA, just for that extra isolation, especially depending on what kind of production data you're working with. If it's any sort of PCI or HIPAA data or anything like that, your auditors are probably going to require you to have that separate anyway. So I would have a separate cluster for production, and you can use namespaces to isolate things.
B
There are network policies in Kubernetes. You have to install a CNI, a container network plugin, like Flannel; Istio has one; there are all of these. But then you can do network policies. Amazon has one, I think it's still in beta, where the network policies are backed by Amazon's security groups. But essentially you declaratively say, kind of like you define a security group in AWS, where the traffic can come from.
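A NetworkPolicy reads a lot like a security group; a sketch that only admits traffic from pods in the same namespace (names illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: prod
spec:
  podSelector: {}           # applies to every pod in the namespace
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector: {}       # allow only from pods in this same namespace
```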
B
I would do the same thing in dev, test, and QA, just so you can catch any configuration issues. But for running the clusters, I would have a decent cluster for production, and then you could have a smaller cluster to run your dev, test, and QA in, as long as your auditors will be okay with any sort of data mixing there.
A
B
Yeah, so if you do a cluster per environment, you do have more overhead, because for every cluster you have a control plane, and that's also a set of machines. Now, the cloud providers abstract that away from you, but they've all started charging for it. Excuse me: it's around a $75-a-month charge just to have a cluster, and that's to cover the control plane resources. So that's another set of machines that you don't see.
B
So for every cluster you spin up, you will have another control plane. I think Azure charges like 80 bucks, Amazon I think just started charging, and Google's charging around 70 or 80 bucks for it. They provide an SLA on it. I mean, it's not a huge cost, but it is something to be aware of, and then your upgrade cycle will be different.
B
So the way you upgrade Kubernetes is you upgrade the master first, and the cloud providers handle that; there's usually just a button like: hey, there's a new version available. You hit upgrade, it goes off for 15 minutes, you don't notice anything, and then it's running the next version. Your nodes, though, don't get updated, so you need to
B
tell those nodes to be updated. Typically, what you would do is cordon off a node, which says: hey, don't schedule any work on this node, and drain it, so all the work that is on that node gets moved
A
B
to a different node and there's nothing running on it. Then you can kill that node and spin up a new node on the new version of Kubernetes; it'll rejoin the cluster, and then you can start scheduling work on that new node. You would cycle through all your nodes, depending on how many nodes being down you can tolerate, one at a time or two at a time, whatever it is, until you get to the next version. You can't skip versions.
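The cordon-and-drain cycle just described is a few kubectl commands per node; the node name is illustrative:

```shell
kubectl cordon node-1    # stop scheduling new work onto this node
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data   # evict its pods
# ...terminate the node and bring up a replacement on the new Kubernetes version;
# the new node joins the cluster and starts receiving work again.
```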
A
B
You go from 1.19 to 1.20 to 1.21, so it's best if you stay up to date. Kubernetes now targets, I think, three releases a year, so they move kind of fast, and you want to stay up to date. Another thing that will happen is the cloud providers will only support a few versions back. So if the current version is 1.20, they only support, say, 1.18, 1.19, and 1.20.
B
So if you're on 1.17, they're probably going to force the control plane to be updated, and then you'll have to upgrade your nodes. So you want to stay on top of that if possible. That's another downside of having multiple clusters: it's just more administrative overhead that you've got to be aware of and take care of.
C
Out, yeah.
B
We can go... let me go back to the top. So yeah, I'm @thejasongerard on Twitter; that's probably the best way to get a hold of me. I'm also on LinkedIn, you know, linkedin slash whatever slash Jason Gerard; you can find me on LinkedIn, but my DMs are open. So if you just want to hit me up to talk about anything, hit me up on LinkedIn or on Twitter and I'll get back to you, if you've got any other questions or you've got a problem.