From YouTube: Kubernetes on bare metal with CoreOS - Kelsey Hightower
Description
Recorded on February 25th, 2015 at the Kubernetes Gathering in San Francisco, CA, USA
So we're going to talk about getting hands-on with Kubernetes. I'd love for you guys to have read the documentation and viewed the website. You know there's a lot of hand waving, right? I do a lot of talks in a row and I talk about the system, but you've got to show me something real, right? So in this demo that's the goal: we're going to try to show something you would actually do.
One of the good things about Kubernetes in general is that even though it's meant and designed to run large-scale infrastructure, it actually scales down to reality, right? About zero percent of you have Google-scale infrastructure and don't work at Google or Facebook; normally people have much smaller infrastructures, and Kubernetes actually works well for you as well. The way I like to frame the thought pattern here, for people coming from a more traditional infrastructure setup, is this question:
How would you design your infrastructure if you could never log in, like ever? When posed with that, when I take away SSH, and that means SSH keys, you want to build a system that actually can be a little bit smart about what it does. You can't worry about host names, because you're not going to log in. So let's walk through that pattern now and see what we end up with. We're going to try a real-world example.
Here we have a cluster; I'm mimicking a bare-metal environment. So we're not going to do this on a cloud. I'm using VMware, with a server running locally on my machine, and we're going to actually do some cross-host communication between the containers, and we're going to deploy this app. So here's a demo app called pgview. pgview will basically be a really cool startup idea: providing Postgres info as a service.
Someone will probably give you a few bitcoins for that. And we have a couple of requirements, some real-world requirements that I hear from people. It needs to be horizontally scalable, so you've got to be able to add more nodes as you go, and you want to have things like, in this case, a dedicated memcache per service. So you want some co-scheduling here, and as easy as that sounds, it's really hard to pull off if you don't have things like a scheduler at your disposal. And then the holy grail is a Postgres database.
This is where it all falls down, right? You bring in a database and it has stateful data. How do you handle that? Well, spoiler: it's not going to be elegant, but it's going to be what you can do today. And operational requirements: we want to have automated service discovery. You don't want to do a lot of hand waving and configuring things, and we also want to try to do zero downtime, right? For most people, zero downtime is really a pipe dream because they don't have the right tools in place.
I'm going to see if we can pull that off. So the service we have looks like this. It basically has a couple of endpoints, some RPC endpoints; you hit it, and it will tell you all the features in your database. Postgres is pretty old-school, has all your favorite languages if you're over 50, all right? So that's the end of the slides, let's just get to it.
So since we only have about a few minutes here, I won't be able to dig into the complete details. I'm going to assume you all know what a pod is. Do we know what a pod is? You can pretend you know what a pod is; just raise your hand, you'll look cool to the person sitting next to you. But I'll show a pod really quick, so we can all be on the same page. So we have this thing called pods, and all of you are familiar with
containers, or pretend like you are if you're not. The goal with Kubernetes is actually to provide a way of coupling the services that have dependencies, right? How do you express dependencies between your services? So for this particular app I'm going to show an actual pod that actually has a dependency, which we're going to use with a replication controller. What we have here, and I'm just going to show a small section: we have two Docker containers that require each other. We have our pgview app, and it wants its own local memcache, so that it can cache things locally, right? So Kubernetes gives us a way to express this dependency, and we can do it in a way where we can actually stamp out as many of these things as we need. But the beauty here is that we can think about this as one atomic unit: it's a pod, right? So a pod is a collection of containers that we can treat as one logical service. All right, we're on the same page.
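A minimal sketch of what such a pod could look like, written in today's v1 schema rather than the v1beta1 API this 2015 demo would have used; the image names and ports are assumptions for illustration:

```yaml
# Hypothetical pod definition: the pgview app co-scheduled with its own
# local memcached. Both containers share the pod's network namespace,
# so the app can reach memcached on localhost:11211.
apiVersion: v1
kind: Pod
metadata:
  name: pgview
  labels:
    app: pgview
spec:
  containers:
    - name: pgview
      image: example/pgview:1.0.0   # assumed image name
      ports:
        - containerPort: 80
    - name: memcached
      image: memcached:1.6
      ports:
        - containerPort: 11211
```

Because both containers live in one pod, the scheduler always places them on the same node together, which is exactly the co-scheduling requirement above.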
Here we're not going to be logging into any servers, because that's one of the challenges here, but we will list some of the nodes. This command I'm typing is kubecfg. I've been using Kubernetes for a long time, and this is now a legacy CLI tool; so if you work at Google on the Kubernetes team, deal with it, we're going to keep going. All right, so we're going to list nodes. We have a special snowflake in there: I actually have labels, and we'll see that labels are a very powerful concept in Kubernetes.
This is how we make all the other components work together as a team. So one of the labels we have that's unique on one node is where the database is going to land. You can picture it in your mind: that's the only machine in the cluster that has access to the data for a particular database. So this is like reality. Google people are like, wow, one machine for the database? No, we don't have Spanner in the real world. So what we're going to do is kick things off by spinning up a Postgres instance.
So all of these things are basically expressed as configs. We only want one Postgres instance here, and what we're going to do is use a pod and not a replication controller, because Postgres isn't really designed to scale out horizontally automatically; you have to do a bunch of work. But what we can do is make sure that we schedule it to a specific node. So inside this pod definition we're specifying some labels of the host that must match, mainly that it needs to have the database label.
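A sketch of that pinning, again in today's v1 schema; the label key and value, image tag, and names are all assumptions. The node would first get the label with something like `kubectl label node node1 role=database` in the modern CLI.

```yaml
# Hypothetical single Postgres pod pinned to the one node that
# carries the database label, via a nodeSelector.
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  nodeSelector:
    role: database      # assumed label; only one node in the cluster has it
  containers:
    - name: postgres
      image: postgres:9.4
      ports:
        - containerPort: 5432
```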
So what we're going to do is create this pod, and we'll end up with the scheduler kicking in. For a lot of people, scheduling of services is a new thing. Most people do have a scheduler; you have like a meat scheduler, right? You have a couple of guys that you call sysadmins, and they have this awesome database called a spreadsheet, and when you need them to deploy something, they update the database and they deploy your app. If he takes a vacation, good luck.
So once that's deployed, we can list all the pods in the system. So now that is up and running. Another challenge we had is we want automatic service discovery. I don't want to go around giving this IP out to anyone, because this pod could move to another machine. So the goal here is that we need an abstraction that handles service discovery, something that we can give out to other applications. So we're going to introduce this concept of a service.
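A minimal sketch of such a service for the Postgres pod; in the 2015-era API the stable address was called a portal IP, which later became the cluster IP. The names and ports here are assumptions:

```yaml
# Hypothetical service giving the Postgres pod a stable virtual IP.
# The selector matches the pod's labels; clients use the service IP
# (the "portal IP" in 2015 terminology) instead of the pod's own IP,
# so the pod can move between machines without anyone noticing.
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
```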
All right, so what we ended up with is a portal IP. Kubernetes has a concept of a collection of IPs that we can use for services, for applications. This is really key once we start getting into services that scale out, right? So with this service IP, if everything is actually working, I should be able to use psql, the Postgres client, to hit the database. I love doing these demos because half the people in the crowd are like, it's not going to work. So let's see. Click!
All right, thank the demo gods. All right, so now, if this all works, we're going to create... are you kicking me out? Keep going. All right, so we'll create our pods. We're going to launch our Postgres pod; again, Postgres pods. So Postgres is up and running now. Postgres, start! It's me, it's my personal relationship. All right, so what are we going to do now? We have Postgres up and running.
All right, so now what we're going to do is create a replication controller, because we want one or more; it has to be able to scale horizontally, right? So what we're going to do is create a replication controller for this thing called pgview. We're almost done here, though; it's not going to take too long. I'm going to go ahead and create our replication controller. Oh.
All right, so that works. We'll do a list of pods. What's going to happen is the replication controller is going to control a number of pods, and right now we've asked it to give us one pod. We don't trust the developer, so we give them one, and if they can hang with that, we'll maybe give them more. All right.
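A sketch of what that replication controller might look like, again in today's v1 schema rather than the v1beta1 of the demo; the names, labels, and image are assumptions:

```yaml
# Hypothetical replication controller keeping one pgview pod
# (app plus its local memcached) running; raising `replicas`
# scales the service out horizontally.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pgview
spec:
  replicas: 1
  selector:
    app: pgview
    version: 1.0.0
  template:
    metadata:
      labels:
        app: pgview
        version: 1.0.0
    spec:
      containers:
        - name: pgview
          image: example/pgview:1.0.0   # assumed image name
          ports:
            - containerPort: 80
        - name: memcached
          image: memcached:1.6
```

Scaling later is just a matter of changing the replica count, e.g. `kubectl scale rc pgview --replicas=3` with the modern CLI.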
So we wait till that's up, and it says it's now running. If this is in fact truly running, we're going to do the same thing and provide a service, because we don't want to hit these things directly, and also if we use a service we're going to take advantage of the built-in service proxy in Kubernetes. So we're going to really quick create another service, and this is going to be for the pgview application, and we're going to create that service... there we go. So we have a new service; let's list our services.
So if this works... there we go. So here's another service IP, and if this works I'll be able to get the version of the application that's running. That works; I am happy, whoo, all right. So now that we have the version, our application is deployed and now we're ready; we're in production. So we're going to actually test the whole thing, right? We're going to get SQL features. We introduced a long sleep; this is a long query, so it takes a long time unless memcache is actually working.
What's that? There we go. So this is production: we've got one node, and we can raise another round of VC funding, all right. And what should happen is it should notice that we don't have all the pods running, and we'll list our pods now and we'll see that they're coming up; they're in an unknown state. You can be in an unknown state, and that's what our pods are doing: they're downloading Docker images, doing their thing. Now they're up and running; let's see what the customer observes: no downtime, right? So the service proxy is handling this for us. This is pretty cool.
So now we like the application, and we're getting to the end here. We want to roll out a new version, but you don't trust your developers again. So you want to roll out one of the new versions using what we call the canary pattern. So what we're going to do is release a new replication controller; we're going to create a replication controller for pgview, but the canary controller, right?
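A sketch of that canary controller under the same assumptions as before: it keeps the same app label but carries a new version label and image tag, so its one pod serves traffic alongside the stable fleet behind the same service:

```yaml
# Hypothetical canary replication controller: one pod running the
# new image. If the service selects on app=pgview only, traffic is
# split between the stable pods and this canary.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pgview-canary
spec:
  replicas: 1
  selector:
    app: pgview
    version: 2.0.0
  template:
    metadata:
      labels:
        app: pgview
        version: 2.0.0
    spec:
      containers:
        - name: pgview
          image: example/pgview:2.0.0   # assumed new image tag
        - name: memcached
          image: memcached:1.6
```

Once the canary looks healthy, the stable controller can be walked over to the new image, e.g. with the modern `kubectl rolling-update pgview --image=example/pgview:2.0.0`; the talk-era kubecfg had an equivalent rolling-update helper.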
All right, so now we have another one up. We'll look at our pods, and now we see that there's a 2.0 pod coming online. If this is working, here's what the customer sees: we see 2.0 there. Can we see that, all right? Yes. So your developers are running great in production, and we're about to end now. And then the last thing we need to do... you have lost control; we're in the middle of a production deployment, you've got to chill out. Okay, all right!
So now that you're happy that the canary pattern is working, you're going to roll this out across the entire cluster, and we're going to do this with a nice helper command. We're going to say: Kubernetes, go through the stable image and do a rolling upgrade and update the version of the pod that's in play.
If this works, you guys should clap really, really loud, because the demo gods are being nice. All right, so we're going to do a live rolling upgrade of our server, and we hope that we see no dropped connections on the other side. Some of you guys are on the edge of your seats. Let's see if that actually is going to work... and we see 2.0 there. Oh, it looks like 50/50 at this point. Oh, oh, is it going to go up from that?